Sample records for iterative process consisting

  1. Self-consistent hybrid functionals for solids: a fully-automated implementation

    NASA Astrophysics Data System (ADS)

    Erba, A.

    2017-08-01

    A fully-automated algorithm for the determination of the system-specific optimal fraction of exact exchange in self-consistent hybrid functionals of density functional theory is illustrated, as implemented in the public Crystal program. The exchange fraction of this new class of functionals is self-consistently updated in proportion to the inverse of the dielectric response of the system within an iterative procedure (Skone et al 2014 Phys. Rev. B 89, 195112). Each iteration of the present scheme, in turn, implies convergence of a self-consistent-field (SCF) and a coupled-perturbed-Hartree-Fock/Kohn-Sham (CPHF/KS) procedure. The present implementation, besides improving the user-friendliness of self-consistent hybrids, exploits the unperturbed and electric-field-perturbed density matrices from previous iterations as guesses for subsequent SCF and CPHF/KS iterations, which is documented to reduce the overall computational cost of the whole process by a factor of 2.
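    The update rule described above can be read as a simple fixed-point loop: compute the dielectric response with the current exchange fraction, set the new fraction to its inverse, and repeat until the fraction stops changing. The sketch below illustrates that loop in Python; `dielectric_response` is a hypothetical stand-in for the expensive SCF + CPHF/KS step and the mock dielectric model is invented for the example, so this is not the Crystal implementation itself.

    ```python
    def self_consistent_alpha(dielectric_response, alpha0=0.25, tol=1e-4, max_iter=50):
        """Toy fixed-point loop: alpha_{n+1} = 1 / eps_inf(alpha_n).

        `dielectric_response` is a hypothetical callable standing in for the
        SCF + CPHF/KS step that returns the high-frequency dielectric
        constant computed with exchange fraction `alpha`.
        """
        alpha = alpha0
        for n in range(max_iter):
            eps_inf = dielectric_response(alpha)   # expensive SCF + CPHF/KS step
            alpha_new = 1.0 / eps_inf              # update rule: alpha = 1 / eps_inf
            if abs(alpha_new - alpha) < tol:
                return alpha_new, n + 1
            alpha = alpha_new
        raise RuntimeError("exchange fraction did not converge")

    # Example with a mock dielectric model eps_inf(alpha) = 4.0 - 2.0 * alpha
    alpha, n_iter = self_consistent_alpha(lambda a: 4.0 - 2.0 * a)
    print(f"alpha = {alpha:.4f} after {n_iter} iterations")
    ```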

  2. Iterative Authoring Using Story Generation Feedback: Debugging or Co-creation?

    NASA Astrophysics Data System (ADS)

    Swartjes, Ivo; Theune, Mariët

    We explore the role that story generation feedback may play within the creative process of interactive story authoring. While such feedback is often used as 'debugging' information, we explore here a 'co-creation' view, in which the outcome of the story generator influences authorial intent. We illustrate an iterative authoring approach in which each iteration consists of idea generation, implementation and simulation. We find that the tension between authorial intent and the partially uncontrollable story generation outcome may be relieved by taking such a co-creation approach.

  3. Defense Advanced Research Projects Agency (DARPA) Network Archive (DNA)

    DTIC Science & Technology

    2008-12-01

    therefore decided for an iterative development process even within such a small project. The first iteration consisted of conducting specific...

  4. Development of the Nuclear-Electronic Orbital Approach and Applications to Ionic Liquids and Tunneling Processes

    DTIC Science & Technology

    2010-02-24

    electronic Schrödinger equation. In previous grant cycles, we implemented the NEO approach at the Hartree-Fock (NEO-HF),13 configuration interaction...electronic and nuclear molecular orbitals. The resulting electronic and nuclear Hartree-Fock-Roothaan equations are solved iteratively until self...directly into the standard Hartree-Fock-Roothaan equations, which are solved iteratively to self-consistency. The density matrix representation

  5. Mixed Material Plasma-Surface Interactions in ITER: Recent Results from the PISCES Group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tynan, George R.; Baldwin, Matthew; Doerner, Russell

    This paper summarizes recent PISCES studies focused on the effects associated with mixed species plasmas that are similar in composition to what one might expect in ITER. Formation of nanometer scale whiskerlike features occurs in W surfaces exposed to pure He and mixed D/He plasmas and appears to be associated with the formation of He nanometer-scaled bubbles in the W surface. Studies of Be-W alloy formation in Be-seeded D plasmas suggest that this process may be important in ITER all metal wall operational scenarios. Studies also suggest that BeD formation via chemical sputtering of Be walls may be an important first wall erosion mechanism. D retention in ITER mixed materials has also been studied. The D release behavior from beryllium co-deposits does not appear to be a diffusion dominated process, but instead is consistent with thermal release from a number of variable trapping energy sites. As a result, the amount of tritium remaining in codeposits in ITER after baking will be determined by the maximum temperature achieved, rather than by the duration of the baking cycle.

  6. Approximate techniques of structural reanalysis

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lowder, H. E.

    1974-01-01

    A study is made of two approximate techniques for structural reanalysis. These include Taylor series expansions for response variables in terms of design variables and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of Taylor series approximation in an iterative process. For the reduced basis a normalized set of vectors is chosen which consists of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as a first (initial) estimate in an iterative process, can lead to significant improvements in accuracy, even with one iteration cycle. Therefore, the range of applicability of the reanalysis technique can be extended. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques, for a wide range of variations in the design variables.

  7. On iterative processes in the Krylov-Sonneveld subspaces

    NASA Astrophysics Data System (ADS)

    Ilin, Valery P.

    2016-10-01

    The iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical processes of Krylov type. The key ingredients of the IDR algorithms are the construction of nested Sonneveld subspaces of decreasing dimension and orthogonalization against some fixed subspace. Other independent approaches to analyzing and optimizing the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures that involve various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR methods in Sonneveld subspaces provide an original interpretation of the modified algorithms in Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant for parallel algebraic domain decomposition approaches.

  8. Iterated reaction graphs: simulating complex Maillard reaction pathways.

    PubMed

    Patel, S; Rabone, J; Russell, S; Tissen, J; Klaffke, W

    2001-01-01

    This study investigates a new method of simulating a complex chemical system including feedback loops and parallel reactions. The practical purpose of this approach is to model the actual reactions that take place in the Maillard process, a set of food browning reactions, in sufficient detail to be able to predict the volatile composition of the Maillard products. The developed framework, called iterated reaction graphs, consists of two main elements: a soup of molecules and a reaction base of Maillard reactions. An iterative process loops through the reaction base, taking reactants from and feeding products back to the soup. This produces a reaction graph, with molecules as nodes and reactions as arcs. The iterated reaction graph is updated and validated by comparing output with the main products found by classical gas-chromatographic/mass spectrometric analysis. To ensure a realistic output and convergence to desired volatiles only, the approach contains a number of novel elements: rate kinetics are treated as reaction probabilities; only a subset of the true chemistry is modeled; and the reactions are blocked into groups.
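    A minimal sketch of the iterated-reaction-graph loop described above (a molecule soup, a reaction base with reaction probabilities, and an iterative pass that takes reactants from the soup and feeds products back while recording graph arcs) might look as follows. The molecules, reactions, and probabilities are invented for illustration and are not the authors' Maillard reaction base.

    ```python
    import random

    # Hypothetical reaction base: (reactants, products, probability)
    reaction_base = [
        (("glucose", "glycine"), ("amadori",), 0.6),
        (("amadori",), ("deoxyosone", "water"), 0.4),
        (("deoxyosone",), ("furfural", "water"), 0.3),
    ]

    soup = {"glucose": 50, "glycine": 50}          # molecule soup (graph nodes with counts)
    graph = []                                     # reaction graph arcs

    random.seed(0)
    for step in range(200):                        # iterative loop over the reaction base
        for reactants, products, prob in reaction_base:
            if all(soup.get(m, 0) > 0 for m in reactants) and random.random() < prob:
                for m in reactants:                # take reactants from the soup
                    soup[m] -= 1
                for m in products:                 # feed products back to the soup
                    soup[m] = soup.get(m, 0) + 1
                graph.append((reactants, products))  # record an arc in the reaction graph

    print({m: n for m, n in soup.items() if n > 0})
    print(f"{len(graph)} reaction events recorded")
    ```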

  9. VIMOS Instrument Control Software Design: an Object Oriented Approach

    NASA Astrophysics Data System (ADS)

    Brau-Nogué, Sylvie; Lucuix, Christian

    2002-12-01

    The Franco-Italian VIMOS instrument is a VIsible imaging Multi-Object Spectrograph with outstanding multiplex capabilities, allowing spectra of more than 800 objects to be taken simultaneously, or integral field spectroscopy in a 54x54 arcsec area. VIMOS is being installed at the Nasmyth focus of the third Unit Telescope of the European Southern Observatory Very Large Telescope (VLT) at Mount Paranal in Chile. This paper will describe the analysis, the design and the implementation of the VIMOS Instrument Control System, using UML notation. Our Control group followed an Object Oriented software process while keeping in mind the ESO VLT standard control concepts. At ESO VLT a complete software library is available. Rather than applying a waterfall lifecycle, the ICS project used iterative development, a lifecycle consisting of several iterations. Each iteration consisted of capturing and evaluating the requirements, visual modeling for analysis and design, implementation, test, and deployment. Depending on the project phase, iterations focused more or less on a specific activity. The result is an object model (the design model), including use-case realizations. An implementation view and a deployment view complement this product. An extract of the VIMOS ICS UML model will be presented and some implementation, integration and test issues will be discussed.

  10. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighting strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the source solution of the previous iteration at both the point and its neighbors. Using such a new weight, the next iteration has a better chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies comparing against FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize the sources elicited in a visual stimulus experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
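    The reweighting idea can be illustrated with a toy FOCUSS-style loop in which, echoing the description above, the weight for each point is taken from the previous iterate at the point and its immediate neighbors (here simply the maximum over a circular 1-D neighborhood). This is a schematic sketch on a generic underdetermined system, not the authors' CMOSS implementation; the neighbor rule and test problem are assumptions made for the example.

    ```python
    import numpy as np

    def neighbor_weight_focuss(A, b, n_iter=20, eps=1e-12):
        """Toy iteratively re-weighted minimum-norm solver.

        The weight for each unknown is the max of |x| over the point and its
        1-D neighbors from the previous iterate (a schematic stand-in for a
        neighbor-based weighting rule; circular neighborhood for simplicity).
        """
        m, n = A.shape
        x = np.ones(n)
        for _ in range(n_iter):
            w = np.maximum.reduce([np.abs(np.roll(x, s)) for s in (-1, 0, 1)])
            W = np.diag(w + eps)
            # Weighted minimum-norm solution: x = W^2 A^T (A W^2 A^T)^+ b
            x = W @ W @ A.T @ np.linalg.pinv(A @ W @ W @ A.T) @ b
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((10, 40))
    x_true = np.zeros(40); x_true[[5, 20]] = [1.0, -2.0]
    x_hat = neighbor_weight_focuss(A, A @ x_true)
    print(np.flatnonzero(np.abs(x_hat) > 0.1))   # indices of the significant recovered entries
    ```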

  11. Layout compliance for triple patterning lithography: an iterative approach

    NASA Astrophysics Data System (ADS)

    Yu, Bei; Garreton, Gilda; Pan, David Z.

    2014-10-01

    As the semiconductor process further scales down, the industry encounters many lithography-related issues. In the 14nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. As one of the most challenging problems in TPL, layout decomposition has recently received more attention from both industry and academia. Ideally the decomposer should point out locations in the layout that are not triple patterning decomposable and therefore require manual intervention by designers. A traditional decomposition flow would be an iterative process, where each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational time, and therefore design closure issues linger in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts, but also provides suggestions to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer can provide a quick solution for a selected portion of the layout. We believe this framework is efficient in terms of performance and designer friendly.

  12. Periodic Pulay method for robust and efficient convergence acceleration of self-consistent field iterations

    DOE PAGES

    Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.

    2016-01-21

    Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, and linear mixing is performed on all other iterations. Lastly, we demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
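    A minimal sketch of the periodic-Pulay idea for a generic fixed-point problem x = g(x) is given below: plain linear mixing on most iterations, with a Pulay-type extrapolation (written here in the equivalent Anderson difference form) applied only every period-th iteration. This is an illustration of the scheme described in the abstract, not the authors' electronic-structure code; the test problem is a made-up linear fixed-point map.

    ```python
    import numpy as np

    def periodic_pulay(g, x0, beta=0.3, period=3, history=5, tol=1e-10, max_iter=500):
        """Solve the fixed-point problem x = g(x).

        Plain linear mixing on most iterations; a Pulay/Anderson-type
        extrapolation over the stored history every `period`-th iteration.
        """
        x = np.asarray(x0, dtype=float)
        X, F = [], []                                # iterate and residual history
        for it in range(1, max_iter + 1):
            f = g(x) - x                             # residual of the fixed-point map
            if np.linalg.norm(f) < tol:
                return x, it
            X.append(x.copy()); F.append(f.copy())
            X, F = X[-(history + 1):], F[-(history + 1):]
            if it % period == 0 and len(F) > 1:
                # Pulay step in difference form (least-squares, rank-deficiency safe)
                dX = np.diff(np.array(X), axis=0).T  # columns: x_{j+1} - x_j
                dF = np.diff(np.array(F), axis=0).T  # columns: f_{j+1} - f_j
                gamma = np.linalg.lstsq(dF, f, rcond=None)[0]
                x = x + beta * f - (dX + beta * dF) @ gamma
            else:
                x = x + beta * f                     # plain linear mixing
        raise RuntimeError("fixed point not reached")

    # Example: linear fixed-point problem x = Ax + b
    A = np.array([[0.5, 0.2], [0.1, 0.4]])
    b = np.array([1.0, 2.0])
    x, n_it = periodic_pulay(lambda x: A @ x + b, np.zeros(2))
    print(x, n_it)                                   # converges to (I - A)^{-1} b
    ```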

  13. Block Gauss elimination followed by a classical iterative method for the solution of linear systems

    NASA Astrophysics Data System (ADS)

    Alanelli, Maria; Hadjidimos, Apostolos

    2004-02-01

    In the last two decades many papers have appeared in which the application of an iterative method for the solution of a linear system is preceded by a step of the Gauss elimination process, in the hope that this will increase the rate of convergence of the iterative method. This combination of methods has proven successful especially when the matrix A of the system is an M-matrix. The purpose of this paper is to extend the idea from one to several Gauss elimination steps, to consider other classes of matrices A, e.g., p-cyclic consistently ordered ones, and to generalize and improve the asymptotic convergence rates of some of the methods known so far.
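    The combination discussed above can be sketched generically: perform a few steps of Gaussian elimination on the system, hand the reduced trailing block to a classical iterative method (Gauss-Seidel in the sketch below), and finish with back-substitution for the eliminated unknowns. This is an illustration of the idea only, with a made-up test matrix; it does not reproduce the paper's p-cyclic analysis.

    ```python
    import numpy as np

    def eliminate_then_gauss_seidel(A, b, k=1, n_iter=200, tol=1e-10):
        """Apply k Gauss elimination steps, run Gauss-Seidel on the reduced
        (n-k) x (n-k) block, then back-substitute the eliminated unknowns."""
        A = A.astype(float).copy(); b = b.astype(float).copy()
        n = len(b)
        # k forward-elimination steps (no pivoting, for illustration only)
        for p in range(k):
            for i in range(p + 1, n):
                m = A[i, p] / A[p, p]
                A[i, p:] -= m * A[p, p:]
                b[i] -= m * b[p]
        # Gauss-Seidel sweeps on the trailing block
        x = np.zeros(n)
        for _ in range(n_iter):
            x_old = x.copy()
            for i in range(k, n):
                s = A[i, k:] @ x[k:] - A[i, i] * x[i]   # off-diagonal contribution
                x[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(x - x_old) < tol:
                break
        # Back-substitution for the k eliminated unknowns
        for i in range(k - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    A = np.array([[4.0, -1, 0], [-1, 4, -1], [0, -1, 4]])
    b = np.array([1.0, 2, 3])
    print(eliminate_then_gauss_seidel(A, b, k=1))
    print(np.linalg.solve(A, b))   # reference solution
    ```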

  14. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    PubMed

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

    In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a novel successive relaxation (SG-SR) iterative method to the relaxation factor, additional improvement in the convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore the computation time of the experimentally measured raw Raman spectrum processing from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be conducted within dozens of milliseconds, which can provide a real-time procedure in practical situations.
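    The flavor of the approach, iterative Savitzky-Golay smoothing with a relaxation factor applied to the baseline update, can be sketched as below. This is a schematic peak-stripping-style stand-in written around scipy's savgol_filter, with an assumed relaxation rule and a synthetic spectrum; it is not the published RIA-SG-SR algorithm.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    def iterative_sg_baseline(spectrum, window=101, polyorder=3, omega=1.2, n_iter=30):
        """Schematic iterative Savitzky-Golay baseline (fluorescence) estimate.

        Each pass smooths the current baseline estimate, clips it so it never
        exceeds the measured spectrum, and applies a relaxation factor `omega`
        to the update (values slightly above 1 over-relax the standard
        Savitzky-Golay iteration to speed convergence).
        """
        spectrum = np.asarray(spectrum, dtype=float)
        baseline = spectrum.copy()
        for _ in range(n_iter):
            smoothed = savgol_filter(baseline, window, polyorder)
            update = np.minimum(smoothed, spectrum) - baseline   # keep baseline under the data
            baseline = baseline + omega * update                 # relaxed update
        return baseline

    # Synthetic Raman-like spectrum: two narrow peaks on a broad fluorescence background
    x = np.linspace(0, 1, 2000)
    background = 5.0 * np.exp(-((x - 0.4) ** 2) / 0.3)
    peaks = np.exp(-((x - 0.3) ** 2) / 1e-4) + 0.7 * np.exp(-((x - 0.7) ** 2) / 1e-4)
    spectrum = background + peaks
    raman = spectrum - iterative_sg_baseline(spectrum)           # background-subtracted signal
    print(raman.max(), raman.min())
    ```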

  15. Low-temperature tensile strength of the ITER-TF model coil insulation system after reactor irradiation

    NASA Astrophysics Data System (ADS)

    Bittner-Rohrhofer, K.; Humer, K.; Weber, H. W.

    The windings of the superconducting magnet coils for the ITER-FEAT fusion device are affected by high mechanical stresses at cryogenic temperatures and by a radiation environment, which impose certain constraints especially on the insulating materials. A glass fiber reinforced plastic (GFRP) laminate, which consists of Kapton/R-glass-fiber reinforcement tapes, vacuum-impregnated in a DGEBA epoxy system, was used for the European toroidal field model coil turn insulation of ITER. In order to assess its mechanical properties under the actual operating conditions of ITER-FEAT, cryogenic (77 K) static tensile tests and tension-tension fatigue measurements were done before and after irradiation to a fast neutron fluence of 1×10^22 m^-2 (E > 0.1 MeV), i.e. the ITER-FEAT design fluence level. We find that the mechanical strength and the fracture behavior of this GFRP are strongly influenced by the winding direction of the tape and by the radiation-induced delamination process. In addition, the composite swells by 3%, forming bubbles inside the laminate, and loses weight (1.4%) at the design fluence.

  16. Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks

    PubMed Central

    Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng

    2017-01-01

    High throughput, low latency and reliable communication has always been a hot topic for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the bad effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receive method consists of three stages. Firstly, a differential characteristics function is presented based on the optimal multiuser detection decision function; then, on the basis of differential characteristics, a preliminary threshold detection is utilized to find the potentially wrongly received bits; after that, an error bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection and error bit correction process described above are iteratively executed. Simulation results show that after only a few iterations the proposed multiuser detection method can achieve satisfactory BER performance. Besides, BER and near-far resistance performance are much better than traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328

  17. Tunable output-frequency filter algorithm for imaging through scattering media under LED illumination

    NASA Astrophysics Data System (ADS)

    Zhou, Meiling; Singh, Alok Kumar; Pedrini, Giancarlo; Osten, Wolfgang; Min, Junwei; Yao, Baoli

    2018-03-01

    We present a tunable output-frequency filter (TOF) algorithm to reconstruct the object from noisy experimental data under low-power partially coherent illumination, such as LED, when imaging through scattering media. In the iterative algorithm, we employ Gaussian functions with different filter windows at different stages of the iteration process to reduce corruption from experimental noise and to search for a global minimum in the reconstruction. In comparison with the conventional iterative phase retrieval algorithm, we demonstrate that the proposed TOF algorithm achieves consistent and reliable reconstruction in the presence of experimental noise. Moreover, the spatial resolution and distinctive features are retained in the reconstruction since the filter is applied only to the region outside the object. The feasibility of the proposed method is proved by experimental results.

  18. Getting Results: Small Changes, Big Cohorts and Technology

    ERIC Educational Resources Information Center

    Kenney, Jacqueline L.

    2012-01-01

    This paper presents an example of constructive alignment in practice. Integrated technology supports were deployed to increase the consistency between learning objectives, activities and assessment and to foster student-centred, higher-order learning processes in the unit. Modifications took place over nine iterations of a second-year Marketing…

  19. Iterative LQG Controller Design Through Closed-Loop Identification

    NASA Technical Reports Server (NTRS)

    Hsiao, Min-Hung; Huang, Jen-Kuang; Cox, David E.

    1996-01-01

    This paper presents an iterative Linear Quadratic Gaussian (LQG) controller design approach for a linear stochastic system with an uncertain open-loop model and unknown noise statistics. This approach consists of closed-loop identification and controller redesign cycles. In each cycle, the closed-loop identification method is used to identify an open-loop model and a steady-state Kalman filter gain from closed-loop input/output test data obtained by using a feedback LQG controller designed from the previous cycle. Then the identified open-loop model is used to redesign the state feedback. The state feedback and the identified Kalman filter gain are used to form an updated LQG controller for the next cycle. This iterative process continues until the updated controller converges. The proposed controller design is demonstrated by numerical simulations and experiments on a highly unstable large-gap magnetic suspension system.

  20. Cold Test and Performance Evaluation of Prototype Cryoline-X

    NASA Astrophysics Data System (ADS)

    Shah, N.; Choukekar, K.; Kapoor, H.; Muralidhara, S.; Garg, A.; Kumar, U.; Jadon, M.; Dash, B.; Bhattachrya, R.; Badgujar, S.; Billot, V.; Bravais, P.; Cadeau, P.

    2017-12-01

    The multi-process pipe vacuum jacketed cryolines for the ITER project are probably the world's most complex cryolines in terms of layout, load cases, quality, safety and regulatory requirements. As a risk mitigation plan, the design, manufacturing and testing of a prototype cryoline (PTCL) was planned before the approval of the final design of the ITER cryolines. The 29 meter long PTCL consists of 6 process pipes encased by a thermal shield inside an Outer Vacuum Jacket of DN 600 size and carries cold helium at 4.5 K and 80 K. The global heat load limit was defined as 1.2 W/m at 4.5 K and 4.5 W/m at 80 K. The PTCL-X (PTCL for Group-X cryolines) was specified in detail by ITER-India and designed as well as manufactured by Air Liquide. PTCL-X was installed and tested at cryogenic temperature at the ITER-India Cryogenic Laboratory in 2016. The heat load at 4.5 K and 80 K, estimated using the enthalpy difference method, was found to be approximately 0.8 W/m at 4.5 K and 4.2 W/m at 80 K, which is well within the defined limits. The thermal shield temperature profile was also found to be satisfactory. This paper summarizes the cold test results of PTCL-X.

  1. A biological phantom for evaluation of CT image reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Cammin, J.; Fung, G. S. K.; Fishman, E. K.; Siewerdsen, J. H.; Stayman, J. W.; Taguchi, K.

    2014-03-01

    In recent years, iterative algorithms have become popular in diagnostic CT imaging to reduce noise or radiation dose to the patient. The non-linear nature of these algorithms leads to non-linearities in the imaging chain. However, the methods to assess the performance of CT imaging systems were developed assuming the linear process of filtered backprojection (FBP). Those methods may not be suitable any longer when applied to non-linear systems. In order to evaluate the imaging performance, a phantom is typically scanned and the image quality is measured using various indices. For reasons of practicality, cost, and durability, those phantoms often consist of simple water containers with uniform cylinder inserts. However, these phantoms do not represent the rich structure and patterns of real tissue accurately. As a result, the measured image quality or detectability performance for lesions may not reflect the performance on clinical images. The discrepancy between estimated and real performance may be even larger for iterative methods which sometimes produce "plastic-like", patchy images with homogeneous patterns. Consequently, more realistic phantoms should be used to assess the performance of iterative algorithms. We designed and constructed a biological phantom consisting of porcine organs and tissue that models a human abdomen, including liver lesions. We scanned the phantom on a clinical CT scanner and compared basic image quality indices between filtered backprojection and an iterative reconstruction algorithm.

  2. Manufacture and mechanical characterisation of high voltage insulation for superconducting busbars - (Part 1) Materials selection and development

    NASA Astrophysics Data System (ADS)

    Clayton, N.; Crouchen, M.; Devred, A.; Evans, D.; Gung, C.-Y.; Lathwell, I.

    2017-04-01

    It is planned that the high voltage electrical insulation on the ITER feeder busbars will consist of interleaved layers of epoxy resin pre-impregnated glass tapes ('pre-preg') and polyimide. In addition to its electrical insulation function, the busbar insulation must have adequate mechanical properties to sustain the loads imposed on it during ITER magnet operation. This paper reports an investigation into suitable materials to manufacture the high voltage insulation for the ITER superconducting busbars and pipework. An R&D programme was undertaken in order to identify suitable pre-preg and polyimide materials from a range of suppliers. Pre-preg materials were obtained from 3 suppliers and used with Kapton HN, to make mouldings using the desired insulation architecture. Two main processing routes for pre-pregs have been investigated, namely vacuum bag processing (out of autoclave processing) and processing using a material with a high coefficient of thermal expansion (silicone rubber), to apply the compaction pressure on the insulation. Insulation should have adequate mechanical properties to cope with the stresses induced by the operating environment and a low void content necessary in a high voltage application. The quality of the mouldings was assessed by mechanical testing at 77 K and by the measurement of the void content.

  3. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, generally, the same calculations are repeated every time step. However, we cannot apply the Do-all or Do-across techniques for parallel processing of the simulation, since there exist data dependencies from the end of an iteration to the beginning of the next iteration and, furthermore, data input and data output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block which consists of arithmetic assignment statements, must be used. In the proposed method, near fine grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and are assigned to processors using optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near fine grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR) which has been developed to extract advantageous features of static scheduling algorithms to the maximum extent.

  4. Measures of Instruction for Creative Engagement: Making Metacognition, Modeling and Creative Thinking Visible

    ERIC Educational Resources Information Center

    Pitts, Christine; Anderson, Ross; Haney, Michele

    2018-01-01

    The purpose of the current study was to estimate reliability, internal consistency and construct validity of the Measure of Instruction for Creative Engagement (MICE) instrument. The MICE uses an iterative process of evidence collection and scoring through teacher observations to determine instructional domain ratings and overall scores. The…

  5. Comparisons of Observed Process Quality in German and American Infant/Toddler Programs

    ERIC Educational Resources Information Center

    Tietze, Wolfgang; Cryer, Debby

    2004-01-01

    Observed process quality in infant/toddler classrooms was compared in Germany (n = 75) and the USA (n = 219). Process quality was assessed with the Infant/Toddler Environment Rating Scale (ITERS) and parent attitudes about ITERS content with the ITERS Parent Questionnaire (ITERSPQ). The ITERS had comparable reliabilities in the two countries and…

  6. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles.

    PubMed

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-08-13

    In this paper, a novel iterative sparse extended information filter (ISEIF) was proposed to solve the simultaneous localization and mapping problem (SLAM), which is very crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. With the scalability advantage being kept, the consistency and accuracy of SEIF is improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates compared to SEIF and preserves the scalability advantage over EKF, as well.

  7. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles

    PubMed Central

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-01-01

    In this paper, a novel iterative sparse extended information filter (ISEIF) was proposed to solve the simultaneous localization and mapping problem (SLAM), which is very crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. With the scalability advantage being kept, the consistency and accuracy of SEIF is improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates compared to SEIF and preserves the scalability advantage over EKF, as well. PMID:26287194

  8. Systems and methods for optimal power flow on a radial network

    DOEpatents

    Low, Steven H.; Peng, Qiuyu

    2018-04-24

    Node controllers and power distribution networks in accordance with embodiments of the invention enable distributed power control. One embodiment includes a node controller including a distributed power control application; a plurality of node operating parameters describing the operating parameters of a node and a set of at least one node selected from the group consisting of an ancestor node and at least one child node; wherein the node controller is configured to: send node operating parameters to nodes in the set of at least one node; receive operating parameters from the nodes in the set of at least one node; calculate a plurality of updated node operating parameters using an iterative process to determine the updated node operating parameters using the node operating parameters that describe the operating parameters of the node and the set of at least one node, where the iterative process involves evaluation of a closed form solution; and adjust node operating parameters.

  9. A Least-Squares Commutator in the Iterative Subspace Method for Accelerating Self-Consistent Field Convergence.

    PubMed

    Li, Haichen; Yaron, David J

    2016-11-08

    A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing a sum of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.

  10. Developing a medication communication framework across continuums of care using the Circle of Care Modeling approach.

    PubMed

    Kitson, Nicole A; Price, Morgan; Lau, Francis Y; Showler, Grey

    2013-10-17

    Medication errors are a common type of preventable errors in health care causing unnecessary patient harm, hospitalization, and even fatality. Improving communication between providers and between providers and patients is a key aspect of decreasing medication errors and improving patient safety. Medication management requires extensive collaboration and communication across roles and care settings, which can reduce (or contribute to) medication-related errors. Medication management involves key recurrent activities (determine need, prescribe, dispense, administer, and monitor/evaluate) with information communicated within and between each. Despite its importance, there is a lack of conceptual models that explore medication communication specifically across roles and settings. This research seeks to address that gap. The Circle of Care Modeling (CCM) approach was used to build a model of medication communication activities across the circle of care. CCM positions the patient in the centre of his or her own healthcare system; providers and other roles are then modeled around the patient as a web of relationships. Recurrent medication communication activities were mapped to the medication management framework. The research occurred in three iterations, to test and revise the model: Iteration 1 consisted of a literature review and internal team discussion, Iteration 2 consisted of interviews, observation, and a discussion group at a Community Health Centre, and Iteration 3 consisted of interviews and a discussion group in the larger community. Each iteration provided further detail to the Circle of Care medication communication model. Specific medication communication activities were mapped along each communication pathway between roles and to the medication management framework. We could not map all medication communication activities to the medication management framework; we added Coordinate as a separate and distinct recurrent activity. We saw many examples of coordination activities, for instance, Medical Office Assistants acting as a liaison between pharmacists and family physicians to clarify prescription details. Through the use of CCM we were able to unearth tacitly held knowledge to expand our understanding of medication communication. Drawing out the coordination activities could be a missing piece for us to better understand how to streamline and improve multi-step communication processes with a goal of improving patient safety.

  11. Developing a medication communication framework across continuums of care using the Circle of Care Modeling approach

    PubMed Central

    2013-01-01

    Background Medication errors are a common type of preventable errors in health care causing unnecessary patient harm, hospitalization, and even fatality. Improving communication between providers and between providers and patients is a key aspect of decreasing medication errors and improving patient safety. Medication management requires extensive collaboration and communication across roles and care settings, which can reduce (or contribute to) medication-related errors. Medication management involves key recurrent activities (determine need, prescribe, dispense, administer, and monitor/evaluate) with information communicated within and between each. Despite its importance, there is a lack of conceptual models that explore medication communication specifically across roles and settings. This research seeks to address that gap. Methods The Circle of Care Modeling (CCM) approach was used to build a model of medication communication activities across the circle of care. CCM positions the patient in the centre of his or her own healthcare system; providers and other roles are then modeled around the patient as a web of relationships. Recurrent medication communication activities were mapped to the medication management framework. The research occurred in three iterations, to test and revise the model: Iteration 1 consisted of a literature review and internal team discussion, Iteration 2 consisted of interviews, observation, and a discussion group at a Community Health Centre, and Iteration 3 consisted of interviews and a discussion group in the larger community. Results Each iteration provided further detail to the Circle of Care medication communication model. Specific medication communication activities were mapped along each communication pathway between roles and to the medication management framework. We could not map all medication communication activities to the medication management framework; we added Coordinate as a separate and distinct recurrent activity. We saw many examples of coordination activities, for instance, Medical Office Assistants acting as a liaison between pharmacists and family physicians to clarify prescription details. Conclusions Through the use of CCM we were able to unearth tacitly held knowledge to expand our understanding of medication communication. Drawing out the coordination activities could be a missing piece for us to better understand how to streamline and improve multi-step communication processes with a goal of improving patient safety. PMID:24134454

  12. Gaussian mixed model in support of semiglobal matching leveraged by ground control points

    NASA Astrophysics Data System (ADS)

    Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li

    2017-04-01

    Semiglobal matching (SGM) has been widely applied in large aerial images because of its good tradeoff between complexity and robustness. The concept of ground control points (GCPs) is adopted to make SGM more robust. We model the effect of GCPs as two data terms for stereo matching between high-resolution aerial epipolar images in an iterative scheme. One term based on GCPs is formulated by a Gaussian mixture model, which strengthens the relation between GCPs and the pixels to be estimated and encodes some degree of consistency between them with respect to disparity values. Another term depends on pixel-wise confidence, and we further design a confidence updating equation based on three rules. With this confidence-based term, the assignment of disparity can be heuristically selected among disparity search ranges during the iteration process. Several iterations are sufficient to bring out satisfactory results according to our experiments. Experimental results validate that the proposed method outperforms surface reconstruction, which is a representative variant of SGM and behaves excellently on aerial images.

  13. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    PubMed

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the superresolution iterations. A quantitative evaluation of the performance of these algorithms for restoring and superresolving various imagery data captured by diffraction-limited sensing operations is also presented.

  14. Iterative deep convolutional encoder-decoder network for medical image segmentation.

    PubMed

    Jung Uk Kim; Hak Gu Kim; Yong Man Ro

    2017-07-01

    In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We have combined an iterative learning approach and an encoder-decoder network to improve segmentation results, which enables precise localization of the regions of interest (ROIs), including complex shapes or detailed textures of medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework is able to yield excellent medical image segmentation performance for various medical images. The effectiveness of the proposed method has been proved by comparison with other state-of-the-art medical image segmentation methods.

  15. Parabolized Navier-Stokes Code for Computing Magneto-Hydrodynamic Flowfields

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B. (Technical Monitor); Tannehill, J. C.

    2003-01-01

    This report consists of two published papers, 'Computation of Magnetohydrodynamic Flows Using an Iterative PNS Algorithm' and 'Numerical Simulation of Turbulent MHD Flows Using an Iterative PNS Algorithm'.

  16. The Effect of Iteration on the Design Performance of Primary School Children

    ERIC Educational Resources Information Center

    Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.

    2015-01-01

    Iteration during the design process is an essential element. Engineers optimize their design by iteration. Research on iteration in Primary Design Education is however scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practices. Spontaneous playing behavior of children indicates that iteration fits in…

  17. ITER Central Solenoid Module Fabrication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, John

    The fabrication of the modules for the ITER Central Solenoid (CS) has started in a dedicated production facility located in Poway, California, USA. The necessary tools have been designed, built, installed, and tested in the facility to enable the start of production. The current schedule has first module fabrication completed in 2017, followed by testing and subsequent shipment to ITER. The Central Solenoid is a key component of the ITER tokamak providing the inductive voltage to initiate and sustain the plasma current and to position and shape the plasma. The design of the CS has been a collaborative effort between the US ITER Project Office (US ITER), the international ITER Organization (IO) and General Atomics (GA). GA's responsibility includes: completing the fabrication design, developing and qualifying the fabrication processes and tools, and then completing the fabrication of the seven 110 tonne CS modules. The modules will be shipped separately to the ITER site, and then stacked and aligned in the Assembly Hall prior to insertion in the core of the ITER tokamak. A dedicated facility in Poway, California, USA has been established by GA to complete the fabrication of the seven modules. Infrastructure improvements included thick reinforced concrete floors, a diesel generator for backup power, along with cranes for moving the tooling within the facility. The fabrication process for a single module requires approximately 22 months followed by five months of testing, which includes preliminary electrical testing followed by high current (48.5 kA) tests at 4.7K. The production of the seven modules is completed in a parallel fashion through ten process stations. The process stations have been designed and built with most stations having completed testing and qualification for carrying out the required fabrication processes. The final qualification step for each process station is achieved by the successful production of a prototype coil. Fabrication of the first ITER module is in progress. The seven modules will be individually shipped to Cadarache, France upon their completion. This paper describes the processes and status of the fabrication of the CS Modules for ITER.

  18. Iterative Methods for the Non-LTE Transfer of Polarized Radiation: Resonance Line Polarization in One-dimensional Atmospheres

    NASA Astrophysics Data System (ADS)

    Trujillo Bueno, Javier; Manso Sainz, Rafael

    1999-05-01

    This paper shows how to generalize to non-LTE polarization transfer some operator splitting methods that were originally developed for solving unpolarized transfer problems. These are the Jacobi-based accelerated Λ-iteration (ALI) method of Olson, Auer, & Buchler and the iterative schemes based on Gauss-Seidel and successive overrelaxation (SOR) iteration of Trujillo Bueno and Fabiani Bendicho. The theoretical framework chosen for the formulation of polarization transfer problems is the quantum electrodynamics (QED) theory of Landi Degl'Innocenti, which specifies the excitation state of the atoms in terms of the irreducible tensor components of the atomic density matrix. This first paper establishes the grounds of our numerical approach to non-LTE polarization transfer by concentrating on the standard case of scattering line polarization in a gas of two-level atoms, including the Hanle effect due to a weak microturbulent and isotropic magnetic field. We begin demonstrating that the well-known Λ-iteration method leads to the self-consistent solution of this type of problem if one initializes using the ``exact'' solution corresponding to the unpolarized case. We show then how the above-mentioned splitting methods can be easily derived from this simple Λ-iteration scheme. We show that our SOR method is 10 times faster than the Jacobi-based ALI method, while our implementation of the Gauss-Seidel method is 4 times faster. These iterative schemes lead to the self-consistent solution independently of the chosen initialization. The convergence rate of these iterative methods is very high; they do not require either the construction or the inversion of any matrix, and the computing time per iteration is similar to that of the Λ-iteration method.
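    For a generic linear system Ax = b, the Jacobi, Gauss-Seidel, and SOR schemes mentioned above differ only in how each unknown is updated during a sweep; a textbook SOR sweep is sketched below (omega = 1 recovers Gauss-Seidel). This is a generic illustration of the iteration, not the polarized radiative transfer operator-splitting code itself.

    ```python
    import numpy as np

    def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
        """Successive over-relaxation for Ax = b.

        omega = 1 reduces to Gauss-Seidel; 1 < omega < 2 over-relaxes,
        which is where the speed-up over Jacobi/Gauss-Seidel comes from.
        """
        n = len(b)
        x = np.zeros(n)
        for it in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                sigma = A[i, :] @ x - A[i, i] * x[i]   # uses latest values (Gauss-Seidel-style sweep)
                x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
            if np.linalg.norm(x - x_old) < tol:
                return x, it + 1
        raise RuntimeError("SOR did not converge")

    # 1-D Laplacian test problem
    n = 50
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x, iters = sor(A, b, omega=1.8)
    print(iters, np.linalg.norm(A @ x - b))
    ```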

  19. Perl Modules for Constructing Iterators

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2009-01-01

    The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and ICal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an Ical/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
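    As a conceptual analog only (the modules described above are Perl, and their exact API is not reproduced here), a Python generator can mimic the Iterator::Hash behavior of yielding every permutation of the iterable-valued entries of a hash while leaving scalar entries fixed:

    ```python
    from itertools import product

    def hash_iterator(spec):
        """Conceptual Python analog of nested hash iteration: yield every
        combination of the iterable-valued entries of a dict, keeping
        scalar entries fixed."""
        varying = {k: v for k, v in spec.items() if isinstance(v, (list, tuple, range))}
        fixed = {k: v for k, v in spec.items() if k not in varying}
        keys = list(varying)
        for combo in product(*(varying[k] for k in keys)):
            yield {**fixed, **dict(zip(keys, combo))}

    # Hypothetical example spec: two iterable entries and one fixed entry
    for h in hash_iterator({"instrument": "MODIS", "day": range(1, 4), "band": ["red", "nir"]}):
        print(h)
    ```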

  20. Unsupervised iterative detection of land mines in highly cluttered environments.

    PubMed

    Batman, Sinan; Goutsias, John

    2003-01-01

    An unsupervised iterative scheme is proposed for land mine detection in heavily cluttered scenes. This scheme is based on iterating hybrid multispectral filters that consist of a decorrelating linear transform coupled with a nonlinear morphological detector. Detections extracted from the first pass are used to improve results in subsequent iterations. The procedure stops after a predetermined number of iterations. The proposed scheme addresses several weaknesses associated with previous adaptations of morphological approaches to land mine detection. Improvement in detection performance, robustness with respect to clutter inhomogeneities, a completely unsupervised operation, and computational efficiency are the main highlights of the method. Experimental results reveal excellent performance.

  1. Automating Rule Strengths in Expert Systems.

    DTIC Science & Technology

    1987-05-01

    systems were designed in an incremental, iterative way. One of the most easily identifiable phases in this process, sometimes called tuning, consists...attenuators. The designer of the knowledge-based system must determine (synthesize) or adjust (refine, if estimates of the values are given) these...values. We consider two ways in which the designer can learn the values. We call the first model of learning the complete case and the second model the

  2. Morphing Aircraft Structures: Research in AFRL/RB

    DTIC Science & Technology

    2008-09-01

    various iterative steps in the process, etc. The solver also internally controls the step size for integration, as this is independent of the step...Coupling of Substructures for Dynamic Analyses,” AIAA Journal, Vol. 6, No. 7, 1968, pp. 1313-1319. 2“Using the State-Dependent Modal Force (MFORCE),” AFL...an actuation system consisting of multiple internal actuators, centrally computer controlled to implement any commanded morphing configuration; and

  3. Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution

    DTIC Science & Technology

    2015-06-08

    deconvolution, space surveillance, Gauss-Seidel iteration...sensible approximate solutions to the ill-posed nonlinear inverse problem. These solutions are addressed as fixed points of the iteration, which consists in...alternating approximations (AA) for the object and for the PSF performed with a prescribed number of inner iterative descents from trivial (zero

  4. Iterative learning-based decentralized adaptive tracker for large-scale systems: a digital redesign approach.

    PubMed

    Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua

    2011-07-01

    In this paper, a digital redesign methodology of the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory which may not be presented by the analytic reference model initially. To overcome the interference of each sub-system and simplify the controller design, the proposed model reference decentralized adaptive control scheme constructs a decoupled well-designed reference model first. Then, according to the well-designed model, this paper develops a digital decentralized adaptive tracker based on the optimal analog control and prediction-based digital redesign technique for the sampled-data large-scale coupling system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, we apply the iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has robust closed-loop decoupled property but also possesses good tracking performance at both transient and steady state. Besides, evolutionary programming is applied to search for a good learning gain to speed up the learning process of ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
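    The iterative learning control step at the heart of the scheme, refining the control input over repeated trials of the same tracking task, can be sketched with a simple P-type ILC update u_{k+1}(t) = u_k(t) + L*e_k(t+1) on a toy first-order plant. This is a generic illustration with an assumed plant and a fixed learning gain, not the paper's decentralized adaptive tracker (which, among other things, searches for a good gain via evolutionary programming).

    ```python
    import numpy as np

    # Toy first-order discrete-time plant: y[t+1] = a*y[t] + b*u[t]
    a, b = 0.8, 0.5
    T = 50
    y_ref = np.sin(np.linspace(0, 2 * np.pi, T + 1))   # reference trajectory

    def run_trial(u):
        """Simulate one trial of the plant driven by control sequence u."""
        y = np.zeros(T + 1)
        for t in range(T):
            y[t + 1] = a * y[t] + b * u[t]
        return y

    u = np.zeros(T)          # control input, refined across repeated trials
    L = 1.0                  # learning gain (assumed fixed here)
    for trial in range(30):
        y = run_trial(u)
        e = y_ref - y                        # tracking error for this trial
        u = u + L * e[1:]                    # P-type ILC update: u_{k+1}(t) = u_k(t) + L*e_k(t+1)

    print(np.max(np.abs(y_ref[1:] - run_trial(u)[1:])))   # residual tracking error
    ```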

  5. Modern Workflow Full Waveform Inversion Applied to North America and the Northern Atlantic

    NASA Astrophysics Data System (ADS)

    Krischer, Lion; Fichtner, Andreas; Igel, Heiner

    2015-04-01

    We present the current state of a new seismic tomography model obtained using full waveform inversion of the crustal and upper mantle structure beneath North America and the Northern Atlantic, including the westernmost part of Europe. Parts of the eastern portion of the initial model consist of previous models by Fichtner et al. (2013) and Rickers et al. (2013). The final results of this study will contribute to the 'Comprehensive Earth Model' being developed by the Computational Seismology group at ETH Zurich. Significant challenges include the size of the domain, the uneven event and station coverage, and the strong east-west alignment of seismic ray paths across the North Atlantic. We use as much data as feasible, resulting in several thousand recordings per event depending on the receivers deployed at the earthquakes' origin times. To manage such projects in a reproducible and collaborative manner, we, as tomographers, should abandon ad-hoc scripts and one-time programs, and adopt sustainable and reusable solutions. Therefore we developed the LArge-scale Seismic Inversion Framework (LASIF - http://lasif.net), an open-source toolbox for managing seismic data in the context of non-linear iterative inversions that greatly reduces the time to research. Information on the applied processing, modelling, iterative model updating, what happened during each iteration, and so on is systematically archived. This results in a provenance record of the final model which in the end significantly enhances the reproducibility of iterative inversions. Additionally, tools for automated data download across different data centers, window selection, misfit measurements, parallel data processing, and input file generation for various forward solvers are provided.

  6. The Iterative Design Process in Research and Development: A Work Experience Paper

    NASA Technical Reports Server (NTRS)

    Sullivan, George F. III

    2013-01-01

    The iterative design process is one of many strategies used in new product development. Top-down development strategies, like waterfall development, place a heavy emphasis on planning and simulation. The iterative process, on the other hand, is better suited to the management of small to medium scale projects. Over the past four months, I have worked with engineers at Johnson Space Center on a multitude of electronics projects. By describing the work I have done these last few months, analyzing the factors that have driven design decisions, and examining the testing and verification process, I will demonstrate that iterative design is the obvious choice for research and development projects.

  7. The ATLAS Public Web Pages: Online Management of HEP External Communication Content

    NASA Astrophysics Data System (ADS)

    Goldfarb, S.; Marcelloni, C.; Eli Phoboo, A.; Shaw, K.

    2015-12-01

    The ATLAS Education and Outreach Group is in the process of migrating its public online content to a professionally designed set of web pages built on the Drupal [1] content management system. Development of the front-end design passed through several key stages, including audience surveys, stakeholder interviews, usage analytics, and a series of fast design iterations, called sprints. Implementation of the web site involves application of the HTML design using Drupal templates, refined development iterations, and the overall population of the site with content. We present the design and development processes and share the lessons learned along the way, including the results of the data-driven discovery studies. We also demonstrate the advantages of selecting a back-end supported by content management, with a focus on workflow. Finally, we discuss usage of the new public web pages to implement outreach strategy through clearly presented themes, consistent audience targeting and messaging, and the enforcement of a well-defined visual identity.

  8. Automatic Synthesis of UML Designs from Requirements in an Iterative Process

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Whittle, Jon; Clancy, Daniel (Technical Monitor)

    2001-01-01

    The Unified Modeling Language (UML) is gaining wide popularity for the design of object-oriented systems. UML combines various object-oriented graphical design notations under one common framework. A major factor for the broad acceptance of UML is that it can be conveniently used in a highly iterative, Use Case (or scenario-based) process (although the process is not a part of UML). Here, the (pre-) requirements for the software are specified rather informally as Use Cases and a set of scenarios. A scenario can be seen as an individual trace of a software artifact. Besides first sketches of a class diagram to illustrate the static system breakdown, scenarios are a favorite way of communication with the customer, because scenarios describe concrete interactions between entities and are thus easy to understand. Scenarios with a high level of detail are often expressed as sequence diagrams. Later in the design and implementation stage (elaboration and implementation phases), a design of the system's behavior is often developed as a set of statecharts. From there (and the full-fledged class diagram), actual code development is started. Current commercial UML tools support this phase by providing code generators for class diagrams and statecharts. In practice, it can be observed that the transition from requirements to design to code is a highly iterative process. In this talk, a set of algorithms is presented which perform reasonable synthesis and transformations between different UML notations (sequence diagrams, Object Constraint Language (OCL) constraints, statecharts). More specifically, we will discuss the following transformations: Statechart synthesis, introduction of hierarchy, consistency of modifications, and "design-debugging".

  9. Model for Simulating a Spiral Software-Development Process

    NASA Technical Reports Server (NTRS)

    Mizell, Carolyn; Curley, Charles; Nayak, Umanath

    2010-01-01

    A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code), productivity (number of lines of code per hour), and number of defects per source line of code. The user provides the number of resources, the overall percent of effort that should be allocated to each process step, and the number of desired staff members for each step. The output of PATT includes the size of the product, a measure of effort, a measure of rework effort, the duration of the entire process, and the numbers of injected, detected, and corrected defects as well as a number of other interesting features. In the development of the present model, steps were added to the IEEE 12207 waterfall process, and this model and its implementing software were made to run repeatedly through the sequence of steps, each repetition representing an iteration in a spiral process. Because the IEEE 12207 model is founded on a waterfall paradigm, it enables direct comparison of spiral and waterfall processes. The model can be used throughout a software-development project to analyze the project as more information becomes available. For instance, data from early iterations can be used as inputs to the model, and the model can be used to estimate the time and cost of carrying the project to completion.
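
    The abstract describes what the simulator tracks per iteration (effort, rework, and defect injection, detection, and correction). The toy loop below is not PATT and uses made-up rates; it only sketches the kind of bookkeeping involved when a waterfall-like step sequence is repeated once per spiral iteration.

    ```python
    import random

    def simulate_spiral(iterations=4, loc_per_iter=2000, loc_per_hour=10.0,
                        defects_per_kloc=20, detect_prob=0.7, rework_hours_per_defect=2.0):
        # Hypothetical illustrative rates; the real model uses industry-average PATT inputs.
        total_effort, escaped = 0.0, 0
        for i in range(1, iterations + 1):
            effort = loc_per_iter / loc_per_hour                       # once-through effort
            injected = int(loc_per_iter / 1000 * defects_per_kloc)     # defects injected this pass
            detected = sum(random.random() < detect_prob
                           for _ in range(injected + escaped))         # found during testing
            escaped = injected + escaped - detected                    # carried into next iteration
            effort += detected * rework_hours_per_defect               # rework within the iteration
            total_effort += effort
            print(f"iteration {i}: effort = {effort:.0f} h, detected = {detected}, escaped = {escaped}")
        return total_effort

    print("total effort (h):", simulate_spiral())
    ```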

  10. The development of Drink Less: an alcohol reduction smartphone app for excessive drinkers.

    PubMed

    Garnett, Claire; Crane, David; West, Robert; Brown, Jamie; Michie, Susan

    2018-05-04

    Excessive alcohol consumption poses a serious problem for public health. Digital behavior change interventions have the potential to help users reduce their drinking. In accordance with Open Science principles, this paper describes the development of a smartphone app to help individuals who drink excessively to reduce their alcohol consumption. Following the UK Medical Research Council's guidance and the Multiphase Optimization Strategy, development consisted of two phases: (i) selection of intervention components and (ii) design and development work to implement the chosen components into modules to be evaluated further for inclusion in the app. Phase 1 involved a scoping literature review, expert consensus study and content analysis of existing alcohol apps. Findings were integrated within a broad model of behavior change (Capability, Opportunity, Motivation-Behavior). Phase 2 involved a highly iterative process and used the "Person-Based" approach to promote engagement. From Phase 1, five intervention components were selected: (i) Normative Feedback, (ii) Cognitive Bias Re-training, (iii) Self-monitoring and Feedback, (iv) Action Planning, and (v) Identity Change. Phase 2 indicated that each of these components presented different challenges for implementation as app modules; all required multiple iterations and design changes to arrive at versions that would be suitable for inclusion in a subsequent evaluation study. The development of the Drink Less app involved a thorough process of component identification with a scoping literature review, expert consensus, and review of other apps. Translation of the components into app modules required a highly iterative process involving user testing and design modification.

  11. Iterated intracochlear reflection shapes the envelopes of basilar-membrane click responses

    PubMed Central

    Shera, Christopher A.

    2015-01-01

    Multiple internal reflection of cochlear traveling waves has been argued to provide a plausible explanation for the waxing and waning and other temporal structures often exhibited by the envelopes of basilar-membrane (BM) and auditory-nerve responses to acoustic clicks. However, a recent theoretical analysis of a BM click response measured in chinchilla concludes that the waveform cannot have arisen via any equal, repetitive process, such as iterated intracochlear reflection [Wit and Bell (2015), J. Acoust. Soc. Am. 138, 94–96]. Reanalysis of the waveform contradicts this conclusion. The measured BM click response is used to derive the frequency-domain transfer function characterizing every iteration of the loop. The selfsame transfer function that yields waxing and waning of the BM click response also captures the spectral features of ear-canal stimulus-frequency otoacoustic emissions measured in the same animal, consistent with the predictions of multiple internal reflection. Small shifts in transfer-function phase simulate results at different measurement locations and reproduce the heterogeneity of BM click response envelopes observed experimentally. PMID:26723327

  12. Varying-energy CT imaging method based on EM-TV

    NASA Astrophysics Data System (ADS)

    Chen, Ping; Han, Yan

    2016-11-01

    For complicated structural components with wide x-ray attenuation ranges, conventional fixed-energy computed tomography (CT) imaging cannot obtain all the structural information. This limitation results in a shortage of CT information because the effective thickness of the components along the direction of x-ray penetration exceeds the dynamic range of the x-ray imaging system. To address this problem, a varying-energy x-ray CT imaging method is proposed. In this new method, the tube voltage is adjusted several times in small fixed increments. Next, grey-consistency fusion and logarithmic demodulation are applied to obtain a complete, lower-noise projection with a high dynamic range (HDR). In addition, to address the noise-suppression problem of the analytical method, EM-TV (expectation maximization-total variation) iterative reconstruction is used. In the iterative process, the reconstruction result obtained at one x-ray energy is used as the initial condition for the next iteration. An accompanying experiment demonstrates that this EM-TV reconstruction can also extend the dynamic range of x-ray imaging systems and provides higher reconstruction quality than the fusion reconstruction method.
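
    As a rough illustration of the reconstruction step described above, here is a minimal EM-TV style loop: a multiplicative EM (MLEM) update followed by a total-variation smoothing step, using scikit-image's Chambolle denoiser as a stand-in for the paper's TV update. The system matrix, weights, and the option of seeding with the previous energy's reconstruction are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle   # stand-in for the TV step

    def em_tv(A, y, shape, n_iter=50, tv_weight=0.05, x0=None):
        # Toy EM-TV loop: a multiplicative EM (MLEM) update followed by a TV step.
        # A: (n_rays, n_pixels) system matrix, y: measured projections.
        # x0 may be the reconstruction from the previous tube voltage (as in the paper).
        x = np.ones(A.shape[1]) if x0 is None else np.asarray(x0, dtype=float).ravel().copy()
        sensitivity = A.T @ np.ones(A.shape[0]) + 1e-12     # column sums of A
        for _ in range(n_iter):
            ratio = y / (A @ x + 1e-12)                     # measured / modelled projections
            x = x / sensitivity * (A.T @ ratio)             # EM (MLEM) multiplicative update
            x = denoise_tv_chambolle(x.reshape(shape), weight=tv_weight).ravel()
            x = np.clip(x, 0.0, None)                       # keep the image non-negative
        return x.reshape(shape)
    ```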

  13. FENDL: International reference nuclear data library for fusion applications

    NASA Astrophysics Data System (ADS)

    Pashchenko, A. B.; Wienke, H.; Ganesan, S.

    1996-10-01

    The IAEA Nuclear Data Section, in co-operation with several national nuclear data centres and research groups, has created the first version of an internationally available Fusion Evaluated Nuclear Data Library (FENDL-1). The FENDL library has been selected to serve as a comprehensive source of processed and tested nuclear data tailored to the requirements of the engineering design activity (EDA) of the ITER project and other fusion-related development projects. The present version of FENDL consists of the following sublibraries covering the necessary nuclear input for all physics and engineering aspects of the material development, design, operation and safety of the ITER project in its current EDA phase: FENDL/A-1.1: neutron activation cross-sections, selected from different available sources, for 636 nuclides; FENDL/D-1.0: nuclear decay data for 2900 nuclides in ENDF-6 format; FENDL/DS-1.0: neutron activation data for dosimetry by foil activation; FENDL/C-1.0: data for the fusion reactions D(d,n), D(d,p), T(d,n), T(t,2n), He-3(d,p) extracted from ENDF/B-6 and processed; FENDL/E-1.0: data for coupled neutron-photon transport calculations, including a data library for neutron interaction and photon production for 63 elements or isotopes, selected from ENDF/B-6, JENDL-3, or BROND-2, and a photon-atom interaction data library for 34 elements. The benchmark validation of FENDL-1 as required by the customer, i.e. the ITER team, is considered to be a task of high priority in the coming months. The well tested and validated nuclear data libraries in processed form of FENDL-2 are expected to be ready by mid 1996 for use by the ITER team in the final phase of the ITER EDA, after extensive benchmarking and integral validation studies in the 1995-1996 period. The FENDL data files can be electronically transferred to users from the IAEA Nuclear Data Section online system through INTERNET. A grand total of 54 (sub)directories with 845 files, with a total size of about 2 million blocks or about 1 Gigabyte (1 block = 512 bytes) of numerical data, is currently available on-line.

  14. Superconductivity modelling: Homogenization of Bean's model in three dimensions, and the problem of transverse conductivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bossavit, A.

    The authors show how to pass from the local Bean's model, assumed to be valid as a behavior law for a homogeneous superconductor, to a model of similar form, valid on a larger space scale. The process, which can be iterated to higher and higher space scales, consists in solving for the fields e and j over a "periodicity cell" with periodic boundary conditions.

  15. The closure approximation in the hierarchy equations.

    NASA Technical Reports Server (NTRS)

    Adomian, G.

    1971-01-01

    The expectation of the solution process in a stochastic operator equation can be obtained from averaged equations only under very special circumstances. Conditions for validity are given and the significance and validity of the approximation in widely used hierarchy methods and the 'self-consistent field' approximation in nonequilibrium statistical mechanics are clarified. The error at any level of the hierarchy can be given and can be avoided by the use of the iterative method.

  16. JWST Wavefront Control Toolbox

    NASA Technical Reports Server (NTRS)

    Shin, Shahram Ron; Aronstein, David L.

    2011-01-01

    A Matlab-based toolbox has been developed for the wavefront control and optimization of segmented optical surfaces to correct for possible misalignments of the James Webb Space Telescope (JWST) using influence functions. The toolbox employs both iterative and non-iterative methods to converge to an optimal solution by minimizing a cost function. It can be used in either constrained or unconstrained optimizations. The control process involves 1 to 7 degrees-of-freedom perturbations per segment of the primary mirror, in addition to the 5 degrees of freedom of the secondary mirror. The toolbox consists of a series of Matlab/Simulink functions and modules, developed based on a "wrapper" approach, that handle the interface and data flow between existing commercial optical modeling software packages such as Zemax and Code V. The limitations of the algorithm are dictated by the constraints of the moving parts in the mirrors.

  17. User input in iterative design for prevention product development: leveraging interdisciplinary methods to optimize effectiveness.

    PubMed

    Guthrie, Kate M; Rosen, Rochelle K; Vargas, Sara E; Guillen, Melissa; Steger, Arielle L; Getz, Melissa L; Smith, Kelley A; Ramirez, Jaime J; Kojic, Erna M

    2017-10-01

    The development of HIV-preventive topical vaginal microbicides has been challenged by a lack of sufficient adherence in later-stage clinical trials to confidently evaluate effectiveness. This dilemma has highlighted the need to integrate translational research earlier in the drug development process, essentially applying behavioral science to facilitate the advances of basic science with respect to the uptake and use of biomedical prevention technologies. In the last several years, there has been an increasing recognition that the user experience, specifically the sensory experience, as well as the role of meaning-making elicited by those sensations, may play a more substantive role than previously thought. Importantly, the role of the user (their sensory perceptions, their judgements of those experiences, and their willingness to use a product) is critical in product uptake and consistent use post-marketing, ultimately realizing gains in global public health. Specifically, a successful prevention product requires an efficacious drug, an efficient drug delivery system, and an effective user. We present an integrated iterative drug development and user experience evaluation method to illustrate how user-centered formulation design can be iterated from the early stages of preclinical development to leverage the user experience. Integrating the user and their product experiences into the formulation design process may help optimize both the efficiency of drug delivery and the effectiveness of the user.

  18. Implementing partnership-driven clinical federated electronic health record data sharing networks.

    PubMed

    Stephens, Kari A; Anderson, Nicholas; Lin, Ching-Ping; Estiri, Hossein

    2016-09-01

    Building federated data sharing architectures requires supporting a range of data owners, effective and validated semantic alignment between data resources, and a consistent focus on end-users. Establishing these resources requires development methodologies that support internal validation of data extraction and translation processes, sustain meaningful partnerships, and deliver clear and measurable system utility. We describe findings from two federated data sharing case examples that detail critical factors, shared outcomes, and production environment results. Two federated data sharing pilot architectures developed to support network-based research associated with the University of Washington's Institute of Translational Health Sciences provided the basis for the findings. A spiral model for implementation and evaluation was used to structure iterations of development and to support knowledge sharing between the two network development teams, which cross-collaborated to support and manage common stages. We found that using a spiral model of software development and multiple cycles of iteration was effective in achieving early network design goals. Both networks required time- and resource-intensive efforts to establish a trusted environment in which to create the data sharing architectures. Both networks were challenged by the need for adaptive use cases to define and test utility. An iterative cyclical model of development provided a process for developing trust with data partners and refining the design, and supported measurable success in the development of new federated data sharing architectures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. High resolution x-ray CMT: Reconstruction methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.K.

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
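
    A minimal example of the "iterative" category the paper describes, i.e. an algorithm that embeds a model of the imaging process (here a generic system matrix A) and repeatedly refines the image estimate. This Landweber-type sketch is a generic illustration, not an algorithm drawn from the paper.

    ```python
    import numpy as np

    def landweber(A, y, n_iter=100, step=None):
        # Iterative reconstruction: repeatedly correct the image estimate by
        # back-projecting the residual between measured and modelled projections.
        if step is None:
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # small enough step ensures convergence
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x + step * (A.T @ (y - A @ x))       # update uses the imaging model A explicitly
        return x
    ```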

  20. Achievements in the development of the Water Cooled Solid Breeder Test Blanket Module of Japan to the milestones for installation in ITER

    NASA Astrophysics Data System (ADS)

    Tsuru, Daigo; Tanigawa, Hisashi; Hirose, Takanori; Mohri, Kensuke; Seki, Yohji; Enoeda, Mikio; Ezato, Koichiro; Suzuki, Satoshi; Nishi, Hiroshi; Akiba, Masato

    2009-06-01

    As the primary candidate ITER Test Blanket Module (TBM) to be tested under the leadership of Japan, a water cooled solid breeder (WCSB) TBM is being developed. This paper presents recent achievements towards the pre-installation milestones of ITER TBMs, which consist of design integration in ITER, module qualification and safety assessment. With respect to design integration, targeting the detailed design final report in 2012, the structural designs of the WCSB TBM and the interfacing components (common frame and backside shielding) that are placed in a test port of ITER, together with the layout of the cooling system, are presented. As for module qualification, a real-scale first-wall mock-up fabricated by hot isostatic pressing from the reduced-activation martensitic ferritic steel F82H, along with flow and irradiation tests of the mock-up, is presented. As for the safety milestones, the contents of the 2008 preliminary safety report, consisting of source term identification, failure mode and effect analysis (FMEA), identification of postulated initiating events (PIEs) and safety analyses, are presented.

  1. Research on material removal accuracy analysis and correction of removal function during ion beam figuring

    NASA Astrophysics Data System (ADS)

    Wu, Weibin; Dai, Yifan; Zhou, Lin; Xu, Mingjin

    2016-09-01

    Material removal accuracy has a direct impact on the machining precision and efficiency of ion beam figuring. By analyzing the factors limiting the improvement of material removal accuracy, we conclude that correcting the removal-function deviation and reducing the amount of material removed during each iterative process can help to improve material removal accuracy. The removal-function correction principle can effectively compensate for the removal-function deviation between the actual figuring process and the simulated process, while experiments indicate that material removal accuracy decreases with long machining times, so removing only a small amount of material in each iterative process is suggested. However, this introduces more clamping and measuring steps, which also generate machining errors and limit the improvement of material removal accuracy. On this account, a free-measurement iterative process method is put forward to improve material removal accuracy and figuring efficiency by using fewer measuring and clamping steps. Finally, an experiment on a φ100 mm Zerodur plane is performed, which shows that, in similar figuring time, three free-measurement iterative processes improve the material removal accuracy and the surface error convergence rate by 62.5% and 17.6%, respectively, compared with a single iterative process.

  2. Accurate Micro-Tool Manufacturing by Iterative Pulsed-Laser Ablation

    NASA Astrophysics Data System (ADS)

    Warhanek, Maximilian; Mayr, Josef; Dörig, Christian; Wegener, Konrad

    2017-12-01

    Iterative processing solutions, comprising multiple cycles of material removal and measurement, are capable of achieving higher geometric accuracy by compensating for most deviations manifesting directly on the workpiece. The remaining error sources are the measurement uncertainty and the repeatability of the material-removal process, including clamping errors. Because pulsed-laser ablation involves no processing forces, process fluids or tool wear, it offers high repeatability and can be realized directly on a measuring machine. This work takes advantage of this possibility by implementing an iterative, laser-based correction process for profile deviations registered directly on an optical measurement machine. In this way, efficient iterative processing is enabled that is precise, applicable to all tool materials including diamond, and free of clamping errors. The concept is proven by a prototypical implementation on an industrial tool measurement machine and a nanosecond fibre laser. A number of measurements are performed on both the machine and the processed workpieces. Results show production deviations within a 2 μm diameter tolerance.

  3. A viscous flow analysis for the tip vortex generation process

    NASA Technical Reports Server (NTRS)

    Shamroth, S. J.; Briley, W. R.

    1979-01-01

    A three-dimensional, forward-marching, viscous flow analysis is applied to the tip vortex generation problem. The equations include a streamwise momentum equation, a streamwise vorticity equation, a continuity equation, and a secondary-flow stream function equation. The numerical method used combines a consistently split linearized scheme for parabolic equations with a scalar iterative ADI scheme for elliptic equations. The analysis is used to identify the source of the tip vortex generation process, as well as to obtain detailed flow results for a rectangular-planform wing immersed in a high Reynolds number free stream at 6 degrees incidence.

  4. A new iterative triclass thresholding technique in image segmentation.

    PubMed

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes separated by the threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of the two produced by the standard Otsu's method. The first two classes are determined as the foreground and background and are not processed further. The third class is denoted a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied to the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. The new TBD region is then processed in the same manner. The process stops when the difference between Otsu's thresholds calculated in two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
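
    A compact sketch of the triclass idea as described in the abstract, using scikit-image's Otsu threshold. Variable names and the stopping tolerance are illustrative, and edge cases (e.g., an empty class in the TBD region) are not handled; this is not the authors' reference implementation.

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu

    def iterative_triclass(image, eps=1e-3, max_iter=50):
        # Sketch of the iterative triclass idea: re-apply Otsu's threshold to a
        # shrinking to-be-determined (TBD) region between the two class means.
        fg = np.zeros(image.shape, dtype=bool)
        bg = np.zeros(image.shape, dtype=bool)
        tbd = np.ones(image.shape, dtype=bool)
        t_prev = None
        for _ in range(max_iter):
            vals = image[tbd]
            t = threshold_otsu(vals)
            mu_low = vals[vals <= t].mean()              # mean of the lower class
            mu_high = vals[vals > t].mean()              # mean of the upper class
            fg |= tbd & (image >= mu_high)               # confidently foreground
            bg |= tbd & (image <= mu_low)                # confidently background
            tbd = tbd & (image > mu_low) & (image < mu_high)   # new, smaller TBD region
            if (t_prev is not None and abs(t - t_prev) < eps) or not tbd.any():
                break
            t_prev = t
        fg |= tbd & (image > t)                          # resolve any remaining TBD pixels
        return fg
    ```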

  5. Analytic approximation for random muffin-tin alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, R.; Gray, L.J.; Kaplan, T.

    1983-03-15

    The methods introduced in a previous paper under the name of "traveling-cluster approximation" (TCA) are applied, in a multiple-scattering approach, to the case of a random muffin-tin substitutional alloy. This permits the iterative part of a self-consistent calculation to be carried out entirely in terms of on-the-energy-shell scattering amplitudes. Off-shell components of the mean resolvent, needed for the calculation of spectral functions, are obtained by standard methods involving single-site scattering wave functions. The single-site TCA is just the usual coherent-potential approximation, expressed in a form particularly suited for iteration. A fixed-point theorem is proved for the general t-matrix TCA, ensuring convergence upon iteration to a unique self-consistent solution with the physically essential Herglotz properties.

  6. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining the consistent maximum-likelihood estimates of the parameters of a mixture of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
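
    The abstract does not give the update equations, but the now-standard way to realize such an iterative maximum-likelihood procedure for a normal mixture is an EM-style loop. The two-component univariate sketch below is a generic modern illustration, not the authors' 1975 formulation.

    ```python
    import numpy as np

    def em_two_normals(x, n_iter=100):
        # Minimal EM iteration for a two-component univariate normal mixture.
        w, mu1, mu2 = 0.5, float(x.min()), float(x.max())
        s1 = s2 = float(x.std()) + 1e-12
        for _ in range(n_iter):
            # E-step: responsibility of component 1 for each observation.
            p1 = w * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
            p2 = (1.0 - w) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
            r = p1 / (p1 + p2 + 1e-300)
            # M-step: re-estimate mixing weight, means and standard deviations.
            w = r.mean()
            mu1, mu2 = np.average(x, weights=r), np.average(x, weights=1.0 - r)
            s1 = np.sqrt(np.average((x - mu1) ** 2, weights=r)) + 1e-12
            s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1.0 - r)) + 1e-12
        return w, (mu1, s1), (mu2, s2)
    ```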

  7. Networking Theories by Iterative Unpacking

    ERIC Educational Resources Information Center

    Koichu, Boris

    2014-01-01

    An iterative unpacking strategy consists of sequencing empirically-based theoretical developments so that at each step of theorizing one theory serves as an overarching conceptual framework, in which another theory, either existing or emerging, is embedded in order to elaborate on the chosen element(s) of the overarching theory. The strategy is…

  8. MO-B-BRB-01: Optimize Treatment Planning Process in Clinical Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, W.

    The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic Body Radiation Therapy (SBRT) and Radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demand a higher throughput for the treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced efficiency of planning is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include: Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS. Planning workflow: (a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative process. Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning. Reduction of delay between planning steps with Lean systems due to (a) communication, (b) limited resources, (c) contouring, (d) plan approval, (e) treatment. Optimizing planning processes: (a) contour validation, (b) consistent planning protocol, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklist for error prevention, (f) iterative process, (g) balance of speed and quality. Learning Objectives: Gain familiarity with the workflow of the modern treatment planning process. Understand the scope and challenges of managing modern treatment planning processes. Gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.

  9. MO-B-BRB-00: Optimizing the Treatment Planning Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic Body Radiation Therapy (SBRT) and Radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demand a higher throughput for the treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced efficiency of planning is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include: Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS. Planning workflow: (a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative process. Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning. Reduction of delay between planning steps with Lean systems due to (a) communication, (b) limited resources, (c) contouring, (d) plan approval, (e) treatment. Optimizing planning processes: (a) contour validation, (b) consistent planning protocol, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklist for error prevention, (f) iterative process, (g) balance of speed and quality. Learning Objectives: Gain familiarity with the workflow of the modern treatment planning process. Understand the scope and challenges of managing modern treatment planning processes. Gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.

  10. MO-B-BRB-03: Systems Engineering Tools for Treatment Planning Process Optimization in Radiation Medicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kapur, A.

    The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic Body Radiation Therapy (SBRT) and Radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demand a higher throughput for the treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced efficiency of planning is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include: Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS. Planning workflow: (a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative process. Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning. Reduction of delay between planning steps with Lean systems due to (a) communication, (b) limited resources, (c) contouring, (d) plan approval, (e) treatment. Optimizing planning processes: (a) contour validation, (b) consistent planning protocol, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklist for error prevention, (f) iterative process, (g) balance of speed and quality. Learning Objectives: Gain familiarity with the workflow of the modern treatment planning process. Understand the scope and challenges of managing modern treatment planning processes. Gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.

  11. Iteration and Prototyping in Creating Technical Specifications.

    ERIC Educational Resources Information Center

    Flynt, John P.

    1994-01-01

    Claims that the development process for computer software can be greatly aided by the writers of specifications if they employ basic iteration and prototyping techniques. Asserts that computer software configuration management practices provide ready models for iteration and prototyping. (HB)

  12. Iterative development of visual control systems in a research vivarium.

    PubMed

    Bassuk, James A; Washington, Ida M

    2014-01-01

    The goal of this study was to test the hypothesis that reintroduction of Continuous Performance Improvement (CPI) methodology, a lean approach to management at Seattle Children's (Hospital, Research Institute, Foundation), would facilitate engagement of vivarium employees in the development and sustainment of a daily management system and a work-in-process board. Such engagement was implemented through reintroduction of aspects of the Toyota Production System. Iterations of a Work-In-Process Board were generated using Shewhart's Plan-Do-Check-Act process improvement cycle. Specific attention was given to the importance of detecting and preventing errors through assessment of the following 5 levels of quality: Level 1, customer inspects; Level 2, company inspects; Level 3, work unit inspects; Level 4, self-inspection; Level 5, mistake proofing. A functioning iteration of a Mouse Cage Work-In-Process Board was eventually established using electronic data entry, an improvement that increased the quality level from 1 to 3 while reducing wasteful steps, handoffs and queues. A visual workplace was realized via a daily management system that included a Work-In-Process Board, a problem solving board and two Heijunka boards. One Heijunka board tracked cage changing as a function of a biological kanban, which was validated via ammonia levels. A 17% reduction in cage changing frequency provided vivarium staff with additional time to support Institute researchers in their mutual goal of advancing cures for pediatric diseases. Cage washing metrics demonstrated an improvement in the flow continuum in which a traditional batch and queue push system was replaced with a supermarket-type pull system. Staff engagement during the improvement process was challenging and is discussed. The collective data indicate that the hypothesis was found to be true. The reintroduction of CPI into daily work in the vivarium is consistent with the 4P Model of the Toyota Way and selected Principles that guide implementation of the Toyota Production System.

  13. Iterative Development of Visual Control Systems in a Research Vivarium

    PubMed Central

    Bassuk, James A.; Washington, Ida M.

    2014-01-01

    The goal of this study was to test the hypothesis that reintroduction of Continuous Performance Improvement (CPI) methodology, a lean approach to management at Seattle Children’s (Hospital, Research Institute, Foundation), would facilitate engagement of vivarium employees in the development and sustainment of a daily management system and a work-in-process board. Such engagement was implemented through reintroduction of aspects of the Toyota Production System. Iterations of a Work-In-Process Board were generated using Shewhart’s Plan-Do-Check-Act process improvement cycle. Specific attention was given to the importance of detecting and preventing errors through assessment of the following 5 levels of quality: Level 1, customer inspects; Level 2, company inspects; Level 3, work unit inspects; Level 4, self-inspection; Level 5, mistake proofing. A functioning iteration of a Mouse Cage Work-In-Process Board was eventually established using electronic data entry, an improvement that increased the quality level from 1 to 3 while reducing wasteful steps, handoffs and queues. A visual workplace was realized via a daily management system that included a Work-In-Process Board, a problem solving board and two Heijunka boards. One Heijunka board tracked cage changing as a function of a biological kanban, which was validated via ammonia levels. A 17% reduction in cage changing frequency provided vivarium staff with additional time to support Institute researchers in their mutual goal of advancing cures for pediatric diseases. Cage washing metrics demonstrated an improvement in the flow continuum in which a traditional batch and queue push system was replaced with a supermarket-type pull system. Staff engagement during the improvement process was challenging and is discussed. The collective data indicate that the hypothesis was found to be true. The reintroduction of CPI into daily work in the vivarium is consistent with the 4P Model of the Toyota Way and selected Principles that guide implementation of the Toyota Production System. PMID:24736460

  14. DART system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.

    2005-08-01

    The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the "once-through" time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the "once-through" time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase "inner loop" iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.

  15. Mechanical Characterization of the Iter Mock-Up Insulation after Reactor Irradiation

    NASA Astrophysics Data System (ADS)

    Prokopec, R.; Humer, K.; Fillunger, H.; Maix, R. K.; Weber, H. W.

    2010-04-01

    The ITER mock-up project was launched in order to demonstrate the feasibility of an industrial impregnation process using the new cyanate ester/epoxy blend. The mock-up simulates the TF winding pack cross section by a stainless steel structure with the same dimensions as the TF winding pack, at a length of 1 m. It consists of 7 plates simulating the double pancakes, each of which is wrapped with glass fiber/Kapton sandwich tapes. After stacking the 7 plates, additional insulation layers are wrapped to simulate the ground insulation. This paper presents the results of the mechanical quality tests on the mock-up pancake insulation. Tensile and short beam shear specimens were cut from the plates extracted from the mock-up and tested at 77 K using a servo-hydraulic material testing device. All tests were repeated after reactor irradiation to a fast neutron fluence of 1×10^22 m^-2 (E > 0.1 MeV). In order to simulate the pulsed operation of ITER, tension-tension fatigue measurements were performed in load-controlled mode. Initial results show a high mechanical strength, as expected from the high number of thin glass fiber layers, and an excellent homogeneity of the material.

  16. Computing eigenfunctions and eigenvalues of boundary-value problems with the orthogonal spectral renormalization method

    NASA Astrophysics Data System (ADS)

    Cartarius, Holger; Musslimani, Ziad H.; Schwarz, Lukas; Wunner, Günter

    2018-03-01

    The spectral renormalization method was introduced in 2005 as an effective way to compute ground states of nonlinear Schrödinger and Gross-Pitaevskii type equations. In this paper, we introduce an orthogonal spectral renormalization (OSR) method to compute ground and excited states (and their respective eigenvalues) of linear and nonlinear eigenvalue problems. The implementation of the algorithm follows four simple steps: (i) reformulate the underlying eigenvalue problem as a fixed-point equation, (ii) introduce a renormalization factor that controls the convergence properties of the iteration, (iii) perform a Gram-Schmidt orthogonalization process in order to prevent the iteration from converging to an unwanted mode, and (iv) compute the solution sought using a fixed-point iteration. The advantages of the OSR scheme over other known methods (such as Newton's and self-consistency) are (i) it allows the flexibility to choose large varieties of initial guesses without diverging, (ii) it is easy to implement especially at higher dimensions, and (iii) it can easily handle problems with complex and random potentials. The OSR method is implemented on benchmark Hermitian linear and nonlinear eigenvalue problems as well as linear and nonlinear non-Hermitian PT-symmetric models.
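
    The OSR fixed-point map itself is problem-specific, but the skeleton of steps (ii)-(iv) (renormalize, orthogonalize against already-converged modes, iterate a fixed-point map) can be sketched for a plain symmetric, linear Hamiltonian using shifted power iteration as the fixed-point map. This is a generic skeleton under those assumptions, not the OSR algorithm.

    ```python
    import numpy as np

    def orthogonalized_fixed_point(H, n_modes=3, n_iter=500, seed=0):
        # Skeleton of "orthogonalize, renormalize, iterate a fixed-point map" for the
        # lowest eigenmodes of a symmetric matrix H. The map used here is shifted
        # power iteration, chosen only for simplicity; it is NOT the OSR map.
        shift = np.linalg.norm(H, 2)                 # makes (shift*I - H) positive semi-definite
        A = shift * np.eye(H.shape[0]) - H           # largest modes of A == lowest modes of H
        rng = np.random.default_rng(seed)
        modes, energies = [], []
        for _ in range(n_modes):
            psi = rng.standard_normal(H.shape[0])
            for _ in range(n_iter):
                for phi in modes:                    # Gram-Schmidt against converged modes
                    psi -= (phi @ psi) * phi
                psi = A @ psi
                psi /= np.linalg.norm(psi)           # renormalization keeps the iteration bounded
            modes.append(psi)
            energies.append(float(psi @ H @ psi))    # Rayleigh quotient = eigenvalue estimate
        return np.array(energies), np.array(modes)
    ```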

  17. Seismic waveform tomography with shot-encoding using a restarted L-BFGS algorithm.

    PubMed

    Rao, Ying; Wang, Yanghua

    2017-08-17

    In seismic waveform tomography, or full-waveform inversion (FWI), one effective strategy used to reduce the computational cost is shot-encoding, which encodes all shots randomly and sums them into one super shot to significantly reduce the number of wavefield simulations in the inversion. However, this process induces instability in the iterative inversion, even when a robust limited-memory BFGS (L-BFGS) algorithm is used. The restarted L-BFGS algorithm proposed here is both stable and efficient. This breakthrough ensures, for the first time, the applicability of advanced FWI methods to three-dimensional seismic field data. In a standard L-BFGS algorithm, if the shot-encoding remains unchanged, it will generate a crosstalk effect between different shots. This crosstalk effect can only be suppressed by employing sufficient randomness in the shot-encoding. Therefore, the L-BFGS algorithm is restarted at every segment. Each segment consists of a number of iterations; the first few iterations use an invariant encoding, while the remainder use random re-encoding. This restarted L-BFGS algorithm balances the computational efficiency of shot-encoding, the convergence stability of the L-BFGS algorithm, and the inversion quality characteristic of random encoding in FWI.
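
    A structural sketch of the restart loop: each "segment" draws a fresh random ±1 encoding, runs a few L-BFGS iterations on the encoded misfit via SciPy, and warm-starts the next segment with the current model. The linear toy "modelling" operators remove the physics (and therefore most of the crosstalk), so this only illustrates the control flow, not the paper's FWI workflow or its mixed invariant/re-randomized encoding within a segment.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    n_shots, n_params = 8, 20
    Gs = [rng.standard_normal((40, n_params)) for _ in range(n_shots)]   # toy per-shot operators
    m_true = rng.standard_normal(n_params)
    data = [G @ m_true + 0.01 * rng.standard_normal(40) for G in Gs]     # toy observed shots

    def encoded_misfit(m, codes):
        # One encoded super-shot: code-weighted sums of observed and modelled data.
        b = sum(c * d for c, d in zip(codes, data))
        Am = sum(c * (G @ m) for c, G in zip(codes, Gs))
        r = b - Am
        return 0.5 * float(r @ r)

    m = np.zeros(n_params)
    for segment in range(10):                                  # restart L-BFGS every segment
        codes = rng.choice([-1.0, 1.0], size=n_shots)          # fresh random encoding
        res = minimize(encoded_misfit, m, args=(codes,),
                       method="L-BFGS-B", options={"maxiter": 5})
        m = res.x                                              # warm-start the next segment
    print("relative model error:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
    ```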

  18. An iterative learning strategy for the auto-tuning of the feedforward and feedback controller in type-1 diabetes.

    PubMed

    Fravolini, M L; Fabietti, P G

    2014-01-01

    This paper proposes a scheme for the control of the blood glucose in subjects with type-1 diabetes mellitus based on the subcutaneous (s.c.) glucose measurement and s.c. insulin administration. The tuning of the controller is based on an iterative learning strategy that exploits the repetitiveness of the daily feeding habit of a patient. The control consists of a mixed feedback and feedforward contribution whose parameters are tuned through an iterative learning process that is based on the day-by-day automated analysis of the glucose response to the infusion of exogenous insulin. The scheme does not require any a priori information on the patient insulin/glucose response, on the meal times and on the amount of ingested carbohydrates (CHOs). Thanks to the learning mechanism the scheme is able to improve its performance over time. A specific logic is also introduced for the detection and prevention of possible hypoglycaemia events. The effectiveness of the methodology has been validated using long-term simulation studies applied to a set of nine in silico patients considering realistic uncertainties on the meal times and on the quantities of ingested CHOs.

  19. Iterative non-sequential protein structural alignment.

    PubMed

    Salem, Saeed; Zaki, Mohammed J; Bystroff, Christopher

    2009-06-01

    Structural similarity between proteins gives us insights into their evolutionary relationships when there is low sequence similarity. In this paper, we present a novel approach called SNAP for non-sequential pair-wise structural alignment. Starting from an initial alignment, our approach iterates over a two-step process consisting of a superposition step and an alignment step, until convergence. We propose a novel greedy algorithm to construct both sequential and non-sequential alignments. The quality of SNAP alignments were assessed by comparing against the manually curated reference alignments in the challenging SISY and RIPC datasets. Moreover, when applied to a dataset of 4410 protein pairs selected from the CATH database, SNAP produced longer alignments with lower rmsd than several state-of-the-art alignment methods. Classification of folds using SNAP alignments was both highly sensitive and highly selective. The SNAP software along with the datasets are available online at http://www.cs.rpi.edu/~zaki/software/SNAP.

  20. Strategies for the coupling of global and local crystal growth models

    NASA Astrophysics Data System (ADS)

    Derby, Jeffrey J.; Lun, Lisa; Yeckel, Andrew

    2007-05-01

    The modular coupling of existing numerical codes to model crystal growth processes will provide for maximum effectiveness, capability, and flexibility. However, significant challenges are posed to make these coupled models mathematically self-consistent and algorithmically robust. This paper presents sample results from a coupling of the CrysVUn code, used here to compute furnace-scale heat transfer, and Cats2D, used to calculate melt fluid dynamics and phase-change phenomena, to form a global model for a Bridgman crystal growth system. However, the strategy used to implement the CrysVUn-Cats2D coupling is unreliable and inefficient. The implementation of under-relaxation within a block Gauss-Seidel iteration is shown to be ineffective for improving the coupling performance in a model one-dimensional problem representative of a melt crystal growth model. Ideas to overcome current convergence limitations using approximations to a full Newton iteration method are discussed.
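
    A minimal sketch of the block Gauss-Seidel coupling with under-relaxation discussed here, treating each single-physics code as a black-box function; the solver interfaces and the toy usage are assumptions for illustration, not the CrysVUn or Cats2D APIs.

    ```python
    import numpy as np

    def coupled_gauss_seidel(solve_furnace, solve_melt, x_melt0, omega=0.5,
                             tol=1e-8, max_iter=200):
        # Block Gauss-Seidel coupling of two black-box single-physics solvers,
        # with under-relaxation (omega < 1) of the exchanged melt-side solution.
        x_melt = np.asarray(x_melt0, dtype=float)
        for k in range(max_iter):
            x_furnace = solve_furnace(x_melt)    # e.g. furnace-scale heat transfer
            x_new = solve_melt(x_furnace)        # e.g. melt flow / phase change
            dx = x_new - x_melt
            x_melt = x_melt + omega * dx         # under-relaxed update
            if np.linalg.norm(dx) < tol * (1.0 + np.linalg.norm(x_melt)):
                return x_melt, k + 1
        return x_melt, max_iter

    # Toy usage with scalar "solvers" whose coupled fixed point is x = 2.
    furnace = lambda melt_field: 0.5 * melt_field + 1.0
    melt = lambda furnace_field: furnace_field
    solution, iterations = coupled_gauss_seidel(furnace, melt, np.array([0.0]))
    print(solution, iterations)
    ```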

  1. Composition of web services using Markov decision processes and dynamic programming.

    PubMed

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows how the solution of a WSC problem involving a set of 100,000 individual Web services and where a valid composition requiring the selection of 1,000 services from the available set can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, sarsa and Q-learning, shows that these algorithms require one or two orders of magnitude and more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
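
    For reference, the value iteration step used in such an MDP formulation can be written in a few lines; the tensor layout and the QoS-style reward matrix are assumptions for illustration, not the paper's implementation.

    ```python
    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-8):
        # P[a, s, s'] : transition probabilities; R[s, a] : immediate rewards
        # (e.g. QoS scores of candidate services). Returns V and a greedy policy.
        n_actions, n_states, _ = P.shape
        V = np.zeros(n_states)
        while True:
            Q = R + gamma * np.einsum("ast,t->sa", P, V)   # Bellman backup for all (s, a)
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, Q.argmax(axis=1)
            V = V_new
    ```

    Policy iteration and iterative policy evaluation differ only in alternating a policy-evaluation step with a greedy improvement step, which is consistent with the paper's finding that policy iteration needs the fewest sweeps.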

  2. MO-B-BRB-02: Maintain the Quality of Treatment Planning for Time-Constraint Cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, J.

    The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic Body Radiation Therapy (SBRT) and Radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demand a higher throughput for the treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced efficiency of planning is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include: Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS. Planning workflow: (a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative process. Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning. Reduction of delay between planning steps with Lean systems due to (a) communication, (b) limited resources, (c) contouring, (d) plan approval, (e) treatment. Optimizing planning processes: (a) contour validation, (b) consistent planning protocol, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklist for error prevention, (f) iterative process, (g) balance of speed and quality. Learning Objectives: Gain familiarity with the workflow of the modern treatment planning process. Understand the scope and challenges of managing modern treatment planning processes. Gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.

  3. Applicability of Kerker preconditioning scheme to the self-consistent density functional theory calculations of inhomogeneous systems

    NASA Astrophysics Data System (ADS)

    Zhou, Yuzhi; Wang, Han; Liu, Yu; Gao, Xingyu; Song, Haifeng

    2018-03-01

    The Kerker preconditioner, based on the dielectric function of the homogeneous electron gas, is designed to accelerate the self-consistent field (SCF) iteration in density functional theory calculations. However, a question remains regarding its applicability to inhomogeneous systems. We develop a modified Kerker preconditioning scheme which captures the long-range screening behavior of inhomogeneous systems and thus improves the SCF convergence. Its effectiveness and efficiency are shown by tests on long-z slabs of metals, insulators, and metal-insulator contacts. For situations without a priori knowledge of the system, we design an a posteriori indicator to monitor whether the preconditioner has suppressed charge sloshing during the iterations. Based on this a posteriori indicator, we demonstrate two self-adaptive configuration schemes for the SCF iteration.
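
    As a concrete illustration of the idea behind Kerker-type preconditioning (not the authors' modified scheme), the sketch below applies the standard Kerker kernel G^2/(G^2 + q0^2) to the density residual of a single linear-mixing SCF step; a cubic real-space grid is assumed, and the function name and parameter values are illustrative only.

```python
import numpy as np

def kerker_mix(rho_in, rho_out, cell_len, alpha=0.8, q0=1.5):
    """One SCF mixing step with a Kerker-preconditioned residual.

    rho_in, rho_out : real-space densities on a uniform (n, n, n) grid
    cell_len        : edge length of the (assumed cubic) simulation cell
    alpha           : linear mixing strength
    q0              : screening wave vector; G^2 / (G^2 + q0^2) damps the
                      long-wavelength (small-G) components of the residual,
                      which is what suppresses charge sloshing
    """
    residual = rho_out - rho_in
    res_g = np.fft.fftn(residual)

    n = rho_in.shape[0]
    g1d = 2.0 * np.pi * np.fft.fftfreq(n, d=cell_len / n)
    gx, gy, gz = np.meshgrid(g1d, g1d, g1d, indexing="ij")
    g2 = gx**2 + gy**2 + gz**2

    precond = g2 / (g2 + q0**2)   # Kerker kernel, -> 0 as G -> 0
    precond.flat[0] = 0.0         # leave the G = 0 (total charge) component untouched

    return rho_in + alpha * np.real(np.fft.ifftn(precond * res_g))
```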

  4. Marginal Consistency: Upper-Bounding Partition Functions over Commutative Semirings.

    PubMed

    Werner, Tomás

    2015-07-01

    Many inference tasks in pattern recognition and artificial intelligence lead to partition functions in which addition and multiplication are abstract binary operations forming a commutative semiring. By generalizing max-sum diffusion (one of the convergent message-passing algorithms for approximate MAP inference in graphical models), we propose an iterative algorithm to upper-bound such partition functions over commutative semirings. The iteration of the algorithm is remarkably simple: change any two factors of the partition function such that their product remains the same and their overlapping marginals become equal. In many commutative semirings, repeating this iteration for different pairs of factors converges to a fixed point in which the overlapping marginals of every pair of factors coincide. We call this state marginal consistency. During the iterations, an upper bound on the partition function decreases monotonically. This abstract algorithm unifies several existing algorithms, including max-sum diffusion and basic constraint propagation (or local consistency) algorithms in constraint programming. We further construct a hierarchy of marginal consistencies of increasingly higher levels and show that any such level can be enforced by adding identity factors of higher arity (order). Finally, we discuss instances of the framework for several semirings, including the distributive lattice and the max-sum and sum-product semirings.
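
    The pairwise update described above can be written out concretely for the max-sum semiring instance (log-domain factor tables, the semiring "product" is addition, marginalization is a maximum). The sketch below is a minimal illustration under those assumptions and is not the authors' general implementation; the factor layout and function name are hypothetical.

```python
import numpy as np

def equalize_pair(f, g, axis_f, axis_g):
    """One marginal-consistency update for two factors sharing one variable.

    f, g           : log-domain factor tables (numpy arrays)
    axis_f, axis_g : position of the shared variable in each table
    The shared max-marginals are averaged; the shift is subtracted from one
    factor and added to the other, so f + g (the semiring product) is unchanged
    while the overlapping marginals become equal.
    """
    mu_f = f.max(axis=tuple(i for i in range(f.ndim) if i != axis_f))
    mu_g = g.max(axis=tuple(i for i in range(g.ndim) if i != axis_g))
    delta = (mu_f - mu_g) / 2.0

    shape_f = [1] * f.ndim; shape_f[axis_f] = -1
    shape_g = [1] * g.ndim; shape_g[axis_g] = -1
    return f - delta.reshape(shape_f), g + delta.reshape(shape_g)

# Sweeping this update over all pairs of factors with overlapping scopes drives
# the overlapping max-marginals to coincide (marginal consistency) while the
# max-sum upper bound decreases monotonically.
```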

  5. Evaluation of inter-laminar shear strength of GFRP composed of bonded glass/polyimide tapes and cyanate-ester/epoxy blended resin for ITER TF coils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemmi, T.; Matsui, K.; Koizumi, N.

    2014-01-27

    The insulation system of the ITER TF coils consists of multi-layer glass/polyimide tapes impregnated with a cyanate-ester/epoxy resin. The ITER TF coils are required to withstand an irradiation dose of 10 MGy from gamma rays and neutrons, since the coils are exposed to a fast-neutron (>0.1 MeV) fluence of 10²² n/m² during ITER operation. Cyanate-ester/epoxy blended resins and bonded glass/polyimide tapes are developed as insulation materials to realize the required radiation hardness for the insulation of the ITER TF coils. To evaluate the radiation hardness of the developed insulation materials, the inter-laminar shear strength (ILSS) of glass-fiber-reinforced plastics (GFRP) fabricated using the developed insulation materials is measured, as one of the most important mechanical properties, before and after irradiation in the JRR-3M fission reactor. As a result, it is demonstrated that the GFRPs using the developed insulation materials have sufficient performance for application to the ITER TF coil insulation.

  6. Strong Convergence of Iteration Processes for Infinite Family of General Extended Mappings

    NASA Astrophysics Data System (ADS)

    Hussein Maibed, Zena

    2018-05-01

    In this paper we introduce the concept of a general extended mapping, which is independent of nonexpansive mappings, and give an iteration process for families of quasi-nonexpansive and general extended mappings. The existence of common fixed points is also studied for these processes in Hilbert spaces.

  7. Composition of Web Services Using Markov Decision Processes and Dynamic Programming

    PubMed Central

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows that a WSC problem involving a set of 100,000 individual Web services, in which a valid composition requires the selection of 1,000 services from the available set, can be solved in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB of RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. PMID:25874247
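
    For orientation, the dynamic-programming machinery referred to here can be illustrated with a generic policy-iteration routine for a finite, tabular MDP; this is a textbook sketch, not the WSC-specific model or code of the paper, and the array layout is an assumption.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.95):
    """Policy iteration for a finite MDP.

    P : (A, S, S) array, P[a, s, s'] = Pr(s' | s, a)
    R : (A, S) array of expected immediate rewards
    Returns the optimal policy (one action per state) and its value function.
    """
    n_actions, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)

    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = P[policy, np.arange(n_states), :]
        r_pi = R[policy, np.arange(n_states)]
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)

        # Policy improvement: act greedily with respect to the one-step lookahead.
        q = R + gamma * (P @ v)          # shape (A, S)
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy
```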

  8. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  9. Solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators.

    PubMed

    Zhao, Jing; Zong, Haili

    2018-01-01

    In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the cyclic and parallel iterative processes and propose two mixed iterative algorithms. Our algorithms do not need any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.

  10. Low Quality of Basic Caregiving Environments in Child Care: Actual Reality or Artifact of Scoring?

    ERIC Educational Resources Information Center

    Norris, Deborah J.; Guss, Shannon

    2016-01-01

    Quality Rating Improvement Systems (QRIS) frequently include the Infant-Toddler Environment Rating Scale-Revised (ITERS-R) as part of rating and improving child care quality. However, studies utilizing the ITERS-R consistently report low quality, especially for basic caregiving items. This research examined whether the low scores reflected the…

  11. Prefixation of Simplex Pairs in Czech: An Analysis of Spatial Semantics, Distributive Verbs, and Procedural Meanings

    ERIC Educational Resources Information Center

    Hilchey, Christian Thomas

    2014-01-01

    This dissertation examines prefixation of simplex pairs. A simplex pair consists of an iterative imperfective and a semelfactive perfective verb. When prefixed, both of these verbs are perfective. The prefixed forms derived from semelfactives are labeled single act verbs, while the prefixed forms derived from iterative imperfective simplex verbs…

  12. Using an Iterative Mixed-Methods Research Design to Investigate Schools Facing Exceptionally Challenging Circumstances within Trinidad and Tobago

    ERIC Educational Resources Information Center

    De Lisle, Jerome; Seunarinesingh, Krishna; Mohammed, Rhoda; Lee-Piggott, Rinnelle

    2017-01-01

    In this study, methodology and theory were linked to explicate the nature of education practice within schools facing exceptionally challenging circumstances (SFECC) in Trinidad and Tobago. The research design was an iterative quan>QUAL-quan>qual multi-method research programme, consisting of 3 independent projects linked together by overall…

  13. Language Evolution by Iterated Learning with Bayesian Agents

    ERIC Educational Resources Information Center

    Griffiths, Thomas L.; Kalish, Michael L.

    2007-01-01

    Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute…

  14. Iterative Neighbour-Information Gathering for Ranking Nodes in Complex Networks

    NASA Astrophysics Data System (ADS)

    Xu, Shuang; Wang, Pei; Lü, Jinhu

    2017-01-01

    Designing node-influence ranking algorithms can provide insights into network dynamics, functions and structures. Increasing evidence reveals that a node's spreading ability largely depends on its neighbours. We introduce an iterative neighbour-information gathering (Ing) process with three parameters: a transformation matrix, a priori information and an iteration time. The Ing process iteratively combines priori information from neighbours via the transformation matrix, and iteratively assigns an Ing score to each node to evaluate its influence. The algorithm is applicable to any type of network and includes some traditional centralities as special cases, such as degree, semi-local and LeaderRank. The Ing process converges in strongly connected networks, with a speed that depends on the first two largest eigenvalues of the transformation matrix. Interestingly, eigenvector centrality corresponds to a limit case of the algorithm. By comparing with eight renowned centralities, simulations of the susceptible-infected-removed (SIR) model on real-world networks reveal that the Ing can offer more exact rankings, even without a priori information. We also observe that an optimal iteration time always exists that best characterizes node influence. The proposed algorithms bridge the gaps among some existing measures, and may have potential applications in infectious disease control and the design of optimal information-spreading strategies.
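
    A minimal sketch of the gathering step, under the simplifying assumptions that the transformation matrix is just the adjacency matrix and the a priori information is a vector of ones (the full Ing algorithm parameterizes the transformation matrix more generally):

```python
import numpy as np

def ing_scores(A, prior=None, g=3):
    """Iterative neighbour-information gathering, minimal version.

    A     : (N, N) adjacency matrix of the network
    prior : a priori information vector (defaults to all ones; g = 1 then gives degree)
    g     : iteration time; as g grows the ranking approaches eigenvector centrality
    """
    n = A.shape[0]
    s = np.ones(n) if prior is None else np.asarray(prior, dtype=float)
    for _ in range(g):
        s = A @ s                      # gather neighbours' current scores
        s /= np.linalg.norm(s)         # normalize to avoid overflow; ranking unchanged
    return s
```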

  15. Development and Evaluation of an Intuitive Operations Planning Process

    DTIC Science & Technology

    2006-03-01

    designed to be iterative and also prescribes the way in which iterations should occur. On the other hand, participants’ perceived level of trust and…

  16. A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2011-01-01

    An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis techniques. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance are used to illustrate the application of the original and modified tare load prediction methods. During the analysis of the data both the iterative and the non-iterative analysis techniques were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.

  17. Performance of multi-aperture grid extraction systems for an ITER-relevant RF-driven negative hydrogen ion source

    NASA Astrophysics Data System (ADS)

    Franzen, P.; Gutser, R.; Fantz, U.; Kraus, W.; Falter, H.; Fröschle, M.; Heinemann, B.; McNeely, P.; Nocentini, R.; Riedl, R.; Stäbler, A.; Wünderlich, D.

    2011-07-01

    The ITER neutral beam system requires a negative hydrogen ion beam of 48 A with an energy of 0.87 MeV, and a negative deuterium beam of 40 A with an energy of 1 MeV. The beam is extracted from a large ion source of dimensions 1.9 × 0.9 m² by an acceleration system consisting of seven grids with 1280 apertures each. Currently, apertures with a diameter of 14 mm in the first grid are foreseen. In 2007, the IPP RF source was chosen as the ITER reference source due to its reduced maintenance compared with arc-driven sources and the successful development at the BATMAN test facility, which is equipped with the small IPP prototype RF source (~1/8 of the area of the ITER NBI source). These results, however, were obtained with an extraction system with 8 mm diameter apertures. This paper reports on a comparison, at BATMAN, of the source performance of an ITER-relevant extraction system equipped with chamfered apertures of 14 mm diameter and of the 8 mm diameter aperture extraction system. The most important result is that there is almost no difference in the achieved current density, which is consistent with ion trajectory calculations, or in the amount of co-extracted electrons. Furthermore, some aspects of the beam optics of both extraction systems are discussed.

  18. Calibration and compensation method of three-axis geomagnetic sensor based on pre-processing total least square iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Zhang, X.; Xiao, W.

    2018-04-01

    As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. Firstly, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and converted into nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. A sifting algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method does not need additional equipment and devices, can continuously update the calibration parameters and, compared with the two-step estimation method, compensates the geomagnetic sensor error better.

  19. Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method

    NASA Astrophysics Data System (ADS)

    Mehl, S.

    2012-12-01

    Jacobian-Free Newton-Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes versus conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, etc., until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to the existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Frechet derivative, which is not always straightforward. Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process, which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill are examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner. Comparisons of convergence and computer simulation time are made using conventional iteratively coupled methods and those based on Picard iteration to those formulated with JFNK to gain insights on the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
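
    The core of a JFNK step, approximating the action of the Jacobian (the Frechet derivative) by a finite difference of the residual and handing the linear solve to GMRES, can be sketched as follows. This is a bare-bones, unpreconditioned illustration with hypothetical function names, not the Modflow-coupled implementation discussed here.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(residual, x0, newton_tol=1e-8, max_newton=20, eps=1e-7):
    """Jacobian-free Newton-Krylov sketch: residual(x) is the only thing needed."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_newton):
        F = residual(x)
        if np.linalg.norm(F) < newton_tol:
            break

        def jv(v, x=x, F=F):
            # Finite-difference approximation of the Frechet derivative J(x) @ v.
            v = np.asarray(v).ravel()
            h = eps * (np.linalg.norm(x) + 1.0) / (np.linalg.norm(v) + 1e-30)
            return (residual(x + h * v) - F) / h

        J = LinearOperator((x.size, x.size), matvec=jv)
        dx, info = gmres(J, -F)        # a preconditioner would be applied here in practice
        x = x + dx
    return x
```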

  20. Nested Krylov methods and preserving the orthogonality

    NASA Technical Reports Server (NTRS)

    Desturler, Eric; Fokkema, Diederik R.

    1993-01-01

    Recently the GMRESR inner-outer iteration scheme for the solution of linear systems of equations was proposed by Van der Vorst and Vuik. Similar methods have been proposed by Axelsson and Vassilevski and Saad (FGMRES). The outer iteration is GCR, which minimizes the residual over a given set of direction vectors. The inner iteration is GMRES, which at each step computes a new direction vector by approximately solving the residual equation. However, the optimality of the approximation over the space of outer search directions is ignored in the inner GMRES iteration. This leads to suboptimal corrections to the solution in the outer iteration, as components of the outer iteration directions may re-enter the inner iteration process. Therefore we propose to preserve the orthogonality relations of GCR in the inner GMRES iteration. This gives optimal corrections; however, it involves working with a singular, non-symmetric operator. We will discuss some important properties, and we will show by experiments that, in terms of matrix-vector products, this modification (almost) always leads to better convergence. However, because we do more orthogonalizations, it does not always give an improved performance in CPU time. Furthermore, we will discuss efficient implementations as well as the truncation possibilities of the outer GCR process. The experimental results indicate that for such methods it is advantageous to preserve the orthogonality in the inner iteration. Of course we can also use iteration schemes other than GMRES as the inner method; methods with short recurrences like Bi-CGSTAB are of interest.

  1. Not so Complex: Iteration in the Complex Plane

    ERIC Educational Resources Information Center

    O'Dell, Robin S.

    2014-01-01

    The simple process of iteration can produce complex and beautiful figures. In this article, Robin O'Dell presents a set of tasks requiring students to use the geometric interpretation of complex number multiplication to construct linear iteration rules. When the outputs are plotted in the complex plane, the graphs trace pleasing designs…

  2. Developing Conceptual Understanding and Procedural Skill in Mathematics: An Iterative Process.

    ERIC Educational Resources Information Center

    Rittle-Johnson, Bethany; Siegler, Robert S.; Alibali, Martha Wagner

    2001-01-01

    Proposes that conceptual and procedural knowledge develop in an iterative fashion and improved problem representation is one mechanism underlying the relations between them. Two experiments were conducted with 5th and 6th grade students learning about decimal fractions. Results indicate conceptual and procedural knowledge do develop, iteratively,…

  3. An Atlas of Peroxiredoxins Created Using an Active Site Profile-Based Approach to Functionally Relevant Clustering of Proteins.

    PubMed

    Harper, Angela F; Leuthaeuser, Janelle B; Babbitt, Patricia C; Morris, John H; Ferrin, Thomas E; Poole, Leslie B; Fetrow, Jacquelyn S

    2017-02-01

    Peroxiredoxins (Prxs or Prdxs) are a large protein superfamily of antioxidant enzymes that rapidly detoxify damaging peroxides and/or affect signal transduction and, thus, have roles in proliferation, differentiation, and apoptosis. Prx superfamily members are widespread across phylogeny and multiple methods have been developed to classify them. Here we present an updated atlas of the Prx superfamily identified using a novel method called MISST (Multi-level Iterative Sequence Searching Technique). MISST is an iterative search process developed to be both agglomerative, to add sequences containing similar functional site features, and divisive, to split groups when functional site features suggest distinct functionally-relevant clusters. Superfamily members need not be identified initially-MISST begins with a minimal representative set of known structures and searches GenBank iteratively. Further, the method's novelty lies in the manner in which isofunctional groups are selected; rather than use a single or shifting threshold to identify clusters, the groups are deemed isofunctional when they pass a self-identification criterion, such that the group identifies itself and nothing else in a search of GenBank. The method was preliminarily validated on the Prxs, as the Prxs presented challenges of both agglomeration and division. For example, previous sequence analysis clustered the Prx functional families Prx1 and Prx6 into one group. Subsequent expert analysis clearly identified Prx6 as a distinct functionally relevant group. The MISST process distinguishes these two closely related, though functionally distinct, families. Through MISST search iterations, over 38,000 Prx sequences were identified, which the method divided into six isofunctional clusters, consistent with previous expert analysis. The results represent the most complete computational functional analysis of proteins comprising the Prx superfamily. The feasibility of this novel method is demonstrated by the Prx superfamily results, laying the foundation for potential functionally relevant clustering of the universe of protein sequences.

  4. An Atlas of Peroxiredoxins Created Using an Active Site Profile-Based Approach to Functionally Relevant Clustering of Proteins

    PubMed Central

    Babbitt, Patricia C.; Ferrin, Thomas E.

    2017-01-01

    Peroxiredoxins (Prxs or Prdxs) are a large protein superfamily of antioxidant enzymes that rapidly detoxify damaging peroxides and/or affect signal transduction and, thus, have roles in proliferation, differentiation, and apoptosis. Prx superfamily members are widespread across phylogeny and multiple methods have been developed to classify them. Here we present an updated atlas of the Prx superfamily identified using a novel method called MISST (Multi-level Iterative Sequence Searching Technique). MISST is an iterative search process developed to be both agglomerative, to add sequences containing similar functional site features, and divisive, to split groups when functional site features suggest distinct functionally-relevant clusters. Superfamily members need not be identified initially—MISST begins with a minimal representative set of known structures and searches GenBank iteratively. Further, the method’s novelty lies in the manner in which isofunctional groups are selected; rather than use a single or shifting threshold to identify clusters, the groups are deemed isofunctional when they pass a self-identification criterion, such that the group identifies itself and nothing else in a search of GenBank. The method was preliminarily validated on the Prxs, as the Prxs presented challenges of both agglomeration and division. For example, previous sequence analysis clustered the Prx functional families Prx1 and Prx6 into one group. Subsequent expert analysis clearly identified Prx6 as a distinct functionally relevant group. The MISST process distinguishes these two closely related, though functionally distinct, families. Through MISST search iterations, over 38,000 Prx sequences were identified, which the method divided into six isofunctional clusters, consistent with previous expert analysis. The results represent the most complete computational functional analysis of proteins comprising the Prx superfamily. The feasibility of this novel method is demonstrated by the Prx superfamily results, laying the foundation for potential functionally relevant clustering of the universe of protein sequences. PMID:28187133

  5. A Bee Evolutionary Guiding Nondominated Sorting Genetic Algorithm II for Multiobjective Flexible Job-Shop Scheduling.

    PubMed

    Deng, Qianwang; Gong, Guiliang; Gong, Xuran; Zhang, Like; Liu, Wei; Ren, Qinghua

    2017-01-01

    Flexible job-shop scheduling problem (FJSP) is an NP-hard puzzle which inherits the job-shop scheduling problem (JSP) characteristics. This paper presents a bee evolutionary guiding nondominated sorting genetic algorithm II (BEG-NSGA-II) for multiobjective FJSP (MO-FJSP) with the objectives to minimize the maximal completion time, the workload of the most loaded machine, and the total workload of all machines. It adopts a two-stage optimization mechanism during the optimizing process. In the first stage, the NSGA-II algorithm with T iteration times is first used to obtain the initial population N, in which a bee evolutionary guiding scheme is presented to exploit the solution space extensively. In the second stage, the NSGA-II algorithm with GEN iteration times is used again to obtain the Pareto-optimal solutions. In order to enhance the searching ability and avoid the premature convergence, an updating mechanism is employed in this stage. More specifically, its population consists of three parts, and each of them changes with the iteration times. What is more, numerical simulations are carried out which are based on some published benchmark instances. Finally, the effectiveness of the proposed BEG-NSGA-II algorithm is shown by comparing the experimental results with those of some well-known existing algorithms.

  6. A Bee Evolutionary Guiding Nondominated Sorting Genetic Algorithm II for Multiobjective Flexible Job-Shop Scheduling

    PubMed Central

    Deng, Qianwang; Gong, Xuran; Zhang, Like; Liu, Wei; Ren, Qinghua

    2017-01-01

    Flexible job-shop scheduling problem (FJSP) is an NP-hard puzzle which inherits the job-shop scheduling problem (JSP) characteristics. This paper presents a bee evolutionary guiding nondominated sorting genetic algorithm II (BEG-NSGA-II) for multiobjective FJSP (MO-FJSP) with the objectives to minimize the maximal completion time, the workload of the most loaded machine, and the total workload of all machines. It adopts a two-stage optimization mechanism during the optimizing process. In the first stage, the NSGA-II algorithm with T iteration times is first used to obtain the initial population N, in which a bee evolutionary guiding scheme is presented to exploit the solution space extensively. In the second stage, the NSGA-II algorithm with GEN iteration times is used again to obtain the Pareto-optimal solutions. In order to enhance the searching ability and avoid the premature convergence, an updating mechanism is employed in this stage. More specifically, its population consists of three parts, and each of them changes with the iteration times. What is more, numerical simulations are carried out which are based on some published benchmark instances. Finally, the effectiveness of the proposed BEG-NSGA-II algorithm is shown by comparing the experimental results with those of some well-known existing algorithms. PMID:28458687

  7. Linear-scaling implementation of molecular response theory in self-consistent field electronic-structure theory.

    PubMed

    Coriani, Sonia; Høst, Stinne; Jansík, Branislav; Thøgersen, Lea; Olsen, Jeppe; Jørgensen, Poul; Reine, Simen; Pawłowski, Filip; Helgaker, Trygve; Sałek, Paweł

    2007-04-21

    A linear-scaling implementation of Hartree-Fock and Kohn-Sham self-consistent field theories for the calculation of frequency-dependent molecular response properties and excitation energies is presented, based on a nonredundant exponential parametrization of the one-electron density matrix in the atomic-orbital basis, avoiding the use of canonical orbitals. The response equations are solved iteratively, by an atomic-orbital subspace method equivalent to that of molecular-orbital theory. Important features of the subspace method are the use of paired trial vectors (to preserve the algebraic structure of the response equations), a nondiagonal preconditioner (for rapid convergence), and the generation of good initial guesses (for robust solution). As a result, the performance of the iterative method is the same as in canonical molecular-orbital theory, with five to ten iterations needed for convergence. As in traditional direct Hartree-Fock and Kohn-Sham theories, the calculations are dominated by the construction of the effective Fock/Kohn-Sham matrix, once in each iteration. Linear complexity is achieved by using sparse-matrix algebra, as illustrated in calculations of excitation energies and frequency-dependent polarizabilities of polyalanine peptides containing up to 1400 atoms.

  8. Convergence of an iterative procedure for large-scale static analysis of structural components

    NASA Technical Reports Server (NTRS)

    Austin, F.; Ojalvo, I. U.

    1976-01-01

    The paper proves convergence of an iterative procedure for calculating the deflections of built-up component structures which can be represented as consisting of a dominant, relatively stiff primary structure and a less stiff secondary structure, which may be composed of one or more substructures that are not connected to one another but are all connected to the primary structure. The iteration consists in estimating the deformation of the primary structure in the absence of the secondary structure on the assumption that all mechanical loads are applied directly to the primary structure. The j-th iterate primary structure deflections at the interface are imposed on the secondary structure, and the boundary loads required to produce these deflections are computed. The cycle is completed by applying the interface reaction to the primary structure and computing its updated deflections. It is shown that the mathematical condition for convergence of this procedure is that the maximum eigenvalue of the equation relating primary-structure deflection to imposed secondary-structure deflection be less than unity, which is shown to correspond with the physical requirement that the secondary structure be more flexible at the interface boundary.
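
    The convergence condition can be illustrated with a generic linear fixed-point iteration x_{k+1} = M x_k + c, where M stands in for the operator relating primary-structure deflections to the imposed secondary-structure deflections; this is a schematic sketch under that abstraction, not the structural analysis code itself.

```python
import numpy as np

def coupled_iteration(M, c, tol=1e-10, max_iter=500):
    """Fixed-point iteration x_{k+1} = M x_k + c; converges iff spectral_radius(M) < 1."""
    rho = np.max(np.abs(np.linalg.eigvals(M)))
    print(f"spectral radius = {rho:.3f} -> {'converges' if rho < 1 else 'may diverge'}")

    x = np.zeros(len(c))
    for k in range(max_iter):
        x_new = M @ x + c
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1        # converged solution and iteration count
        x = x_new
    return x, max_iter
```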

  9. Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation

    NASA Astrophysics Data System (ADS)

    Litaker, Eric T.

    1994-12-01

    The axisymmetric heat equation, resulting from a point-source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.

  10. The ITER bolometer diagnostic: Status and plansa)

    NASA Astrophysics Data System (ADS)

    Meister, H.; Giannone, L.; Horton, L. D.; Raupp, G.; Zeidner, W.; Grunda, G.; Kalvin, S.; Fischer, U.; Serikov, A.; Stickel, S.; Reichle, R.

    2008-10-01

    A consortium consisting of four EURATOM Associations has been set up to develop the project plan for the full development of the ITER bolometer diagnostic and to continue urgent R&D activities. An overview of the current status is given, including detector development, line-of-sight optimization and performance analysis, as well as the design of the diagnostic components and their integration in ITER. This is complemented by the presentation of plans for future activities required to successfully implement the bolometer diagnostic, ranging from detector development through diagnostic design and prototype testing to RH tools for calibration.

  11. Quantification of Chemical Erosion in the DIII-D Divertor

    NASA Astrophysics Data System (ADS)

    McLean, Adam

    2009-11-01

    Chemical erosion (CE) yield at the graphite divertor target in DIII-D was measured to be substantially lower in cold near-detached plasma conditions compared to well-attached ones, with major implications for ITER. Current estimates of tritium retention by co-deposition with hydrocarbons (HCs) in ITER place potentially severe restrictions on operation. However, calculations done to date have been based on excessively conservative assumptions, due to limited understanding of cold divertor plasmas (1-5eV) which bridge energy thresholds for complex atomic and molecular processes not present in attached conditions. Hydrocarbon injection through a unique porous graphite plate which realistically simulates secondary reactions of HCs with a graphite surface has been used to measure CE in-situ. For the first time in a divertor, measurements were made at extrinsic CH4 injection rates comparable to the expected intrinsic CE rate of C, with the resulting spectroscopic emissions separated from those of the intrinsic sources. Under cold plasma conditions the contribution of CE-produced C relative to total C sources in the divertor declined dramatically from ˜50% to <15%. Photon efficiencies for products from the breakup of injected CH4 were greater than previous measurements at higher puff rates, indicating the importance of minimizing perturbation to the local plasma. At 350K, the measured CE yield near the outer strike point was ˜2.6% in attached conditions, dropping to only ˜0.5% in cold plasma; results are consistent with some theoretical predictions and lab studies. Under full detachment, near total extinction of the CD band occurred, consistent with suppression of net C erosion. These findings have potentially major impact on projected target lifetime and tritium retention in future reactors, and for the PFC choice in ITER.

  12. Hydrologic Process Parameterization of Electrical Resistivity Imaging of Solute Plumes Using POD McMC

    NASA Astrophysics Data System (ADS)

    Awatey, M. T.; Irving, J.; Oware, E. K.

    2016-12-01

    Markov chain Monte Carlo (McMC) inversion frameworks are becoming increasingly popular in geophysics due to their ability to recover multiple equally plausible geologic features that honor the limited noisy measurements. Standard McMC methods, however, become computationally intractable with increasing dimensionality of the problem, for example, when working with spatially distributed geophysical parameter fields. We present a McMC approach based on a sparse proper orthogonal decomposition (POD) model parameterization that implicitly incorporates the physics of the underlying process. First, we generate training images (TIs) via Monte Carlo simulations of the target process constrained to a conceptual model. We then apply POD to construct basis vectors from the TIs. A small number of basis vectors can represent most of the variability in the TIs, leading to dimensionality reduction. A projection of the starting model into the reduced basis space generates the starting POD coefficients. At each iteration, only coefficients within a specified sampling window are resimulated assuming a Gaussian prior. The sampling window grows at a specified rate as the number of iterations progresses, starting from the coefficients corresponding to the highest ranked basis to those of the least informative basis. We found this gradual increment in the sampling window to be more stable compared to resampling all the coefficients right from the first iteration. We demonstrate the performance of the algorithm with both synthetic and lab-scale electrical resistivity imaging of saline tracer experiments, employing the same set of basis vectors for all inversions. We consider two scenarios of unimodal and bimodal plumes. The unimodal plume is consistent with the hypothesis underlying the generation of the TIs whereas bimodality in plume morphology was not theorized. We show that uncertainty quantification using McMC can proceed in the reduced dimensionality space while accounting for the physics of the underlying process.
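
    A minimal sketch of the POD parameterization step, assuming the training images are stored as flattened rows of a snapshot matrix: the basis comes from a truncated SVD of the mean-removed snapshots, and the McMC chain then perturbs a short (windowed) coefficient vector instead of every grid cell. Names and shapes here are illustrative, not the authors' code.

```python
import numpy as np

def pod_basis(training_images, n_modes):
    """Build a reduced POD basis from Monte Carlo training images (TIs).

    training_images : (n_ti, n_cells) array, one flattened realization per row
    n_modes         : number of retained basis vectors (reduced dimensionality)
    """
    X = np.asarray(training_images, dtype=float)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes], s       # a model is then m ~ mean + coeffs @ Vt[:n_modes]
```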

  13. Experiment of low resistance joints for the ITER correction coil.

    PubMed

    Liu, Huajun; Wu, Yu; Wu, Weiyue; Liu, Bo; Shi, Yi; Guo, Shuai

    2013-01-01

    A test method was designed and performed to measure the joint resistance of the ITER correction coil (CC) at liquid helium (LHe) temperature. A 10 kA superconducting transformer was manufactured to provide the joint current. The transformer consisted of two concentric layer-wound superconducting solenoids. NbTi superconducting wire was wound in the primary coil and the ITER CC conductor was wound in the secondary coil. The primary and the secondary coils were both immersed in liquid helium in a cryostat with a 300 mm useful bore diameter. Two ITER CC joints were assembled in the secondary loop and tested. The current of the secondary loop was ramped to 9 kA in several steps. The two joint resistances were measured to be 1.2 nΩ and 1.65 nΩ, respectively.

  14. Efficient full-chip SRAF placement using machine learning for best accuracy and improved consistency

    NASA Astrophysics Data System (ADS)

    Wang, Shibing; Baron, Stanislas; Kachwala, Nishrin; Kallingal, Chidam; Sun, Dezheng; Shu, Vincent; Fong, Weichun; Li, Zero; Elsaid, Ahmad; Gao, Jin-Wei; Su, Jing; Ser, Jung-Hoon; Zhang, Quan; Chen, Been-Der; Howell, Rafael; Hsu, Stephen; Luo, Larry; Zou, Yi; Zhang, Gary; Lu, Yen-Wen; Cao, Yu

    2018-03-01

    Various computational approaches from rule-based to model-based methods exist to place Sub-Resolution Assist Features (SRAF) in order to increase process window for lithography. Each method has its advantages and drawbacks, and typically requires the user to make a trade-off between time of development, accuracy, consistency and cycle time. Rule-based methods, used since the 90 nm node, require long development time and struggle to achieve good process window performance for complex patterns. Heuristically driven, their development is often iterative and involves significant engineering time from multiple disciplines (Litho, OPC and DTCO). Model-based approaches have been widely adopted since the 20 nm node. While the development of model-driven placement methods is relatively straightforward, they often become computationally expensive when high accuracy is required. Furthermore, these methods tend to yield less consistent SRAFs due to the nature of the approach: they rely on a model which is sensitive to the pattern placement on the native simulation grid, and can be impacted by such related grid dependency effects. Those undesirable effects tend to become stronger when more iterations or complexity are needed in the algorithm to achieve required accuracy. ASML Brion has developed a new SRAF placement technique on the Tachyon platform that is assisted by machine learning and significantly improves the accuracy of full chip SRAF placement while keeping consistency and runtime under control. A Deep Convolutional Neural Network (DCNN) is trained using the target wafer layout and corresponding Continuous Transmission Mask (CTM) images. These CTM images have been fully optimized using the Tachyon inverse mask optimization engine. The neural network generated SRAF guidance map is then used to place SRAF on full-chip. This is different from our existing full-chip MB-SRAF approach which utilizes a SRAF guidance map (SGM) of mask sensitivity to improve the contrast of optical image at the target pattern edges. In this paper, we demonstrate that machine learning assisted SRAF placement can achieve a superior process window compared to the SGM model-based SRAF method, while keeping the full-chip runtime affordable and maintaining the consistency of SRAF placement. We describe the current status of this machine learning assisted SRAF technique and demonstrate its application to full chip mask synthesis and discuss how it can extend the computational lithography roadmap.

  15. Baseline Architecture of ITER Control System

    NASA Astrophysics Data System (ADS)

    Wallander, A.; Di Maio, F.; Journeaux, J.-Y.; Klotz, W.-D.; Makijarvi, P.; Yonekawa, I.

    2011-08-01

    The control system of ITER consists of thousands of computers processing hundreds of thousands of signals. The control system, being the primary tool for operating the machine, shall integrate, control and coordinate all these computers and signals and allow a limited number of staff to operate the machine from a central location with minimum human intervention. The primary functions of the ITER control system are plant control, supervision and coordination, both during experimental pulses and 24/7 continuous operation. The former can be split into three phases: preparation of the experiment by defining all parameters; execution of the experiment, including distributed feedback control; and finally collection, archiving, analysis and presentation of all data produced by the experiment. We define the control system as a set of hardware and software components with well-defined characteristics. The architecture addresses the organization of these components and their relationship to each other. We distinguish between physical and functional architecture, where the former defines the physical connections and the latter the data flow between components. In this paper, we identify the ITER control system based on the plant breakdown structure. Then, the control system is partitioned into a workable set of bounded subsystems. This partition considers at the same time the completeness and the integration of the subsystems. The components making up subsystems are identified and defined, a naming convention is introduced and the physical networks defined. Special attention is given to timing and real-time communication for distributed control. Finally we discuss baseline technologies for implementing the proposed architecture based on analysis, market surveys, prototyping and benchmarking carried out during the last year.

  16. Status of the ITER Cryodistribution

    NASA Astrophysics Data System (ADS)

    Chang, H.-S.; Vaghela, H.; Patel, P.; Rizzato, A.; Cursan, M.; Henry, D.; Forgeas, A.; Grillot, D.; Sarkar, B.; Muralidhara, S.; Das, J.; Shukla, V.; Adler, E.

    2017-12-01

    Since the conceptual design of the ITER Cryodistribution, many modifications have been applied due to both system optimization and improved knowledge of the clients' requirements. Process optimizations in the Cryoplant resulted in component simplifications, whereas increased heat loads in some of the superconducting magnet systems required a more complicated process configuration; the removal of one cold box was also made possible by component arrangement standardization. Another cold box, planned for redundancy, has been removed due to the modification of the Tokamak in-Cryostat piping layout. In this proceeding we summarize the present design status and component configuration of the ITER Cryodistribution, with all changes implemented, aiming at process optimization and simplification as well as operational reliability, stability and flexibility.

  17. MWR3C physical retrievals of precipitable water vapor and cloud liquid water path

    DOE Data Explorer

    Cadeddu, Maria

    2016-10-12

    The data set contains physical retrievals of PWV and cloud LWP retrieved from MWR3C measurements during the MAGIC campaign. Additional data used in the retrieval process include radiosonde and ceilometer data. The retrieval is based on an optimal estimation technique that starts from a first guess and iteratively repeats the forward model calculations until a predefined convergence criterion is satisfied. The first guess is a vector of [PWV, LWP] from the neural network retrieval fields in the netcdf file. When convergence is achieved, the 'a posteriori' covariance is computed and its square root is expressed in the file as the retrieval 1-sigma uncertainty. The closest radiosonde profile is used for the radiative transfer calculations and ceilometer data are used to constrain the cloud base height. The RMS error between the brightness temperatures is computed at the last iteration as a consistency check and is written in the last column of the output file.
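
    Structurally, an optimal-estimation retrieval of this kind is a Gauss-Newton iteration that balances the measurement misfit against the a priori state. The sketch below shows that generic loop; the forward model, Jacobian and covariances are placeholders, not the actual MWR3C retrieval code.

```python
import numpy as np

def optimal_estimation(y, forward, jacobian, x_a, S_a, S_e, max_iter=10, tol=1e-3):
    """Gauss-Newton optimal-estimation loop.

    y        : measured brightness temperatures
    forward  : F(x), forward-model brightness temperatures for state x = [PWV, LWP]
    jacobian : K(x), Jacobian of the forward model at x
    x_a, S_a : a priori state (e.g. the first guess) and its covariance
    S_e      : measurement-error covariance
    """
    x = np.asarray(x_a, dtype=float)
    S_a_inv, S_e_inv = np.linalg.inv(S_a), np.linalg.inv(S_e)
    for _ in range(max_iter):
        K = jacobian(x)
        S_post = np.linalg.inv(K.T @ S_e_inv @ K + S_a_inv)   # a posteriori covariance
        x_new = x_a + S_post @ K.T @ S_e_inv @ (y - forward(x) + K @ (x - x_a))
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x, np.sqrt(np.diag(S_post))   # retrieved state and 1-sigma uncertainty
```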

  18. Exploiting parallel computing with limited program changes using a network of microcomputers

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.

    1985-01-01

    Network computing and multiprocessor computers are two discernible trends in parallel processing. The computational behavior of an iterative distributed process in which some subtasks are completed later than others because of an imbalance in computational requirements is of significant interest. The effects of asynchronous processing were studied. A small existing program was converted to perform finite element analysis by distributing substructure analysis over a network of four Apple IIe microcomputers connected to a shared disk, simulating a parallel computer. The substructure analysis uses an iterative, fully stressed, structural resizing procedure. A framework of beams divided into three substructures is used as the finite element model. The effects of asynchronous processing on the convergence of the design variables are determined by not resizing particular substructures on various iterations.

  19. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    PubMed

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifacts on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied for improving performance. Both algorithms reduced artifacts while slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
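
    For orientation, the multiplicative ML-EM update that underlies both reconstruction variants can be sketched as follows; a generic system matrix is assumed, and OS-EM simply cycles the same update over ordered subsets of the projection rows to speed up convergence.

```python
import numpy as np

def ml_em(A, y, n_iter=50):
    """Maximum-likelihood expectation-maximization (ML-EM) reconstruction sketch.

    A : (n_rays, n_pixels) system matrix (forward projector)
    y : measured projection data
    """
    x = np.ones(A.shape[1])                 # non-negative initial image
    sens = A.sum(axis=0) + 1e-12            # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / (A @ x + 1e-12)         # measured over current forward projection
        x *= (A.T @ ratio) / sens           # multiplicative update keeps x >= 0
    return x
```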

  20. Simultaneous and iterative weighted regression analysis of toxicity tests using a microplate reader.

    PubMed

    Galgani, F; Cadiou, Y; Gilbert, F

    1992-04-01

    A system is described for the determination of LC50 or IC50 by an iterative process based on data obtained from a plate reader, using a marine unicellular alga as the target species. The esterase activity of Tetraselmis suecica on fluorescein diacetate as a substrate was measured using a fluorescence titerplate reader. Simultaneous analysis of the results was performed using an iterative process adopting the sigmoid function Y = y / [1 + (dose of toxicant / IC50)^slope] for the dose-response relationships. IC50 (+/- SEM) was estimated (P < 0.05). An application with phosalone as the toxicant is presented.
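
    A present-day equivalent of the described fit could use nonlinear least squares on the same sigmoid; the plate-reader readings below are hypothetical and purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(dose, top, ic50, slope):
    """Sigmoid from the abstract: Y = top / (1 + (dose / IC50)**slope)."""
    return top / (1.0 + (dose / ic50) ** slope)

# Hypothetical esterase-activity readings (arbitrary fluorescence units)
dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])   # toxicant concentration
resp = np.array([98.0, 95.0, 88.0, 70.0, 45.0, 18.0, 6.0])

popt, pcov = curve_fit(dose_response, dose, resp, p0=[100.0, 10.0, 1.0])
perr = np.sqrt(np.diag(pcov))                               # approximate SEM of the estimates
print(f"IC50 = {popt[1]:.2f} +/- {perr[1]:.2f}")
```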

  1. Experiments on water detritiation and cryogenic distillation at TLK; Impact on ITER fuel cycle subsystems interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cristescu, I.; Cristescu, I. R.; Doerr, L.

    2008-07-15

    The ITER Isotope Separation System (ISS) and Water Detritiation System (WDS) should be integrated in order to reduce potential chronic tritium emissions from the ISS. This is achieved by routing the top (protium) product from the ISS to a feed point near the bottom end of the WDS Liquid Phase Catalytic Exchange (LPCE) column. This provides an additional barrier against ISS emissions and should mitigate the memory effects due to process parameter fluctuations in the ISS. To support the research activities needed to characterize the performance of various components for the WDS and ISS processes under the various working conditions and configurations needed for the ITER design, an experimental facility called TRENTA, representative of the ITER WDS and ISS protium separation column, has been commissioned and is in operation at TLK. The experimental program on the TRENTA facility is conducted to provide the necessary design data for the relevant ITER operating modes. The operational availability and performance of the ISS-WDS have an impact on the ITER fuel cycle subsystems, with consequences for the design integration. Preliminary experimental data from the TRENTA facility are presented. (authors)

  2. Self-consistent Non-LTE Model of Infrared Molecular Emissions and Oxygen Dayglows in the Mesosphere and Lower Thermosphere

    NASA Technical Reports Server (NTRS)

    Feofilov, Artem G.; Yankovsky, Valentine A.; Pesnell, William D.; Kutepov, Alexander A.; Goldberg, Richard A.; Mauilova, Rada O.

    2007-01-01

    We present the new version of the ALI-ARMS (Accelerated Lambda Iterations for Atmospheric Radiation and Molecular Spectra) model. The model allows simultaneous, self-consistent calculation of the non-LTE populations of the electronic-vibrational levels of the O3 and O2 photolysis products and of the vibrational level populations of CO2, N2, O2, O3, H2O, CO and other molecules, with detailed accounting for the variety of electronic-vibrational, vibrational-vibrational and vibrational-translational energy exchange processes. The model was used as the reference model for modeling the O2 dayglows and infrared molecular emissions for self-consistent diagnostics of the multi-channel space observations of the MLT in the SABER experiment. It also allows re-evaluating the thermalization efficiency of the absorbed solar ultraviolet energy and the infrared radiative cooling/heating of the MLT by detailed accounting of the electronic-vibrational relaxation of excited photolysis products via the complex chain of collisional energy conversion processes down to the vibrational energy of optically active trace gas molecules.

  3. 3D shape reconstruction of specular surfaces by using phase measuring deflectometry

    NASA Astrophysics Data System (ADS)

    Zhou, Tian; Chen, Kun; Wei, Haoyun; Li, Yan

    2016-10-01

    The existing estimation methods for recovering height information from surface gradients are mainly divided into Modal and Zonal techniques. Since specular surfaces used in industry usually have complex shapes and large areas, consideration must be given both to improving measurement accuracy and to accelerating on-line processing speed, which is beyond the capacity of the existing estimation methods. Incorporating the Modal and Zonal approaches into a unifying scheme, we introduce an improved 3D shape reconstruction method for specular surfaces based on Phase Measuring Deflectometry in this paper. The Modal estimation is first implemented to derive coarse height information of the measured surface as initial iteration values. Then the real shape is recovered using a modified Zonal wave-front reconstruction algorithm. By combining the advantages of the Modal and Zonal estimations, the proposed method simultaneously achieves consistently high accuracy and rapid convergence. Moreover, the iterative process, based on an advanced successive over-relaxation technique, shows consistent rejection of measurement errors, guaranteeing stability and robustness in practical applications. Both simulation and experimental measurement demonstrate the validity and efficiency of the proposed method. According to the experimental results, the computation time decreases by approximately 74.92% in contrast to the Zonal estimation, and the surface error is about 6.68 μm with 391×529 reconstruction points for an experimentally measured spherical mirror. In general, this method converges quickly and accurately, providing an efficient, stable and real-time approach for the shape reconstruction of specular surfaces in practical situations.

  4. Test of prototype ITER vacuum ultraviolet spectrometer and its application to impurity study in KSTAR plasmas.

    PubMed

    Seon, C R; Hong, J H; Jang, J; Lee, S H; Choe, W; Lee, H H; Cheon, M S; Pak, S; Lee, H G; Biel, W; Barnsley, R

    2014-11-01

    To optimize the design of the ITER vacuum ultraviolet (VUV) spectrometer, a prototype VUV spectrometer was developed. The sensitivity calibration curve of the spectrometer was calculated from the mirror reflectivity, the grating efficiency, and the detector efficiency. The calibration curve was consistent with the calibration points derived in an experiment using a calibrated hollow cathode lamp. For its application, the prototype spectrometer was installed at KSTAR, and various impurity emission lines could be measured. By analyzing about 100 shots, a strong positive correlation between the O VI and the C IV emission intensities was found.

  5. Convergence of quasiparticle self-consistent GW calculations of transition metal monoxides

    NASA Astrophysics Data System (ADS)

    Das, Suvadip; Coulter, John E.; Manousakis, Efstratios

    2015-03-01

    We have investigated the electronic structure of the transition metal monoxides MnO, CoO, and NiO in their undistorted rock-salt structure within a fully iterated quasiparticle self-consistent GW (QPscGW) scheme. We have studied the convergence of the QPscGW method, i.e., how the quasiparticle energy eigenvalues and wavefunctions converge as a function of the QPscGW iterations, and compared the converged outputs obtained from different starting wavefunctions. We found that the convergence is slow and that a one-shot G0W0 calculation does not significantly improve the initial eigenvalues and states. In some cases the ``path'' to convergence may go through energy band reordering which cannot be captured by the simple initial unperturbed Hamiltonian. When a fully iterated solution is reached, the converged density of states, band gaps and magnetic moments of these oxides are found to be only weakly dependent on the choice of the starting wavefunctions and in reasonable agreement with experiment.

  6. An automatic iterative decision-making method for intuitionistic fuzzy linguistic preference relations

    NASA Astrophysics Data System (ADS)

    Pei, Lidan; Jin, Feifei; Ni, Zhiwei; Chen, Huayou; Tao, Zhifu

    2017-10-01

    As a new preference structure, the intuitionistic fuzzy linguistic preference relation (IFLPR) was recently introduced to efficiently deal with situations in which the membership and non-membership are represented as linguistic terms. In this paper, we study the issues of additive consistency and the derivation of the intuitionistic fuzzy weight vector of an IFLPR. First, the new concepts of order consistency, additive consistency and weak transitivity for IFLPRs are introduced, followed by a discussion of the characterisation of additive consistent IFLPRs. Then, a parameterised transformation approach is investigated to convert the normalised intuitionistic fuzzy weight vector into additive consistent IFLPRs. After that, a linear optimisation model is established to derive the normalised intuitionistic fuzzy weights for IFLPRs, and a consistency index is defined to measure the deviation degree between an IFLPR and its additive consistent IFLPR. Furthermore, we develop an automatic iterative decision-making method that improves IFLPRs with unacceptable additive consistency until the adjusted IFLPRs are of acceptable additive consistency, which helps the decision-maker to obtain reasonable and reliable decision-making results. Finally, an illustrative example is provided to demonstrate the validity and applicability of the proposed method.
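
    The automatic iterative improvement step can be illustrated with a much simpler, purely numerical analogue. The sketch below is an assumption-laden illustration rather than the intuitionistic fuzzy linguistic method of the paper: it repeatedly blends an additive fuzzy preference relation with its additively consistent counterpart until a deviation-based consistency index falls below a threshold; the function name, the blending factor theta and the threshold are illustrative.

      import numpy as np

      def improve_consistency(P, threshold=0.05, theta=0.5, max_iter=100):
          """Iteratively repair an additive fuzzy preference relation P
          (p_ij + p_ji = 1) until its deviation from the nearest additively
          consistent relation drops below the threshold (generic sketch)."""
          P = P.astype(float).copy()
          for _ in range(max_iter):
              w = P.mean(axis=1)                    # row means act as priorities
              C = 0.5 + w[:, None] - w[None, :]     # additively consistent counterpart
              ci = np.abs(P - C).mean()             # consistency index (mean deviation)
              if ci < threshold:
                  break
              P = (1.0 - theta) * P + theta * C     # automatic adjustment step
          return P, ci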

  7. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of being able to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial-frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure). An initial estimate of phase is used to start the algorithm on the inner loop, wherein multiple intensity images are processed, each using a different defocus value. The processing is done by an iterative-transform method, yielding individual phase estimates corresponding to each image of the defocus-diversity data set. These individual phase estimates are combined in a weighted average to form a new phase estimate, which serves as the initial phase estimate for either the next iteration of the iterative-transform method or, if the maximum number of iterations has been reached, for the next several steps, which constitute the outer-loop portion of the algorithm. The details of the next several steps must be omitted here for the sake of brevity. The overall effect of these steps is to adaptively update the diversity defocus values according to the recovery of global defocus in the phase estimate. Aberration recovery varies by differing amounts as the diversity defocus is updated in each image; thus, feedback is incorporated into the recovery process. This process is iterated until the global defocus error is driven to zero. The amplitude of the aberration may far exceed one wavelength after completion of the inner-loop portion of the algorithm, and the classical iterative-transform method does not, by itself, enable recovery of multi-wavelength aberrations. Hence, in the absence of a means of off-loading the multi-wavelength portion of the aberration, the algorithm would produce a wrapped phase map.
However, a special aberration-fitting procedure can be applied to the wrapped phase data to transfer at least some portion of the multi-wavelength aberration to the diversity function, wherein the data are treated as known phase values. In this way, a multi-wavelength aberration can be recovered incrementally by successively applying the aberration-fitting procedure to intermediate wrapped phase maps. During recovery, as more of the aberration is transferred to the diversity function following successive iterations around the outer loop, the estimated phase ceases to wrap in places where the aberration values become incorporated as part of the diversity function. As a result, as the aberration content is transferred to the diversity function, the phase estimate comes to resemble that of a reference flat.
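
    The inner loop — iterative-transform processing of each defocus-diverse image followed by a combination of the individual phase estimates — can be sketched generically as follows. This is a simplified Gerchberg-Saxton-style illustration under stated assumptions (single-FFT propagation, unit pupil amplitude, unweighted averaging, illustrative names); it is not the NASA implementation, and the adaptive outer loop that transfers recovered defocus into the diversity function is omitted.

      import numpy as np

      def inner_loop(psf_images, defocus_phases, pupil_mask, phase0, n_gs=20):
          """One inner-loop pass of a focus-diverse iterative-transform retrieval
          (generic sketch).  Each diversity image is processed by an
          iterative-transform (Gerchberg-Saxton-style) iteration; the resulting
          phase estimates are then combined, here by a plain average."""
          estimates = []
          for img, div in zip(psf_images, defocus_phases):
              phase = phase0.copy()
              amp_meas = np.sqrt(np.maximum(img, 0.0))            # measured focal amplitude
              for _ in range(n_gs):
                  field = pupil_mask * np.exp(1j * (phase + div))    # pupil plane
                  focal = np.fft.fftshift(np.fft.fft2(field))        # propagate to focus
                  focal = amp_meas * np.exp(1j * np.angle(focal))    # impose measured data
                  back = np.fft.ifft2(np.fft.ifftshift(focal))       # propagate back
                  phase = (np.angle(back) - div) * pupil_mask        # remove diversity phase
              estimates.append(phase)
          return np.mean(estimates, axis=0)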

  8. Modeling the dynamics of evaluation: a multilevel neural network implementation of the iterative reprocessing model.

    PubMed

    Ehret, Phillip J; Monroe, Brian M; Read, Stephen J

    2015-05-01

    We present a neural network implementation of central components of the iterative reprocessing (IR) model. The IR model argues that the evaluation of social stimuli (attitudes, stereotypes) is the result of the IR of stimuli in a hierarchy of neural systems: The evaluation of social stimuli develops and changes over processing. The network has a multilevel, bidirectional feedback evaluation system that integrates initial perceptual processing and later developing semantic processing. The network processes stimuli (e.g., an individual's appearance) over repeated iterations, with increasingly higher levels of semantic processing over time. As a result, the network's evaluations of stimuli evolve. We discuss the implications of the network for a number of different issues involved in attitudes and social evaluation. The success of the network supports the IR model framework and provides new insights into attitude theory. © 2014 by the Society for Personality and Social Psychology, Inc.

  9. US NDC Modernization Iteration E1 Prototyping Report: Processing Control Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prescott, Ryan; Hamlet, Benjamin R.

    2014-12-01

    During the first iteration of the US NDC Modernization Elaboration phase (E1), the SNL US NDC modernization project team developed an initial survey of applicable COTS solutions, and established exploratory prototyping related to the processing control framework in support of system architecture definition. This report summarizes these activities and discusses planned follow-on work.

  10. Foucauldian Iterative Learning Conversations--An Example of Organisational Change: Developing Conjoint-Work between EPS and Social Workers

    ERIC Educational Resources Information Center

    Apter, Brian

    2014-01-01

    An organisational change-process in a UK local authority (LA) over two years is examined using transcribed excerpts from three meetings. The change-process is analysed using a Foucauldian analytical tool--Iterative Learning Conversations (ILCS). An Educational Psychology Service was changed from being primarily an education-focussed…

  11. Human Engineering of Space Vehicle Displays and Controls

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Holden, Kritina L.; Boyer, Jennifer; Stephens, John-Paul; Ezer, Neta; Sandor, Aniko

    2010-01-01

    Proper attention to the integration of the human needs in the vehicle displays and controls design process creates a safe and productive environment for crew. Although this integration is critical for all phases of flight, for crew interfaces that are used during dynamic phases (e.g., ascent and entry), the integration is particularly important because of demanding environmental conditions. This panel addresses the process of how human engineering involvement ensures that human-system integration occurs early in the design and development process and continues throughout the lifecycle of a vehicle. This process includes the development of requirements and quantitative metrics to measure design success, research on fundamental design questions, human-in-the-loop evaluations, and iterative design. Processes and results from research on displays and controls; the creation and validation of usability, workload, and consistency metrics; and the design and evaluation of crew interfaces for NASA's Crew Exploration Vehicle are used as case studies.

  12. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
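
    The iterative procedure analysed in the report is closely related to what is now known as the EM algorithm for mixture models. As a hedged illustration only (standard textbook updates for the univariate case, not necessarily the exact procedure of the report), the iteration can be sketched as:

      import numpy as np

      def em_gaussian_mixture(x, means, sds, weights, n_iter=100):
          """Minimal EM-style iteration for a univariate mixture of normals.

          x is a 1-D data array; means, sds, weights are initial parameter
          guesses (one entry per component).  Each iteration re-estimates the
          parameters from the posterior component responsibilities."""
          means, sds, weights = (np.asarray(v, dtype=float) for v in (means, sds, weights))
          for _ in range(n_iter):
              # E-step: responsibility of each component for each sample
              dens = weights * np.exp(-0.5 * ((x[:, None] - means) / sds) ** 2) \
                     / (sds * np.sqrt(2 * np.pi))
              resp = dens / dens.sum(axis=1, keepdims=True)
              # M-step: weighted maximum-likelihood parameter updates
              nk = resp.sum(axis=0)
              weights = nk / len(x)
              means = (resp * x[:, None]).sum(axis=0) / nk
              sds = np.sqrt((resp * (x[:, None] - means) ** 2).sum(axis=0) / nk)
          return means, sds, weights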

  13. High power millimeter wave experiment of ITER relevant electron cyclotron heating and current drive system.

    PubMed

    Takahashi, K; Kajiwara, K; Oda, Y; Kasugai, A; Kobayashi, N; Sakamoto, K; Doane, J; Olstad, R; Henderson, M

    2011-06-01

    High power, long pulse millimeter (mm) wave experiments were performed at the RF test stand (RFTS) of the Japan Atomic Energy Agency (JAEA). The system has an ITER-relevant configuration, consisting of a 1 MW/170 GHz gyrotron, a long- and short-distance transmission line (TL), and an equatorial launcher (EL) mock-up. The TL is composed of a matching optics unit, evacuated circular corrugated waveguides, six miter bends, an in-line waveguide switch, and an isolation valve. The EL mock-up is fabricated according to the current design of the ITER launcher. Gaussian-like beam radiation from the EL mock-up, with a steering capability of 20°-40°, was also successfully demonstrated. The high power, long pulse transmission test was conducted with the metallic load replaced by the EL mock-up, and transmission of 1 MW/800 s and 0.5 MW/1000 s was successfully demonstrated with no arcing and no damage. The transmission efficiency of the TL was 96%. The results prove the feasibility of the ITER electron cyclotron heating and current drive system. © 2011 American Institute of Physics.

  14. Simultaneous gains tuning in boiler/turbine PID-based controller clusters using iterative feedback tuning methodology.

    PubMed

    Zhang, Shu; Taft, Cyrus W; Bentsman, Joseph; Hussey, Aaron; Petrus, Bryan

    2012-09-01

    Tuning a complex multi-loop PID-based control system requires considerable experience. In today's power industry the number of available qualified tuners is dwindling, and there is a great need for better tuning tools to maintain and improve the performance of complex multivariable processes. Multi-loop PID tuning is the procedure for the online tuning of a cluster of PID controllers operating in closed loop with a multivariable process. This paper presents the first application of the simultaneous tuning technique to a multi-input-multi-output (MIMO) PID-based nonlinear controller in the power plant control context, with the closed-loop system consisting of a MIMO nonlinear boiler/turbine model and a nonlinear cluster of six PID-type controllers. Although simplified, the dynamics and cross-coupling of the process and the PID cluster are similar to those used in a real power plant. The particular technique selected, iterative feedback tuning (IFT), utilizes the linearized version of the PID cluster for signal conditioning, but the data collection and tuning are carried out on the full nonlinear closed-loop system. Based on the figure of merit for control system performance, IFT is shown to deliver performance favorably comparable to that attained through empirical tuning carried out by an experienced control engineer. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  15. An iterative reduced field-of-view reconstruction for periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI.

    PubMed

    Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J

    2015-10-01

    To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of computation required for full-FOV iterative reconstructions has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphics processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using the object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using the data acquired from the PROPELLER MRI, the reconstructed images were then saved in the Digital Imaging and Communications in Medicine (DICOM) format. The proposed rFOV reconstruction reduced the gridding time by 97%, and the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structural similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). An in vivo study validated the increased signal-to-noise ratio, which is over four times higher than with density compensation. The image sharpness index was improved using the regularized reconstruction implemented. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstructions to shorten reconstruction duration.

  16. Cognitive representation of "musical fractals": Processing hierarchy and recursion in the auditory domain.

    PubMed

    Martins, Mauricio Dias; Gingras, Bruno; Puig-Waldmueller, Estela; Fitch, W Tecumseh

    2017-04-01

    The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built through either recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general 'musical' aptitudes played a role in both recursion and iteration, although the influence of musical training was somewhat independent of melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  17. Using In-Training Evaluation Report (ITER) Qualitative Comments to Assess Medical Students and Residents: A Systematic Review.

    PubMed

    Hatala, Rose; Sawatsky, Adam P; Dudek, Nancy; Ginsburg, Shiphra; Cook, David A

    2017-06-01

    In-training evaluation reports (ITERs) constitute an integral component of medical student and postgraduate physician trainee (resident) assessment. ITER narrative comments have received less attention than the numeric scores. The authors sought both to determine what validity evidence informs the use of narrative comments from ITERs for assessing medical students and residents and to identify evidence gaps. Reviewers searched for relevant English-language studies in MEDLINE, EMBASE, Scopus, and ERIC (last search June 5, 2015), and in reference lists and author files. They included all original studies that evaluated ITERs for qualitative assessment of medical students and residents. Working in duplicate, they selected articles for inclusion, evaluated quality, and abstracted information on validity evidence using Kane's framework (inferences of scoring, generalization, extrapolation, and implications). Of 777 potential articles, 22 met inclusion criteria. The scoring inference is supported by studies showing that rich narratives are possible, that changing the prompt can stimulate more robust narratives, and that comments vary by context. Generalization is supported by studies showing that narratives reach thematic saturation and that analysts make consistent judgments. Extrapolation is supported by favorable relationships between ITER narratives and numeric scores from ITERs and non-ITER performance measures, and by studies confirming that narratives reflect constructs deemed important in clinical work. Evidence supporting implications is scant. The use of ITER narratives for trainee assessment is generally supported, except that evidence is lacking for implications and decisions. Future research should seek to confirm implicit assumptions and evaluate the impact of decisions.

  18. Reducing Design Cycle Time and Cost Through Process Resequencing

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    2004-01-01

    In today's competitive environment, companies are under enormous pressure to reduce the time and cost of their design cycle. One method for reducing both time and cost is to develop an understanding of the flow of the design processes and the effects of the iterative subcycles that are found in complex design projects. Once these aspects are understood, the design manager can make decisions that take advantage of decomposition, concurrent engineering, and parallel processing techniques to reduce the total time and the total cost of the design cycle. One software tool that can aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). The DeMAID software minimizes the feedback couplings that create iterative subcycles, groups processes into iterative subcycles, and decomposes the subcycles into a hierarchical structure. The real benefits of producing the best design in the least time and at a minimum cost are obtained from sequencing the processes in the subcycles.

  19. From Intent to Action: An Iterative Engineering Process

    ERIC Educational Resources Information Center

    Mouton, Patrice; Rodet, Jacques; Vacaresse, Sylvain

    2015-01-01

    Quite by chance, and over the course of a few haphazard meetings, a Master's degree in "E-learning Design" gradually developed in a Faculty of Economics. Its original and evolving design was the result of an iterative process carried out, not by a single Instructional Designer (ID), but by a full ID team. Over the last 10 years it has…

  20. Iterated learning and the evolution of language.

    PubMed

    Kirby, Simon; Griffiths, Tom; Smith, Kenny

    2014-10-01

    Iterated learning describes the process whereby an individual learns their behaviour by exposure to another individual's behaviour, who themselves learnt it in the same way. It can be seen as a key mechanism of cultural evolution. We review various methods for understanding how behaviour is shaped by the iterated learning process: computational agent-based simulations; mathematical modelling; and laboratory experiments in humans and non-human animals. We show how this framework has been used to explain the origins of structure in language, and argue that cultural evolution must be considered alongside biological evolution in explanations of language origins. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Learning Efficient Sparse and Low Rank Models.

    PubMed

    Sprechmann, P; Bronstein, A M; Sapiro, G

    2015-09-01

    Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speed-up compared to the exact optimization algorithms.
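
    The proximal descent iteration whose unrolling yields such learnable pursuit architectures can be written compactly. The sketch below shows plain ISTA for the l1-regularized least-squares (sparse coding) problem; a learned fixed-complexity pursuit of the kind described would truncate this loop to a few steps and replace the matrices and threshold by trainable parameters. The names and constants are illustrative, not taken from the paper.

      import numpy as np

      def ista(A, b, lam, n_iter=200):
          """Iterative soft-thresholding (proximal descent) for
          min_x 0.5*||A x - b||^2 + lam*||x||_1  (generic sketch)."""
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              grad = A.T @ (A @ x - b)
              z = x - grad / L                   # gradient step
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
          return x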

  2. Analysis of one dimension migration law from rainfall runoff on urban roof

    NASA Astrophysics Data System (ADS)

    Weiwei, Chen

    2017-08-01

    Research was conducted on the hydrology and water quality processes under natural rainfall conditions, and water samples were collected and analyzed. The pollutants included SS, COD and TN. Based on the mass balance principle, a one-dimensional migration model was built for rainfall runoff pollution on the surface. The difference equation was developed according to the finite difference method and solved by applying the Newton iteration method. The simulated pollutant concentration process was consistent with the measured values, and the Nash-Sutcliffe coefficient was higher than 0.80. The model has good practicability and provides evidence for effectively utilizing urban rainfall resources, developing management technologies and measures for non-point source pollution, sponge city construction, and so on.
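
    The solution step named in the abstract — a Newton iteration applied to the nonlinear difference equation at each grid step — can be illustrated with a generic scalar sketch. The residual used below is a placeholder, not the actual discretised transport equation of the paper.

      def newton(f, dfdx, x0, tol=1e-10, max_iter=50):
          """Generic Newton iteration for a scalar nonlinear equation f(x) = 0,
          as used to solve an implicit difference equation at each step;
          the previous-step value is the natural starting guess x0."""
          x = float(x0)
          for _ in range(max_iter):
              step = f(x) / dfdx(x)
              x -= step
              if abs(step) < tol:
                  break
          return x

      # illustrative residual only (not the paper's discretised equation)
      root = newton(lambda c: c**3 + 2.0*c - 3.0, lambda c: 3.0*c**2 + 2.0, x0=1.5)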

  3. Twostep-by-twostep PIRK-type PC methods with continuous output formulas

    NASA Astrophysics Data System (ADS)

    Cong, Nguyen Huu; Xuan, Le Ngoc

    2008-11-01

    This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. In this case, the integration process can proceed twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) yield a faster integration process. Fixed-stepsize applications of these TBTPIRKC methods to a few widely-used test problems reveal that the new PC methods are much more efficient when compared with the well-known parallel-iterated RK methods (PIRK methods), parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods) and the sequential explicit RK codes DOPRI5 and DOP853 available from the literature.

  4. Kernel-based least squares policy iteration for reinforcement learning.

    PubMed

    Xu, Xin; Hu, Dewen; Lu, Xicheng

    2007-07-01

    In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge of the dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that eliminate the main difficulties of existing results. One is the better convergence and (near) optimality guarantee obtained by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantee for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information about uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating an initial controller to ensure online performance.
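
    The policy-evaluation step can be made concrete with a linear-feature analogue of KLSTD-Q; the kernelised version and the ALD sparsification of the paper are omitted, and the names and regularisation constant below are illustrative assumptions. Alternating this evaluation with greedy improvement of the policy gives the least-squares policy iteration loop.

      import numpy as np

      def lstd_q(samples, phi, policy, gamma=0.95):
          """Least-squares temporal-difference evaluation of a Q-function
          (generic linear-feature sketch).  samples: list of (s, a, r, s_next);
          phi(s, a) returns a feature vector; policy(s) returns the action of
          the policy being evaluated."""
          k = len(phi(*samples[0][:2]))
          A = np.zeros((k, k))
          b = np.zeros(k)
          for s, a, r, s_next in samples:
              f = phi(s, a)
              f_next = phi(s_next, policy(s_next))
              A += np.outer(f, f - gamma * f_next)
              b += r * f
          return np.linalg.solve(A + 1e-6 * np.eye(k), b)   # Q-weights for the policy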

  5. A new solution procedure for a nonlinear infinite beam equation of motion

    NASA Astrophysics Data System (ADS)

    Jang, T. S.

    2016-10-01

    The goal of this paper is a purely theoretical question, which would nevertheless be fundamental in computational partial differential equations: can a linear solution-structure for the equation of motion of an infinite nonlinear beam be directly manipulated to construct its nonlinear solution? Here, the equation of motion is modeled mathematically as a fourth-order nonlinear partial differential equation. To answer the question, a pseudo-parameter is first introduced to modify the equation of motion. An integral formalism for the modified equation is then found, which is taken as a linear solution-structure. It enables us to formulate a nonlinear integral equation of the second kind, equivalent to the original equation of motion. The fixed point approach, applied to the integral equation, results in a new iterative solution procedure for constructing the nonlinear solution of the original beam equation of motion, whose iterative process consists only of simple regular numerical integration; i.e., it is fairly simple and straightforward to apply. A mathematical analysis of both the convergence and the uniqueness of the iterative procedure is carried out by proving the contractive character of a nonlinear operator. It follows, therefore, that the procedure is a useful nonlinear strategy for integrating the equation of motion of a nonlinear infinite beam, whereby the preceding question may be answered. In addition, it is worth noticing that the pseudo-parameter introduced here plays a double role: first, it connects the original beam equation of motion with the integral equation; second, it is related to the convergence of the iterative method proposed here.
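
    Since the proposed procedure reduces to fixed-point iteration combined with plain numerical quadrature, its structure can be conveyed by a short generic sketch. The kernel, forcing term and nonlinearity below are toy placeholders, not the beam formulation of the paper.

      import numpy as np

      def picard_iteration(g, kernel, nonlin, x, n_iter=50):
          """Fixed-point (Picard) iteration for a nonlinear integral equation of
          the second kind,  u(x) = g(x) + int kernel(x, s) * nonlin(u(s)) ds,
          using trapezoidal quadrature on the grid x (generic sketch)."""
          u = g(x).astype(float)                    # initial guess: the forcing term
          K = kernel(x[:, None], x[None, :])        # kernel sampled on the grid
          w = np.full(x.size, x[1] - x[0])          # trapezoid quadrature weights
          w[0] *= 0.5
          w[-1] *= 0.5
          for _ in range(n_iter):
              u = g(x) + (K * nonlin(u)[None, :]) @ w
          return u

      # toy example (all functions are illustrative placeholders)
      x = np.linspace(0.0, 1.0, 201)
      u = picard_iteration(lambda t: np.sin(t),
                           lambda s, t: 0.1 * np.exp(-(s - t) ** 2),
                           lambda v: v / (1.0 + v ** 2),
                           x)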

  6. Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.

    2004-01-01

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial and error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then gives either a new local optima and/or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with increase in dimensionality. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.

  7. A superlinear interior points algorithm for engineering design optimization

    NASA Technical Reports Server (NTRS)

    Herskovits, J.; Asquier, J.

    1990-01-01

    We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization inasmuch as a feasible design is obtained at each iteration. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.

  8. Multidisciplinary systems optimization by linear decomposition

    NASA Technical Reports Server (NTRS)

    Sobieski, J.

    1984-01-01

    In a typical design process major decisions are made sequentially. An illustrated example is given for an aircraft design in which the aerodynamic shape is usually decided first, then the airframe is sized for strength and so forth. An analogous sequence could be laid out for any other major industrial product, for instance, a ship. The loops in the discipline boxes symbolize iterative design improvements carried out within the confines of a single engineering discipline, or subsystem. The loops spanning several boxes depict multidisciplinary design improvement iterations. Omitted for graphical simplicity is parallelism of the disciplinary subtasks. The parallelism is important in order to develop a broad workfront necessary to shorten the design time. If all the intradisciplinary and interdisciplinary iterations were carried out to convergence, the process could yield a numerically optimal design. However, it usually stops short of that because of time and money limitations. This is especially true for the interdisciplinary iterations.

  9. Integrating Low-Cost Rapid Usability Testing into Agile System Development of Healthcare IT: A Methodological Perspective.

    PubMed

    Kushniruk, Andre W; Borycki, Elizabeth M

    2015-01-01

    The development of more usable and effective healthcare information systems has become a critical issue. In the software industry methodologies such as agile and iterative development processes have emerged to lead to more effective and usable systems. These approaches highlight focusing on user needs and promoting iterative and flexible development practices. Evaluation and testing of iterative agile development cycles is considered an important part of the agile methodology and iterative processes for system design and re-design. However, the issue of how to effectively integrate usability testing methods into rapid and flexible agile design cycles has remained to be fully explored. In this paper we describe our application of an approach known as low-cost rapid usability testing as it has been applied within agile system development in healthcare. The advantages of the integrative approach are described, along with current methodological considerations.

  10. Experimental demonstration of iterative post-equalization algorithm for 37.5-Gbaud PM-16QAM quad-carrier Terabit superchannel.

    PubMed

    Jia, Zhensheng; Chien, Hung-Chang; Cai, Yi; Yu, Jianjun; Zhang, Chengliang; Li, Junjie; Ma, Yiran; Shang, Dongdong; Zhang, Qi; Shi, Sheping; Wang, Huitao

    2015-02-09

    We experimentally demonstrate a quad-carrier 1-Tb/s solution with a 37.5-Gbaud PM-16QAM signal over a 37.5-GHz optical grid at 6.7 b/s/Hz net spectral efficiency. Digital Nyquist pulse shaping at the transmitter and post-equalization at the receiver are employed to mitigate joint inter-symbol-interference (ISI) and inter-channel-interference (ICI) degradation. The post-equalization algorithms consist of a one-sample-per-symbol decision-directed least mean square (DD-LMS) adaptive filter, a digital post filter and maximum likelihood sequence estimation (MLSE), with a positive iterative process among them. By combining these algorithms, an improvement of as much as 4 dB in OSNR (0.1 nm) at the SD-FEC limit (Q² = 6.25, corresponding to BER = 2.0e-2) is obtained compared with the case without post-equalization, and transmission over an 820-km EDFA-only standard single-mode fiber (SSMF) link is achieved for two 1.2-Tb/s signals with an averaged Q² factor larger than 6.5 dB for all sub-channels. Additionally, 50-Gbaud 16QAM operating at 1.28 samples/symbol in a DAC is also investigated, and successful transmission over a 410-km SSMF link is achieved on a 62.5-GHz optical grid.
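
    Of the post-equalization stages listed, the decision-directed LMS filter is the simplest to sketch. The fragment below is a generic one-sample-per-symbol DD-LMS equalizer; tap count, step size and initialisation are illustrative assumptions, and the digital post filter and MLSE stages are not shown.

      import numpy as np

      def dd_lms(rx, constellation, n_taps=11, mu=1e-3):
          """Decision-directed LMS equalizer (generic sketch).  The filter output
          is sliced to the nearest constellation point and the decision error
          drives the tap update."""
          w = np.zeros(n_taps, dtype=complex)
          w[n_taps // 2] = 1.0                       # centre-spike initialisation
          out = np.zeros(len(rx) - n_taps, dtype=complex)
          for n in range(len(out)):
              x = rx[n:n + n_taps][::-1]             # filter input vector
              y = np.dot(w, x)                       # equalizer output
              decision = constellation[np.argmin(np.abs(constellation - y))]
              e = decision - y                       # decision-directed error
              w += mu * e * np.conj(x)               # LMS tap update
              out[n] = y
          return out, w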

  11. To Boldly Go Where No Man has Gone Before: Seeking Gaia's Astrometric Solution with AGIS

    NASA Astrophysics Data System (ADS)

    Lammers, U.; Lindegren, L.; O'Mullane, W.; Hobbs, D.

    2009-09-01

    Gaia is ESA's ambitious space astrometry mission with a foreseen launch date in late 2011. Its main objective is to perform a stellar census of the 1,000 million brightest objects in our galaxy (completeness to V=20 mag) from which an astrometric catalog of micro-arcsec (μas) level accuracy will be constructed. A key element in this endeavor is the Astrometric Global Iterative Solution (AGIS) - the mathematical and numerical framework for combining the ≈80 available observations per star obtained during Gaia's 5 yr lifetime into a single global astrometric solution. AGIS consists of four main algorithmic cores which improve the source astrometric parameters, satellite attitude, calibration, and global parameters in a block-iterative manner. We present and discuss this basic scheme, the algorithms themselves and the overarching system architecture. The latter is a data-driven distributed processing framework designed to achieve an overall system performance that is not I/O limited. AGIS is being developed as a pure Java system by a small number of geographically distributed European groups. We present some of the software engineering aspects of the project and show the methodologies and tools used. Finally, we briefly discuss how AGIS is embedded into the overall Gaia data processing architecture.

  12. Quantized Iterative Learning Consensus Tracking of Digital Networks With Limited Information Communication.

    PubMed

    Xiong, Wenjun; Yu, Xinghuo; Chen, Yao; Gao, Jie

    2017-06-01

    This brief investigates the quantized iterative learning problem for digital networks with time-varying topologies. The information is first encoded as symbolic data and then transmitted. After the data are received, a decoder is used by the receiver to get an estimate of the sender's state. Iterative learning quantized communication is considered in the process of encoding and decoding. A sufficient condition is then presented to achieve the consensus tracking problem in a finite interval using the quantized iterative learning controllers. Finally, simulation results are given to illustrate the usefulness of the developed criterion.
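
    A much-reduced sketch of the idea — an iterative learning update driven by quantized tracking-error information — is given below. The uniform quantizer stands in for the encoder/decoder pair of the brief, and the P-type learning law, gain and quantization step are illustrative assumptions only.

      import numpy as np

      def quantized_ilc(plant, y_ref, n_trials=30, gain=0.5, q_step=0.05):
          """Iterative learning update with quantized error feedback (sketch).
          plant maps an input sequence to an output sequence; the tracking
          error is uniformly quantized before the learning update."""
          u = np.zeros_like(y_ref, dtype=float)
          for _ in range(n_trials):
              e = y_ref - plant(u)                  # trial tracking error
              e_q = q_step * np.round(e / q_step)   # uniform quantization of the error
              u = u + gain * e_q                    # P-type iterative learning update
          return u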

  13. Gaia DR2 documentation Chapter 3: Astrometry

    NASA Astrophysics Data System (ADS)

    Hobbs, D.; Lindegren, L.; Bastian, U.; Klioner, S.; Butkevich, A.; Stephenson, C.; Hernandez, J.; Lammers, U.; Bombrun, A.; Mignard, F.; Altmann, M.; Davidson, M.; de Bruijne, J. H. J.; Fernández-Hernández, J.; Siddiqui, H.; Utrilla Molina, E.

    2018-04-01

    This chapter of the Gaia DR2 documentation describes the models and processing steps used for the astrometric core solution, namely, the Astrometric Global Iterative Solution (AGIS). The inputs to this solution rely heavily on the basic observables (or astrometric elementaries) which have been pre-processed and discussed in Chapter 2, the results of which were published in Fabricius et al. (2016). The models consist of reference systems and time scales; assumed linear stellar motion and relativistic light deflection; in addition to fundamental constants and the transformation of coordinate systems. Higher level inputs such as planetary and solar system ephemerides, Gaia tracking and orbit information, initial quasar catalogues and BAM data are all needed for the processing described here. The astrometric calibration models are outlined, followed by the detailed processing steps that give AGIS its name. We also present a basic quality assessment and validation of the scientific results (for details, see Lindegren et al. 2018).

  14. Adaptive dynamic programming for discrete-time linear quadratic regulation based on multirate generalised policy iteration

    NASA Astrophysics Data System (ADS)

    Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho

    2018-06-01

    In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm that consists of the approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP), based on multirate GPI, use multi-step estimation (M-step Bellman equation) at the approximate policy evaluation step for estimating the value function and its gradient, called the costate, respectively. Then, we show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, the so-called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. General convergence properties in terms of eigenvalues are also studied. The data-driven online implementation methods for the proposed HDP and DHP are demonstrated, and finally we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.
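
    For reference, the underlying single-rate policy-iteration recursion for discrete-time LQR — the model-based, offline analogue of the policy-evaluation and policy-improvement steps that the multirate GPI schemes generalise to multi-step, data-driven form — can be sketched as follows. K0 is assumed stabilising, and the helper names are illustrative.

      import numpy as np
      from scipy.linalg import solve_discrete_lyapunov

      def lqr_policy_iteration(A, B, Q, R, K0, n_iter=30):
          """Model-based policy iteration for discrete-time LQR (generic sketch):
          evaluate the current gain by solving a Lyapunov equation, then improve
          the gain from the resulting cost matrix."""
          K = K0
          for _ in range(n_iter):
              Acl = A - B @ K
              # policy evaluation: cost matrix P of the current gain K
              P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
              # policy improvement
              K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
          return K, P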

  15. Reentrant processing mediates object substitution masking: comment on Põder (2013).

    PubMed

    Di Lollo, Vincent

    2014-01-01

    Object-substitution masking (OSM) occurs when a target stimulus and a surrounding mask are displayed briefly together, and the display then continues with the mask alone. Target identification is accurate when the stimuli co-terminate but is progressively impaired as the duration of the trailing mask is increased. In reentrant accounts, OSM is said to arise from iterative exchanges between brain regions connected by two-way pathways. In an alternative account, OSM is explained on the basis of exclusively feed-forward processes, without recourse to reentry. Here I show that the feed-forward account runs afoul of the extant phenomenological, behavioral, brain-imaging, and electrophysiological evidence. Further, the feed-forward assumption that masking occurs when attention finds a degraded target is shown to be entirely ad hoc. In contrast, the evidence is uniformly consistent with a reentrant-processing account of OSM.

  16. A methodology for automatic intensity-modulated radiation treatment planning for lung cancer

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaodong; Li, Xiaoqiang; Quan, Enzhuo M.; Pan, Xiaoning; Li, Yupeng

    2011-07-01

    In intensity-modulated radiotherapy (IMRT), the quality of the treatment plan, which is highly dependent upon the treatment planner's level of experience, greatly affects the potential benefits of the radiotherapy (RT). Furthermore, the planning process is complicated and requires a great deal of iteration, and is often the most time-consuming aspect of the RT process. In this paper, we describe a methodology to automate the IMRT planning process in lung cancer cases, the goal being to improve the quality and consistency of treatment planning. This methodology (1) automatically sets beam angles based on a beam angle automation algorithm, (2) judiciously designs the planning structures, which were shown to be effective for all the lung cancer cases we studied, and (3) automatically adjusts the objectives of the objective function based on a parameter automation algorithm. We compared treatment plans created in this system (mdaccAutoPlan) based on the overall methodology with plans from a clinical trial of IMRT for lung cancer run at our institution. The 'autoplans' were consistently better, or no worse, than the plans produced by experienced medical dosimetrists in terms of tumor coverage and normal tissue sparing. We conclude that the mdaccAutoPlan system can potentially improve the quality and consistency of treatment planning for lung cancer.

  17. SPIRiT: Iterative Self-consistent Parallel Imaging Reconstruction from Arbitrary k-Space

    PubMed Central

    Lustig, Michael; Pauly, John M.

    2010-01-01

    A new approach to autocalibrating, coil-by-coil parallel imaging reconstruction is presented. It is a generalized reconstruction framework based on self-consistency. The reconstruction problem is formulated as an optimization that yields the solution most consistent with the calibration and acquisition data. The approach is general and can accurately reconstruct images from arbitrary k-space sampling patterns. The formulation can flexibly incorporate additional image priors such as off-resonance correction and regularization terms that appear in compressed sensing. Several iterative strategies to solve the posed reconstruction problem in both the image and k-space domains are presented. These are based on projection onto convex sets (POCS) and conjugate gradient (CG) algorithms. Phantom and in-vivo studies demonstrate efficient reconstructions from undersampled Cartesian and spiral trajectories. Reconstructions that include off-resonance correction and nonlinear ℓ1-wavelet regularization are also demonstrated. PMID:20665790

  18. Communication: A difference density picture for the self-consistent field ansatz.

    PubMed

    Parrish, Robert M; Liu, Fang; Martínez, Todd J

    2016-04-07

    We formulate self-consistent field (SCF) theory in terms of an interaction picture where the working variable is the difference density matrix between the true system and a corresponding superposition of atomic densities. As the difference density matrix directly represents the electronic deformations inherent in chemical bonding, this "difference self-consistent field (dSCF)" picture provides a number of significant conceptual and computational advantages. We show that this allows for a stable and efficient dSCF iterative procedure with wholly single-precision Coulomb and exchange matrix builds. We also show that the dSCF iterative procedure can be performed with aggressive screening of the pair space. These approximations are tested and found to be accurate for systems with up to 1860 atoms and >10 000 basis functions, providing for immediate overall speedups of up to 70% in the heavily optimized TeraChem SCF implementation.

  19. Communication: A difference density picture for the self-consistent field ansatz

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Liu, Fang; Martínez, Todd J.

    2016-04-01

    We formulate self-consistent field (SCF) theory in terms of an interaction picture where the working variable is the difference density matrix between the true system and a corresponding superposition of atomic densities. As the difference density matrix directly represents the electronic deformations inherent in chemical bonding, this "difference self-consistent field (dSCF)" picture provides a number of significant conceptual and computational advantages. We show that this allows for a stable and efficient dSCF iterative procedure with wholly single-precision Coulomb and exchange matrix builds. We also show that the dSCF iterative procedure can be performed with aggressive screening of the pair space. These approximations are tested and found to be accurate for systems with up to 1860 atoms and >10 000 basis functions, providing for immediate overall speedups of up to 70% in the heavily optimized TeraChem SCF implementation.

  20. Conceptual study of the cryocascade for pumping, separation and recycling of ITER torus exhaust

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mack, A.; Perinic, D.

    1994-12-31

    For pumping, separation and recycling of the ITER plasma exhaust, a pumping system working reliably under ambient and operating conditions of the ITER reactor is required. The pump is exposed to a magnetic field of about 3 T and has to be resistant to radioactive irradiation of 10^9 rad. In the burn and dwell phase, a gas mixture consisting of hydrogen isotopes, helium ash and impurities has to be pumped at the pressure range of 10^-2 - 10^-1 mbar. Within the framework of the European Fusion Technology Programme, the concept of a primary cryopump for use in ITER is being prepared at KfK. The cryocascade concept is planned to include three pump stages. These pump stages, which are connected in series, consist of individual chambers that may be separated from each other by means of cold valves. In the first stage, the impurities of the reactor exhaust gas are frozen out at 20-30 K. Settling of the hydrogen isotopes H/D/T on the 5 K cryosurfaces takes place in the second stage. This stage is made up of two parallel chambers, which can be switched from the pumping to the regeneration mode or vice versa. The helium fraction is bound in the downstream 5 K adsorption stage.

  1. Region of interest processing for iterative reconstruction in x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Kopp, Felix K.; Nasirudin, Radin A.; Mei, Kai; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Noël, Peter B.

    2015-03-01

    The recent advancements in the graphics card technology raised the performance of parallel computing and contributed to the introduction of iterative reconstruction methods for x-ray computed tomography in clinical CT scanners. Iterative maximum likelihood (ML) based reconstruction methods are known to reduce image noise and to improve the diagnostic quality of low-dose CT. However, iterative reconstruction of a region of interest (ROI), especially ML based, is challenging. But for some clinical procedures, like cardiac CT, only a ROI is needed for diagnostics. A high-resolution reconstruction of the full field of view (FOV) consumes unnecessary computation effort that results in a slower reconstruction than clinically acceptable. In this work, we present an extension and evaluation of an existing ROI processing algorithm. Especially improvements for the equalization between regions inside and outside of a ROI are proposed. The evaluation was done on data collected from a clinical CT scanner. The performance of the different algorithms is qualitatively and quantitatively assessed. Our solution to the ROI problem provides an increase in signal-to-noise ratio and leads to visually less noise in the final reconstruction. The reconstruction speed of our technique was observed to be comparable with other previous proposed techniques. The development of ROI processing algorithms in combination with iterative reconstruction will provide higher diagnostic quality in the near future.

  2. Accelerated perturbation-resilient block-iterative projection methods with application to image reconstruction

    PubMed Central

    Nikazad, T; Davidi, R; Herman, G. T.

    2013-01-01

    We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a more than an order of magnitude speed-up, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are illustrated to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data. PMID:23440911

  3. Accelerated perturbation-resilient block-iterative projection methods with application to image reconstruction.

    PubMed

    Nikazad, T; Davidi, R; Herman, G T

    2012-03-01

    We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a more than an order of magnitude speed-up, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are illustrated to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data.

  4. Fast Time Response Electromagnetic Particle Injection System for Disruption Mitigation

    NASA Astrophysics Data System (ADS)

    Raman, Roger; Lay, W.-S.; Jarboe, T. R.; Menard, J. E.; Ono, M.

    2017-10-01

    Predicting and controlling disruptions is an urgent issue for ITER. In this proposed method, a radiative payload consisting of micro spheres of Be, BN, B, or other acceptable low-Z materials would be injected inside the q = 2 surface for thermal and runaway electron mitigation. The radiative payload would be accelerated to the required velocities (0.2 to >1 km/s) in an Electromagnetic Particle Injector (EPI). An important advantage of the EPI system is that it could be positioned very close to the reactor vessel. This has the added benefit that the external field near a high-field tokamak dramatically improves the injector performance, while simultaneously reducing the system response time. An NSTX-U/DIII-D scale system has been tested off-line to verify the critical parameters - the projected system response time and attainable velocities. Both are consistent with the model calculations, giving confidence that an ITER-scale system could be built to ensure safety of the ITER device. This work is supported by U.S. DOE Contracts: DE-AC02-09CH11466, DE-FG02-99ER54519 AM08, and DE-SC0006757.

  5. A 2D systems approach to iterative learning control for discrete linear processes with zero Markov parameters

    NASA Astrophysics Data System (ADS)

    Hladowski, Lukasz; Galkowski, Krzysztof; Cai, Zhonglun; Rogers, Eric; Freeman, Chris T.; Lewin, Paul L.

    2011-07-01

    In this article a new approach to iterative learning control for the practically relevant case of deterministic discrete linear plants with uniform rank greater than unity is developed. The analysis is undertaken in a 2D systems setting that, by using a strong form of stability for linear repetitive processes, allows simultaneous consideration of both trial-to-trial error convergence and along the trial performance, resulting in design algorithms that can be computed using linear matrix inequalities (LMIs). Finally, the control laws are experimentally verified on a gantry robot that replicates a pick and place operation commonly found in a number of applications to which iterative learning control is applicable.

  6. Continuous analog of multiplicative algebraic reconstruction technique for computed tomography

    NASA Astrophysics Data System (ADS)

    Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya

    2016-03-01

    We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation by using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula of the BI-MART with the scaling parameter as a time-step of numerical discretization. The present paper is the first to reveal that a kind of iterative image reconstruction algorithm is constructed by the discretization of a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms with not only the Euler method but also the Runge-Kutta methods of lower-orders applied for discretizing the continuous-time system can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
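
    For readers unfamiliar with the discrete algorithm being mimicked, the following is a rough sketch of a block-iterative MART update, in which the scaling parameter gamma plays the role of the time step mentioned in the abstract. It is a generic illustration under the usual nonnegativity assumptions, not the authors' continuous-time formulation.

```python
import numpy as np

def bi_mart(A, b, blocks, n_sweeps=200, gamma=0.5, x0=None):
    """Block-iterative MART sketch: a nonnegative image x is updated
    multiplicatively, one row block at a time. A and b are assumed nonnegative
    (typical for tomography) and gamma*a_ij should stay <= 1."""
    m, n = A.shape
    x = np.ones(n) if x0 is None else x0.astype(float).copy()
    for _ in range(n_sweeps):
        for idx in blocks:
            Ax = A[idx] @ x
            ratio = b[idx] / np.maximum(Ax, 1e-12)
            # x_j <- x_j * prod_i ratio_i**(gamma*a_ij), written via exp/log
            x *= np.exp(gamma * (A[idx].T @ np.log(ratio)))
    return x

# toy usage with a small consistent nonnegative system
A = np.array([[1., 2., 0.], [0., 1., 1.], [2., 0., 1.], [1., 1., 1.]])
x_true = np.array([0.5, 1.0, 2.0])
b = A @ x_true
print(np.round(bi_mart(A, b, [np.arange(2), np.arange(2, 4)], gamma=0.4), 3))
```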

  7. Extended Kalman filtering for the detection of damage in linear mechanical structures

    NASA Astrophysics Data System (ADS)

    Liu, X.; Escamilla-Ambrosio, P. J.; Lieven, N. A. J.

    2009-09-01

    This paper addresses the problem of assessing the location and extent of damage in a vibrating structure by means of vibration measurements. Frequency domain identification methods (e.g. finite element model updating) have been widely used in this area, while time domain methods, such as the extended Kalman filter (EKF) method, are more sparsely represented. The difficulty of applying the EKF to mechanical system damage identification and localisation lies in the high computational cost and in the dependence of the estimation results on the initial estimation error covariance matrix P(0), on the initial values of the parameters to be estimated, and on the statistics of the measurement noise R and process noise Q. To resolve these problems, a multiple model adaptive estimator consisting of a bank of EKFs in the modal domain was designed; each filter in the bank is based on a different P(0). The algorithm was iterated using the weighted global iteration method. A fuzzy logic model was incorporated in each filter to estimate the variance of the measurement noise R. The application of the method is illustrated with simulated and real examples.
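
    As a toy illustration of the bank-of-filters idea (not the paper's modal-domain EKF with weighted global iteration and fuzzy noise estimation), the sketch below runs several scalar Kalman filters that differ only in their initial covariance P(0) and blends them with recursive likelihood weights; for this linear random-walk model the EKF reduces to the ordinary Kalman filter.

```python
import numpy as np

def kf_step(x, P, z, q, r):
    """One predict/update step of a scalar random-walk Kalman filter.
    Returns updated state, covariance and the measurement likelihood."""
    P = P + q                       # predict
    S = P + r                       # innovation variance
    K = P / S                       # Kalman gain
    innov = z - x
    x = x + K * innov
    P = (1.0 - K) * P
    lik = np.exp(-0.5 * innov**2 / S) / np.sqrt(2.0 * np.pi * S)
    return x, P, lik

def mmae_estimate(zs, P0_bank, q=1e-4, r=0.05):
    """Bank of filters, one per initial covariance P(0); the combined estimate
    is the likelihood-weighted average of the bank (toy illustration only)."""
    xs = np.zeros(len(P0_bank))             # common initial state guess
    Ps = np.array(P0_bank, dtype=float)
    w = np.ones(len(P0_bank)) / len(P0_bank)
    for z in zs:
        liks = np.empty(len(P0_bank))
        for i in range(len(P0_bank)):
            xs[i], Ps[i], liks[i] = kf_step(xs[i], Ps[i], z, q, r)
        w = w * liks
        w /= w.sum()                        # recursive Bayes weights
    return float(w @ xs), w

rng = np.random.default_rng(0)
zs = 1.0 + np.sqrt(0.05) * rng.standard_normal(200)   # noisy measurements of 1.0
est, weights = mmae_estimate(zs, P0_bank=[1e-3, 1e-1, 10.0])
print(round(est, 3), np.round(weights, 3))
```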

  8. A Technique for Transient Thermal Testing of Thick Structures

    NASA Technical Reports Server (NTRS)

    Horn, Thomas J.; Richards, W. Lance; Gong, Leslie

    1997-01-01

    A new open-loop heat flux control technique has been developed to conduct transient thermal testing of thick, thermally-conductive aerospace structures. This technique uses calibration of the radiant heater system power level as a function of heat flux, predicted aerodynamic heat flux, and the properties of an instrumented test article. An iterative process was used to generate open-loop heater power profiles prior to each transient thermal test. Differences between the measured and predicted surface temperatures were used to refine the heater power level command profiles through the iteration process. This iteration process has reduced the effects of environmental and test system design factors, which are normally compensated for by closed-loop temperature control, to acceptable levels. The final revised heater power profiles resulted in measured temperature time histories which deviated by less than 25 °F from the predicted surface temperatures.
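
    The refinement loop described above amounts to correcting the commanded power at each time point in proportion to the surface-temperature error from the previous test. The sketch below is a schematic of that idea only; the gain, limits and data are invented for illustration.

```python
import numpy as np

def refine_power_profile(power_cmd, T_measured, T_predicted, gain=0.5,
                         p_min=0.0, p_max=None):
    """One iteration of an open-loop refinement: the heater power command at
    each time point is nudged in proportion to the temperature error from the
    previous test. Gain and limits are illustrative assumptions."""
    power_cmd = np.asarray(power_cmd, dtype=float)
    error = np.asarray(T_predicted, dtype=float) - np.asarray(T_measured, dtype=float)
    new_cmd = power_cmd + gain * error        # under-heating -> raise power
    if p_max is not None:
        new_cmd = np.clip(new_cmd, p_min, p_max)
    return new_cmd

# toy usage: the measured surface ran 10-20 degrees cooler than predicted
cmd = np.array([10.0, 20.0, 35.0, 50.0])
print(refine_power_profile(cmd, T_measured=[490, 640, 780, 900],
                           T_predicted=[500, 660, 800, 910]))
```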

  9. Measuring Critical Care Providers' Attitudes About Controlled Donation After Circulatory Death.

    PubMed

    Rodrigue, James R; Luskin, Richard; Nelson, Helen; Glazier, Alexandra; Henderson, Galen V; Delmonico, Francis L

    2018-06-01

    Unfavorable attitudes and insufficient knowledge about donation after cardiac death among critical care providers can have important consequences for the appropriate identification of potential donors, consistent implementation of donation after cardiac death policies, and relative strength of support for this type of donation. The lack of reliable and valid assessment measures has hampered research to capture providers' attitudes. Design and Research Aims: Using stakeholder engagement and an iterative process, we developed a questionnaire to measure attitudes toward donation after cardiac death among critical care providers (n = 112) and examined its psychometric properties. Exploratory factor analysis, internal consistency, and validity analyses were conducted to examine the measure. A 34-item questionnaire consisting of 4 factors (Personal Comfort, Process Satisfaction, Family Comfort, and System Trust) provided the most parsimonious fit. Internal consistency was acceptable for each of the subscales and the total questionnaire (Cronbach α > .70). A strong association between more favorable attitudes overall and knowledge (r = .43, P < .001) provides evidence of convergent validity. Multivariable regression analyses showed that white race (P = .002) and more experience with donation after cardiac death (P < .001) were significant predictors of more favorable attitudes. Study findings support the utility, reliability, and validity of a questionnaire for measuring attitudes in critical care providers and for isolating targets for additional education on donation after cardiac death.

  10. A Real-Time Data Acquisition and Processing Framework Based on FlexRIO FPGA and ITER Fast Plant System Controller

    NASA Astrophysics Data System (ADS)

    Yang, C.; Zheng, W.; Zhang, M.; Yuan, T.; Zhuang, G.; Pan, Y.

    2016-06-01

    Real-time measurement and control of the plasma are critical for advanced Tokamak operation and require high speed real-time data acquisition and processing. ITER has designed the Fast Plant System Controllers (FPSC) for these purposes. At the J-TEXT Tokamak, a real-time data acquisition and processing framework has been designed and implemented using standard ITER FPSC technologies. The main hardware components of this framework are an Industrial Personal Computer (IPC) with a real-time operating system and FPGA-based FlexRIO devices. With the FlexRIO devices, data can be processed by the FPGA in real time before they are passed to the CPU. The software elements are based on a real-time framework which runs under Red Hat Enterprise Linux MRG-R and uses the Experimental Physics and Industrial Control System (EPICS) for monitoring and configuration, making the framework consistent with standard ITER FPSC technology. With this framework, any kind of data acquisition and processing FlexRIO FPGA program can be configured with an FPSC. An application using the framework has been implemented for the polarimeter-interferometer diagnostic system on J-TEXT. The application extracts phase-shift information from the intermediate frequency signal produced by the diagnostic and calculates the plasma density profile in real time. Different algorithm implementations on the FlexRIO FPGA are compared in the paper.
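
    The abstract does not spell out the phase-extraction algorithm; a common software analogue is quadrature (I/Q) demodulation of the intermediate-frequency signal, sketched below in NumPy. On the FlexRIO hardware this would be implemented in FPGA fabric, so the code is only a conceptual stand-in with invented signal parameters.

```python
import numpy as np

def extract_phase(sig, f_if, fs, taps=201):
    """Estimate the instantaneous phase of an intermediate-frequency signal by
    I/Q demodulation: mix with cos/sin at f_if, low-pass, then atan2.
    Generic illustration; not the J-TEXT FPGA implementation."""
    t = np.arange(len(sig)) / fs
    i_mix = sig * np.cos(2 * np.pi * f_if * t)
    q_mix = -sig * np.sin(2 * np.pi * f_if * t)
    lp = np.hanning(taps); lp /= lp.sum()          # crude low-pass FIR
    i_bb = np.convolve(i_mix, lp, mode="same")
    q_bb = np.convolve(q_mix, lp, mode="same")
    return np.unwrap(np.arctan2(q_bb, i_bb))

# toy usage: 50 kHz IF carrier with a slow sinusoidal phase modulation
fs, f_if = 1.0e6, 50.0e3
t = np.arange(20000) / fs
true_phase = 2.0 * np.sin(2 * np.pi * 30.0 * t)
sig = np.cos(2 * np.pi * f_if * t + true_phase)
phase = extract_phase(sig, f_if, fs)
print(np.round(phase[5000:5003], 2))
```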

  11. Rater variables associated with ITER ratings.

    PubMed

    Paget, Michael; Wu, Caren; McIlwrick, Joann; Woloschuk, Wayne; Wright, Bruce; McLaughlin, Kevin

    2013-10-01

    Advocates of holistic assessment consider the ITER a more authentic way to assess performance. But this assessment format is subjective and, therefore, susceptible to rater bias. Here our objective was to study the association between rater variables and ITER ratings. In this observational study our participants were clerks at the University of Calgary and preceptors who completed online ITERs between February 2008 and July 2009. Our outcome variable was global rating on the ITER (rated 1-5), and we used a generalized estimating equation model to identify variables associated with this rating. Students were rated "above expected level" or "outstanding" on 66.4 % of 1050 online ITERs completed during the study period. Two rater variables attenuated ITER ratings: the log transformed time taken to complete the ITER [β = -0.06, 95 % confidence interval (-0.10, -0.02), p = 0.002], and the number of ITERs that a preceptor completed over the time period of the study [β = -0.008 (-0.02, -0.001), p = 0.02]. In this study we found evidence of leniency bias that resulted in two thirds of students being rated above expected level of performance. This leniency bias appeared to be attenuated by delay in ITER completion, and was also blunted in preceptors who rated more students. As all biases threaten the internal validity of the assessment process, further research is needed to confirm these and other sources of rater bias in ITER ratings, and to explore ways of limiting their impact.

  12. eNOSHA, a Free, Open and Flexible Learning Object Repository--An Iterative Development Process for Global User-Friendliness

    ERIC Educational Resources Information Center

    Mozelius, Peter; Hettiarachchi, Enosha

    2012-01-01

    This paper describes the iterative development process of a Learning Object Repository (LOR), named eNOSHA. Discussions on a project for a LOR started at the e-Learning Centre (eLC) at The University of Colombo, School of Computing (UCSC) in 2007. The eLC has during the last decade been developing learning content for a nationwide e-learning…

  13. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
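
    As a stand-in for the capability described above (not DeMAID's actual implementation), the sketch below uses a small genetic algorithm over permutations to order processes so that the number of feedback couplings, a crude proxy for iteration cost, is minimized; the dependency data and GA settings are invented.

```python
import random

def feedbacks(order, depends):
    """Count feedback couplings: process i needs j's output but j runs after i."""
    pos = {p: k for k, p in enumerate(order)}
    return sum(1 for i, j in depends if pos[j] > pos[i])

def order_ga(n, depends, pop=60, gens=200, pmut=0.3, seed=1):
    rng = random.Random(seed)
    population = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda o: feedbacks(o, depends))
        survivors = population[: pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)                 # order-preserving crossover
            child = a[:cut] + [g for g in b if g not in a[:cut]]
            if rng.random() < pmut:                   # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    best = min(population, key=lambda o: feedbacks(o, depends))
    return best, feedbacks(best, depends)

# toy: 6 processes, (i, j) means process i needs the output of process j
deps = [(0, 3), (1, 0), (2, 1), (3, 2), (4, 1), (5, 4)]
print(order_ga(6, deps))
```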

  14. The new final Clinical Skills examination in human medicine in Switzerland: Essential steps of exam development, implementation and evaluation, and central insights from the perspective of the national Working Group

    PubMed Central

    Berendonk, Christoph; Schirlo, Christian; Balestra, Gianmarco; Bonvin, Raphael; Feller, Sabine; Huber, Philippe; Jünger, Ernst; Monti, Matteo; Schnabel, Kai; Beyeler, Christine; Guttormsen, Sissel; Huwendiek, Sören

    2015-01-01

    Objective: Since 2011, the new national final examination in human medicine has been implemented in Switzerland, with a structured clinical-practical part in the OSCE format. From the perspective of the national Working Group, the current article describes the essential steps in the development, implementation and evaluation of the Federal Licensing Examination Clinical Skills (FLE CS) as well as the applied quality assurance measures. Finally, central insights gained from the last years are presented. Methods: Based on the principles of action research, the FLE CS is in a constant state of further development. On the foundation of systematically documented experiences from previous years, in the Working Group, unresolved questions are discussed and resulting solution approaches are substantiated (planning), implemented in the examination (implementation) and subsequently evaluated (reflection). The presented results are the product of this iterative procedure. Results: The FLE CS is created by experts from all faculties and subject areas in a multistage process. The examination is administered in German and French on a decentralised basis and consists of twelve interdisciplinary stations per candidate. As important quality assurance measures, the national Review Board (content validation) and the meetings of the standardised patient trainers (standardisation) have proven worthwhile. The statistical analyses show good measurement reliability and support the construct validity of the examination. Among the central insights of the past years, it has been established that the consistent implementation of the principles of action research contributes to the successful further development of the examination. Conclusion: The centrally coordinated, collaborative-iterative process, incorporating experts from all faculties, makes a fundamental contribution to the quality of the FLE CS. The processes and insights presented here can be useful for others planning a similar undertaking. PMID:26483853

  15. Evolutionary Software Development (Developpement Evolutionnaire de Logiciels)

    DTIC Science & Technology

    2008-08-01

    development processes. While this may be true, frequently it is not. MIL-STD-498 was explicitly introduced to encourage iterative development; ISO/IEC 12207 was carefully worded not to prohibit iterative development. Yet both standards were widely interpreted as requiring waterfall development, as

  16. Evolutionary Software Development (Developpement evolutionnaire de logiciels)

    DTIC Science & Technology

    2008-08-01

    development processes. While this may be true, frequently it is not. MIL-STD-498 was explicitly introduced to encourage iterative development; ISO/IEC 12207 was carefully worded not to prohibit iterative development. Yet both standards were widely interpreted as requiring waterfall development, as

  17. Quickprop method to speed up learning process of Artificial Neural Network in money's nominal value recognition case

    NASA Astrophysics Data System (ADS)

    Swastika, Windra

    2017-03-01

    A money's nominal value recognition system has been developed using an Artificial Neural Network (ANN). An ANN trained with back propagation has one disadvantage: the learning process is very slow (or never reaches the target) when the numbers of iterations, weights and samples are large. One way to speed up the learning process is to use the Quickprop method. Quickprop is based on Newton's method and is able to speed up learning by assuming that the error (E) is a parabolic function of each weight; the goal is to minimize the error gradient (E'). In our system, we use 5 types of money's nominal value, i.e. 1,000 IDR, 2,000 IDR, 5,000 IDR, 10,000 IDR and 50,000 IDR. One surface of each nominal was scanned and digitally processed, giving 40 patterns to be used as the training set in the ANN system. The effectiveness of the Quickprop method in the ANN system was validated by 2 factors: (1) the number of iterations required to reach an error below 0.1; and (2) the accuracy of predicting nominal values based on the input. Our results show that the use of the Quickprop method successfully shortens the learning process compared to the back propagation method. For 40 input patterns, the Quickprop method reached an error below 0.1 in only 20 iterations, while the back propagation method required 2000 iterations. The prediction accuracy for both methods is higher than 90%.
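
    A sketch of the per-weight Quickprop update may make the parabola assumption concrete; the bootstrap learning rate and growth limit are conventional choices, not values from the paper.

```python
import numpy as np

def quickprop_step(grad, prev_grad, prev_step, lr=0.1, mu=1.75):
    """One Quickprop update per weight: fit a parabola to the current and
    previous error gradients and jump toward its minimum. `mu` caps the step
    growth; a plain gradient step is used where no previous step exists."""
    grad, prev_grad, prev_step = map(np.asarray, (grad, prev_grad, prev_step))
    step = np.where(
        prev_step != 0.0,
        prev_step * grad / (prev_grad - grad + 1e-12),   # parabola minimum
        -lr * grad,                                      # bootstrap step
    )
    # limit growth relative to the previous step (a Quickprop-style safeguard)
    bound = np.abs(mu * prev_step) + lr * np.abs(grad)
    return np.clip(step, -bound, bound)

# toy usage on E(w) = (w - 3)^2, starting from w = 0
w, prev_g, prev_s = 0.0, 0.0, 0.0
for _ in range(5):
    g = 2 * (w - 3)                      # error gradient E'(w)
    s = float(quickprop_step(g, prev_g, prev_s))
    w, prev_g, prev_s = w + s, g, s
    print(round(w, 4))
```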

  18. Appendices to the model description document for a computer program for the emulation/simulation of a space station environmental control and life support system

    NASA Technical Reports Server (NTRS)

    Yanosy, James L.

    1988-01-01

    A Model Description Document for the Emulation/Simulation Computer Model has already been published. The model consisted of a detailed model (emulation) of a SAWD CO2 removal subsystem which operated with much less detailed (simulation) models of a cabin, crew, and condensing and sensible heat exchangers. The purpose was to explore the utility of such an emulation/simulation combination in the design, development, and test of a piece of ARS hardware, SAWD. Extensions to this original effort are presented. The first extension is an update of the model to reflect changes in the SAWD control logic which resulted from testing. Slight changes were also made to the SAWD model to permit restarting and to improve the iteration technique. The second extension is the development of simulation models for more pieces of air and water processing equipment. Models are presented for: EDC, Molecular Sieve, Bosch, Sabatier, a new condensing heat exchanger, SPE, SFWES, Catalytic Oxidizer, and multifiltration. The third extension is to create two system simulations using these models. The first system presented consists of one air and one water processing system. The second consists of a potential air revitalization system.

  19. Six sigma: process of understanding the control and capability of ranitidine hydrochloride tablet.

    PubMed

    Chabukswar, Ar; Jagdale, Sc; Kuchekar, Bs; Joshi, Vd; Deshmukh, Gr; Kothawade, Hs; Kuckekar, Ab; Lokhande, Pd

    2011-01-01

    The process of understanding the control and capability (PUCC) is an iterative closed-loop process for continuous improvement. It covers the DMAIC toolkit in its three phases. PUCC is an iterative approach that rotates between the three pillars of process understanding, process control, and process capability, with each iteration resulting in a more capable and robust process. It is rightly said that being at the top is a marathon and not a sprint. The objective of the six sigma study of Ranitidine hydrochloride tablets is to achieve perfection in tablet manufacturing by reviewing the present robust manufacturing process and finding ways to improve and modify it, so as to yield tablets that are defect-free and give greater customer satisfaction. The application of six sigma led to an improved process capability, due to the improvement of the process sigma level from 1.5 to 4; a higher yield, due to reduced variation and a reduction of thick tablets; a reduction in packing line stoppages; a 50% reduction in re-work; a more standardized process, with smooth flow and a change in the coating suspension reconstitution level (8% w/w); a large cost reduction of approximately Rs. 90 to 95 lakhs per annum; an improvement in overall efficiency of approximately 30%; and improved overall quality of the product.

  20. Six Sigma: Process of Understanding the Control and Capability of Ranitidine Hydrochloride Tablet

    PubMed Central

    Chabukswar, AR; Jagdale, SC; Kuchekar, BS; Joshi, VD; Deshmukh, GR; Kothawade, HS; Kuckekar, AB; Lokhande, PD

    2011-01-01

    The process of understanding the control and capability (PUCC) is an iterative closed-loop process for continuous improvement. It covers the DMAIC toolkit in its three phases. PUCC is an iterative approach that rotates between the three pillars of process understanding, process control, and process capability, with each iteration resulting in a more capable and robust process. It is rightly said that being at the top is a marathon and not a sprint. The objective of the six sigma study of Ranitidine hydrochloride tablets is to achieve perfection in tablet manufacturing by reviewing the present robust manufacturing process and finding ways to improve and modify it, so as to yield tablets that are defect-free and give greater customer satisfaction. The application of six sigma led to an improved process capability, due to the improvement of the process sigma level from 1.5 to 4; a higher yield, due to reduced variation and a reduction of thick tablets; a reduction in packing line stoppages; a 50% reduction in re-work; a more standardized process, with smooth flow and a change in the coating suspension reconstitution level (8% w/w); a large cost reduction of approximately Rs. 90 to 95 lakhs per annum; an improvement in overall efficiency of approximately 30%; and improved overall quality of the product. PMID:21607050

  1. Dynamic simulation of relief line during loss of insulation vacuum of the ITER cryoline

    NASA Astrophysics Data System (ADS)

    Badgujar, S.; Kosek, J.; Grillot, D.; Forgeas, A.; Sarkar, B.; Shah, N.; Choukekar, K.; Chang, H.-S.

    2017-12-01

    The ITER cryoline (CL) system consists of 37 types of vacuum-jacketed transfer lines which form a complex structured network with a total length of about 5 km, spread inside the Tokamak building, on a dedicated plant bridge and in the Cryoplant building/area. One of them, the low pressure relief line (RL), recovers helium discharged from the process safety relief valves of the different cryogenic users and sends it back to the Cryoplant via the heater and recovery system. The process pipe diameters of the RL vary from DN 50 to DN 200 and its length is more than 1500 m. Loss of insulation vacuum (LIV) of a CL is one of the worst scenarios apart from LIV in the Auxiliary Cold Boxes (ACBs). The Torus and Cryostat CL is chosen to simulate the virtual LIV and to study the anticipated behavior of the RL. Both helium LIV (LIV due to a leak in a helium pipe) and air LIV (LIV due to air ingress into the outer vacuum jacket of the cryoline), with and without fire, have been simulated in this study. After a brief description of the CL system, the paper describes the EcosimPro® model prepared for the dynamic study. The paper also presents results such as the minimum temperature, mass flow and maximum pressure in the RL, which are used to choose the type and location of the safety relief devices that protect the CL process pipes.

  2. Analysis of the ITER low field side reflectometer transmission line system.

    PubMed

    Hanson, G R; Wilgen, J B; Bigelow, T S; Diem, S J; Biewer, T M

    2010-10-01

    A critical issue in the design of the ITER low field side reflectometer is the transmission line (TL) system. A TL connects each launcher to a diagnostic instrument. Each TL will typically consist of ∼42 m of corrugated waveguide and up to ten miter bends. Important issues for the performance of the TL system are mode conversion and reflections; minimizing these effects is critical to minimizing standing waves and phase errors. The performance of the TL system is analyzed and recommendations are given.

  3. An Evaluation of an Algorithm for Linear Inequalities and Its Applications

    NASA Technical Reports Server (NTRS)

    Jurgensen, J.

    1973-01-01

    An algorithm is presented for obtaining a solution α to a set of inequalities Aα > 0, where A is an N × m matrix and α is an m-vector. If the set of inequalities is consistent, then the algorithm is guaranteed to arrive at a solution in a finite number of steps. Also, if a negative vector is obtained during the iteration, then the initial set of inequalities is inconsistent, and the iteration is terminated.
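
    The behaviour described (finite-step convergence for consistent systems, a nonpositive error vector signalling inconsistency) matches the classical Ho-Kashyap scheme, so a Ho-Kashyap-style sketch is given below as an illustration; the 1973 report's exact algorithm may differ in detail, and the test data are invented.

```python
import numpy as np

def ho_kashyap(A, rho=0.5, max_iter=1000, tol=1e-9):
    """Ho-Kashyap-style iteration for A @ alpha > 0 (a classical scheme with
    the properties described in the abstract). Returns (alpha, status)."""
    A = np.asarray(A, dtype=float)
    A_pinv = np.linalg.pinv(A)
    b = np.ones(A.shape[0])                 # positive margin vector
    alpha = A_pinv @ b
    for _ in range(max_iter):
        e = A @ alpha - b                   # error vector
        if np.all(A @ alpha > 0):
            return alpha, "solution found"  # strict inequalities satisfied
        if np.all(e <= tol) and np.any(e < -tol):
            return alpha, "inconsistent"    # nonpositive, nonzero error vector
        b = b + rho * (e + np.abs(e))       # grow b only where e > 0
        alpha = A_pinv @ b
    return alpha, "max iterations reached"

# consistent toy example: alpha = (1, 1) satisfies all three inequalities
A = np.array([[1.0, 0.5], [0.2, 1.0], [1.0, -0.3]])
alpha, status = ho_kashyap(A)
print(status, np.round(A @ alpha, 3))
```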

  4. Validation of a coupled core-transport, pedestal-structure, current-profile and equilibrium model

    NASA Astrophysics Data System (ADS)

    Meneghini, O.

    2015-11-01

    The first workflow capable of predicting the self-consistent solution to the coupled core-transport, pedestal structure, and equilibrium problems from first-principles and its experimental tests are presented. Validation with DIII-D discharges in high confinement regimes shows that the workflow is capable of robustly predicting the kinetic profiles from on axis to the separatrix and matching the experimental measurements to within their uncertainty, with no prior knowledge of the pedestal height nor of any measurement of the temperature or pressure. Self-consistent coupling has proven to be essential to match the experimental results, and capture the non-linear physics that governs the core and pedestal solutions. In particular, clear stabilization of the pedestal peeling ballooning instabilities by the global Shafranov shift and destabilization by additional edge bootstrap current, and subsequent effect on the core plasma profiles, have been clearly observed and documented. In our model, self-consistency is achieved by iterating between the TGYRO core transport solver (with NEO and TGLF for neoclassical and turbulent flux), and the pedestal structure predicted by the EPED model. A self-consistent equilibrium is calculated by EFIT, while the ONETWO transport package evolves the current profile and calculates the particle and energy sources. The capabilities of such workflow are shown to be critical for the design of future experiments such as ITER and FNSF, which operate in a regime where the equilibrium, the pedestal, and the core transport problems are strongly coupled, and for which none of these quantities can be assumed to be known. Self-consistent core-pedestal predictions for ITER, as well as initial optimizations, will be presented. Supported by the US Department of Energy under DE-FC02-04ER54698, DE-SC0012652.

  5. RAVE: Rapid Visualization Environment

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.; Anderson, Kevin; Simoudis, Avangelos

    1994-01-01

    Visualization is used in the process of analyzing large, multidimensional data sets. However, the selection and creation of visualizations that are appropriate for the characteristics of a particular data set and the satisfaction of the analyst's goals is difficult. The process consists of three tasks that are performed iteratively: generate, test, and refine. The performance of these tasks requires the utilization of several types of domain knowledge that data analysts do not often have. Existing visualization systems and frameworks do not adequately support the performance of these tasks. In this paper we present the RApid Visualization Environment (RAVE), a knowledge-based system that interfaces with commercial visualization frameworks and assists a data analyst in quickly and easily generating, testing, and refining visualizations. RAVE was used for the visualization of in situ measurement data captured by spacecraft.

  6. Utilizing the Iterative Closest Point (ICP) algorithm for enhanced registration of high resolution surface models - more than a simple black-box application

    NASA Astrophysics Data System (ADS)

    Stöcker, Claudia; Eltner, Anette

    2016-04-01

    Advances in computer vision and digital photogrammetry (i.e. structure from motion) allow for fast and flexible high resolution data supply. Within geoscience applications and especially in the field of small surface topography, high resolution digital terrain models and dense 3D point clouds are valuable data sources to capture actual states as well as for multi-temporal studies. However, there are still some limitations regarding robust registration and accuracy demands (e.g. systematic positional errors) which impede the comparison and/or combination of multi-sensor data products. Therefore, post-processing of 3D point clouds can heavily enhance data quality. In this matter the Iterative Closest Point (ICP) algorithm represents an alignment tool which iteratively minimizes distances of corresponding points within two datasets. Even though the tool is widely used, it is often applied as a black-box application within 3D data post-processing for surface reconstruction. Aiming for precise and accurate combination of multi-sensor data sets, this study looks closely at different variants of the ICP algorithm including the sub-steps of point selection, point matching, weighting, rejection, error metric and minimization. Therefore, an agriculturally utilized field was investigated simultaneously with terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) sensors on two dates (once covered with sparse vegetation and once as bare soil). Due to the different perspectives, the two data sets differ in their shadowed areas and thus gaps, so that merging them would provide a more consistent surface reconstruction. Although the photogrammetric processing already included sub-cm accurate ground control surveys, the UAV point cloud exhibits an offset relative to the TLS point cloud. In order to obtain the transformation matrix for fine registration of the UAV point clouds, different ICP variants were tested. Statistical analyses of the results show that the final success of the registration, and therefore the data quality, depends particularly on the parameterization and the choice of error metric, especially for error-prone data sets such as the sparse vegetation cover. Here, the point-to-point metric is more sensitive to data "noise" than the point-to-plane metric, which results in considerably higher cloud-to-cloud distances. In conclusion, given the accuracy demands of high resolution surface reconstruction and the fact that ground control surveys can reach their limits in both time expenditure and terrain accessibility, the ICP algorithm represents a valuable tool to refine a rough initial alignment. Different variants of the registration modules allow the procedure to be tailored to the quality of the input data.
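
    For orientation, a minimal rigid point-to-point ICP, i.e. the "black-box" baseline the abstract argues should be opened up, is sketched below: nearest-neighbour matching followed by a closed-form (SVD/Kabsch) transform estimate. The weighting, rejection and point-to-plane variants discussed in the study are not included, and the toy data are invented.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(src, dst, n_iter=50, tol=1e-8):
    """Rigid point-to-point ICP: match each source point to its nearest
    destination point, then estimate R, t in closed form via SVD (Kabsch)."""
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    tree = cKDTree(dst)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(n_iter):
        dist, idx = tree.query(src)             # point matching (nearest neighbour)
        matched = dst[idx]
        c_src, c_dst = src.mean(0), matched.mean(0)
        H = (src - c_src).T @ (matched - c_dst) # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                      # proper rotation (det = +1)
        t = c_dst - R @ c_src
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.mean(dist**2)                  # point-to-point error metric
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src, (R_total, t_total)

# toy usage: recover a small known rotation + offset
rng = np.random.default_rng(0)
dst = rng.uniform(-1, 1, size=(300, 3))
angle = np.deg2rad(5.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
src = (dst - np.array([0.02, -0.05, 0.01])) @ R_true
aligned, _ = icp_point_to_point(src, dst)
print(round(float(np.mean(np.linalg.norm(aligned - dst, axis=1))), 4))
```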

  7. An economic toolkit for identifying the cost of emergency medical services (EMS) systems: detailed methodology of the EMS Cost Analysis Project (EMSCAP).

    PubMed

    Lerner, E Brooke; Garrison, Herbert G; Nichol, Graham; Maio, Ronald F; Lookman, Hunaid A; Sheahan, William D; Franz, Timothy R; Austad, James D; Ginster, Aaron M; Spaite, Daniel W

    2012-02-01

    Calculating the cost of an emergency medical services (EMS) system using a standardized method is important for determining the value of EMS. This article describes the development of a methodology for calculating the cost of an EMS system to its community. This includes a tool for calculating the cost of EMS (the "cost workbook") and detailed directions for determining cost (the "cost guide"). The 12-step process that was developed is consistent with current theories of health economics, applicable to prehospital care, flexible enough to be used in varying sizes and types of EMS systems, and comprehensive enough to provide meaningful conclusions. It was developed by an expert panel (the EMS Cost Analysis Project [EMSCAP] investigator team) in an iterative process that included pilot testing the process in three diverse communities. The iterative process allowed ongoing modification of the toolkit during the development phase, based upon direct, practical, ongoing interaction with the EMS systems that were using the toolkit. The resulting methodology estimates EMS system costs within a user-defined community, allowing either the number of patients treated or the estimated number of lives saved by EMS to be assessed in light of the cost of those efforts. Much controversy exists about the cost of EMS and whether the resources spent for this purpose are justified. However, the existence of a validated toolkit that provides a standardized process will allow meaningful assessments and comparisons to be made and will supply objective information to inform EMS and community officials who are tasked with determining the utilization of scarce societal resources. © 2012 by the Society for Academic Emergency Medicine.

  8. Evolution of Pediatric Chronic Disease Treatment Decisions: A Qualitative, Longitudinal View of Parents' Decision-Making Process.

    PubMed

    Lipstein, Ellen A; Britto, Maria T

    2015-08-01

    In the context of pediatric chronic conditions, patients and families are called upon repeatedly to make treatment decisions. However, little is known about how their decision making evolves over time. The objective was to understand parents' processes for treatment decision making in pediatric chronic conditions. We conducted a qualitative, prospective longitudinal study using recorded clinic visits and individual interviews. After consent was obtained from health care providers, parents, and patients, clinic visits during which treatment decisions were expected to be discussed were video-recorded. Parents then participated in sequential telephone interviews about their decision-making experience. Data were coded by 2 people and analyzed using framework analysis with sequential, time-ordered matrices. 21 families, including 29 parents, participated in video-recording and interviews. We found 3 dominant patterns of decision evolution. Each consisted of a series of decision events, including conversations, disease flares, and researching of treatment options. Within all 3 patterns there were both constant and evolving elements of decision making, such as role perceptions and treatment expectations, respectively. After parents made a treatment decision, they immediately turned to the next decision related to the chronic condition, creating an iterative cycle. In this study, decision making was an iterative process occurring in 3 distinct patterns. Understanding these patterns and the varying elements of parents' decision processes is an essential step toward developing interventions that are appropriate to the setting and that capitalize on the skills families may develop as they gain experience with a chronic condition. Future research should also consider the role of children and adolescents in this decision process. © The Author(s) 2015.

  9. An iterative phase-space explicit discontinuous Galerkin method for stellar radiative transfer in extended atmospheres

    NASA Astrophysics Data System (ADS)

    de Almeida, Valmor F.

    2017-07-01

    A phase-space discontinuous Galerkin (PSDG) method is presented for the solution of stellar radiative transfer problems. It allows for greater adaptivity than competing methods without sacrificing generality. The method is extensively tested on a spherically symmetric, static, inverse-power-law scattering atmosphere. Results for different sizes of atmospheres and intensities of scattering agreed with asymptotic values. The exponentially decaying behavior of the radiative field in the diffusive-transparent transition region, and the forward peaking behavior at the surface of extended atmospheres were accurately captured. The integrodifferential equation of radiation transfer is solved iteratively by alternating between the radiative pressure equation and the original equation with the integral term treated as an energy density source term. In each iteration, the equations are solved via an explicit, flux-conserving, discontinuous Galerkin method. Finite elements are ordered in wave fronts perpendicular to the characteristic curves so that elemental linear algebraic systems are solved quickly by sweeping the phase space element by element. Two implementations of a diffusive boundary condition at the origin are demonstrated wherein the finite discontinuity in the radiation intensity is accurately captured by the proposed method. This allows for a consistent mechanism to preserve photon luminosity. The method was proved to be robust and fast, and a case is made for the adequacy of parallel processing. In addition to classical two-dimensional plots, results of normalized radiation intensity were mapped onto a log-polar surface exhibiting all distinguishing features of the problem studied.

  10. AMPHION: Specification-based programming for scientific subroutine libraries

    NASA Technical Reports Server (NTRS)

    Lowry, Michael; Philpot, Andrew; Pressburger, Thomas; Underwood, Ian; Waldinger, Richard; Stickel, Mark

    1994-01-01

    AMPHION is a knowledge-based software engineering (KBSE) system that guides a user in developing a diagram representing a formal problem specification. It then automatically implements a solution to this specification as a program consisting of calls to subroutines from a library. The diagram provides an intuitive, domain-oriented notation for creating a specification that also facilitates reuse and modification. AMPHION's architecture is domain independent. AMPHION is specialized to an application domain by developing a declarative domain theory. Creating a domain theory is an iterative process that currently requires the joint expertise of domain experts and experts in automated formal methods for software development.

  11. The REFINEMENT Glossary of Terms: An International Terminology for Mental Health Systems Assessment.

    PubMed

    Montagni, Ilaria; Salvador-Carulla, Luis; Mcdaid, David; Straßmayr, Christa; Endel, Florian; Näätänen, Petri; Kalseth, Jorid; Kalseth, Birgitte; Matosevic, Tihana; Donisi, Valeria; Chevreul, Karine; Prigent, Amélie; Sfectu, Raluca; Pauna, Carmen; Gutiérrez-Colosia, Mencia R; Amaddeo, Francesco; Katschnig, Heinz

    2018-03-01

    Comparing mental health systems across countries is difficult because of the lack of an agreed upon terminology covering services and related financing issues. Within the European Union project REFINEMENT, international mental health care experts applied an innovative mixed "top-down" and "bottom-up" approach following a multistep design thinking strategy to compile a glossary on mental health systems, using local services as pilots. The final REFINEMENT glossary consisted of 432 terms related to service provision, service utilisation, quality of care and financing. The aim of this study was to describe the iterative process and methodology of developing this glossary.

  12. Improvements in surface singularity analysis and design methods. [applicable to airfoils

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.

    1979-01-01

    The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.

  13. Segregation process and phase transition in cyclic predator-prey models with an even number of species.

    PubMed

    Szabó, György; Szolnoki, Attila; Sznaider, Gustavo Ariel

    2007-11-01

    We study a spatial cyclic predator-prey model with an even number of species (for n=4, 6, and 8) that allows the formation of two defensive alliances consisting of the even and odd label species. The species are distributed on the sites of a square lattice. The evolution of the spatial distribution is governed by iteration of two elementary processes on neighboring sites chosen randomly: if the sites are occupied by a predator-prey pair then the predator invades the prey's site; otherwise the species exchange their sites with a probability X. For low X values, a self-organizing pattern is maintained by cyclic invasions. If X exceeds a threshold value, then two types of domains grow, formed by the odd and even label species, respectively. Monte Carlo simulations indicate the blocking of this segregation process within a range of X for n=8.
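
    A minimal Monte Carlo sketch of the two elementary processes (cyclic invasion of predator-prey neighbours, site exchange with probability X) on a periodic square lattice is given below; lattice size, sweep count and the simple concentration measurement are simplified assumptions rather than the authors' simulation setup.

```python
import numpy as np

def simulate(n_species=4, L=32, X=0.05, mc_steps=100, seed=0):
    """Cyclic predator-prey dynamics on an L x L square lattice with periodic
    boundaries. Species s preys on species (s+1) % n. Each elementary step
    picks a random nearest-neighbour pair: a predator-prey pair triggers an
    invasion, otherwise the two sites swap with probability X."""
    rng = np.random.default_rng(seed)
    lattice = rng.integers(0, n_species, size=(L, L))
    for _ in range(mc_steps * L * L):          # mc_steps Monte Carlo sweeps
        x, y = rng.integers(0, L, size=2)
        dx, dy = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(0, 4)]
        xn, yn = (x + dx) % L, (y + dy) % L
        a, b = lattice[x, y], lattice[xn, yn]
        if (a + 1) % n_species == b:           # a preys on b: invasion
            lattice[xn, yn] = a
        elif (b + 1) % n_species == a:         # b preys on a: invasion
            lattice[x, y] = b
        elif rng.random() < X:                 # neutral pair: site exchange
            lattice[x, y], lattice[xn, yn] = b, a
        # identical species on both sites: nothing happens
    return lattice

lat = simulate()
counts = np.bincount(lat.ravel(), minlength=4)
print(counts / counts.sum())                   # species concentrations
```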

  14. Multi-machine analysis of termination scenarios with comparison to simulations of controlled shutdown of ITER discharges

    DOE PAGES

    de Vries, Peter C.; Luce, Timothy C.; Bae, Young-soon; ...

    2017-11-22

    To improve our understanding of the dynamics and control of ITER terminations, a study has been carried out on data from existing tokamaks. The aim of this joint analysis is to compare the assumptions for ITER terminations with the present experience basis. The study examined the parameter ranges in which present day devices operated during their terminations, as well as the dynamics of these parameters. The analysis of a database, built using a selected set of experimental termination cases, showed that the H-mode density decays more slowly than the plasma current ramp-down. The consequential increase in fGW limits the duration of the H-mode phase or results in disruptions. The lower temperatures after the drop out of H-mode will allow the plasma internal inductance to increase. But vertical stability control remains manageable in ITER at high internal inductance when accompanied by a strong elongation reduction. This will result in ITER terminations remaining longer at low q (q95~3) than most present-day devices during the current ramp-down. A fast power ramp-down leads to a larger change in βp at the H-L transition, but the experimental data showed that these are manageable for the ITER radial position control. The analysis of JET data shows that radiation and impurity levels significantly alter the H-L transition dynamics. Self-consistent calculations of the impurity content and resulting radiation should be taken into account when modelling ITER termination scenarios. Here, the results from this analysis can be used to better prescribe the inputs for the detailed modelling and preparation of ITER termination scenarios.

  15. Multi-machine analysis of termination scenarios with comparison to simulations of controlled shutdown of ITER discharges

    NASA Astrophysics Data System (ADS)

    de Vries, P. C.; Luce, T. C.; Bae, Y. S.; Gerhardt, S.; Gong, X.; Gribov, Y.; Humphreys, D.; Kavin, A.; Khayrutdinov, R. R.; Kessel, C.; Kim, S. H.; Loarte, A.; Lukash, V. E.; de la Luna, E.; Nunes, I.; Poli, F.; Qian, J.; Reinke, M.; Sauter, O.; Sips, A. C. C.; Snipes, J. A.; Stober, J.; Treutterer, W.; Teplukhina, A. A.; Voitsekhovitch, I.; Woo, M. H.; Wolfe, S.; Zabeo, L.; the Alcator C-MOD Team; the ASDEX Upgrade Team; the DIII-D Team; the EAST Team; contributors, JET; the KSTAR Team; the NSTX-U Team; the TCV Team; IOS members, ITPA; experts

    2018-02-01

    To improve our understanding of the dynamics and control of ITER terminations, a study has been carried out on data from existing tokamaks. The aim of this joint analysis is to compare the assumptions for ITER terminations with the present experience basis. The study examined the parameter ranges in which present day devices operated during their terminations, as well as the dynamics of these parameters. The analysis of a database, built using a selected set of experimental termination cases, showed that the H-mode density decays more slowly than the plasma current ramp-down. The consequential increase in fGW limits the duration of the H-mode phase or results in disruptions. The lower temperatures after the drop out of H-mode will allow the plasma internal inductance to increase. But vertical stability control remains manageable in ITER at high internal inductance when accompanied by a strong elongation reduction. This will result in ITER terminations remaining longer at low q (q95 ~ 3) than most present-day devices during the current ramp-down. A fast power ramp-down leads to a larger change in βp at the H-L transition, but the experimental data showed that these are manageable for the ITER radial position control. The analysis of JET data shows that radiation and impurity levels significantly alter the H-L transition dynamics. Self-consistent calculations of the impurity content and resulting radiation should be taken into account when modelling ITER termination scenarios. The results from this analysis can be used to better prescribe the inputs for the detailed modelling and preparation of ITER termination scenarios.

  16. CORSICA modelling of ITER hybrid operation scenarios

    NASA Astrophysics Data System (ADS)

    Kim, S. H.; Bulmer, R. H.; Campbell, D. J.; Casper, T. A.; LoDestro, L. L.; Meyer, W. H.; Pearlstein, L. D.; Snipes, J. A.

    2016-12-01

    The hybrid operating mode observed in several tokamaks is characterized by further enhancement over the high plasma confinement (H-mode) associated with reduced magneto-hydro-dynamic (MHD) instabilities linked to a stationary flat safety factor (q) profile in the core region. The proposed ITER hybrid operation is currently aiming at operating for a long burn duration (>1000 s) with a moderate fusion power multiplication factor, Q, of at least 5. This paper presents candidate ITER hybrid operation scenarios developed using a free-boundary transport modelling code, CORSICA, taking all relevant physics and engineering constraints into account. The ITER hybrid operation scenarios have been developed by tailoring the 15 MA baseline ITER inductive H-mode scenario. Accessible operation conditions for ITER hybrid operation and achievable range of plasma parameters have been investigated considering uncertainties on the plasma confinement and transport. ITER operation capability for avoiding the poloidal field coil current, field and force limits has been examined by applying different current ramp rates, flat-top plasma currents and densities, and pre-magnetization of the poloidal field coils. Various combinations of heating and current drive (H&CD) schemes have been applied to study several physics issues, such as the plasma current density profile tailoring, enhancement of the plasma energy confinement and fusion power generation. A parameterized edge pedestal model based on EPED1 added to the CORSICA code has been applied to hybrid operation scenarios. Finally, fully self-consistent free-boundary transport simulations have been performed to provide information on the poloidal field coil voltage demands and to study the controllability with the ITER controllers. Extended from Proc. 24th Int. Conf. on Fusion Energy (San Diego, 2012) IT/P1-13.

  17. Iterative inversion of deformation vector fields with feedback control.

    PubMed

    Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei

    2018-05-14

    Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both accuracy and efficiency of iterative algorithms for DVF inversion, and advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, to be used as feedback into the next iterate. The control is designed adaptively to the input DVF with the objective of enlarging the convergence area and expediting convergence. Three particular settings of feedback control are introduced: constant value over the domain throughout the iteration; alternating values between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair, and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations are shown to attain a larger convergence region at a faster pace compared to previous nonadaptive DVF inversion iteration algorithms. In our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control. Spatially variant control renders smaller residuals and errors by at least an order of magnitude, compared to other schemes, in no more than 10 steps. Inversion results also show remarkable quantitative agreement with analysis-based predictions. Our analysis captures properties of DVF data associated with clinical CT images, and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of nonsmall NTDCs. The adaptive iterations or the spectral measures, or both, may potentially be incorporated into deformable image registration methods. © 2018 American Association of Physicists in Medicine.
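
    A one-dimensional toy of the fixed-point inversion with a constant feedback parameter mu illustrates the basic mechanism; the adaptive and spatially variant controls designed in the paper, and the NTDC-based analysis, are not reproduced here, and all names and data are illustrative.

```python
import numpy as np

def invert_dvf_1d(v, x, mu=0.7, n_iter=30):
    """Fixed-point inversion of a 1-D deformation field with constant feedback.
    v maps x -> displacement; we seek u with u(x) + v(x + u(x)) ~= 0.
    Each step subtracts mu times the inverse-consistency (IC) residual."""
    u = np.zeros_like(x)
    for _ in range(n_iter):
        residual = u + np.interp(x + u, x, v)   # IC residual of current iterate
        u = u - mu * residual                   # feedback-controlled update
    return u

# toy DVF: smooth sinusoidal displacement on [0, 1]
x = np.linspace(0.0, 1.0, 401)
v = 0.05 * np.sin(2 * np.pi * x)
u = invert_dvf_1d(v, x)
ic_residual = u + np.interp(x + u, x, v)
print(round(float(np.max(np.abs(ic_residual))), 6))   # near zero if converged
```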

  18. A Fractal Excursion.

    ERIC Educational Resources Information Center

    Camp, Dane R.

    1991-01-01

    After introducing the two-dimensional Koch curve, which is generated by simple recursions on an equilateral triangle, the process is extended to three dimensions with simple recursions on a regular tetrahedron. Included, for both fractal sequences, are iterative formulae, illustrations of the first several iterations, and a sample PASCAL program.…

  19. The application of contraction theory to an iterative formulation of electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Brand, J. C.; Kauffman, J. F.

    1985-01-01

    Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures and a computational method for insuring convergence is developed. A short history of spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. To insure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems including an infinite grating of thin wires with the solution data compared to previous works.

  20. Fast generating Greenberger-Horne-Zeilinger state via iterative interaction pictures

    NASA Astrophysics Data System (ADS)

    Huang, Bi-Hua; Chen, Ye-Hong; Wu, Qi-Cheng; Song, Jie; Xia, Yan

    2016-10-01

    We delve a little deeper into the construction of shortcuts to adiabatic passage for three-level systems by means of iterative interaction pictures (multiple Schrödinger dynamics). As an application example, we use the deduced iteration-based shortcuts to rapidly generate the Greenberger-Horne-Zeilinger (GHZ) state in a three-atom system with the help of quantum Zeno dynamics. Numerical simulation shows that the dynamics designed by the iterative-picture method are physically feasible and that the shortcut scheme performs much better than one using conventional adiabatic passage techniques. Also, the influences of various decoherence processes are discussed by numerical simulation, and the results prove that the scheme is fast and robust against decoherence and operational imperfection.

  1. Self-consistent field for fragmented quantum mechanical model of large molecular systems.

    PubMed

    Jin, Yingdi; Su, Neil Qiang; Xu, Xin; Hu, Hao

    2016-01-30

    Fragment-based linear scaling quantum chemistry methods are a promising tool for the accurate simulation of chemical and biomolecular systems. Because of the coupled inter-fragment electrostatic interactions, a dual-layer iterative scheme is often employed to compute the fragment electronic structure and the total energy. In the dual-layer scheme, the self-consistent field (SCF) of the electronic structure of a fragment must be solved first, then followed by the updating of the inter-fragment electrostatic interactions. The two steps are sequentially carried out and repeated; as such a significant total number of fragment SCF iterations is required to converge the total energy and becomes the computational bottleneck in many fragment quantum chemistry methods. To reduce the number of fragment SCF iterations and speed up the convergence of the total energy, we develop here a new SCF scheme in which the inter-fragment interactions can be updated concurrently without converging the fragment electronic structure. By constructing the global, block-wise Fock matrix and density matrix, we prove that the commutation between the two global matrices guarantees the commutation of the corresponding matrices in each fragment. Therefore, many highly efficient numerical techniques such as the direct inversion of the iterative subspace method can be employed to converge simultaneously the electronic structure of all fragments, reducing significantly the computational cost. Numerical examples for water clusters of different sizes suggest that the method shall be very useful in improving the scalability of fragment quantum chemistry methods. © 2015 Wiley Periodicals, Inc.
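
    The flavour of the proposed scheme, in which every fragment refreshes its Fock and density matrices within the same global iteration instead of converging an inner SCF first, can be illustrated with a two-fragment toy; the "embedding" term and matrices below are invented and do not represent a real fragment quantum chemistry method.

```python
import numpy as np

def density_from_fock(F, n_occ):
    """Aufbau density matrix built from the lowest n_occ orbitals of F."""
    _, C = np.linalg.eigh(F)
    C_occ = C[:, :n_occ]
    return C_occ @ C_occ.T

def concurrent_fragment_scf(h_list, n_occ_list, coupling=0.1, n_iter=100, tol=1e-10):
    """Toy two-fragment SCF: both fragments' Fock and density matrices are
    refreshed in the same global iteration (no converged inner SCF per
    fragment). Convergence is monitored through the block-wise commutators
    [F_i, D_i]; the mean-field term coupling * D_other is purely illustrative."""
    D = [density_from_fock(h, n) for h, n in zip(h_list, n_occ_list)]
    for it in range(n_iter):
        F = [h_list[i] + coupling * D[1 - i] for i in range(2)]
        comm = max(np.abs(F[i] @ D[i] - D[i] @ F[i]).max() for i in range(2))
        if comm < tol:
            return D, it
        D = [density_from_fock(F[i], n_occ_list[i]) for i in range(2)]
    return D, n_iter

rng = np.random.default_rng(1)
def sym(n):                                  # random symmetric "core Hamiltonian"
    M = rng.standard_normal((n, n))
    return 0.5 * (M + M.T)

D, iters = concurrent_fragment_scf([sym(4), sym(4)], [1, 1])
print("stopped after", iters, "global iterations")
```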

  2. An Integrative Object-Based Image Analysis Workflow for Uav Images

    NASA Astrophysics Data System (ADS)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of the geometric and radiometric corrections, subsequent panoramic mosaicking and hierarchical image segmentation for later Object Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm after the geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained with an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing the post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of our proposed method.
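
    Only the initial over-segmentation step is easy to show compactly; the sketch below assumes scikit-image's slic implementation of SLIC and computes simple per-super-pixel statistics of the kind a BPT merging stage would consume. The mosaicking and hierarchy construction of the paper are not reproduced, and the image data are synthetic.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

# Synthetic stand-in for a mosaicked UAV ortho-image (RGB, values in [0, 1]).
rng = np.random.default_rng(0)
image = rng.random((256, 256, 3))

# Initial partition by SLIC super-pixels, as in the abstract's first BPT step.
labels = slic(image, n_segments=400, compactness=10.0, sigma=1.0)

# Per-super-pixel spectral statistics -- the kind of region features that a
# hierarchical (BPT) merging stage would consume.
props = regionprops(labels + 1, intensity_image=image[..., 0])
mean_red = np.array([p.mean_intensity for p in props])
print(len(np.unique(labels)), "super-pixels; mean red value range:",
      round(float(mean_red.min()), 3), "-", round(float(mean_red.max()), 3))
```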

  3. Generalization of the Hartree-Fock approach to collision processes

    NASA Astrophysics Data System (ADS)

    Hahn, Yukap

    1997-06-01

    The conventional Hartree and Hartree-Fock approaches for bound states are generalized to treat atomic collision processes. All the single-particle orbitals, for both bound and scattering states, are determined simultaneously by requiring full self-consistency. This generalization is achieved by introducing two Ansätze: (a) the weak asymptotic boundary condition, which maintains the correct scattering energy and target orbitals with the correct number of nodes, and (b) square integrable amputated scattering functions to generate self-consistent field (SCF) potentials for the target orbitals. The exact initial target and final-state asymptotic wave functions are not required and thus need not be specified a priori, as they are determined simultaneously by the SCF iterations. To check the asymptotic behavior of the solution, the theory is applied to elastic electron-hydrogen scattering at low energies. The solution is found to be stable and the weak asymptotic condition is sufficient to produce the correct scattering amplitudes. The SCF potential for the target orbital shows the strong penetration by the projectile electron during the collision, but the exchange term tends to restore the original form. Potential applicabilities of this extension are discussed, including the treatment of ionization and shake-off processes.

  4. The preconditioned Gauss-Seidel method faster than the SOR method

    NASA Astrophysics Data System (ADS)

    Niki, Hiroshi; Kohno, Toshiyuki; Morimoto, Munenori

    2008-09-01

    In recent years, a number of preconditioners have been applied to linear systems [A.D. Gunawardena, S.K. Jain, L. Snyder, Modified iterative methods for consistent linear systems, Linear Algebra Appl. 154-156 (1991) 123-143; T. Kohno, H. Kotakemori, H. Niki, M. Usui, Improving modified Gauss-Seidel method for Z-matrices, Linear Algebra Appl. 267 (1997) 113-123; H. Kotakemori, K. Harada, M. Morimoto, H. Niki, A comparison theorem for the iterative method with the preconditioner (I+Smax), J. Comput. Appl. Math. 145 (2002) 373-378; H. Kotakemori, H. Niki, N. Okamoto, Accelerated iteration method for Z-matrices, J. Comput. Appl. Math. 75 (1996) 87-97; M. Usui, H. Niki, T. Kohno, Adaptive Gauss-Seidel method for linear systems, Internat. J. Comput. Math. 51 (1994) 119-125 [10
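
    A hedged sketch of the idea behind these preconditioned iterations follows: Gauss-Seidel is applied both to the original system and to a left-preconditioned system (I + S)Ax = (I + S)b where, following the cited modified Gauss-Seidel literature, S is taken here as the negated first superdiagonal of a unit-diagonal matrix A. The matrix and the exact form of S are illustrative assumptions, not the authors' test problems.

```python
# Gauss-Seidel with and without a simple (I + S) left preconditioner.
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[ 1.0, -0.4,  0.0],
              [-0.3,  1.0, -0.4],
              [ 0.0, -0.3,  1.0]])          # Z-matrix with unit diagonal
b = np.array([1.0, 2.0, 3.0])

S = np.zeros_like(A)
for i in range(A.shape[0] - 1):
    S[i, i + 1] = -A[i, i + 1]              # preconditioner P = I + S
P = np.eye(A.shape[0]) + S

x_plain = gauss_seidel(A, b)
x_pre   = gauss_seidel(P @ A, P @ b)
print(x_plain, x_pre)                        # both converge to the solution of A x = b
```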

  5. A new analytical method for characterizing nonlinear visual processes with stimuli of arbitrary distribution: Theory and applications.

    PubMed

    Hayashi, Ryusuke; Watanabe, Osamu; Yokoyama, Hiroki; Nishida, Shin'ya

    2017-06-01

    Characterization of the functional relationship between sensory inputs and neuronal or observers' perceptual responses is one of the fundamental goals of systems neuroscience and psychophysics. Conventional methods, such as reverse correlation and spike-triggered data analyses, are limited in their ability to resolve complex and inherently nonlinear neuronal/perceptual processes because these methods require input stimuli to be Gaussian with a zero mean. Recent studies have shown that analyses based on a generalized linear model (GLM) do not require such specific input characteristics and have advantages over conventional methods. GLM, however, relies on iterative optimization algorithms and its computational cost becomes very high when estimating the nonlinear parameters of a large-scale system using large volumes of data. In this paper, we introduce a new analytical method for identifying a nonlinear system without relying on iterative calculations and yet also without requiring any specific stimulus distribution. We demonstrate the results of numerical simulations, showing that our noniterative method is as accurate as GLM in estimating nonlinear parameters in many cases and outperforms conventional, spike-triggered data analyses. As an example of the application of our method to actual psychophysical data, we investigated how different spatiotemporal frequency channels interact in assessments of motion direction. The nonlinear interaction estimated by our method was consistent with findings from previous vision studies and supports the validity of our method for nonlinear system identification.

  6. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    NASA Astrophysics Data System (ADS)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, particularly, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm of a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that consists of the minimization of a cost function; the minimization is achieved by means of a serial Gauss-Seidel-type algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, our algorithm is of the parallel Jacobi class with alternating minimizations. This strategy is known as the chessboard type, where red pixels can be updated in parallel at the same iteration since they are independent. Similarly, black pixels can be updated in parallel in an alternating iteration. We present parallel implementations of our algorithm for different parallel multicore architectures such as CPU-multicore, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, our parallel algorithm shows superior performance when compared with the original serial version. In addition, we present a detailed comparative performance analysis of the developed parallel versions.
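
    The chessboard update pattern can be sketched independently of the residual-map cost function. The toy code below is a plain Jacobi-style smoothing, not the authors' algorithm; it only illustrates why all "red" interior pixels can be updated at once and then all "black" pixels, which makes each half-sweep embarrassingly parallel.

```python
# Red-black (chessboard) update of a 2D array.
import numpy as np

def redblack_smooth(u, iterations=10):
    """Jacobi-style 4-neighbor averaging applied to red and black pixels in turn."""
    u = u.astype(float).copy()
    rows, cols = np.indices(u.shape)
    red = (rows + cols) % 2 == 0
    interior = np.zeros(u.shape, dtype=bool)
    interior[1:-1, 1:-1] = True
    for _ in range(iterations):
        for color in (red, ~red):
            avg = np.zeros_like(u)
            avg[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                      u[1:-1, :-2] + u[1:-1, 2:])
            upd = color & interior
            u[upd] = avg[upd]   # pixels of one color depend only on the other color
    return u

phase = np.random.rand(64, 64)
print(redblack_smooth(phase).shape)
```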

  7. Iterative nonlinear joint transform correlation for the detection of objects in cluttered scenes

    NASA Astrophysics Data System (ADS)

    Haist, Tobias; Tiziani, Hans J.

    1999-03-01

    An iterative correlation technique with digital image processing in the feedback loop for the detection of small objects in cluttered scenes is proposed. A scanning aperture is combined with the method in order to improve the immunity against noise and clutter. Multiple reference objects or different views of one object are processed in parallel. We demonstrate the method by detecting a noisy and distorted face in a crowd with a nonlinear joint transform correlator.

  8. Review of particle-in-cell modeling for the extraction region of large negative hydrogen ion sources for fusion

    NASA Astrophysics Data System (ADS)

    Wünderlich, D.; Mochalskyy, S.; Montellano, I. M.; Revel, A.

    2018-05-01

    Particle-in-cell (PIC) codes have been used since the early 1960s for calculating self-consistently the motion of charged particles in plasmas, taking into account external electric and magnetic fields as well as the fields created by the particles themselves. Due to the very small time steps (on the order of the inverse plasma frequency) and mesh sizes used, the computational requirements can be very high, and they increase drastically with increasing plasma density and size of the calculation domain. Thus, usually small computational domains and/or reduced dimensionality are used. In recent years, the available central processing unit (CPU) power has strongly increased. Together with a massive parallelization of the codes, it is now possible to describe in 3D the extraction of charged particles from a plasma, using calculation domains with an edge length of several centimeters, consisting of one extraction aperture, the plasma in the direct vicinity of the aperture, and a part of the extraction system. Large negative hydrogen or deuterium ion sources are essential parts of the neutral beam injection (NBI) system in future fusion devices like the international fusion experiment ITER and the demonstration reactor (DEMO). For ITER NBI, RF-driven sources with a source area of 0.9 × 1.9 m2 and 1280 extraction apertures will be used. The extraction of negative ions is accompanied by the co-extraction of electrons, which are deflected onto an electron dump. Typically, the maximum extracted negative ion current is limited by the amount and the temporal instability of the co-extracted electrons, especially for operation in deuterium. Different PIC codes are available for the extraction region of large negative ion sources for fusion. Additionally, some effort is ongoing in developing codes that describe in a simplified manner (coarser mesh or reduced dimensionality) the plasma of the whole ion source. The presentation first gives a brief overview of the current status of the ion source development for ITER NBI and of the PIC method. Different PIC codes for the extraction region are introduced as well as the coupling to codes describing the whole source (PIC codes or fluid codes). Presented and discussed are different physical and numerical aspects of applying PIC codes to negative hydrogen ion sources for fusion as well as selected code results. The main focus of future calculations will be the meniscus formation and identifying measures for reducing the co-extracted electrons, in particular for deuterium operation. The recent results of the 3D PIC code ONIX (calculation domain: one extraction aperture and its vicinity) for the ITER prototype source (1/8 size of the ITER NBI source) are presented.

  9. Compensation for the phase-type spatial periodic modulation of the near-field beam at 1053 nm

    NASA Astrophysics Data System (ADS)

    Gao, Yaru; Liu, Dean; Yang, Aihua; Tang, Ruyu; Zhu, Jianqiang

    2017-10-01

    A phase-only spatial light modulator (SLM) is used to provide and compensate for the spatial periodic modulation (SPM) of the near-field beam in the near infrared at a wavelength of 1053 nm with an improved iterative weight-based method. The transmission characteristics of the incident beam are changed by the SLM to shape the spatial intensity of the output beam. The propagation and reverse propagation of the light in free space are the two important processes in the iteration. The underlying theory is the beam angular spectrum transmission formula (ASTF) and the principle of the iterative weight-based method. We have made two improvements to the originally proposed iterative weight-based method: we select the appropriate parameter by choosing the minimum value of the output beam contrast degree, and we use the MATLAB built-in angle function to acquire the corresponding phase of the light wave function. The required phase that compensates for the intensity distribution of the incident SPM beam is obtained iteratively by this algorithm, which decreases the magnitude of the SPM of the intensity on the observation plane. The experimental results show that the phase-type SPM of the near-field beam is subject to a certain restriction. We have also analyzed some factors that make the results imperfect. The experimental results verify the possible applicability of this iterative weight-based method to compensate for the SPM of the near-field beam.
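
    The forward and reverse free-space transport steps mentioned above are commonly computed with the angular spectrum method; a minimal sketch follows, with illustrative wavelength, sampling and distance values. The SLM phase-retrieval loop itself is not reproduced.

```python
# Angular spectrum propagation of a complex field over a distance z.
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z (negative z reverses the propagation)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))   # evanescent cut-off
    H = np.exp(1j * kz * z)
    return np.fft.ifft2(np.fft.fft2(field) * H)

beam = np.ones((256, 256), complex)            # idealized 1053 nm near-field beam
out = angular_spectrum_propagate(beam, 1053e-9, 10e-6, 0.1)
print(np.abs(out).max())
```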

  10. Development of an evidence-based review with recommendations using an online iterative process.

    PubMed

    Rudmik, Luke; Smith, Timothy L

    2011-01-01

    The practice of modern medicine is governed by evidence-based principles. Due to the plethora of medical literature, clinicians often rely on systematic reviews and clinical guidelines to summarize the evidence and provide best practices. Implementation of an evidence-based clinical approach can minimize variation in health care delivery and optimize the quality of patient care. This article reports a method for developing an "Evidence-based Review with Recommendations" using an online iterative process. The manuscript describes the following steps involved in this process: Clinical topic selection, Evidence-based review assignment, Literature review and initial manuscript preparation, Iterative review process with author selection, and Manuscript finalization. The goal of this article is to improve efficiency and increase the production of evidence-based reviews while maintaining the high quality and transparency associated with the rigorous methodology utilized for clinical guideline development. With the rise of evidence-based medicine, most medical and surgical specialties have an abundance of clinical topics that would benefit from a formal evidence-based review. Although clinical guideline development is an important methodology, the associated challenges limit development to only the absolute highest priority clinical topics. As outlined in this article, the online iterative approach to the development of an Evidence-based Review with Recommendations may improve productivity without compromising the quality associated with formal guideline development methodology. Copyright © 2011 American Rhinologic Society-American Academy of Otolaryngic Allergy, LLC.

  11. Performing Systematic Literature Reviews with Novices: An Iterative Approach

    ERIC Educational Resources Information Center

    Lavallée, Mathieu; Robillard, Pierre-N.; Mirsalari, Reza

    2014-01-01

    Reviewers performing systematic literature reviews require understanding of the review process and of the knowledge domain. This paper presents an iterative approach for conducting systematic literature reviews that addresses the problems faced by reviewers who are novices in one or both levels of understanding. This approach is derived from…

  12. Efficient and robust relaxation procedures for multi-component mixtures including phase transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Ee, E-mail: eehan@math.uni-bremen.de; Hantke, Maren, E-mail: maren.hantke@ovgu.de; Müller, Siegfried, E-mail: mueller@igpm.rwth-aachen.de

    We consider a thermodynamically consistent multi-component model in multi-dimensions that is a generalization of the classical two-phase flow model of Baer and Nunziato. The exchange of mass, momentum and energy between the phases is described by additional source terms. Typically these terms are handled by relaxation procedures. Available relaxation procedures suffer from efficiency and robustness problems, resulting in very costly computations that in general only allow for one-dimensional computations. Therefore we focus on the development of new efficient and robust numerical methods for relaxation processes. We derive exact procedures to determine mechanical and thermal equilibrium states. Further we introduce a novel iterative method to treat the mass transfer for a three component mixture. All new procedures can be extended to an arbitrary number of inert ideal gases. We prove existence, uniqueness and physical admissibility of the resulting states and convergence of our new procedures. Efficiency and robustness of the procedures are verified by means of numerical computations in one and two space dimensions. - Highlights: • We develop novel relaxation procedures for a generalized, thermodynamically consistent Baer–Nunziato type model. • Exact procedures for mechanical and thermal relaxation avoid artificial parameters. • Existence, uniqueness and physical admissibility of the equilibrium states are proven for special mixtures. • A novel iterative method for mass transfer is introduced for a three component mixture providing a unique and admissible equilibrium state.

  13. Conceptual design of data acquisition and control system for two Rf driver based negative ion source for fusion R&D

    NASA Astrophysics Data System (ADS)

    Soni, Jigensh; Yadav, R. K.; Patel, A.; Gahlaut, A.; Mistry, H.; Parmar, K. G.; Mahesh, V.; Parmar, D.; Prajapati, B.; Singh, M. J.; Bandyopadhyay, M.; Bansal, G.; Pandya, K.; Chakraborty, A.

    2013-02-01

    Twin Source (TS) - an inductively coupled, two-RF-driver-based 180 kW, 1 MHz negative ion source experimental setup - has been initiated at IPR, Gandhinagar, under the Indian program, with the objective of understanding the physics and technology of multi-driver coupling. The Twin Source [1] also provides an intermediate platform between the operational ROBIN [2] [5] and the eight-RF-driver-based Indian test facility INTF [3]. The twin source experiment requires a central system to provide the control, data acquisition and communication interface, referred to as TS-CODAC, for which a software architecture similar to the ITER CODAC core system has been chosen for implementation. The Core System is a software suite for ITER plant system manufacturers to use as a template for the development of their interface with CODAC. The ITER approach, in terms of technology, has been adopted for the TS-CODAC so as to develop the necessary expertise for developing and operating a control system based on the ITER guidelines, as a similar configuration needs to be implemented for the INTF. This cost-effective approach will provide an opportunity to evaluate and learn ITER CODAC technology, documentation, information technology and control system processes on an operational machine. The conceptual design of the TS-CODAC system has been completed. For complete control of the system, approximately 200 control signals and 152 acquisition signals are needed. In TS-CODAC, the required control loop time is within the range of 5-10 ms; therefore, for the control system, a PLC (Siemens S7-400) has been chosen as suggested in the ITER slow controller catalog. For the data acquisition, the maximum sampling interval required is 100 microseconds, and therefore a National Instruments (NI) PXIe system and NI 6259 digitizer cards have been selected as suggested in the ITER fast controller catalog. This paper presents the conceptual design of the TS-CODAC system based on the ITER CODAC Core software and the applicable plant system integration processes.

  14. Shading correction assisted iterative cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye

    2017-11-01

    Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and cause deterioration of the piecewise-constant property assumed for reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method that is referred to as shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and improves the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from existing algorithms, this algorithm incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for applications of low-dose CBCT imaging in the clinic.
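
    A generic FISTA iteration of the kind named in the abstract is sketched below for a least-squares data term plus an l1 regularizer handled through its proximal operator. The CBCT projector, TV term and shading-compensation image of the paper are not reproduced; the soft-threshold prox and the random test problem are stand-ins.

```python
# Generic FISTA for min 0.5*||A x - b||^2 + lam*||x||_1.
import numpy as np

def fista(A, b, lam, step, n_iter=100):
    prox = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                       # gradient of the data term
        x_new = prox(y - step * grad, step)            # proximal (shrinkage) step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)    # momentum extrapolation
        x, t = x_new, t_new
    return x

A = np.random.randn(60, 100)
b = A @ (np.random.randn(100) * (np.random.rand(100) < 0.1))  # sparse ground truth
step = 1.0 / np.linalg.norm(A, 2) ** 2                        # 1/L, L = Lipschitz constant
print(fista(A, b, lam=0.1, step=step)[:5])
```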

  15. In-vessel tritium retention and removal in ITER

    NASA Astrophysics Data System (ADS)

    Federici, G.; Anderl, R. A.; Andrew, P.; Brooks, J. N.; Causey, R. A.; Coad, J. P.; Cowgill, D.; Doerner, R. P.; Haasz, A. A.; Janeschitz, G.; Jacob, W.; Longhurst, G. R.; Nygren, R.; Peacock, A.; Pick, M. A.; Philipps, V.; Roth, J.; Skinner, C. H.; Wampler, W. R.

    Tritium retention inside the vacuum vessel has emerged as a potentially serious constraint in the operation of the International Thermonuclear Experimental Reactor (ITER). In this paper we review recent tokamak and laboratory data on hydrogen, deuterium and tritium retention for materials and conditions which are of direct relevance to the design of ITER. These data, together with significant advances in understanding the underlying physics, provide the basis for modelling predictions of the tritium inventory in ITER. We present the derivation, and discuss the results, of current predictions both in terms of implantation and codeposition rates, and critically discuss their uncertainties and sensitivity to important design and operation parameters such as the plasma edge conditions, the surface temperature, the presence of mixed-materials, etc. These analyses are consistent with recent tokamak findings and show that codeposition of tritium occurs on the divertor surfaces primarily with carbon eroded from a limited area of the divertor near the strike zones. This issue remains an area of serious concern for ITER. The calculated codeposition rates for ITER are relatively high and the in-vessel tritium inventory limit could be reached, under worst assumptions, in approximately a week of continuous operation. We discuss the implications of these estimates on the design, operation and safety of ITER and present a strategy for resolving the issues. We conclude that as long as carbon is used in ITER - and more generically in any other next-step experimental fusion facility fuelled with tritium - the efficient control and removal of the codeposited tritium is essential. There is a critical need to develop and test in situ cleaning techniques and procedures that are beyond the current experience of present-day tokamaks. We review some of the principal methods that are being investigated and tested, in conjunction with the R&D work still required to extrapolate their applicability to ITER. Finally, unresolved issues are identified and recommendations are made on potential R&D avenues for their resolution.

  16. ITER Fusion Energy

    ScienceCinema

    Holtkamp, Norbert

    2018-01-09

    ITER (in Latin “the way”) is designed to demonstrate the scientific and technological feasibility of fusion energy. Fusion is the process by which two light atomic nuclei combine to form a heavier one and thus release energy. In the fusion process two isotopes of hydrogen – deuterium and tritium – fuse together to form a helium atom and a neutron. Thus fusion could provide large scale energy production without greenhouse effects; essentially limitless fuel would be available all over the world. The principal goals of ITER are to generate 500 megawatts of fusion power for periods of 300 to 500 seconds with a fusion power multiplication factor, Q, of at least 10, i.e. Q ≥ 10 (input power 50 MW / output power 500 MW). The ITER Organization was officially established in Cadarache, France, on 24 October 2007. The seven members engaged in the project – China, the European Union, India, Japan, Korea, Russia and the United States – represent more than half the world’s population. The costs for ITER are shared by the seven members. The cost for the construction will be approximately 5.5 billion Euros; a similar amount is foreseen for the twenty-year phase of operation and the subsequent decommissioning.

  17. An analytically iterative method for solving problems of cosmic-ray modulation

    NASA Astrophysics Data System (ADS)

    Kolesnyk, Yuriy L.; Bobik, Pavol; Shakhov, Boris A.; Putis, Marian

    2017-09-01

    The development of an analytically iterative method for solving steady-state as well as unsteady-state problems of cosmic-ray (CR) modulation is proposed. Iterations for obtaining the solutions are constructed for the spherically symmetric form of the CR propagation equation. The main solution of the considered problem consists of the zero-order solution, obtained in the initial iteration, and amendments that may be obtained by subsequent iterations. The zero-order solution is based on the isotropy of CRs during propagation in space, whereas the anisotropy is taken into account when finding the subsequent amendments. To begin with, the method is applied to solve the problem of CR modulation where the diffusion coefficient κ and the solar wind speed u are constants, with a Local Interstellar Spectrum (LIS). The solution obtained with two iterations was compared with an analytical solution and with numerical solutions. Finally, solutions using only one iteration were obtained for two problems of CR modulation with u = constant and the same form of the LIS, and tested against numerical solutions. For the first problem, κ is proportional to the momentum of the particle p, so it has the form κ = k0 η, where η = p/(m_0 c). For the second problem, the diffusion coefficient is given in the form κ = k0 β η, where β = v/c is the particle speed relative to the speed of light. There was good agreement of the obtained solutions with the numerical solutions as well as with the analytical solution for the problem where κ = constant.

  18. On the safety of ITER accelerators.

    PubMed

    Li, Ge

    2013-01-01

    Three 1 MV/40A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. They will generate -1 MV 1 h long-pulse ion beams to be neutralised for plasma heating. Due to frequently occurring vacuum sparking in the accelerators, the snubbers are used to limit the fault arc current to improve ITER safety. However, recent analyses of its reference design have raised concerns. General nonlinear transformer theory is developed for the snubber to unify the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER.

  19. On the convergence of an iterative formulation of the electromagnetic scattering from an infinite grating of thin wires

    NASA Technical Reports Server (NTRS)

    Brand, J. C.

    1985-01-01

    Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures, and a computational method for ensuring convergence is developed. A short history of the spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. The mathematical background for formulating an iterative equation is covered using straightforward single-variable examples, including an extension to vector spaces. To ensure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory, and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems, including an infinite grating of thin wires, with the solution data compared to previous works.
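
    The role of contraction in guaranteeing convergence can be illustrated with a toy fixed-point iteration that monitors an empirical contraction ratio. This is a schematic analogue of the idea, not the electromagnetic k-space formulation; the test map and tolerances are placeholders.

```python
# Fixed-point iteration with an empirical contraction check.
import numpy as np

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    x_prev, x = x0, g(x0)
    for _ in range(max_iter):
        x_next = g(x)
        # empirical contraction ratio ||x_{n+2}-x_{n+1}|| / ||x_{n+1}-x_n||
        ratio = np.linalg.norm(x_next - x) / max(np.linalg.norm(x - x_prev), 1e-300)
        if ratio >= 1.0:
            raise RuntimeError("iteration is not contracting; a corrector is needed")
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x_prev, x = x, x_next
    return x

# contracting example: g(x) = 0.5*cos(x) has Lipschitz constant 1/2
print(fixed_point(lambda x: 0.5 * np.cos(x), np.array([1.0])))
```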

  20. On the safety of ITER accelerators

    PubMed Central

    Li, Ge

    2013-01-01

    Three 1 MV/40A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. They will generate −1 MV 1 h long-pulse ion beams to be neutralised for plasma heating. Due to frequently occurring vacuum sparking in the accelerators, the snubbers are used to limit the fault arc current to improve ITER safety. However, recent analyses of its reference design have raised concerns. General nonlinear transformer theory is developed for the snubber to unify the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER. PMID:24008267

  1. Single-agent parallel window search

    NASA Technical Reports Server (NTRS)

    Powley, Curt; Korf, Richard E.

    1991-01-01

    Parallel window search is applied to single-agent problems by having different processes simultaneously perform iterations of Iterative-Deepening-A* (IDA*) on the same problem but with different cost thresholds. This approach is limited by the time to perform the goal iteration. To overcome this disadvantage, the authors consider node ordering. They discuss how global node ordering by minimum h among nodes with equal f = g + h values can reduce the time complexity of serial IDA* by reducing the time to perform the iterations prior to the goal iteration. Finally, the two ideas of parallel window search and node ordering are combined to eliminate the weaknesses of each approach while retaining the strengths. The resulting approach, called simply parallel window search, can be used to find a near-optimal solution quickly, improve the solution until it is optimal, and then finally guarantee optimality, depending on the amount of time available.
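
    A compact serial IDA* sketch helps make the threshold iterations concrete; the graph, edge costs and heuristic below are placeholders, and the parallel-window and node-ordering refinements discussed above are not included.

```python
# Serial IDA*: repeated depth-first searches with an increasing cost threshold f = g + h.
import math

def ida_star(start, goal, neighbors, h):
    """neighbors(node) yields (next_node, edge_cost); h is an admissible heuristic."""

    def search(path, g, threshold):
        node = path[-1]
        f = g + h(node)
        if f > threshold:
            return f, None                 # exceeded: report the smallest overflow
        if node == goal:
            return f, list(path)
        minimum = math.inf
        for nxt, cost in neighbors(node):
            if nxt in path:                # avoid cycles on the current path
                continue
            path.append(nxt)
            t, found = search(path, g + cost, threshold)
            path.pop()
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    threshold = h(start)
    while True:
        threshold, found = search([start], 0.0, threshold)
        if found is not None:
            return found
        if threshold == math.inf:
            return None                    # no solution

# toy weighted graph with a zero heuristic (reduces to iterative deepening on cost)
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
print(ida_star("A", "D", lambda n: graph[n], lambda n: 0.0))
```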

  2. A fast reconstruction algorithm for fluorescence optical diffusion tomography based on preiteration.

    PubMed

    Song, Xiaolei; Xiong, Xiaoyun; Bai, Jing

    2007-01-01

    Fluorescence optical diffusion tomography in the near-infrared (NIR) bandwidth is considered to be one of the most promising approaches to noninvasive molecular-based imaging. Many reconstruction approaches to it utilize iterative methods for data inversion. However, they are time-consuming and far from meeting real-time imaging demands. In this work, a fast preiteration algorithm based on the generalized inverse matrix is proposed. This method needs only one step of matrix-vector multiplication online, as the iteration process is pushed offline. In the preiteration process, a second-order iterative format is employed to exponentially accelerate the convergence. Simulations based on an analytical diffusion model show that the distribution of fluorescent yield can be well estimated by this algorithm and the reconstruction speed is remarkably increased.
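
    The abstract moves the expensive inversion offline so that only one matrix-vector product remains online. One classical "second-order" scheme for building an approximate inverse offline is the Newton-Schulz iteration, sketched below as an assumption-laden illustration rather than the authors' exact preiteration formula; the system matrix is a random stand-in.

```python
# Offline Newton-Schulz approximate inverse, then a single online matrix-vector product.
import numpy as np

def newton_schulz_inverse(A, n_iter=50):
    """Iterate X <- X (2I - A X); converges quadratically when the initial residual is small."""
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # safe starting guess
    I = np.eye(A.shape[0])
    for _ in range(n_iter):
        X = X @ (2 * I - A @ X)
    return X

A = np.random.rand(80, 80) + 80 * np.eye(80)     # stand-in system matrix
X = newton_schulz_inverse(A)                     # precomputed offline

b = np.random.rand(80)                           # measurement arriving online
x = X @ b                                        # single online matrix-vector product
print(np.linalg.norm(A @ x - b))                 # small residual
```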

  3. Improvement of tritium accountancy technology for ITER fuel cycle safety enhancement

    NASA Astrophysics Data System (ADS)

    O'hira, S.; Hayashi, T.; Nakamura, H.; Kobayashi, K.; Tadokoro, T.; Nakamura, H.; Itoh, T.; Yamanishi, T.; Kawamura, Y.; Iwai, Y.; Arita, T.; Maruyama, T.; Kakuta, T.; Konishi, S.; Enoeda, M.; Yamada, M.; Suzuki, T.; Nishi, M.; Nagashima, T.; Ohta, M.

    2000-03-01

    In order to improve the safe handling and control of tritium for the ITER fuel cycle, effective in situ tritium accounting methods have been developed at the Tritium Process Laboratory in the Japan Atomic Energy Research Institute under one of the ITER-EDA R&D tasks. The remote, multi-location analysis of process gases by an application of laser Raman spectroscopy that was developed and tested could provide a measurement of hydrogen isotope gases with a detection limit of 0.3 kPa and analytical periods of 120 s. An in situ tritium inventory measurement by application of a `self-assaying' storage bed with 25 g tritium capacity could provide a measurement with the required detection limit of less than 1% and a design proof of a bed with 100 g tritium capacity.

  4. Assessing the performance of self-consistent hybrid functional for band gap calculation in oxide semiconductors

    NASA Astrophysics Data System (ADS)

    He, Jiangang; Franchini, Cesare

    2017-11-01

    In this paper we assess the predictive power of the self-consistent hybrid functional scPBE0 in calculating the band gap of oxide semiconductors. The computational procedure is based on the self-consistent evaluation of the mixing parameter α by means of an iterative calculation of the static dielectric constant using the perturbation expansion after discretization method and making use of the relation between the mixing parameter and the inverse of the static dielectric constant.

  5. MS lesion segmentation using a multi-channel patch-based approach with spatial consistency

    NASA Astrophysics Data System (ADS)

    Mechrez, Roey; Goldberger, Jacob; Greenspan, Hayit

    2015-03-01

    This paper presents an automatic method for segmentation of Multiple Sclerosis (MS) in Magnetic Resonance Images (MRI) of the brain. The approach is based on similarities between multi-channel patches (T1, T2 and FLAIR). An MS lesion patch database is built using training images for which the label maps are known. For each patch in the testing image, k similar patches are retrieved from the database. The matching labels for these k patches are then combined to produce an initial segmentation map for the test case. Finally a novel iterative patch-based label refinement process based on the initial segmentation map is performed to ensure spatial consistency of the detected lesions. A leave-one-out evaluation is done for each testing image in the MS lesion segmentation challenge of MICCAI 2008. Results are shown to compete with the state-of-the-art methods on the MICCAI 2008 challenge.

  6. Regularized finite element modeling of progressive failure in soils within nonlocal softening plasticity

    NASA Astrophysics Data System (ADS)

    Huang, Maosong; Qu, Xie; Lü, Xilin

    2017-11-01

    By solving a nonlinear complementarity problem for the consistency condition, an improved implicit stress return iterative algorithm for a generalized over-nonlocal strain softening plasticity was proposed, and the consistent tangent matrix was obtained. The proposed algorithm was embedded into existing finite element codes, and it enables the nonlocal regularization of the ill-posed boundary value problems caused by pressure-independent and pressure-dependent strain softening plasticity. The algorithm was verified by the numerical modeling of strain localization in a plane strain compression test. The results showed that fast convergence can be achieved and that the mesh dependency caused by strain softening can be effectively eliminated. The influences of the hardening modulus and the material characteristic length on the simulation were obtained. The proposed algorithm was further used in simulations of the bearing capacity of a strip footing; the results are mesh-independent, and the progressive failure process of the soil was well captured.

  7. Chimera states in networks of logistic maps with hierarchical connectivities

    NASA Astrophysics Data System (ADS)

    zur Bonsen, Alexander; Omelchenko, Iryna; Zakharova, Anna; Schöll, Eckehard

    2018-04-01

    Chimera states are complex spatiotemporal patterns consisting of coexisting domains of coherence and incoherence. We study networks of nonlocally coupled logistic maps and analyze systematically how the dilution of the network links influences the appearance of chimera patterns. The network connectivities are constructed using an iterative Cantor algorithm to generate fractal (hierarchical) connectivities. Increasing the hierarchical level of iteration, we compare the resulting spatiotemporal patterns. We demonstrate that a high clustering coefficient and symmetry of the base pattern promotes chimera states, and asymmetric connectivities result in complex nested chimera patterns.
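
    One way to realize an iterative Cantor construction of hierarchical connectivities is to expand a 0/1 base link pattern by repeated substitution and wrap it into a circulant coupling mask; the sketch below uses an illustrative base pattern and hierarchy level and is not taken from the paper.

```python
# Iterative Cantor (hierarchical) link pattern and circulant coupling mask for a ring network.
import numpy as np

def cantor_links(base, levels):
    """Expand a 0/1 base pattern `levels` times (Kronecker-style substitution)."""
    pattern = np.array(base, dtype=int)
    links = pattern.copy()
    for _ in range(levels - 1):
        links = np.kron(links, pattern)     # 1 -> base pattern, 0 -> zeros
    return links

def ring_coupling_mask(links):
    """Circulant adjacency: node i is linked to i +/- j whenever links[j-1] == 1."""
    n = len(links) + 1                       # ring of len(links)+1 nodes
    mask = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j, lij in enumerate(links, start=1):
            if lij:
                mask[i, (i + j) % n] = mask[i, (i - j) % n] = 1
    return mask

links = cantor_links((1, 0, 1), levels=4)    # hierarchy level 4, length 3**4
A = ring_coupling_mask(links)
print(links.sum(), A.shape)                  # links per side, size of the coupling mask
```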

  8. Iterative methods used in overlap astrometric reduction techniques do not always converge

    NASA Astrophysics Data System (ADS)

    Rapaport, M.; Ducourant, C.; Colin, J.; Le Campion, J. F.

    1993-04-01

    In this paper we prove that the classical Gauss-Seidel type iterative methods used for the solution of the reduced normal equations occurring in overlapping reduction methods of astrometry do not always converge. We exhibit examples of divergence. We then analyze an alternative algorithm proposed by Wang (1985). We prove the consistency of this algorithm and verify that it can be convergent while the Gauss-Seidel method is divergent. We conjecture the convergence of Wang's method for the solution of astrometric problems using overlap techniques.
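
    A two-by-two toy system makes the possibility of divergence concrete: the Gauss-Seidel iteration matrix below has spectral radius greater than one, so the iterates blow up even though the system itself is perfectly solvable. The matrix is illustrative and unrelated to the astrometric normal equations.

```python
# A small example where Gauss-Seidel diverges.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([1.0, 1.0])

L = np.tril(A)                 # lower triangle including the diagonal
U = A - L
T = -np.linalg.solve(L, U)     # Gauss-Seidel iteration matrix
print("spectral radius:", max(abs(np.linalg.eigvals(T))))   # 6 > 1 -> divergence

x = np.zeros(2)
for k in range(8):
    x = np.linalg.solve(L, b - U @ x)
    print(k, x)                # iterates blow up even though A is invertible
```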

  9. Evaluation of the cryogenic mechanical properties of the insulation material for ITER Feeder superconducting joint

    NASA Astrophysics Data System (ADS)

    Wu, Zhixiong; Huang, Rongjin; Huang, ChuanJun; Yang, Yanfang; Huang, Xiongyi; Li, Laifeng

    2017-12-01

    Glass-fiber reinforced plastic (GFRP) fabricated by the vacuum bag process was selected as the high-voltage electrical insulation and mechanical support for the superconducting joints and the current leads of the ITER Feeder system. To evaluate the cryogenic mechanical properties of the GFRP, mechanical properties such as the short beam strength (SBS), the tensile strength and the fatigue fracture strength after 30,000 cycles were measured at 77 K in this study. The results demonstrated that the GFRP met the design requirements of ITER.

  10. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, and produce modified schedules quickly. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. These experiments were performed within the domain of Space Shuttle ground processing.

  11. Studies on the behaviour of tritium in components and structure materials of tritium confinement and detritiation systems of ITER

    NASA Astrophysics Data System (ADS)

    Kobayashi, K.; Isobe, K.; Iwai, Y.; Hayashi, T.; Shu, W.; Nakamura, H.; Kawamura, Y.; Yamada, M.; Suzuki, T.; Miura, H.; Uzawa, M.; Nishikawa, M.; Yamanishi, T.

    2007-12-01

    Confinement and the removal of tritium are key subjects for the safety of ITER. The ITER buildings are confinement barriers for tritium. In a hot cell, tritium is often released as vapour and is in contact with the inner walls. The inner walls of the ITER tritium plant building will also be exposed to tritium in an accident. The tritium released in the buildings is removed by the atmosphere detritiation systems (ADS), where the tritium is oxidized by catalysts and is removed as water. A special gas, SF6, is used in ITER and is expected to be released in an accident such as a fire. Although SF6 gas is a potential catalyst poison, the performance of the ADS in the presence of SF6 has not yet been confirmed. Tritiated water is produced in the regeneration process of the ADS and is subsequently processed by the ITER water detritiation system (WDS). One of the key components of the WDS is an electrolysis cell. To address the issues in global tritium confinement, a series of experimental studies have been carried out as an ITER R&D task: (1) tritium behaviour in concrete; (2) the effect of SF6 on the performance of the ADS and (3) tritium durability of the electrolysis cell of the ITER-WDS. (1) The tritiated water vapour penetrated up to 50 mm into the concrete from the surface in six months' exposure; the penetration rate of tritium in the concrete was thus appreciable. The isotope exchange capacity of the cement paste plays an important role in tritium trapping and penetration into concrete materials when concrete is exposed to tritiated water vapour. The effect of coatings on the penetration rate needs to be evaluated quantitatively from actual tritium tests. (2) SF6 gas decreased the detritiation factor of the ADS. Since the effect of SF6 depends closely on its concentration, the amount of SF6 released into the tritium handling area in an accident should be reduced through careful arrangement of components in the buildings. (3) The electrolysis cell of the ITER-WDS is expected to endure 3 years' operation under the ITER design conditions. Measuring the concentration of fluorine ions could be a promising technique for monitoring damage to the electrolysis cell.

  12. Improving Drive Files for Vehicle Road Simulations

    NASA Astrophysics Data System (ADS)

    Cherng, John G.; Goktan, Ali; French, Mark; Gu, Yi; Jacob, Anil

    2001-09-01

    Shaker tables are commonly used in laboratories for automotive vehicle component testing to study durability and acoustic performance. An example is development testing of car seats. However, it is difficult to reproduce the measured road data perfectly with the response of a shaker table, as there are basic differences in dynamic characteristics between a flexible vehicle and a substantially rigid shaker table. In addition, there are performance limits in the shaker table drive systems that can limit correlation. In practice, an optimal drive signal for the actuators is created iteratively. During each iteration, the error between the road data and the response data is minimised by an optimising algorithm which is generally part of the feedback loop of the shaker table controller. This study presents a systematic investigation of the errors in the time and frequency domains as well as the joint time-frequency domain, and an evaluation of different digital signal processing techniques that have been used in previous work. In addition, we present an innovative approach that integrates the dynamic characteristics of car seats and the human body into the error-minimising iteration process. We found that the iteration process can be shortened and the error reduced by using a weighting function created by normalising the frequency response function of the car seat. Two road data test sets were used in the study.
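
    The error-minimising drive-file iteration can be sketched schematically: the drive signal is corrected from the frequency-domain error between the target road data and the rig response, scaled by a weighting function (here a flat placeholder standing in for the normalised seat FRF weighting proposed above). The FIR "plant" is a stand-in for the real shaker-plus-seat dynamics.

```python
# Schematic frequency-weighted drive-file iteration.
import numpy as np

def update_drive(drive, target, response, weight, gain=0.5):
    error_spec = np.fft.rfft(target - response)
    correction = np.fft.irfft(gain * weight * error_spec, n=len(drive))
    return drive + correction

# toy plant: a causal FIR filter standing in for shaker + seat dynamics
plant = lambda u: np.convolve(u, [0.6, 0.3, 0.1])[: len(u)]

rng = np.random.default_rng(0)
target = rng.standard_normal(1024)                    # "road data" to be reproduced
drive = target.copy()                                 # initial drive file
weight = np.ones(len(target) // 2 + 1)                # placeholder frequency weighting

for it in range(20):
    response = plant(drive)
    drive = update_drive(drive, target, response, weight)
    print(it, np.sqrt(np.mean((target - response) ** 2)))   # RMS error shrinks
```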

  13. Using Negotiated Joining to Construct and Fill Open-ended Roles in Elite Culinary Groups.

    PubMed

    Tan, Vaughn

    2015-03-01

    This qualitative study examines membership processes in groups operating in an uncertain environment that prevents them from fully predefining new members' roles. I describe how nine elite high-end, cutting-edge culinary groups in the U.S. and Europe, ranging from innovative restaurants to culinary R&D groups, use negotiated joining-a previously undocumented process-to systematically construct and fill these emergent, open-ended roles. I show that negotiated joining is a consistently patterned, iterative process that begins with a role that both aspirant and target group explicitly understand to be provisional. This provisional role is then jointly modified and constructed by the aspirant and target group through repeated iterations of proposition, validation through trial and evaluation, and selective integration of validated role components. The initially provisional role stabilizes and the aspirant achieves membership if enough role components are validated; otherwise the negotiated joining process is abandoned. Negotiated joining allows the aspirant and target group to learn if a mutually desirable role is likely and, if so, to construct such a role. In addition, the provisional roles in negotiated joining can support absorptive capacity by allowing novel role components to enter target groups through aspirants' efforts to construct stable roles for themselves, while the internal adjustment involved in integrating newly validated role components can have the unintended side effect of supporting adaptation by providing opportunities for the groups to use these novel role components to modify their role structure and goals to suit a changing and uncertain environment. Negotiated joining thus reveals role ambiguity's hitherto unexamined beneficial consequences and provides a foundation for a contingency theory of new-member acquisition.

  14. Conjecture Mapping to Optimize the Educational Design Research Process

    ERIC Educational Resources Information Center

    Wozniak, Helen

    2015-01-01

    While educational design research promotes closer links between practice and theory, reporting its outcomes from iterations across multiple contexts is often constrained by the volumes of data generated, and the context bound nature of the research outcomes. Reports tend to focus on a single iteration of implementation without further research to…

  15. SU-D-17A-02: Four-Dimensional CBCT Using Conventional CBCT Dataset and Iterative Subtraction Algorithm of a Lung Patient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, E; Lasio, G; Yi, B

    2014-06-01

    Purpose: The Iterative Subtraction Algorithm (ISA) method retrospectively generates a cone-beam CT image of a pre-selected motion phase from the full-motion cone-beam CT acquired at standard rotation speed. This work evaluates the ISA method with real lung patient data. Methods: The goal of the ISA algorithm is to extract motion and no-motion components from the full-reconstruction CBCT. The workflow consists of subtracting from the full CBCT all of the undesired motion phases to obtain a motion-deblurred single-phase CBCT image, followed by iteration of this subtraction process. ISA is realized as follows: 1) The projections are sorted into various phases, and from all phases a full reconstruction is performed to generate an image CTM. 2) Forward projections of CTM are generated at the projection angles of the desired phase; the subtraction of the projections and the forward projections is used to reconstruct CTSub1, which diminishes the desired phase component. 3) By adding CTSub1 back to CTM, a no-motion CBCT, CTS1, can be computed. 4) CTS1 still contains a residual motion component. 5) This residual motion component can be further reduced by iteration. The ISA 4DCBCT technique was implemented using the Varian Trilogy accelerator OBI system. To evaluate the method, a lung patient CBCT dataset was used. The reconstruction algorithm is FDK. Results: The single-phase CBCT reconstruction generated via ISA successfully isolates the desired motion phase from the full-motion CBCT, effectively reducing motion blur. It also shows improved image quality, with reduced streak artifacts with respect to reconstructions from unprocessed phase-sorted projections only. Conclusion: A CBCT motion-deblurring algorithm, ISA, has been developed and evaluated with lung patient data. The algorithm allows improved visualization of a single motion phase extracted from a standard CBCT dataset. This study has been supported by the National Institutes of Health through R01CA133539.

  16. Formation and termination of runaway beams in ITER disruptions

    NASA Astrophysics Data System (ADS)

    Martín-Solís, J. R.; Loarte, A.; Lehnen, M.

    2017-06-01

    A self-consistent analysis of the relevant physics regarding the formation and termination of runaway beams during mitigated disruptions by Ar and Ne injection is presented for selected ITER scenarios with the aim of improving our understanding of the physics underlying the runaway heat loads onto the plasma facing components (PFCs) and identifying open issues for developing and assessing disruption mitigation schemes for ITER. This is carried out by means of simplified models, but still retaining sufficient details of the key physical processes, including: (a) the expected dominant runaway generation mechanisms (avalanche and primary runaway seeds: Dreicer and hot tail runaway generation, tritium decay and Compton scattering of γ rays emitted by the activated wall), (b) effects associated with the plasma and runaway current density profile shape, and (c) corrections to the runaway dynamics to account for the collisions of the runaways with the partially stripped impurity ions, which are found to have strong effects leading to low runaway current generation and low energy conversion during current termination for mitigated disruptions by noble gas injection (particularly for Ne injection) for the shortest current quench times compatible with acceptable forces on the ITER vessel and in-vessel components (τ_res ∼ 22 ms). For the case of long current quench times (τ_res ∼ 66 ms), runaway beams up to ∼10 MA can be generated during the disruption current quench and, if the termination of the runaway current is slow enough, the generation of runaways by the avalanche mechanism can play an important role, increasing substantially the energy deposited by the runaways onto the PFCs up to a few hundreds of MJs. Mixed impurity (Ar or Ne) plus deuterium injection proves to be effective in controlling the formation of the runaway current during the current quench, even for the longest current quench times, as well as in decreasing the energy deposited on the runaway electrons during current termination.

  17. Fusion materials: Technical evaluation of the technology of vanadium alloys for use as blanket structural materials in fusion power systems

    NASA Astrophysics Data System (ADS)

    1993-08-01

    The Committee's evaluation of vanadium alloys as a structural material for fusion reactors was constrained by limited data and time. The design of the International Thermonuclear Experimental Reactor is still in the concept stage, so meaningful design requirements were not available. The data on the effect of environment and irradiation on vanadium alloys were sparse, and interpolation of these data was used to select the V-5Cr-5Ti alloy. With an aggressive, fully funded program it is possible to qualify a vanadium alloy as the principal structural material for the ITER blanket in the available 5 to 8-year window. However, the data base for V-5Cr-5Ti is limited and will require an extensive development and test program. Because of the chemical reactivity of vanadium, the alloy will be less tolerant of system failures, accidents, and off-normal events than most other candidate blanket structural materials and will require more careful handling during fabrication of hardware. Because of the cost of the material, more stringent requirements on processes, and minimal historical working experience, it will cost an order of magnitude more to qualify a vanadium alloy for ITER blanket structures than other candidate materials. The use of vanadium is difficult and uncertain; therefore, other options should be explored more thoroughly before a final selection of vanadium is confirmed. The Committee views the risk as being too high to rely solely on vanadium alloys. In viewing the state and nature of the design of the ITER blanket as presented to the Committee, it is obvious that there is a need to move toward integrating fabrication, welding, and materials engineers into the ITER design team. If the vanadium alloy option is to be pursued, a large program needs to be started immediately. The commitment of funding and other resources needs to be firm and consistent with a realistic program plan.

  18. A fast method to emulate an iterative POCS image reconstruction algorithm.

    PubMed

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection. This paper derives a new method to solve an optimization problem. The nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projections onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancement denoising. Each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient. It contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.

  19. Robust iterative learning control for multi-phase batch processes: an average dwell-time method with 2D convergence indexes

    NASA Astrophysics Data System (ADS)

    Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong

    2018-01-01

    In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme of iterative learning control combined with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method integrated with the related switching conditions to give sufficient conditions ensuring stable running for the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving the linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control for ensuring the steady-state tracking error to converge rapidly. The application on an injection molding process displays the effectiveness and superiority of the proposed strategy.

  20. High-Performance Agent-Based Modeling Applied to Vocal Fold Inflammation and Repair.

    PubMed

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K

    2018-01-01

    Fast and accurate computational biology models offer the prospect of accelerating the development of personalized medicine. A tool capable of estimating treatment success can help prevent unnecessary and costly treatments and potential harmful side effects. A novel high-performance Agent-Based Model (ABM) was adopted to simulate and visualize multi-scale complex biological processes arising in vocal fold inflammation and repair. The computational scheme was designed to organize the 3D ABM sub-tasks to fully utilize the resources available on current heterogeneous platforms consisting of multi-core CPUs and many-core GPUs. Subtasks are further parallelized and convolution-based diffusion is used to enhance the performance of the ABM simulation. The scheme was implemented using a client-server protocol allowing the results of each iteration to be analyzed and visualized on the server (i.e., in-situ ) while the simulation is running on the same server. The resulting simulation and visualization software enables users to interact with and steer the course of the simulation in real-time as needed. This high-resolution 3D ABM framework was used for a case study of surgical vocal fold injury and repair. The new framework is capable of completing the simulation, visualization and remote result delivery in under 7 s per iteration, where each iteration of the simulation represents 30 min in the real world. The case study model was simulated at the physiological scale of a human vocal fold. This simulation tracks 17 million biological cells as well as a total of 1.7 billion signaling chemical and structural protein data points. The visualization component processes and renders all simulated biological cells and 154 million signaling chemical data points. The proposed high-performance 3D ABM was verified through comparisons with empirical vocal fold data. Representative trends of biomarker predictions in surgically injured vocal folds were observed.
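
    Convolution-based diffusion, mentioned above as a performance optimization, can be sketched with a single fixed kernel: one explicit diffusion step of a chemical field is a 3x3 convolution. Grid size, kernel and diffusion coefficient are illustrative choices, not the vocal-fold model's values.

```python
# Explicit finite-difference diffusion written as repeated 3x3 convolutions.
import numpy as np
from scipy.ndimage import convolve

def diffuse(field, d=0.2, steps=10):
    kernel = np.array([[0.0,       d,   0.0],
                       [d,  1 - 4 * d,    d],
                       [0.0,       d,   0.0]])
    for _ in range(steps):
        field = convolve(field, kernel, mode="nearest")   # replicate-edge boundary
    return field

chemokine = np.zeros((128, 128))
chemokine[64, 64] = 1.0              # a point source released by one "cell"
print(diffuse(chemokine).sum())      # mass is (approximately) conserved
```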

  1. Use of sediment source fingerprinting to assess the role of subsurface erosion in the supply of fine sediment in a degraded catchment in the Eastern Cape, South Africa.

    PubMed

    Manjoro, Munyaradzi; Rowntree, Kate; Kakembo, Vincent; Foster, Ian; Collins, Adrian L

    2017-06-01

    Sediment source fingerprinting has been successfully deployed to provide information on the surface and subsurface sources of sediment in many catchments around the world. However, there is still scope to re-examine some of the major assumptions of the technique with reference to the number of fingerprint properties used in the model, the number of model iterations and the potential uncertainties of using more than one sediment core collected from the same floodplain sink. We investigated the role of subsurface erosion in the supply of fine sediment to two sediment cores collected from a floodplain in a small degraded catchment in the Eastern Cape, South Africa. The results showed that increasing the number of individual fingerprint properties in the composite signature did not improve the model goodness-of-fit. This is still a much debated issue in sediment source fingerprinting. To test the goodness-of-fit further, the number of model repeat iterations was increased from 5000 to 30,000. However, this did not reduce uncertainty ranges in modelled source proportions nor improve the model goodness-of-fit. The estimated sediment source contributions were not consistent with the available published data on erosion processes in the study catchment. The temporal pattern of sediment source contributions predicted for the two sediment cores was very different despite the cores being collected in close proximity from the same floodplain. This highlights some of the potential limitations associated with using floodplain cores to reconstruct catchment erosion processes and associated sediment source contributions. For the source tracing approach in general, the findings here suggest the need for further investigations into uncertainties related to the number of fingerprint properties included in un-mixing models. The findings support the current widespread use of ≤5000 model repeat iterations for estimating the key sources of sediment samples. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. High-Performance Agent-Based Modeling Applied to Vocal Fold Inflammation and Repair

    PubMed Central

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y. K.

    2018-01-01

    Fast and accurate computational biology models offer the prospect of accelerating the development of personalized medicine. A tool capable of estimating treatment success can help prevent unnecessary and costly treatments and potentially harmful side effects. A novel high-performance Agent-Based Model (ABM) was adopted to simulate and visualize multi-scale complex biological processes arising in vocal fold inflammation and repair. The computational scheme was designed to organize the 3D ABM sub-tasks to fully utilize the resources available on current heterogeneous platforms consisting of multi-core CPUs and many-core GPUs. Subtasks are further parallelized and convolution-based diffusion is used to enhance the performance of the ABM simulation. The scheme was implemented using a client-server protocol allowing the results of each iteration to be analyzed and visualized on the server (i.e., in situ) while the simulation is running on the same server. The resulting simulation and visualization software enables users to interact with and steer the course of the simulation in real-time as needed. This high-resolution 3D ABM framework was used for a case study of surgical vocal fold injury and repair. The new framework is capable of completing the simulation, visualization and remote result delivery in under 7 s per iteration, where each iteration of the simulation represents 30 min in the real world. The case study model was simulated at the physiological scale of a human vocal fold. This simulation tracks 17 million biological cells as well as a total of 1.7 billion signaling chemical and structural protein data points. The visualization component processes and renders all simulated biological cells and 154 million signaling chemical data points. The proposed high-performance 3D ABM was verified through comparisons with empirical vocal fold data. Representative trends of biomarker predictions in surgically injured vocal folds were observed. PMID:29706894
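
    The convolution-based diffusion mentioned in this abstract can be illustrated with a brief sketch in which one explicit diffusion step of a chemical field is written as a convolution with a discrete Laplacian kernel. The grid size, diffusion coefficient and time step below are illustrative assumptions, not values from the vocal fold ABM.

      # Minimal sketch of convolution-based diffusion for a chemical field on a 3D grid
      # (illustrative parameters only; not the vocal fold ABM implementation).
      import numpy as np
      from scipy.ndimage import convolve

      def diffuse(field, diff_coeff=0.1, dt=1.0, dx=1.0):
          # One explicit diffusion step expressed as a convolution with a 7-point Laplacian stencil.
          lap_kernel = np.zeros((3, 3, 3))
          lap_kernel[1, 1, 1] = -6.0
          for offset in [(0, 1, 1), (2, 1, 1), (1, 0, 1), (1, 2, 1), (1, 1, 0), (1, 1, 2)]:
              lap_kernel[offset] = 1.0
          laplacian = convolve(field, lap_kernel, mode="nearest") / dx**2
          return field + diff_coeff * dt * laplacian

      chemical = np.random.rand(32, 32, 32)   # toy chemical concentration field
      for _ in range(10):                     # a few iterations of the diffusion update
          chemical = diffuse(chemical)
      print("mean concentration after diffusion:", chemical.mean())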

  3. Development of a Mobile Clinical Prediction Tool to Estimate Future Depression Severity and Guide Treatment in Primary Care: User-Centered Design

    PubMed Central

    2018-01-01

    Background Around the world, depression is both under- and overtreated. The diamond clinical prediction tool was developed to assist with appropriate treatment allocation by estimating the 3-month prognosis among people with current depressive symptoms. Delivering clinical prediction tools in a way that will enhance their uptake in routine clinical practice remains challenging; however, mobile apps show promise in this respect. To increase the likelihood that an app-delivered clinical prediction tool can be successfully incorporated into clinical practice, it is important to involve end users in the app design process. Objective The aim of the study was to maximize patient engagement in an app designed to improve treatment allocation for depression. Methods An iterative, user-centered design process was employed. Qualitative data were collected via 2 focus groups with a community sample (n=17) and 7 semistructured interviews with people with depressive symptoms. The results of the focus groups and interviews were used by the computer engineering team to modify subsequent prototypes of the app. Results Iterative development resulted in 3 prototypes and a final app. The areas requiring the most substantial changes following end-user input were related to the iconography used and the way that feedback was provided. In particular, communicating risk of future depressive symptoms proved difficult; these messages were consistently misinterpreted and negatively viewed and were ultimately removed. All participants felt positively about seeing their results summarized after completion of the clinical prediction tool, but there was a need for a personalized treatment recommendation made in conjunction with a consultation with a health professional. Conclusions User-centered design led to valuable improvements in the content and design of an app designed to improve allocation of and engagement in depression treatment. Iterative design allowed us to develop a tool that helps users feel hope, engage in self-reflection, and feel motivated to engage in treatment. The tool is currently being evaluated in a randomized controlled trial. PMID:29685864

  4. Characterizing Young Giant Planets with the Gemini Planet Imager: An Iterative Approach to Planet Characterization

    NASA Technical Reports Server (NTRS)

    Marley, Mark

    2015-01-01

    After discovery, the first task of exoplanet science is characterization. However, experience has shown that the limited spectral range and resolution of most directly imaged exoplanet data requires an iterative approach to spectral modeling. Simple, brown dwarf-like models must first be tested to ascertain if they are both adequate to reproduce the available data and consistent with additional constraints, including the age of the system and available limits on the planet's mass and luminosity, if any. When agreement is lacking, progressively more complex solutions must be considered, including non-solar composition, partial cloudiness, and disequilibrium chemistry. Such additional complexity must be balanced against an understanding of the limitations of the atmospheric models themselves. For example, while great strides have been made in improving the opacities of important molecules, particularly NH3 and CH4, at high temperatures, much more work is needed to understand the opacity of atomic Na and K. The highly pressure-broadened fundamental band of Na and K in the optical stretches into the near-infrared, strongly influencing the spectral shape of the Y and J bands. Discerning gravity and atmospheric composition is difficult, if not impossible, without both good atomic opacities as well as an excellent understanding of the relevant atmospheric chemistry. I will present examples of the iterative process of directly imaged exoplanet characterization as applied to both known and potentially newly discovered exoplanets with a focus on constraints provided by GPI spectra. If a new GPI planet is lacking, I will discuss HR 8799 c and d as a case study and explain why some solutions, such as spatially inhomogeneous cloudiness, introduce their own additional layers of complexity. If spectra of new planets from GPI are available, I will explain the modeling process in the context of understanding these new worlds.

  5. Scenario-based fitted Q-iteration for adaptive control of water reservoir systems under uncertainty

    NASA Astrophysics Data System (ADS)

    Bertoni, Federica; Giuliani, Matteo; Castelletti, Andrea

    2017-04-01

    Over recent years, mathematical models have been widely used to support planning and management of water resources systems. Yet, the increasing uncertainties in their inputs - due to increased variability in the hydrological regimes - are a major challenge to the optimal operations of these systems. Such uncertainty, boosted by projected changing climate, violates the stationarity principle generally used for describing hydro-meteorological processes, which assumes that the statistical characteristics of a given variable, as inferred from historical data, persist over time. As this principle is unlikely to be valid in the future, the probability density function used for modeling stochastic disturbances (e.g., inflows) becomes an additional uncertain parameter of the problem, which can be described in a deterministic and set-membership based fashion. This study contributes a novel method for designing optimal, adaptive policies for controlling water reservoir systems under climate-related uncertainty. The proposed method, called scenario-based Fitted Q-Iteration (sFQI), extends the original Fitted Q-Iteration algorithm by enlarging the state space to include the space of the uncertain system's parameters (i.e., the uncertain climate scenarios). As a result, sFQI embeds the set-membership uncertainty of the future inflow scenarios in the action-value function and is able to approximate, with a single learning process, the optimal control policy associated with any scenario included in the uncertainty set. The method is demonstrated on a synthetic water system, consisting of a regulated lake operated to ensure reliable water supply to downstream users. Numerical results show that the sFQI algorithm successfully identifies adaptive solutions to operate the system under different inflow scenarios, which outperform the control policy designed under historical conditions. Moreover, the sFQI policy generalizes over inflow scenarios not directly experienced during the policy design, thus alleviating the risk of mis-adaptation, namely the design of a solution fully adapted to a scenario that is different from the one that will actually occur.
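
    A toy sketch of the scenario-augmented idea follows: the state is enlarged with a scenario index, and a single learning process produces one policy that adapts to whichever scenario is active. The reservoir dynamics, reward and discretisation below are invented for illustration, and the tabular sweep stands in for the regression-based fitted Q-iteration step of the actual method.

      # Minimal tabular sketch of scenario-augmented Q-iteration on a toy reservoir (illustrative only).
      import numpy as np

      levels = np.linspace(0.0, 1.0, 11)         # discretised storage levels
      scenarios = np.array([0.2, 0.4, 0.6])      # uncertain mean-inflow scenarios (augmented state)
      releases = np.array([0.0, 0.1, 0.2, 0.3])  # admissible release decisions
      demand, gamma = 0.25, 0.95

      def step(level, inflow_mean, release):
          next_level = np.clip(level + inflow_mean - release, 0.0, 1.0)
          reward = -abs(min(release, level + inflow_mean) - demand)   # penalise supply deficit/surplus
          return next_level, reward

      Q = np.zeros((len(levels), len(scenarios), len(releases)))
      for _ in range(200):                       # Q-iteration sweeps over the augmented state space
          Q_new = np.zeros_like(Q)
          for i, lvl in enumerate(levels):
              for j, mu in enumerate(scenarios):             # one learning process covers all scenarios
                  for k, u in enumerate(releases):
                      nxt, r = step(lvl, mu, u)
                      i_next = int(np.argmin(np.abs(levels - nxt)))
                      Q_new[i, j, k] = r + gamma * Q[i_next, j].max()
          Q = Q_new

      policy = Q.argmax(axis=2)                  # a single policy, adaptive to the scenario index
      print(policy)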

  6. Fast and automatic depth control of iterative bone ablation based on optical coherence tomography data

    NASA Astrophysics Data System (ADS)

    Fuchs, Alexander; Pengel, Steffen; Bergmeier, Jan; Kahrs, Lüder A.; Ortmaier, Tobias

    2015-07-01

    Laser surgery is an established clinical procedure in dental applications, soft tissue ablation, and ophthalmology. The presented experimental set-up for closed-loop control of laser bone ablation provides a feedback system and enables safe ablation near anatomical structures that would usually be at high risk of damage. This study is based on combined working volumes of optical coherence tomography (OCT) and an Er:YAG cutting laser. A high level of automation in fast image data processing and tissue treatment enables reproducible results and shortens the time in the operating room. For registration of the two coordinate systems, a cross-like incision is ablated with the Er:YAG laser and segmented with OCT at three distances. The resulting Er:YAG coordinate system is reconstructed. A parameter list defines multiple sets of laser parameters including discrete and specific ablation rates as an ablation model. The control algorithm uses this model to plan corrective laser paths for each set of laser parameters and dynamically adapts the distance of the laser focus. With this iterative control cycle consisting of image processing, path planning, ablation, and moistening of tissue, the target geometry and desired depth are approximated until no further corrective laser paths can be set. The achieved depth stays within the tolerances of the parameter set with the smallest ablation rate. Specimen trials with fresh porcine bone have been conducted to prove the functionality of the developed concept. Flat bottom surfaces and sharp edges of the outline without visual signs of thermal damage verify the feasibility of automated, OCT-controlled laser bone ablation with minimal process time.
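
    The iterative control cycle described above (measure, plan a corrective path, ablate, repeat until no further corrective path can be set) can be sketched as a simple loop. The ablation rates, target depth, tolerance and the mocked OCT depth measurement below are illustrative assumptions, not parameters of the experimental set-up.

      # Minimal sketch of an iterative ablation depth-control cycle (illustrative values only).
      import random

      parameter_sets = [          # (name, ablation depth per pass in mm), coarse to fine
          ("coarse", 0.50),
          ("medium", 0.20),
          ("fine", 0.05),
      ]
      target_depth, tolerance = 2.0, 0.05
      current_depth = 0.0

      def measure_depth_with_oct(true_depth):
          # Stand-in for OCT segmentation of the cut geometry (adds small measurement noise).
          return true_depth + random.uniform(-0.01, 0.01)

      while True:
          measured = measure_depth_with_oct(current_depth)
          remaining = target_depth - measured
          # pick the largest ablation rate that does not overshoot the target beyond tolerance
          usable = [(name, rate) for name, rate in parameter_sets if rate <= remaining + tolerance]
          if not usable:
              break                     # no further corrective laser path can be set
          name, rate = usable[0]
          current_depth += rate         # "ablate" one corrective path with the chosen parameter set
          print(f"applied a {name}-rate pass; nominal depth is now {current_depth:.2f} mm")

      print(f"final depth {current_depth:.2f} mm (target {target_depth} mm)")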

  7. Implementing the Science Assessment Standards: Developing and validating a set of laboratory assessment tasks in high school biology

    NASA Astrophysics Data System (ADS)

    Saha, Gouranga Chandra

    Very often a number of factors, especially time, space and money, deter many science educators from using inquiry-based, hands-on, laboratory practical tasks as alternative assessment instruments in science. A shortage of valid inquiry-based laboratory tasks for high school biology has been cited. Driven by this need, this study addressed the following three research questions: (1) How can laboratory-based performance tasks be designed and developed that are doable by students for whom they are designed/written? (2) Do student responses to the laboratory-based performance tasks validly represent at least some of the intended process skills that new biology learning goals want students to acquire? (3) Are the laboratory-based performance tasks psychometrically consistent as individual tasks and as a set? To answer these questions, three tasks were used from the six biology tasks initially designed and developed by an iterative process of trial testing. Analyses of data from 224 students showed that performance-based laboratory tasks that are doable by all students require a careful and iterative process of development. Although the students demonstrated more skill in performing than in planning and reasoning, their performances at the item level were very poor for some items. Possible reasons for the poor performances have been discussed and suggestions on how to remediate the deficiencies have been made. Empirical evidence for the validity and reliability of the instrument has been presented from both the classical and the modern validity criteria points of view. Limitations of the study have been identified. Finally, implications of the study and directions for further research have been discussed.

  8. GLobal Integrated Design Environment

    NASA Technical Reports Server (NTRS)

    Kunkel, Matthew; McGuire, Melissa; Smith, David A.; Gefert, Leon P.

    2011-01-01

    The GLobal Integrated Design Environment (GLIDE) is a collaborative engineering application built to resolve the design session issues of real-time passing of data between multiple discipline experts in a collaborative environment. Utilizing Web protocols and multiple programming languages, GLIDE allows engineers to use the applications to which they are accustomed (in this case, Excel) to send and receive datasets via the Internet through a database-driven Web server. Traditionally, a collaborative design session consists of one or more engineers representing each discipline meeting together in a single location. The discipline leads exchange parameters and iterate through their respective processes to converge on an acceptable dataset. In cases in which the engineers are unable to meet, their parameters are passed via e-mail, telephone, facsimile, or even postal mail. This slow process of data exchange could stretch a design session to weeks or even months. While the iterative process remains in place, software can now exchange parameters securely and efficiently, while at the same time allowing for much more information about a design session to be made available. GLIDE is written in a combination of several programming languages, including REALbasic, PHP, and Microsoft Visual Basic. GLIDE client installers are available to download for both Microsoft Windows and Macintosh systems. The GLIDE client software is compatible with Microsoft Excel 2000 or later on Windows systems, and with Microsoft Excel X or later on Macintosh systems. GLIDE follows the Client-Server paradigm, transferring encrypted and compressed data via standard Web protocols. Currently, the engineers use Excel as a front end to the GLIDE Client, as many of their custom tools run in Excel.

  9. Development of two-channel prototype ITER vacuum ultraviolet spectrometer with back-illuminated charge-coupled device and microchannel plate detectors.

    PubMed

    Seon, C R; Choi, S H; Cheon, M S; Pak, S; Lee, H G; Biel, W; Barnsley, R

    2010-10-01

    A vacuum ultraviolet (VUV) spectrometer of a five-channel spectral system is designed for ITER main plasma impurity measurement. To develop and verify the system design, a two-channel prototype system is fabricated covering channels No. 3 (14.4-31.8 nm) and No. 4 (29.0-60.0 nm) of the five channels. The optical system consists of a collimating mirror to collect the light from the source onto the slit, two holographic diffraction gratings with toroidal geometry, and two different electronic detectors. For the test of the prototype system, a hollow cathode lamp is used as a light source. To find the appropriate detector for the ITER VUV system, two kinds of detectors, a back-illuminated charge-coupled device and a microchannel plate electron multiplier, are tested and their performance investigated.

  10. Using an Iterative Fourier Series Approach in Determining Orbital Elements of Detached Visual Binary Stars

    NASA Astrophysics Data System (ADS)

    Tupa, Peter R.; Quirin, S.; DeLeo, G. G.; McCluskey, G. E., Jr.

    2007-12-01

    We present a modified Fourier transform approach to determine the orbital parameters of detached visual binary stars. Originally inspired by Monet (ApJ 234, 275, 1979), this new method utilizes an iterative routine of refining higher order Fourier terms in a manner consistent with Keplerian motion. In most cases, this approach is not sensitive to the starting orbital parameters in the iterative loop. In many cases we have determined orbital elements even with small fragments of orbits and noisy data, although some systems show computational instabilities. The algorithm was constructed using the MAPLE mathematical software code and tested on artificially created orbits and many real binary systems, including Gliese 22 AC, Tau 51, and BU 738. This work was supported at Lehigh University by NSF-REU grant PHY-9820301.

  11. Ion-source modeling and improved performance of the CAMS high-intensity Cs-sputter ion source

    NASA Astrophysics Data System (ADS)

    Brown, T. A.; Roberts, M. L.; Southon, J. R.

    2000-10-01

    The interior of the high-intensity Cs-sputter source used in routine operations at the Center for Accelerator Mass Spectrometry (CAMS) has been computer modeled using the program NEDLab, with the aim of improving negative ion output. Space charge effects on ion trajectories within the source were modeled through a successive iteration process involving the calculation of ion trajectories through Poisson-equation-determined electric fields, followed by calculation of modified electric fields incorporating the charge distribution from the previously calculated ion trajectories. The program has several additional features that are useful in ion source modeling: (1) averaging of space charge distributions over successive iterations to suppress instabilities, (2) Child's Law modeling of space charge limited ion emission from surfaces, and (3) emission of particular ion groups with a thermal energy distribution and at randomized angles. The results of the modeling effort indicated that significant modification of the interior geometry of the source would double Cs+ ion production from our spherical ionizer and produce a significant increase in negative ion output from the source. The results of the implementation of the new geometry were found to be consistent with the model results.

  12. Applying matching pursuit decomposition time-frequency processing to UGS footstep classification

    NASA Astrophysics Data System (ADS)

    Larsen, Brett W.; Chung, Hugh; Dominguez, Alfonso; Sciacca, Jacob; Kovvali, Narayan; Papandreou-Suppappola, Antonia; Allee, David R.

    2013-06-01

    The challenge of rapid footstep detection and classification in remote locations has long been an important area of study for defense technology and national security. Also, as the military seeks to create effective and disposable unattended ground sensors (UGS), computational complexity and power consumption have become essential considerations in the development of classification techniques. In response to these issues, a research project at the Flexible Display Center at Arizona State University (ASU) has experimented with footstep classification using the matching pursuit decomposition (MPD) time-frequency analysis method. The MPD provides a parsimonious signal representation by iteratively selecting matched signal components from a pre-determined dictionary. The resulting time-frequency representation of the decomposed signal provides distinctive features for different types of footsteps, including footsteps during walking or running activities. The MPD features were used in a Bayesian classification method to successfully distinguish between the different activities. The computational cost of the iterative MPD algorithm was reduced, without significant loss in performance, using a modified MPD with a dictionary consisting of signals matched to cadence temporal gait patterns obtained from real seismic measurements. The classification results were demonstrated with real data from footsteps under various conditions recorded using a low-cost seismic sensor.
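
    The iterative atom-selection step of the matching pursuit decomposition can be sketched in a few lines: at each iteration the dictionary atom best correlated with the current residual is selected and its contribution subtracted. The sinusoidal dictionary and synthetic test signal below are illustrative assumptions; the actual work uses a dictionary matched to cadence temporal gait patterns from seismic data.

      # Minimal sketch of matching pursuit decomposition (MPD) over a fixed, unit-norm dictionary.
      import numpy as np

      def matching_pursuit(signal, dictionary, n_iters=5):
          # Iteratively select the dictionary atom best matched to the current residual.
          residual = signal.astype(float).copy()
          atoms, coeffs = [], []
          for _ in range(n_iters):
              correlations = dictionary @ residual        # dictionary rows are unit-norm atoms
              best = int(np.argmax(np.abs(correlations)))
              c = correlations[best]
              residual -= c * dictionary[best]
              atoms.append(best)
              coeffs.append(c)
          return atoms, coeffs, residual

      rng = np.random.default_rng(1)
      n = 256
      t = np.arange(n)
      dictionary = np.array([np.sin(2 * np.pi * f * t / n) for f in range(1, 33)])
      dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
      signal = 3.0 * dictionary[4] + 0.5 * dictionary[20] + 0.05 * rng.standard_normal(n)

      atoms, coeffs, residual = matching_pursuit(signal, dictionary)
      print(atoms, np.round(coeffs, 2), np.linalg.norm(residual))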

  13. Attenuation-emission alignment in cardiac PET∕CT based on consistency conditions

    PubMed Central

    Alessio, Adam M.; Kinahan, Paul E.; Champley, Kyle M.; Caldwell, James H.

    2010-01-01

    Purpose: In cardiac PET and PET∕CT imaging, misaligned transmission and emission images are a common problem due to respiratory and cardiac motion. This misalignment leads to erroneous attenuation correction and can cause errors in perfusion mapping and quantification. This study develops and tests a method for automated alignment of attenuation and emission data. Methods: The CT-based attenuation map is iteratively transformed until the attenuation corrected emission data minimize an objective function based on the Radon consistency conditions. The alignment process is derived from previous work by Welch et al. [“Attenuation correction in PET using consistency information,” IEEE Trans. Nucl. Sci. 45, 3134–3141 (1998)] for stand-alone PET imaging. The process was evaluated with the simulated data and measured patient data from multiple cardiac ammonia PET∕CT exams. The alignment procedure was applied to simulations of five different noise levels with three different initial attenuation maps. For the measured patient data, the alignment procedure was applied to eight attenuation-emission combinations with initially acceptable alignment and eight combinations with unacceptable alignment. The initially acceptable alignment studies were forced out of alignment a known amount and quantitatively evaluated for alignment and perfusion accuracy. The initially unacceptable studies were compared to the proposed aligned images in a blinded side-by-side review. Results: The proposed automatic alignment procedure reduced errors in the simulated data and iteratively approaches global minimum solutions with the patient data. In simulations, the alignment procedure reduced the root mean square error to less than 5 mm and reduces the axial translation error to less than 1 mm. In patient studies, the procedure reduced the translation error by >50% and resolved perfusion artifacts after a known misalignment for the eight initially acceptable patient combinations. The side-by-side review of the proposed aligned attenuation-emission maps and initially misaligned attenuation-emission maps revealed that reviewers preferred the proposed aligned maps in all cases, except one inconclusive case. Conclusions: The proposed alignment procedure offers an automatic method to reduce attenuation correction artifacts in cardiac PET∕CT and provides a viable supplement to subjective manual realignment tools. PMID:20384256

  14. Fast divide-and-conquer algorithm for evaluating polarization in classical force fields

    NASA Astrophysics Data System (ADS)

    Nocito, Dominique; Beran, Gregory J. O.

    2017-03-01

    Evaluation of the self-consistent polarization energy forms a major computational bottleneck in polarizable force fields. In large systems, the linear polarization equations are typically solved iteratively with techniques based on Jacobi iterations (JI) or preconditioned conjugate gradients (PCG). Two new variants of JI are proposed here that exploit domain decomposition to accelerate the convergence of the induced dipoles. The first, divide-and-conquer JI (DC-JI), is a block Jacobi algorithm which solves the polarization equations within non-overlapping sub-clusters of atoms directly via Cholesky decomposition, and iterates to capture interactions between sub-clusters. The second, fuzzy DC-JI, achieves further acceleration by employing overlapping blocks. Fuzzy DC-JI is analogous to an additive Schwarz method, but with distance-based weighting when averaging the fuzzy dipoles from different blocks. Key to the success of these algorithms is the use of K-means clustering to identify natural atomic sub-clusters automatically for both algorithms and to determine the appropriate weights in fuzzy DC-JI. The algorithm employs knowledge of the 3-D spatial interactions to group important elements in the 2-D polarization matrix. When coupled with direct inversion in the iterative subspace (DIIS) extrapolation, fuzzy DC-JI/DIIS in particular converges in a comparable number of iterations as PCG, but with lower computational cost per iteration. In the end, the new algorithms demonstrated here accelerate the evaluation of the polarization energy by 2-3 fold compared to existing implementations of PCG or JI/DIIS.
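
    A compact sketch of the block-Jacobi (divide-and-conquer) idea follows: each diagonal block of a linear system is solved directly via its Cholesky factor while the coupling between blocks is handled iteratively. The random symmetric positive definite test matrix and the fixed contiguous block partition below are stand-ins for the polarization equations and the K-means atom clusters of the actual algorithm.

      # Minimal sketch of block-Jacobi iterations with direct (Cholesky) solves inside each block.
      import numpy as np

      rng = np.random.default_rng(2)
      n, block = 12, 4
      A = rng.standard_normal((n, n))
      A = A @ A.T + n * np.eye(n)                  # symmetric positive definite test matrix
      b = rng.standard_normal(n)

      blocks = [np.arange(i, i + block) for i in range(0, n, block)]
      block_factors = [np.linalg.cholesky(A[np.ix_(idx, idx)]) for idx in blocks]

      x = np.zeros(n)
      for _ in range(50):                          # outer iterations over the inter-block coupling
          x_new = np.empty_like(x)
          for idx, L in zip(blocks, block_factors):
              others = np.setdiff1d(np.arange(n), idx)
              rhs = b[idx] - A[np.ix_(idx, others)] @ x[others]   # remove the other blocks' contribution
              y = np.linalg.solve(L, rhs)          # forward substitution
              x_new[idx] = np.linalg.solve(L.T, y) # backward substitution
          x = x_new

      print("block-Jacobi residual:", np.linalg.norm(A @ x - b))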

  15. Development of an efficient multigrid method for the NEM form of the multigroup neutron diffusion equation

    NASA Astrophysics Data System (ADS)

    Al-Chalabi, Rifat M. Khalil

    1997-09-01

    Development of an improvement to the computational efficiency of the existing nested iterative solution strategy of the Nodal Expansion Method (NEM) nodal-based neutron diffusion code NESTLE is presented. The improvement in the solution strategy is the result of developing a multilevel acceleration scheme that does not suffer from the numerical stalling associated with a number of iterative solution methods. The acceleration scheme is based on the multigrid method, which is specifically adapted for incorporation into the NEM nonlinear iterative strategy. This scheme optimizes the computational interplay between the spatial discretization and the NEM nonlinear iterative solution process through the use of the multigrid method. The combination of the NEM nodal method, calculation of the homogenized, neutron nodal balance coefficients (i.e. restriction operator), efficient underlying smoothing algorithm (power method of NESTLE), and the finer mesh reconstruction algorithm (i.e. prolongation operator), all operating on a sequence of coarser spatial nodes, constitutes the multilevel acceleration scheme employed in this research. Two implementations of the multigrid method into the NESTLE code were examined: the Imbedded NEM Strategy and the Imbedded CMFD Strategy. The main difference in implementation between the two methods is that in the Imbedded NEM Strategy, the NEM solution is required at every MG level. Numerical tests have shown that the Imbedded NEM Strategy suffers from divergence at coarse-grid levels, hence all the results for the different benchmarks presented here were obtained using the Imbedded CMFD Strategy. The novelties in the developed MG method are as follows: the formulation of the restriction and prolongation operators, and the selection of the relaxation method. The restriction operator utilizes a variation of the reactor-physics-consistent homogenization technique. The prolongation operator is based upon a variant of the pin power reconstruction methodology. The relaxation method, which is the power method, utilizes a constant coefficient matrix within the NEM non-linear iterative strategy. The choice of the MG nesting within the nested iterative strategy enables the incorporation of other non-linear effects with no additional coding effort. In addition, if an eigenvalue problem is being solved, it remains an eigenvalue problem at all grid levels, simplifying coding implementation. The merit of the developed MG method was tested by incorporating it into the NESTLE iterative solver, and employing it to solve four different benchmark problems. In addition to the base cases, three different sensitivity studies are performed, examining the effects of the number of MG levels, homogenized coupling coefficients correction (i.e. restriction operator), and fine-mesh reconstruction algorithm (i.e. prolongation operator). The multilevel acceleration scheme developed in this research provides the foundation for developing adaptive multilevel acceleration methods for steady-state and transient NEM nodal neutron diffusion equations. (Abstract shortened by UMI.)
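
    The multigrid ingredients named in this abstract (smoother, restriction, prolongation, coarse-grid solve) can be illustrated with a small two-grid cycle on a one-dimensional model diffusion problem. The weighted-Jacobi smoother, linear-interpolation prolongation and Galerkin coarse operator below are generic textbook choices, not the NEM/NESTLE operators.

      # Minimal two-grid sketch (smooth, restrict, coarse solve, prolong, smooth) for 1D diffusion.
      import numpy as np

      def laplacian_matrix(n):
          # Unscaled 1D diffusion stencil tridiag(-1, 2, -1).
          return np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)

      def prolongation(n_coarse):
          # Linear-interpolation prolongation from n_coarse points to 2*n_coarse + 1 fine points.
          P = np.zeros((2 * n_coarse + 1, n_coarse))
          for j in range(n_coarse):
              P[2 * j, j] += 0.5
              P[2 * j + 1, j] = 1.0
              P[2 * j + 2, j] += 0.5
          return P

      def weighted_jacobi(A, x, b, sweeps=3, w=2.0 / 3.0):
          D_inv = 1.0 / np.diag(A)
          for _ in range(sweeps):
              x = x + w * D_inv * (b - A @ x)
          return x

      n_coarse = 15
      A_f = laplacian_matrix(2 * n_coarse + 1)
      P = prolongation(n_coarse)
      A_c = P.T @ A_f @ P                           # Galerkin coarse-grid operator (restriction = P.T)
      b = np.ones(2 * n_coarse + 1)
      x = np.zeros_like(b)

      for cycle in range(10):                       # two-grid cycles
          x = weighted_jacobi(A_f, x, b)            # pre-smoothing with a simple relaxation method
          e_c = np.linalg.solve(A_c, P.T @ (b - A_f @ x))   # restrict residual, solve on coarse grid
          x = x + P @ e_c                           # prolong the coarse-grid correction
          x = weighted_jacobi(A_f, x, b)            # post-smoothing
          print(cycle, np.linalg.norm(b - A_f @ x))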

  16. Harmonics analysis of the ITER poloidal field converter based on a piecewise method

    NASA Astrophysics Data System (ADS)

    Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU

    2017-12-01

    Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into a sum of simple functions. By calculating the harmonics of each simple function with the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established based on Matlab/Simulink and a corresponding experiment is carried out on the ITER PF integration test platform. Comparative results are given. The calculated results are found to be consistent with simulation and experiment. The piecewise method is proved correct and valid for calculating the system harmonics.
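
    The piecewise idea can be illustrated by representing a converter line current as a sum of rectangular conduction pulses and evaluating the Fourier coefficient of each pulse in closed form before summing. The crude 6-pulse-style conduction pattern below is an illustrative assumption, not the ITER PF converter waveform.

      # Minimal sketch of a piecewise harmonic calculation for a pulse-decomposed current waveform.
      import numpy as np

      T = 1.0                                    # one fundamental period
      omega = 2.0 * np.pi / T

      def pulse_harmonic(n, amplitude, t1, t2):
          # Closed-form complex Fourier coefficient of a rectangular pulse on [t1, t2].
          if n == 0:
              return amplitude * (t2 - t1) / T
          jw = 1j * n * omega
          return amplitude * (np.exp(-jw * t1) - np.exp(-jw * t2)) / (jw * T)

      # a crude 6-pulse-style phase current: +1 for a third of the period, -1 for another third
      pieces = [(+1.0, 1.0 / 12.0, 5.0 / 12.0),
                (-1.0, 7.0 / 12.0, 11.0 / 12.0)]

      for n in (1, 5, 7, 11, 13):                # characteristic harmonics of a 6-pulse bridge
          c_n = sum(pulse_harmonic(n, a, t1, t2) for a, t1, t2 in pieces)
          print(f"harmonic {n}: amplitude {2.0 * abs(c_n):.3f}")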

  17. Design optimization of first wall and breeder unit module size for the Indian HCCB blanket module

    NASA Astrophysics Data System (ADS)

    Deepak, SHARMA; Paritosh, CHAUDHURI

    2018-04-01

    The Indian test blanket module (TBM) program in ITER is one of the major steps in the Indian fusion reactor program for carrying out R&D activities in critical areas such as the design of tritium breeding blankets relevant to future Indian fusion devices (ITER-relevant and DEMO). The Indian Lead–Lithium Cooled Ceramic Breeder (LLCB) blanket concept is one of the Indian DEMO-relevant TBMs, to be tested in ITER as a part of the TBM program. Helium-Cooled Ceramic Breeder (HCCB) is an alternative blanket concept that consists of lithium titanate (Li2TiO3) as ceramic breeder (CB) material in the form of packed pebble beds and beryllium as the neutron multiplier. Specifically, attention is given to the optimization of the first wall coolant channel design and the breeder unit module size, considering coolant pressure and thermal loads, for the proposed Indian HCCB blanket based on ITER-relevant TBM loading conditions. These analyses will help in proceeding further with the design of blankets for loads relevant to future fusion devices.

  18. Installation and Testing of ITER Integrated Modeling and Analysis Suite (IMAS) on DIII-D

    NASA Astrophysics Data System (ADS)

    Lao, L.; Kostuk, M.; Meneghini, O.; Smith, S.; Staebler, G.; Kalling, R.; Pinches, S.

    2017-10-01

    A critical objective of the ITER Integrated Modeling Program is the development of IMAS to support ITER plasma operation and research activities. An IMAS framework has been established based on the earlier work carried out within the EU. It consists of a physics data model and a workflow engine. The data model is capable of representing both simulation and experimental data and is applicable to ITER and other devices. IMAS has been successfully installed on a local DIII-D server using a flexible installer capable of managing the core data access tools (Access Layer and Data Dictionary) and optionally the Kepler workflow engine and coupling tools. A general adaptor for OMFIT (a workflow engine) is being built for adaptation of any analysis code to IMAS using a new IMAS universal access layer (UAL) interface developed from an existing OMFIT EU Integrated Tokamak Modeling UAL. Ongoing work includes development of a general adaptor for EFIT and TGLF based on this new UAL that can be readily extended for other physics codes within OMFIT. Work supported by US DOE under DE-FC02-04ER54698.

  19. Cyclic Game Dynamics Driven by Iterated Reasoning

    PubMed Central

    Frey, Seth; Goldstone, Robert L.

    2013-01-01

    Recent theories from complexity science argue that complex dynamics are ubiquitous in social and economic systems. These claims emerge from the analysis of individually simple agents whose collective behavior is surprisingly complicated. However, economists have argued that iterated reasoning (what you think I think you think) will suppress complex dynamics by stabilizing or accelerating convergence to Nash equilibrium. We report stable and efficient periodic behavior in human groups playing the Mod Game, a multi-player game similar to Rock-Paper-Scissors. The game rewards subjects for thinking exactly one step ahead of others in their group. Groups that play this game exhibit cycles that are inconsistent with any fixed-point solution concept. These cycles are driven by a “hopping” behavior that is consistent with other accounts of iterated reasoning: agents are constrained to about two steps of iterated reasoning and learn an additional one-half step with each session. If higher-order reasoning can be complicit in complex emergent dynamics, then cyclic and chaotic patterns may be endogenous features of real-world social and economic systems. PMID:23441191

  20. Prospective ECG-Triggered Coronary CT Angiography: Clinical Value of Noise-Based Tube Current Reduction Method with Iterative Reconstruction

    PubMed Central

    Shen, Junlin; Du, Xiangying; Guo, Daode; Cao, Lizhen; Gao, Yan; Yang, Qi; Li, Pengyu; Liu, Jiabin; Li, Kuncheng

    2013-01-01

    Objectives To evaluate the clinical value of noise-based tube current reduction method with iterative reconstruction for obtaining consistent image quality with dose optimization in prospective electrocardiogram (ECG)-triggered coronary CT angiography (CCTA). Materials and Methods We performed a prospective randomized study evaluating 338 patients undergoing CCTA with prospective ECG-triggering. Patients were randomly assigned to fixed tube current with filtered back projection (Group 1, n = 113), noise-based tube current with filtered back projection (Group 2, n = 109) or with iterative reconstruction (Group 3, n = 116). Tube voltage was fixed at 120 kV. Qualitative image quality was rated on a 5-point scale (1 = impaired, to 5 = excellent, with 3–5 defined as diagnostic). Image noise and signal intensity were measured; signal-to-noise ratio was calculated; radiation dose parameters were recorded. Statistical analyses included one-way analysis of variance, chi-square test, Kruskal-Wallis test and multivariable linear regression. Results Image noise was maintained at the target value of 35HU with small interquartile range for Group 2 (35.00–35.03HU) and Group 3 (34.99–35.02HU), while from 28.73 to 37.87HU for Group 1. All images in the three groups were acceptable for diagnosis. A relative 20% and 51% reduction in effective dose for Group 2 (2.9 mSv) and Group 3 (1.8 mSv) were achieved compared with Group 1 (3.7 mSv). After adjustment for scan characteristics, iterative reconstruction was associated with 26% reduction in effective dose. Conclusion Noise-based tube current reduction method with iterative reconstruction maintains image noise precisely at the desired level and achieves consistent image quality. Meanwhile, effective dose can be reduced by more than 50%. PMID:23741444

  1. An iterative phase-space explicit discontinuous Galerkin method for stellar radiative transfer in extended atmospheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Almeida, Valmor F.

    In this work, a phase-space discontinuous Galerkin (PSDG) method is presented for the solution of stellar radiative transfer problems. It allows for greater adaptivity than competing methods without sacrificing generality. The method is extensively tested on a spherically symmetric, static, inverse-power-law scattering atmosphere. Results for different sizes of atmospheres and intensities of scattering agreed with asymptotic values. The exponentially decaying behavior of the radiative field in the diffusive-transparent transition region, and the forward peaking behavior at the surface of extended atmospheres were accurately captured. The integrodifferential equation of radiation transfer is solved iteratively by alternating between the radiative pressure equation and the original equation with the integral term treated as an energy density source term. In each iteration, the equations are solved via an explicit, flux-conserving, discontinuous Galerkin method. Finite elements are ordered in wave fronts perpendicular to the characteristic curves so that elemental linear algebraic systems are solved quickly by sweeping the phase space element by element. Two implementations of a diffusive boundary condition at the origin are demonstrated wherein the finite discontinuity in the radiation intensity is accurately captured by the proposed method. This allows for a consistent mechanism to preserve photon luminosity. The method was proved to be robust and fast, and a case is made for the adequacy of parallel processing. In addition to classical two-dimensional plots, results of normalized radiation intensity were mapped onto a log-polar surface exhibiting all distinguishing features of the problem studied.

  2. Using a web-based, iterative education model to enhance clinical clerkships.

    PubMed

    Alexander, Erik K; Bloom, Nurit; Falchuk, Kenneth H; Parker, Michael

    2006-10-01

    Although most clinical clerkship curricula are designed to provide all students consistent exposure to defined course objectives, it is clear that individual students are diverse in their backgrounds and baseline knowledge. Ideally, the learning process should be individualized towards the strengths and weaknesses of each student, but, until recently, this has proved prohibitively time-consuming. The authors describe a program to develop and evaluate an iterative, Web-based educational model assessing medical students' knowledge deficits and allowing targeted teaching shortly after their identification. Beginning in 2002, a new educational model was created, validated, and applied in a prospective fashion to medical students during an internal medicine clerkship at Harvard Medical School. Using a Web-based platform, five validated questions were delivered weekly and a specific knowledge deficiency identified. Teaching targeted to the deficiency was provided to an intervention cohort of five to seven students in each clerkship, though not to controls (the remaining 7-10 students). Effectiveness of this model was assessed by performance on the following week's posttest question. Specific deficiencies were readily identified weekly using this model. Throughout the year, however, deficiencies varied unpredictably. Teaching targeted to deficiencies resulted in significantly better performance on follow-up questioning compared to the performance of those who did not receive this intervention. This model was easily applied in an additive fashion to the current curriculum, and student acceptance was high. The authors conclude that a Web-based, iterative assessment model can effectively target specific curricular needs unique to each group; focus teaching in a rapid, formative, and highly efficient manner; and may improve the efficiency of traditional clerkship teaching.

  3. An iterative phase-space explicit discontinuous Galerkin method for stellar radiative transfer in extended atmospheres

    DOE PAGES

    de Almeida, Valmor F.

    2017-04-19

    In this work, a phase-space discontinuous Galerkin (PSDG) method is presented for the solution of stellar radiative transfer problems. It allows for greater adaptivity than competing methods without sacrificing generality. The method is extensively tested on a spherically symmetric, static, inverse-power-law scattering atmosphere. Results for different sizes of atmospheres and intensities of scattering agreed with asymptotic values. The exponentially decaying behavior of the radiative field in the diffusive-transparent transition region, and the forward peaking behavior at the surface of extended atmospheres were accurately captured. The integrodifferential equation of radiation transfer is solved iteratively by alternating between the radiative pressure equation and the original equation with the integral term treated as an energy density source term. In each iteration, the equations are solved via an explicit, flux-conserving, discontinuous Galerkin method. Finite elements are ordered in wave fronts perpendicular to the characteristic curves so that elemental linear algebraic systems are solved quickly by sweeping the phase space element by element. Two implementations of a diffusive boundary condition at the origin are demonstrated wherein the finite discontinuity in the radiation intensity is accurately captured by the proposed method. This allows for a consistent mechanism to preserve photon luminosity. The method was proved to be robust and fast, and a case is made for the adequacy of parallel processing. In addition to classical two-dimensional plots, results of normalized radiation intensity were mapped onto a log-polar surface exhibiting all distinguishing features of the problem studied.

  4. An improved parallel fuzzy connected image segmentation method based on CUDA.

    PubMed

    Wang, Liansheng; Li, Dong; Huang, Shaohui

    2016-05-12

    The fuzzy connectedness method (FC) is an effective method for extracting fuzzy objects from medical images. However, when FC is applied to large medical image datasets, its running time becomes very long. Therefore, a parallel CUDA version of FC (CUDA-kFOE) was proposed by Ying et al. to accelerate the original FC. Unfortunately, CUDA-kFOE does not consider the edges between GPU blocks, which causes miscalculation of edge points. In this paper, an improved algorithm is proposed by adding a correction step on the edge points. The improved algorithm can greatly enhance the calculation accuracy. In the improved method, an iterative manner is applied. In the first iteration, the affinity computation strategy is changed and a look-up table is employed for memory reduction. In the second iteration, the voxels miscalculated because of asynchronism are updated again. Three CT sequences of hepatic vasculature with different sizes were used in the experiments with three different seeds. An NVIDIA Tesla C2075 is used to evaluate our improved method over these three data sets. Experimental results show that the improved algorithm can achieve a faster segmentation compared to the CPU version and higher accuracy than CUDA-kFOE. The calculation results were consistent with the CPU version, which demonstrates that it corrects the edge point calculation error of the original CUDA-kFOE. The proposed method has a comparable time cost and has fewer errors compared to the original CUDA-kFOE as demonstrated in the experimental results. In the future, we will focus on automatic acquisition methods and automatic processing.

  5. SU-E-I-33: Initial Evaluation of Model-Based Iterative CT Reconstruction Using Standard Image Quality Phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gingold, E; Dave, J

    2014-06-01

    Purpose: The purpose of this study was to compare a new model-based iterative reconstruction with existing reconstruction methods (filtered backprojection and basic iterative reconstruction) using quantitative analysis of standard image quality phantom images. Methods: An ACR accreditation phantom (Gammex 464) and a CATPHAN600 phantom were scanned using 3 routine clinical acquisition protocols (adult axial brain, adult abdomen, and pediatric abdomen) on a Philips iCT system. Each scan was acquired using default conditions and 75%, 50% and 25% dose levels. Images were reconstructed using standard filtered backprojection (FBP), conventional iterative reconstruction (iDose4) and a prototype model-based iterative reconstruction (IMR). Phantom measurements included CT number accuracy, contrast to noise ratio (CNR), modulation transfer function (MTF), low contrast detectability (LCD), and noise power spectrum (NPS). Results: The choice of reconstruction method had no effect on CT number accuracy or MTF (p<0.01). The CNR of a 6 HU contrast target was improved by 1–67% with iDose4 relative to FBP, while IMR improved CNR by 145–367% across all protocols and dose levels. Within each scan protocol, the CNR improvement from IMR vs FBP showed a general trend of greater improvement at lower dose levels. NPS magnitude was greatest for FBP and lowest for IMR. The NPS of the IMR reconstruction showed a pronounced decrease with increasing spatial frequency, consistent with the unusual noise texture seen in IMR images. Conclusion: Iterative Model Reconstruction reduces noise and improves contrast-to-noise ratio without sacrificing spatial resolution in CT phantom images. This offers the possibility of radiation dose reduction and improved low contrast detectability compared with filtered backprojection or conventional iterative reconstruction.

  6. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily for real-time testing, but rather for models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes including two widely used methods, namely, a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provide repeatable, highly nonlinear behavior including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark method with a fixed number of iterations is applied for hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and the fixed number of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and to predict the accuracy and stability of the simulations.
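
    A minimal sketch of the iterative scheme referred to above, an implicit Newmark (average acceleration) step with a fixed number of corrector iterations per step, is given below for a single-degree-of-freedom system. The mass, damping, softening restoring force and excitation are illustrative assumptions, not the properties of the test structures.

      # Minimal sketch: implicit Newmark integration with a FIXED number of iterations per step.
      import numpy as np

      m, c = 1.0, 0.05                     # mass and viscous damping
      k0, fy = 10.0, 1.5                   # initial stiffness and saturation force of a softening spring
      beta, gamma_n = 0.25, 0.5            # Newmark average-acceleration parameters
      dt, n_iter = 0.01, 3                 # time step and the fixed number of corrector iterations

      def restoring_force(u):
          return fy * np.tanh(k0 * u / fy)             # smooth softening spring (toy nonlinearity)

      def tangent_stiffness(u):
          return k0 * (1.0 - np.tanh(k0 * u / fy) ** 2)

      def ground_accel(t):
          return 0.5 * np.sin(2.0 * np.pi * 2.0 * t)   # toy excitation

      u, v, a, peak = 0.0, 0.0, 0.0, 0.0
      for step in range(2000):
          t_next = (step + 1) * dt
          p_next = -m * ground_accel(t_next)
          u_pred = u + dt * v + dt**2 * (0.5 - beta) * a   # Newmark displacement predictor terms
          u_new = u                                        # initial guess for the new displacement
          for _ in range(n_iter):                          # fixed number of Newton-type corrections
              a_new = (u_new - u_pred) / (beta * dt**2)
              v_new = v + dt * ((1.0 - gamma_n) * a + gamma_n * a_new)
              residual = p_next - m * a_new - c * v_new - restoring_force(u_new)  # unbalanced force
              k_eff = tangent_stiffness(u_new) + gamma_n * c / (beta * dt) + m / (beta * dt**2)
              u_new += residual / k_eff
          a_old = a
          a = (u_new - u_pred) / (beta * dt**2)
          v = v + dt * ((1.0 - gamma_n) * a_old + gamma_n * a)
          u = u_new
          peak = max(peak, abs(u))

      print("peak displacement:", peak)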

  7. Calibration and Data Analysis of the MC-130 Air Balance

    NASA Technical Reports Server (NTRS)

    Booth, Dennis; Ulbrich, N.

    2012-01-01

    Design, calibration, calibration analysis, and intended use of the MC-130 air balance are discussed. The MC-130 balance is an 8.0 inch diameter force balance that has two separate internal air flow systems and one external bellows system. The manual calibration of the balance consisted of a total of 1854 data points with both unpressurized and pressurized air flowing through the balance. A subset of 1160 data points was chosen for the calibration data analysis. The regression analysis of the subset was performed using two fundamentally different analysis approaches. First, the data analysis was performed using a recently developed extension of the Iterative Method. This approach fits gage outputs as a function of both applied balance loads and bellows pressures while still allowing the application of the iteration scheme that is used with the Iterative Method. Then, for comparison, the axial force was also analyzed using the Non-Iterative Method. This alternate approach directly fits loads as a function of measured gage outputs and bellows pressures and does not require a load iteration. The regression models used by both the extended Iterative and Non-Iterative Method were constructed such that they met a set of widely accepted statistical quality requirements. These requirements lead to reliable regression models and prevent overfitting of data because they ensure that no hidden near-linear dependencies between regression model terms exist and that only statistically significant terms are included. Finally, a comparison of the axial force residuals was performed. Overall, axial force estimates obtained from both methods show excellent agreement as the differences of the standard deviation of the axial force residuals are on the order of 0.001 % of the axial force capacity.

  8. An iterative analytical technique for the design of interplanetary direct transfer trajectories including perturbations

    NASA Astrophysics Data System (ADS)

    Parvathi, S. P.; Ramanan, R. V.

    2018-06-01

    An iterative analytical trajectory design technique that includes perturbations in the departure phase of the interplanetary orbiter missions is proposed. The perturbations such as non-spherical gravity of Earth and the third body perturbations due to Sun and Moon are included in the analytical design process. In the design process, first the design is obtained using the iterative patched conic technique without including the perturbations and then modified to include the perturbations. The modification is based on, (i) backward analytical propagation of the state vector obtained from the iterative patched conic technique at the sphere of influence by including the perturbations, and (ii) quantification of deviations in the orbital elements at periapsis of the departure hyperbolic orbit. The orbital elements at the sphere of influence are changed to nullify the deviations at the periapsis. The analytical backward propagation is carried out using the linear approximation technique. The new analytical design technique, named as biased iterative patched conic technique, does not depend upon numerical integration and all computations are carried out using closed form expressions. The improved design is very close to the numerical design. The design analysis using the proposed technique provides a realistic insight into the mission aspects. Also, the proposed design is an excellent initial guess for numerical refinement and helps arrive at the four distinct design options for a given opportunity.

  9. Incorporating Prototyping and Iteration into Intervention Development: A Case Study of a Dining Hall-Based Intervention

    ERIC Educational Resources Information Center

    McClain, Arianna D.; Hekler, Eric B.; Gardner, Christopher D.

    2013-01-01

    Background: Previous research from the fields of computer science and engineering highlight the importance of an iterative design process (IDP) to create more creative and effective solutions. Objective: This study describes IDP as a new method for developing health behavior interventions and evaluates the effectiveness of a dining hall--based…

  10. Not All Wizards Are from Oz: Iterative Design of Intelligent Learning Environments by Communication Capacity Tapering

    ERIC Educational Resources Information Center

    Mavrikis, Manolis; Gutierrez-Santos, Sergio

    2010-01-01

    This paper presents a methodology for the design of intelligent learning environments. We recognise that in the educational technology field, theory development and system-design should be integrated and rely on an iterative process that addresses: (a) the difficulty to elicit precise, concise, and operationalized knowledge from "experts" and (b)…

  11. Item Purification Does Not Always Improve DIF Detection: A Counterexample with Angoff's Delta Plot

    ERIC Educational Resources Information Center

    Magis, David; Facon, Bruno

    2013-01-01

    Item purification is an iterative process that is often advocated as improving the identification of items affected by differential item functioning (DIF). With test-score-based DIF detection methods, item purification iteratively removes the items currently flagged as DIF from the test scores to get purified sets of items, unaffected by DIF. The…
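
    The purification loop itself can be sketched structurally: items currently flagged as DIF are removed from the matching test score, the DIF analysis is repeated, and the process stops when the flagged set no longer changes. The simulated responses, the simple score-stratified p-difference statistic and the threshold below are illustrative stand-ins, not Angoff's delta plot or the data of the study.

      # Structural sketch of iterative item purification for DIF detection (illustrative statistic).
      import numpy as np

      rng = np.random.default_rng(3)
      n_persons, n_items = 2000, 20
      ability = rng.standard_normal(n_persons)
      group = rng.integers(0, 2, n_persons)                 # 0 = reference group, 1 = focal group
      difficulty = rng.uniform(-1, 1, n_items)
      dif_shift = np.zeros(n_items)
      dif_shift[:3] = 0.8                                   # first three items favour the reference group
      logits = ability[:, None] - difficulty[None, :] - dif_shift[None, :] * group[:, None]
      responses = (rng.random((n_persons, n_items)) < 1 / (1 + np.exp(-logits))).astype(int)

      def flag_dif(responses, keep, threshold=0.05):
          # Flag items with a large focal/reference p-difference, conditioned on the purified score.
          score = responses[:, keep].sum(axis=1)
          strata = np.digitize(score, np.quantile(score, [0.25, 0.5, 0.75]))
          found = set()
          for j in range(responses.shape[1]):
              diffs = []
              for s in np.unique(strata):
                  in_stratum = strata == s
                  ref_p = responses[in_stratum & (group == 0), j].mean()
                  focal_p = responses[in_stratum & (group == 1), j].mean()
                  diffs.append(ref_p - focal_p)
              if abs(np.mean(diffs)) > threshold:
                  found.add(j)
          return found

      flagged = set()
      for _ in range(10):                                   # purification loop
          keep = [j for j in range(n_items) if j not in flagged]
          new_flags = flag_dif(responses, keep)
          if new_flags == flagged:                          # flagged set is stable -> purified
              break
          flagged = new_flags

      print("items flagged as DIF after purification:", sorted(flagged))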

  12. The role of simulation in the design of a neural network chip

    NASA Technical Reports Server (NTRS)

    Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.

    1993-01-01

    An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.

  13. Exact exchange potential evaluated from occupied Kohn-Sham and Hartree-Fock solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinal, M.; Holas, A.

    2011-06-15

    The reported algorithm determines the exact exchange potential v_x in an iterative way using energy shifts (ESs) and orbital shifts (OSs) obtained with finite-difference formulas from the solutions (occupied orbitals and their energies) of the Hartree-Fock-like equation and the Kohn-Sham-like equation, the former used for the initial approximation to v_x and the latter for increments of ES and OS due to subsequent changes of v_x. Thus, the need for solution of the differential equations for OSs, used by Kuemmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)], is bypassed. The iterated exchange potential, expressed in terms of ESs and OSs, is improved by modifying ESs at odd iteration steps and OSs at even steps. The modification formulas are related to the optimized-effective-potential equation (satisfied at convergence) written as the condition of vanishing density shift (DS). They are obtained, respectively, by enforcing its satisfaction through corrections to approximate OSs and by determining the optimal ESs that minimize the DS norm. The proposed method, successfully tested for several closed-(sub)shell atoms, from Be to Kr, within the density functional theory exchange-only approximation, proves highly efficient. The calculations using the pseudospectral method for representing orbitals give iterative sequences of approximate exchange potentials (starting with the Krieger-Li-Iafrate approximation) that rapidly approach the exact v_x so that, for Ne, Ar, and Zn, the corresponding DS norm becomes less than 10^-6 after 13, 13, and 9 iteration steps for a given electron density. In self-consistent density calculations, orbital energies of 10^-4 hartree accuracy are obtained for these atoms after, respectively, 9, 12, and 12 density iteration steps, each involving just two steps of v_x iteration, while the accuracy limit of 10^-6 to 10^-7 hartree is reached after 20 density iterations.

  14. Exact exchange potential evaluated from occupied Kohn-Sham and Hartree-Fock solutions

    NASA Astrophysics Data System (ADS)

    Cinal, M.; Holas, A.

    2011-06-01

    The reported algorithm determines the exact exchange potential v_x in an iterative way using energy shifts (ESs) and orbital shifts (OSs) obtained with finite-difference formulas from the solutions (occupied orbitals and their energies) of the Hartree-Fock-like equation and the Kohn-Sham-like equation, the former used for the initial approximation to v_x and the latter for increments of ES and OS due to subsequent changes of v_x. Thus, the need for solution of the differential equations for OSs, used by Kümmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)], is bypassed. The iterated exchange potential, expressed in terms of ESs and OSs, is improved by modifying ESs at odd iteration steps and OSs at even steps. The modification formulas are related to the optimized-effective-potential equation (satisfied at convergence) written as the condition of vanishing density shift (DS). They are obtained, respectively, by enforcing its satisfaction through corrections to approximate OSs and by determining the optimal ESs that minimize the DS norm. The proposed method, successfully tested for several closed-(sub)shell atoms, from Be to Kr, within the density functional theory exchange-only approximation, proves highly efficient. The calculations using the pseudospectral method for representing orbitals give iterative sequences of approximate exchange potentials (starting with the Krieger-Li-Iafrate approximation) that rapidly approach the exact v_x so that, for Ne, Ar, and Zn, the corresponding DS norm becomes less than 10^-6 after 13, 13, and 9 iteration steps for a given electron density. In self-consistent density calculations, orbital energies of 10^-4 hartree accuracy are obtained for these atoms after, respectively, 9, 12, and 12 density iteration steps, each involving just two steps of v_x iteration, while the accuracy limit of 10^-6 to 10^-7 hartree is reached after 20 density iterations.

  15. Design Features of the Neutral Particle Diagnostic System for the ITER Tokamak

    NASA Astrophysics Data System (ADS)

    Petrov, S. Ya.; Afanasyev, V. I.; Melnik, A. D.; Mironov, M. I.; Navolotsky, A. S.; Nesenevich, V. G.; Petrov, M. P.; Chernyshev, F. V.; Kedrov, I. V.; Kuzmin, E. G.; Lyublin, B. V.; Kozlovski, S. S.; Mokeev, A. N.

    2017-12-01

    The control of the deuterium-tritium (DT) fuel isotopic ratio is needed to ensure the best performance of the ITER thermonuclear fusion reactor. The diagnostic system described in this paper allows the measurement of this ratio by analyzing the hydrogen isotope fluxes (neutral particle analysis, NPA). The development and supply of the NPA diagnostics for ITER were delegated to the Russian Federation. The diagnostics is being developed at the Ioffe Institute. The system consists of two analyzers, viz., LENPA (Low Energy Neutral Particle Analyzer) with a 10-200 keV energy range and HENPA (High Energy Neutral Particle Analyzer) with a 0.1-4.0 MeV energy range. Simultaneous operation of both analyzers in different energy ranges enables researchers to measure the DT fuel ratio both in the central burning plasma (thermonuclear burn zone) and at the edge. When developing the diagnostic complex, it was necessary to account for the impact of several factors: high levels of neutron and gamma radiation, the direct vacuum connection to the ITER vessel, implying high tritium containment, strict requirements on the reliability of all units and mechanisms, and the limited space available for accommodation of the diagnostic hardware at the ITER tokamak. The paper describes the design of the diagnostic complex and the engineering solutions that make it possible to conduct measurements under tokamak reactor conditions. The proposed engineering solutions provide a common vacuum channel, safe with respect to thermal and mechanical loads, for hydrogen isotope atoms to pass to the analyzers; ensure efficient shielding of the analyzers from the ITER stray magnetic field (up to 1 kG); provide remote control of the NPA diagnostic complex, in particular connection/disconnection of the NPA vacuum beamline from the ITER vessel; meet the ITER radiation safety requirements; and ensure measurements of the fuel isotopic ratio under high levels of neutron and gamma radiation.

  16. Modeling design iteration in product design and development and its solution by a novel artificial bee colony algorithm.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally increases product cost and delays development time, so identifying and modeling couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the work transformation matrix (WTM) model are discussed, and a tearing approach together with an inner iteration method is used to complement the classic WTM model. In addition, the artificial bee colony (ABC) algorithm is introduced to find optimal decoupling schemes. First, the tearing approach and the inner iteration method are analyzed for solving coupled task sets. Second, a hybrid iteration model combining these two techniques is set up. Third, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to carry out the problem solving. Finally, an engineering design of a chemical processing system is given to verify the model's reasonableness and effectiveness.
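
    The abstract gives no implementation details for the ABC step. As a rough, hypothetical illustration of the artificial bee colony idea (employed, onlooker and scout phases over a population of candidate solutions), the Python sketch below minimizes a generic cost function standing in for a decoupling-cost measure; the objective, bounds and control parameters are placeholders, not the authors' settings.

        import numpy as np

        def abc_minimize(cost, dim, bounds, n_food=20, max_iter=200, limit=25, seed=0):
            """Minimal artificial bee colony optimizer (employed/onlooker/scout phases)."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            foods = rng.uniform(lo, hi, size=(n_food, dim))   # candidate solutions
            fit = np.array([cost(x) for x in foods])
            trials = np.zeros(n_food, dtype=int)

            def neighbour(i):
                # Perturb one randomly chosen dimension towards another food source.
                k = rng.integers(n_food - 1)
                k = k + (k >= i)                               # partner index != i
                j = rng.integers(dim)
                x = foods[i].copy()
                x[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
                return np.clip(x, lo, hi)

            def try_improve(i):
                x = neighbour(i)
                f = cost(x)
                if f < fit[i]:
                    foods[i], fit[i], trials[i] = x, f, 0
                else:
                    trials[i] += 1

            for _ in range(max_iter):
                for i in range(n_food):                        # employed bees
                    try_improve(i)
                prob = fit.max() - fit + 1e-12                 # better food -> higher probability
                prob /= prob.sum()
                for i in rng.choice(n_food, size=n_food, p=prob):   # onlooker bees
                    try_improve(i)
                worn = int(np.argmax(trials))                  # scout bee replaces a stagnant source
                if trials[worn] > limit:
                    foods[worn] = rng.uniform(lo, hi, size=dim)
                    fit[worn] = cost(foods[worn])
                    trials[worn] = 0
            best = int(np.argmin(fit))
            return foods[best], fit[best]

        # Toy usage: a quadratic stands in for the decoupling-cost function.
        x_best, f_best = abc_minimize(lambda x: float(np.sum(x**2)), dim=5, bounds=(-5.0, 5.0))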

  17. Usability Evaluation of a Clinical Decision Support System for Geriatric ED Pain Treatment.

    PubMed

    Genes, Nicholas; Kim, Min Soon; Thum, Frederick L; Rivera, Laura; Beato, Rosemary; Song, Carolyn; Soriano, Jared; Kannry, Joseph; Baumlin, Kevin; Hwang, Ula

    2016-01-01

    Older adults are at risk for inadequate emergency department (ED) pain care. Unrelieved acute pain is associated with poor outcomes. Clinical decision support systems (CDSS) hold promise to improve patient care, but CDSS quality varies widely, particularly when usability evaluation is not employed. Our objective was to conduct an iterative usability and redesign process for a novel geriatric abdominal pain care CDSS. We hypothesized this process would result in the creation of more usable and favorable pain care interventions. Thirteen emergency physicians familiar with the Electronic Health Record (EHR) in use at the study site were recruited. Over a 10-week period, 17 1-hour usability test sessions were conducted across 3 rounds of testing. Participants were given 3 patient scenarios and provided simulated clinical care using the EHR, while interacting with the CDSS interventions. Quantitative System Usability Scores (SUS), favorability scores and qualitative narrative feedback were collected for each session. Using a multi-step review process by an interdisciplinary team, positive and negative usability issues in effectiveness, efficiency, and satisfaction were considered, prioritized and incorporated in the iterative redesign process of the CDSS. Video analysis was used to determine the appropriateness of the CDS appearances during simulated clinical care. Over the 3 rounds of usability evaluations and subsequent redesign processes, mean SUS progressively improved from 74.8 to 81.2 to 88.9; mean favorability scores improved from 3.23 to 4.29 (1 worst, 5 best). Video analysis revealed that, in the course of the iterative redesign processes, rates of physicians' acknowledgment of CDS interventions increased; however, most rates of desired actions by physicians (such as more frequent pain score updates) decreased. The iterative usability redesign process was instrumental in improving the usability of the CDSS; if implemented in practice, it could improve geriatric pain care. The usability evaluation process led to improved acknowledgement and favorability. Incorporating usability testing when designing CDSS interventions for studies may be effective in enhancing clinician use.

  18. Optimal spiral phase modulation in Gerchberg-Saxton algorithm for wavefront reconstruction and correction

    NASA Astrophysics Data System (ADS)

    Baránek, M.; Běhal, J.; Bouchal, Z.

    2018-01-01

    In phase retrieval applications, the Gerchberg-Saxton (GS) algorithm is widely used for its simplicity of implementation. This iterative process can advantageously be deployed in combination with a spatial light modulator (SLM), enabling simultaneous correction of optical aberrations. As recently demonstrated, the accuracy and efficiency of the aberration correction using the GS algorithm can be significantly enhanced by a vortex image spot used as the target intensity pattern in the iterative process. Here we present an optimization of the spiral phase modulation incorporated into the GS algorithm.
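
    For readers unfamiliar with the GS loop, a minimal Python sketch of the textbook algorithm is given below: amplitudes are enforced alternately in the two Fourier-conjugate planes while only the phase is retained. The vortex target shown (a spiral phase of hypothetical charge m) is only meant to illustrate the kind of target intensity pattern the abstract refers to; the SLM hardware, aberration terms and the optimization studied in the paper are not reproduced.

        import numpy as np

        def gerchberg_saxton(source_amp, target_amp, n_iter=100, seed=0):
            """Minimal GS loop: find a pupil phase that maps |source| to |target| under an FFT."""
            rng = np.random.default_rng(seed)
            phase = rng.uniform(0, 2 * np.pi, source_amp.shape)     # random initial phase
            for _ in range(n_iter):
                field = source_amp * np.exp(1j * phase)             # impose source amplitude
                far = np.fft.fft2(field)
                far = target_amp * np.exp(1j * np.angle(far))        # impose target amplitude
                near = np.fft.ifft2(far)
                phase = np.angle(near)                               # keep only the phase
            return phase

        # A vortex ("spiral-phase") target spot of hypothetical charge m, mimicking the
        # vortex image used as the target intensity pattern in the abstract.
        n = 256
        y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
        m = 1
        vortex_phase = m * np.arctan2(y, x)
        gauss = np.exp(-(x**2 + y**2) / (2 * 30.0**2))
        target_amp = np.abs(np.fft.fft2(gauss * np.exp(1j * vortex_phase)))
        source_amp = gauss
        phi = gerchberg_saxton(source_amp, target_amp)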

  19. A Predictive Model for Toxicity Effects Assessment of Biotransformed Hepatic Drugs Using Iterative Sampling Method.

    PubMed

    Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella

    2016-12-09

    Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of potential drugs. In this study, we used a dataset which covers four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features for reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to solve the problem of imbalanced datasets. An ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps. The first step (sampling step) iteratively modifies the prior distribution of the minority and majority classes. In the second step, a data cleaning method is used to remove the overlap produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results proved that the proposed model performed well in classifying the unknown samples according to all toxic effects in the imbalanced datasets.
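
    The abstract does not specify how the prior distributions are modified or which cleaning rule is applied, so the sketch below is only one plausible reading of the two ITS steps for a binary problem: duplicate part of the minority class, then drop majority points whose nearest neighbour is a minority point, and finally train a bagging classifier (scikit-learn). All sizes, rounds and the toy data are assumptions.

        import numpy as np
        from sklearn.ensemble import BaggingClassifier
        from sklearn.neighbors import NearestNeighbors
        from sklearn.tree import DecisionTreeClassifier

        def iterative_sampling(X, y, n_rounds=3, seed=0):
            """One plausible reading of an iterative sampling step for a binary, imbalanced set."""
            rng = np.random.default_rng(seed)
            X, y = np.asarray(X, float), np.asarray(y)
            for _ in range(n_rounds):
                minority = int(np.argmin(np.bincount(y)))
                idx_min = np.where(y == minority)[0]
                idx_maj = np.where(y != minority)[0]
                # Sampling step: duplicate a random slice of the minority class.
                extra = rng.choice(idx_min, size=max(1, (len(idx_maj) - len(idx_min)) // 2))
                X, y = np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])
                # Cleaning step: drop majority points whose nearest other point is minority.
                nn = NearestNeighbors(n_neighbors=2).fit(X)
                neigh = nn.kneighbors(X, return_distance=False)[:, 1]
                overlap = [i for i in np.where(y != minority)[0] if y[neigh[i]] == minority]
                keep = np.setdiff1d(np.arange(len(y)), overlap)
                X, y = X[keep], y[keep]
            return X, y

        # Toy imbalanced data in, bagged trees out (real descriptors would come from the
        # rough-set feature-selection phase described in the abstract).
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 1, (200, 8)), rng.normal(1.5, 1, (20, 8))])
        y = np.array([0] * 200 + [1] * 20)
        Xb, yb = iterative_sampling(X, y)
        clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0).fit(Xb, yb)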

  20. Pollution Reduction Technology Program for Small Jet Aircraft Engines, Phase 2

    NASA Technical Reports Server (NTRS)

    Bruce, T. W.; Davis, F. G.; Kuhn, T. E.; Mongia, H. C.

    1978-01-01

    A series of iterative combustor pressure rig tests were conducted on two combustor concepts applied to the AiResearch TFE731-2 turbofan engine combustion system to optimize combustor performance and operating characteristics consistent with low emissions. The two concepts were an axial air-assisted airblast fuel injection configuration with variable-geometry air swirlers and a staged premix/prevaporization configuration. The iterative rig testing and modification sequence on both concepts was intended to provide operational compatibility with the engine and to select one concept for further evaluation in a TFE731-2 engine.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panayotov, Dobromir; Poitevin, Yves; Grief, Andrew

    'Fusion for Energy' (F4E) is designing, developing, and implementing the European Helium-Cooled Lead-Lithium (HCLL) and Helium-Cooled Pebble-Bed (HCPB) Test Blanket Systems (TBSs) for ITER (Nuclear Facility INB-174). Safety demonstration is an essential element for the integration of these TBSs into ITER and accident analysis is one of its critical components. A systematic approach to accident analysis has been developed under the F4E contract on TBS safety analyses. F4E technical requirements, together with Amec Foster Wheeler and INL efforts, have resulted in a comprehensive methodology for fusion breeding blanket accident analysis that addresses the specificity of the breeding blanket designs, materials, and phenomena while remaining consistent with the approach already applied to ITER accident analyses. Furthermore, the methodology phases are illustrated in the paper by its application to the EU HCLL TBS using both MELCOR and RELAP5 codes.

  2. Iterative matrix algorithm for high precision temperature and force decoupling in multi-parameter FBG sensing.

    PubMed

    Hopf, Barbara; Dutz, Franz J; Bosselmann, Thomas; Willsch, Michael; Koch, Alexander W; Roths, Johannes

    2018-04-30

    A new iterative matrix algorithm has been applied to improve the precision of temperature and force decoupling in multi-parameter FBG sensing. For the first time, this evaluation technique allows the integration of nonlinearities in the sensor's temperature characteristic and the temperature dependence of the sensor's force sensitivity. Applied to a sensor cable consisting of two FBGs in fibers with 80 µm and 125 µm cladding diameter installed in a 7 m-long coiled PEEK capillary, this technique significantly reduced the uncertainties in friction-compensated temperature measurements. In the presence of high friction-induced forces of up to 1.6 N, the uncertainties in temperature evaluation were reduced from several degrees Celsius with a standard linear matrix approach to less than 0.5°C with the iterative matrix approach in an extended temperature range between -35°C and 125°C.
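
    Conceptually, the decoupling amounts to solving two wavelength-shift equations for temperature and force when the force sensitivity itself depends on temperature and the temperature response is nonlinear. The sketch below shows a simple fixed-point version of such an iterative matrix solution with entirely hypothetical calibration coefficients; the actual sensor model and algorithm of the paper may differ.

        import numpy as np

        # Hypothetical calibration: wavelength shift (pm) of each FBG as a function of
        # temperature T (°C) and force F (N). s_i(T) includes a quadratic term and the
        # force sensitivity k_i(T) drifts linearly with temperature.
        S1, S2 = 10.0, 9.5          # pm/°C     (linear temperature sensitivities)
        Q1, Q2 = 0.010, 0.012       # pm/°C^2   (quadratic temperature terms)
        K1, K2 = 1.20, 0.55         # pm/N      (force sensitivities at 0 °C)
        C1, C2 = 2e-3, 1e-3         # pm/(N·°C) (temperature drift of force sensitivity)

        def shifts(T, F):
            d1 = S1 * T + Q1 * T**2 + (K1 + C1 * T) * F
            d2 = S2 * T + Q2 * T**2 + (K2 + C2 * T) * F
            return d1, d2

        def decouple(d1, d2, n_iter=20):
            """Fixed-point iteration: linear 2x2 solve, then update the nonlinear terms."""
            T = F = 0.0
            for _ in range(n_iter):
                A = np.array([[S1, K1 + C1 * T],
                              [S2, K2 + C2 * T]])
                b = np.array([d1 - Q1 * T**2,
                              d2 - Q2 * T**2])
                T, F = np.linalg.solve(A, b)
            return T, F

        d1, d2 = shifts(T=85.0, F=1.3)       # synthetic measurement
        T_est, F_est = decouple(d1, d2)      # converges back to ~85 °C and ~1.3 N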

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parrish, Robert M.; Liu, Fang; Martínez, Todd J., E-mail: toddjmartinez@gmail.com

    We formulate self-consistent field (SCF) theory in terms of an interaction picture where the working variable is the difference density matrix between the true system and a corresponding superposition of atomic densities. As the difference density matrix directly represents the electronic deformations inherent in chemical bonding, this “difference self-consistent field (dSCF)” picture provides a number of significant conceptual and computational advantages. We show that this allows for a stable and efficient dSCF iterative procedure with wholly single-precision Coulomb and exchange matrix builds. We also show that the dSCF iterative procedure can be performed with aggressive screening of the pair space. These approximations are tested and found to be accurate for systems with up to 1860 atoms and >10 000 basis functions, providing for immediate overall speedups of up to 70% in the heavily optimized TERACHEM SCF implementation.

  4. I-V characterization of a quantum well infrared photodetector with stepped and graded barriers

    NASA Astrophysics Data System (ADS)

    Nutku, F.; Erol, A.; Gunes, M.; Buklu, L. B.; Ergun, Y.; Arikan, M. C.

    2012-09-01

    I-V characterization of an n-type quantum well infrared photodetector consisting of stepped and graded barriers has been performed in the dark at temperatures between 20 and 300 K. Different current transport mechanisms, and the transition between them, have been observed at a temperature of around 47 K. Activation energies of the electrons at various bias voltages have been obtained from the temperature-dependent I-V measurements. The activation energy at zero bias has been calculated by extrapolating the bias dependence of the activation energies. Ground-state energies and barrier heights of the four different quantum wells have been calculated using an iterative technique that depends on the experimentally obtained activation energy. Ground-state energies have also been calculated with the transfer matrix technique and compared with the iteration results. By incorporating the effect of the electron exchange interaction, induced by the high electron density, on the ground-state energies, results more consistent with the theoretical transfer matrix calculations have been obtained.
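
    As background, the activation energy in the thermally activated regime is commonly obtained from an Arrhenius fit of the dark current; the short sketch below illustrates that step on synthetic data (all numerical values are hypothetical). The abstract's further step, extrapolating the bias-dependent activation energies to zero bias, would be a second linear fit over bias.

        import numpy as np

        K_B = 8.617e-5                     # Boltzmann constant, eV/K

        # Synthetic dark-current data in the thermally activated regime (values hypothetical).
        T = np.linspace(50.0, 120.0, 15)   # K
        Ea_true = 0.12                     # eV
        I0 = 1e-3                          # A
        noise = 1 + 0.02 * np.random.default_rng(0).normal(size=T.size)
        I_dark = I0 * np.exp(-Ea_true / (K_B * T)) * noise

        # Arrhenius fit: ln(I) = ln(I0) - Ea / (kB * T), so the slope of ln(I)
        # versus 1/(kB*T) is -Ea.
        slope, intercept = np.polyfit(1.0 / (K_B * T), np.log(I_dark), 1)
        Ea_fit = -slope                    # activation energy at this bias, in eV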

  5. An adaptive moving finite volume scheme for modeling flood inundation over dry and complex topography

    NASA Astrophysics Data System (ADS)

    Zhou, Feng; Chen, Guoxian; Huang, Yuefei; Yang, Jerry Zhijian; Feng, Hui

    2013-04-01

    A new geometrical conservative interpolation on unstructured meshes is developed for preserving still water equilibrium and positivity of water depth at each iteration of mesh movement, leading to an adaptive moving finite volume (AMFV) scheme for modeling flood inundation over dry and complex topography. Unlike traditional schemes involving position-fixed meshes, the iteration process of the AMFV scheme adaptively moves a smaller number of meshes in response to flow variables calculated in prior solutions and then simulates their posterior values on the new meshes. At each time step of the simulation, the AMFV scheme consists of three parts: an adaptive mesh movement to shift the vertex positions, a geometrical conservative interpolation to remap the flow variables by summing the total mass over old meshes to avoid the generation of spurious waves, and a partial differential equations (PDEs) discretization to update the flow variables for a new time step. Five different test cases are presented to verify the computational advantages of the proposed scheme over nonadaptive methods. The results reveal three attractive features: (i) the AMFV scheme could preserve still water equilibrium and positivity of water depth within both mesh movement and PDE discretization steps; (ii) it improved the shock-capturing capability for handling topographic source terms and wet-dry interfaces by moving triangular meshes to approximate the spatial distribution of time-variant flood processes; (iii) it was able to solve the shallow water equations with relatively higher accuracy and spatial resolution at a lower computational cost.

  6. U.S. Seismic Design Maps Web Application

    NASA Astrophysics Data System (ADS)

    Martinez, E.; Fee, J.

    2015-12-01

    The application computes earthquake ground motion design parameters compatible with the International Building Code and other seismic design provisions. It is the primary method for design engineers to obtain ground motion parameters for multiple building codes across the country. When designing new buildings and other structures, engineers around the country use the application. Users specify the design code of interest, location, and other parameters to obtain necessary ground motion information consisting of a high-level executive summary as well as detailed information including maps, data, and graphs. Results are formatted such that they can be directly included in a final engineering report. In addition to single-site analysis, the application supports a batch mode for simultaneous consideration of multiple locations. Finally, an application programming interface (API) is available which allows other application developers to integrate this application's results into larger applications for additional processing. Development on the application has proceeded in an iterative manner working with engineers through email, meetings, and workshops. Each iteration provided new features, improved performance, and usability enhancements. This development approach positioned the application to be integral to the structural design process and is now used to produce over 1800 reports daily. Recent efforts have enhanced the application to be a data-driven, mobile-first, responsive web application. Development is ongoing, and source code has recently been published into the open-source community on GitHub. Open-sourcing the code facilitates improved incorporation of user feedback to add new features ensuring the application's continued success.

  7. Precise and fast spatial-frequency analysis using the iterative local Fourier transform.

    PubMed

    Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook

    2016-09-19

    The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2^10 times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
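
    The core idea, as described, is to re-evaluate a discrete Fourier sum on a frequency grid that is repeatedly narrowed around the current spectral peak. The sketch below is a simplified illustration of that zooming loop; the window handling, zoom factor and stopping rule are guesses, not the published algorithm.

        import numpy as np

        def local_dft(x, freqs, fs):
            """Direct DFT of x evaluated only at the requested frequencies (Hz)."""
            n = np.arange(x.size)
            return np.exp(-2j * np.pi * np.outer(freqs, n) / fs) @ x

        def ilft_peak(x, fs, n_iter=8, n_grid=64, zoom=8.0):
            """Iteratively zoom the frequency grid around the strongest spectral peak."""
            spec = np.fft.rfft(x)                    # coarse start from an ordinary FFT
            f = np.fft.rfftfreq(x.size, d=1.0 / fs)
            centre = f[np.argmax(np.abs(spec))]
            width = fs / x.size                      # one FFT bin
            for _ in range(n_iter):
                grid = np.linspace(centre - width, centre + width, n_grid)
                amp = np.abs(local_dft(x, grid, fs))
                centre = grid[np.argmax(amp)]
                width /= zoom                        # shrink the local frequency window
            return centre

        fs = 1000.0
        t = np.arange(4096) / fs
        x = np.sin(2 * np.pi * 123.4567 * t)
        print(ilft_peak(x, fs))                      # close to 123.4567 Hz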

  8. Development and verification of an agent-based model of opinion leadership.

    PubMed

    Anderson, Christine A; Titler, Marita G

    2014-09-27

    The use of opinion leaders is a strategy used to speed the process of translating research into practice. Much is still unknown about opinion leader attributes and activities and the context in which they are most effective. Agent-based modeling is a methodological tool that enables demonstration of the interactive and dynamic effects of individuals and their behaviors on other individuals in the environment. The purpose of this study was to develop and test an agent-based model of opinion leadership. The details of the design and verification of the model are presented. The agent-based model was developed by using a software development platform to translate an underlying conceptual model of opinion leadership into a computer model. Individual agent attributes (for example, motives and credibility) and behaviors (seeking or providing an opinion) were specified as variables in the model in the context of a fictitious patient care unit. The verification process was designed to test whether or not the agent-based model was capable of reproducing the conditions of the preliminary conceptual model. The verification methods included iterative programmatic testing ('debugging') and exploratory analysis of simulated data obtained from execution of the model. The simulation tests included a parameter sweep, in which the model input variables were adjusted systematically, followed by an individual time series experiment. Statistical analysis of model output for the 288 possible simulation scenarios in the parameter sweep revealed that the agent-based model was performing consistently with the posited relationships in the underlying model. Nurse opinion leaders act on the strength of their beliefs and, as a result, become an opinion resource for their uncertain colleagues, depending on their perceived credibility. Over time, some nurses consistently act as this type of resource and have the potential to emerge as opinion leaders in a context where uncertainty exists. The development and testing of agent-based models is an iterative process. The opinion leader model presented here provides a basic structure for continued model development, ongoing verification, and the establishment of validation procedures, including empirical data collection.

  9. A Mixed Methods Bounded Case Study: Data-Driven Decision Making within Professional Learning Communities for Response to Intervention

    ERIC Educational Resources Information Center

    Rodriguez, Gabriel R.

    2017-01-01

    A growing number of schools are implementing PLCs to address school improvement; staff engage with data to identify student needs and determine instructional interventions. This is a starting point for engaging in the iterative process of learning for the teacher in order to increase student learning (Hord & Sommers, 2008). The iterative process…

  10. Evaluating the iterative development of VR/AR human factors tools for manual work.

    PubMed

    Liston, Paul M; Kay, Alison; Cromie, Sam; Leva, Chiara; D'Cruz, Mirabelle; Patel, Harshada; Langley, Alyson; Sharples, Sarah; Aromaa, Susanna

    2012-01-01

    This paper outlines the approach taken to iteratively evaluate a set of VR/AR (virtual reality / augmented reality) applications for five different manual-work applications - terrestrial spacecraft assembly, assembly-line design, remote maintenance of trains, maintenance of nuclear reactors, and large-machine assembly process design - and examines the evaluation data for evidence of the effectiveness of the evaluation framework as well as the benefits to the development process of feedback from iterative evaluation. ManuVAR is an EU-funded research project that is working to develop an innovative technology platform and a framework to support high-value, high-knowledge manual work throughout the product lifecycle. The results of this study demonstrate the iterative improvements reached throughout the design cycles, observable through the trending of the quantitative results from three successive trials of the applications and the investigation of the qualitative interview findings. The paper discusses the limitations of evaluation in complex, multi-disciplinary development projects and finds evidence of the effectiveness of the use of the particular set of complementary evaluation methods incorporating a common inquiry structure used for the evaluation - particularly in facilitating triangulation of the data.

  11. An adaptive Gaussian process-based iterative ensemble smoother for data assimilation

    NASA Astrophysics Data System (ADS)

    Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao

    2018-05-01

    Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated by the saturated and unsaturated flow problems, respectively. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
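
    As a toy illustration of the general idea (a GP surrogate trained on a few "base points" supplies the large pseudo-ensemble needed for the Kalman-type smoother update, and is refined with a few new base points each iteration), the sketch below uses scikit-learn on a trivial forward model. The inflation, localization and refinement rules of the actual GPIES method are not reproduced; every model and number here is a stand-in.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(0)

        def forward(m):
            """'Expensive' forward model (toy stand-in for a subsurface flow simulator)."""
            x = np.linspace(0, 1, 5)
            return m[0] * np.sin(3 * x) + m[1] * x**2 + m[2]

        m_true = np.array([1.5, -0.7, 0.3])
        sigma = 0.01
        d_obs = forward(m_true) + sigma * rng.normal(size=5)

        n_ens, n_base = 2000, 10
        ens = rng.normal(0.0, 1.0, size=(n_ens, 3))              # prior parameter ensemble
        base = ens[rng.choice(n_ens, n_base, replace=False)]      # points where the true model is run
        base_out = np.array([forward(m) for m in base])

        for it in range(4):
            # Surrogate: one GP per observation component, trained on the base points only.
            gps = [GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-6).fit(base, base_out[:, j])
                   for j in range(5)]
            sim = np.column_stack([gp.predict(ens) for gp in gps])  # cheap pseudo-ensemble
            # Standard ensemble-smoother update using surrogate statistics.
            dm = ens - ens.mean(axis=0)
            dd = sim - sim.mean(axis=0)
            C_md = dm.T @ dd / (n_ens - 1)
            C_dd = dd.T @ dd / (n_ens - 1) + sigma**2 * np.eye(5)
            K = C_md @ np.linalg.inv(C_dd)
            perturbed = d_obs + sigma * rng.normal(size=(n_ens, 5))
            ens = ens + (perturbed - sim) @ K.T
            # Adaptive refinement: add a few updated realizations as new base points.
            new = ens[rng.choice(n_ens, 3, replace=False)]
            base = np.vstack([base, new])
            base_out = np.vstack([base_out, [forward(m) for m in new]])

        print(ens.mean(axis=0))   # posterior mean should move toward m_true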

  12. Iteration of ultrasound aberration correction methods

    NASA Astrophysics Data System (ADS)

    Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond

    2004-05-01

    Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimation of the TDA filter, and performing correction on transmit and receive, has proven difficult. It has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other method uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterating aberration correction with a TDA filter have been investigated to study its convergence properties. Aberration was generated with both a weak and a strong body-wall model, each emulating the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.
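
    The first estimation method mentioned, correlating each element signal with a reference, can be illustrated with a short sketch: the beam-summed signal serves as the reference and the per-element delay is taken from the cross-correlation peak, with parabolic interpolation for sub-sample precision. The amplitude estimation, the eigen-decomposition method and the transmit/receive iteration loop are not shown; the synthetic pulse and delays are arbitrary.

        import numpy as np

        def estimate_delays(element_signals, fs):
            """Estimate per-element arrival-time errors by correlating with the beamsum reference."""
            n_el, n_t = element_signals.shape
            reference = element_signals.mean(axis=0)          # beam-summed reference signal
            delays = np.zeros(n_el)
            lags = np.arange(-n_t + 1, n_t)
            for i in range(n_el):
                corr = np.correlate(element_signals[i], reference, mode="full")
                k = np.argmax(corr)
                # Parabolic interpolation around the correlation peak for sub-sample precision.
                if 0 < k < corr.size - 1:
                    denom = corr[k - 1] - 2 * corr[k] + corr[k + 1]
                    frac = 0.5 * (corr[k - 1] - corr[k + 1]) / denom if denom != 0 else 0.0
                else:
                    frac = 0.0
                delays[i] = (lags[k] + frac) / fs
            return delays

        # Synthetic test: a short pulse received with known per-element delays.
        fs = 50e6
        t = np.arange(512) / fs
        pulse = np.sin(2 * np.pi * 3e6 * t) * np.exp(-((t - 2e-6) / 0.4e-6) ** 2)
        true_delays = np.array([0.0, 40e-9, -25e-9, 60e-9])
        signals = np.array([np.interp(t - d, t, pulse) for d in true_delays])
        print(estimate_delays(signals, fs))   # approximately the true delays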

  13. Iterative load-balancing method with multigrid level relaxation for particle simulation with short-range interactions

    NASA Astrophysics Data System (ADS)

    Furuichi, Mikito; Nishiura, Daisuke

    2017-10-01

    We developed dynamic load-balancing algorithms for Particle Simulation Methods (PSM) involving short-range interactions, such as Smoothed Particle Hydrodynamics (SPH), Moving Particle Semi-implicit method (MPS), and Discrete Element method (DEM). These are needed to handle billions of particles modeled in large distributed-memory computer systems. Our method utilizes flexible orthogonal domain decomposition, allowing the sub-domain boundaries in the column to be different for each row. The imbalances in the execution time between parallel logical processes are treated as a nonlinear residual. Load-balancing is achieved by minimizing the residual within the framework of an iterative nonlinear solver, combined with a multigrid technique in the local smoother. Our iterative method is suitable for adjusting the sub-domain frequently by monitoring the performance of each computational process because it is computationally cheaper in terms of communication and memory costs than non-iterative methods. Numerical tests demonstrated the ability of our approach to handle workload imbalances arising from a non-uniform particle distribution, differences in particle types, or heterogeneous computer architecture which was difficult with previously proposed methods. We analyzed the parallel efficiency and scalability of our method using Earth simulator and K-computer supercomputer systems.
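
    A much simplified, one-dimensional illustration of the iterative idea, treating the work surplus accumulated at each sub-domain boundary as a residual and relaxing the boundary against it, is sketched below. The orthogonal row/column decomposition, the multigrid smoother and the use of measured execution times (rather than particle counts) in the actual method are not reproduced.

        import numpy as np

        def rebalance(boundaries, particle_x, n_iter=30, relax=0.5):
            """Iteratively move 1-D sub-domain boundaries so each domain holds equal work."""
            for _ in range(n_iter):
                counts = np.histogram(particle_x, bins=boundaries)[0].astype(float)
                target = counts.mean()                     # ideal work per process
                # Residual = cumulative work surplus at each interior boundary; shift the
                # boundary against the surplus, scaled by the local particle density.
                surplus = np.cumsum(counts)[:-1] - target * np.arange(1, counts.size)
                widths = np.diff(boundaries)
                density = counts[:-1] / np.maximum(widths[:-1], 1e-12)
                boundaries[1:-1] -= relax * surplus / np.maximum(density, 1e-12)
                boundaries[1:-1] = np.clip(boundaries[1:-1], boundaries[0], boundaries[-1])
                boundaries.sort()                          # keep boundaries ordered
            return boundaries

        rng = np.random.default_rng(0)
        x = rng.beta(2, 5, size=200_000)                   # non-uniform particle distribution
        b = np.linspace(0, 1, 9)                           # 8 processes, initially equal widths
        b = rebalance(b, x)
        print(np.histogram(x, bins=b)[0])                  # roughly equal counts per domain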

  14. A Novel Automatic Detection System for ECG Arrhythmias Using Maximum Margin Clustering with Immune Evolutionary Algorithm

    PubMed Central

    Zhu, Bohui; Ding, Yongsheng; Hao, Kuangrong

    2013-01-01

    This paper presents a novel maximum margin clustering method with immune evolution (IEMMC) for automatic diagnosis of electrocardiogram (ECG) arrhythmias. This diagnostic system consists of signal processing, feature extraction, and the IEMMC algorithm for clustering of ECG arrhythmias. First, the raw ECG signal is processed by an adaptive ECG filter based on wavelet transforms, and the waveform of the ECG signal is detected; then, features are extracted from the ECG signal to cluster different types of arrhythmias by the IEMMC algorithm. Three types of performance evaluation indicators are used to assess the effect of the IEMMC method for ECG arrhythmias, namely sensitivity, specificity, and accuracy. Compared with the K-means and iterSVR algorithms, the IEMMC algorithm shows better performance not only in clustering results but also in global search ability and convergence ability, which proves its effectiveness for the detection of ECG arrhythmias. PMID:23690875

  15. Towards the Experimental Assessment of the DQE in SPECT Scanners

    NASA Astrophysics Data System (ADS)

    Fountos, G. P.; Michail, C. M.

    2017-11-01

    The purpose of this work was to introduce the Detective Quantum Efficiency (DQE) in single photon emission computed tomography (SPECT) systems using a flood source. A Tc-99m-based flood source (Eγ = 140 keV), consisting of a radiopharmaceutical solution of dithiothreitol (DTT, 10⁻³ M)/Tc-99m(III)-DMSA, 40 mCi/40 ml, bound to the grains of an Agfa MammoRay HDR Medical X-ray film, was prepared in the laboratory. The source was placed between two PMMA blocks and images were obtained by using the brain tomographic acquisition protocol (DatScan-brain). The Modulation Transfer Function (MTF) was evaluated using the Iterative 2D algorithm. All imaging experiments were performed on a Siemens e-Cam gamma camera. The Normalized Noise Power Spectra (NNPS) were obtained from the sagittal views of the source. The highest MTF values were obtained for the Flash Iterative 2D algorithm with 24 iterations and 20 subsets. The noise levels of the SPECT reconstructed images, in terms of the NNPS, were found to increase as the number of iterations increases. The behavior of the DQE was influenced by both the MTF and the NNPS. As the number of iterations was increased, higher MTF values were obtained, however with a parallel increase in image noise, as depicted in the NNPS results. DQE values, which were influenced by both MTF and NNPS, were found to be higher when the number of iterations leads to resolution saturation. The method presented here is novel and easy to implement, requiring materials commonly found in clinical practice, and can be useful in the quality control of SPECT scanners.
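
    The abstract does not quote the formula, but under the usual linear-systems assumptions the frequency-dependent DQE that ties the two measured quantities together is commonly written as

        \mathrm{DQE}(f) \;=\; \frac{\mathrm{MTF}^{2}(f)}{\bar{q}\,\mathrm{NNPS}(f)},

    where \bar{q} is the photon fluence incident on the detector. Written this way, the reported trade-off becomes explicit: increasing the number of iterations raises the MTF but also the NNPS, and the DQE is governed by their ratio.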

  16. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (which require knowledge of the noise variance σ²), and GCV (which does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
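
    For reference, the generic (unweighted) forms of the measures named above, written for a reconstruction h_\lambda(y) of n data samples with noise variance \sigma^2, are

        \mathrm{SURE}(\lambda) \;=\; \tfrac{1}{n}\lVert y - h_\lambda(y)\rVert^{2} \;-\; \sigma^{2} \;+\; \tfrac{2\sigma^{2}}{n}\,\operatorname{tr}\!\Big(\tfrac{\partial h_\lambda(y)}{\partial y}\Big),

        \mathrm{GCV}(\lambda) \;=\; \frac{\tfrac{1}{n}\lVert y - h_\lambda(y)\rVert^{2}}{\Big(1 \;-\; \tfrac{1}{n}\operatorname{tr}\big(\tfrac{\partial h_\lambda(y)}{\partial y}\big)\Big)^{2}}.

    The Predicted- and Projected-SURE measures of the paper add data-dependent weightings and the MRI forward operator, which are not shown here; the trace term is exactly where the derived Jacobian matrix enters.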

  17. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    PubMed

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.

  18. The child's perspective as a guiding principle: Young children as co-designers in the design of an interactive application meant to facilitate participation in healthcare situations.

    PubMed

    Stålberg, Anna; Sandberg, Anette; Söderbäck, Maja; Larsson, Thomas

    2016-06-01

    During the last decade, interactive technology has entered mainstream society. Its many users also include children, even the youngest ones, who use the technology in different situations for both fun and learning. When designing technology for children, it is crucial to involve children in the process in order to arrive at an age-appropriate end product. In this study we describe the specific iterative process by which an interactive application was developed. This application is intended to facilitate the participation of young children, three to five years old, in healthcare situations. We also describe the specific contributions of the children, who tested the prototypes in a preschool, a primary health care clinic and an outpatient unit at a hospital, during the development process. The iterative phases enabled the children to be involved at different stages of the process and to evaluate modifications and improvements made after each prior iteration. The children contributed their own perspectives (the child's perspective) on the usability, content and graphic design of the application, substantially improving the software and resulting in an age-appropriate product. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Iterative dip-steering median filter

    NASA Astrophysics Data System (ADS)

    Huo, Shoudong; Zhu, Weihong; Shi, Taikun

    2017-09-01

    Seismic data are always contaminated with high noise components, which present processing challenges especially for signal preservation and its true amplitude response. This paper deals with an extension of the conventional median filter, which is widely used in random noise attenuation. It is known that the standard median filter works well with laterally aligned coherent events but cannot handle steep events, especially events with conflicting dips. In this paper, an iterative dip-steering median filter is proposed for the attenuation of random noise in the presence of multiple dips. The filter first identifies the dominant dips inside an optimized processing window by a Fourier-radial transform in the frequency-wavenumber domain. The optimum size of the processing window depends on the intensity of random noise that needs to be attenuated and the amount of signal to be preserved. It then applies a median filter along the dominant dip and retains the signals. Iterations are adopted to process the residual signals along the remaining dominant dips in a descending sequence, until all signals have been retained. The method is tested on both synthetic and field data gathers and also compared with the commonly used f-k least squares de-noising and f-x deconvolution.
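
    A compact, simplified sketch of the iterative idea (not the published implementation) is given below: the dominant dip is picked by scanning a set of candidate dips for maximum stack power, a crude stand-in for the Fourier-radial scan in the f-k domain; the data are aligned along that dip, median-filtered across traces, un-aligned, and the retained signal is subtracted before the next dip is processed. Windowing and the dip-selection details are assumptions.

        import numpy as np

        def shift_trace(tr, s):
            """Shift a trace by s samples (linear interpolation, zero fill at the ends)."""
            n = np.arange(tr.size)
            return np.interp(n - s, n, tr, left=0.0, right=0.0)

        def dominant_dip(d, dips):
            """Pick the dip (samples/trace) with the largest stack power after dip alignment."""
            power = [np.sum(np.sum([shift_trace(d[:, k], -p * k) for k in range(d.shape[1])],
                                   axis=0) ** 2) for p in dips]
            return dips[int(np.argmax(power))]

        def dip_median(d, p, half=5):
            """Median filter across traces after aligning along dip p, then un-align."""
            nt, nx = d.shape
            aligned = np.column_stack([shift_trace(d[:, k], -p * k) for k in range(nx)])
            filt = np.empty_like(aligned)
            for k in range(nx):
                lo, hi = max(0, k - half), min(nx, k + half + 1)
                filt[:, k] = np.median(aligned[:, lo:hi], axis=1)
            return np.column_stack([shift_trace(filt[:, k], p * k) for k in range(nx)])

        def iterative_dip_median(d, dips, n_dips=2):
            """Retain signal along the strongest dips one by one; the residual is noise."""
            residual, signal = d.copy(), np.zeros_like(d)
            for _ in range(n_dips):
                p = dominant_dip(residual, dips)
                coherent = dip_median(residual, p)
                signal += coherent
                residual -= coherent
            return signal, residual

        # Example usage: d is a (n_time, n_trace) gather, dips in samples per trace.
        # sig, noise = iterative_dip_median(d, dips=np.arange(-5, 6), n_dips=2)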

  20. Biocatalytic Conversion of Avermectin to 4″-Oxo-Avermectin: Improvement of Cytochrome P450 Monooxygenase Specificity by Directed Evolution

    PubMed Central

    Trefzer, Axel; Jungmann, Volker; Molnár, István; Botejue, Ajit; Buckel, Dagmar; Frey, Gerhard; Hill, D. Steven; Jörg, Mario; Ligon, James M.; Mason, Dylan; Moore, David; Pachlatko, J. Paul; Richardson, Toby H.; Spangenberg, Petra; Wall, Mark A.; Zirkle, Ross; Stege, Justin T.

    2007-01-01

    Discovery of the CYP107Z subfamily of cytochrome P450 oxidases (CYPs) led to an alternative biocatalytic synthesis of 4″-oxo-avermectin, a key intermediate for the commercial production of the semisynthetic insecticide emamectin. However, under industrial process conditions, these wild-type CYPs showed lower yields due to side product formation. Molecular evolution employing GeneReassembly was used to improve the regiospecificity of these enzymes by a combination of random mutagenesis, protein structure-guided site-directed mutagenesis, and recombination of multiple natural and synthetic CYP107Z gene fragments. To assess the specificity of CYP mutants, a miniaturized, whole-cell biocatalytic reaction system that allowed high-throughput screening of large numbers of variants was developed. In an iterative process consisting of four successive rounds of GeneReassembly evolution, enzyme variants with significantly improved specificity for the production of 4″-oxo-avermectin were identified; these variants could be employed for a more economical industrial biocatalytic process to manufacture emamectin. PMID:17483257

  1. How does culture affect experiential training feedback in exported Canadian health professional curricula?

    PubMed Central

    Mousa Bacha, Rasha; Abdelaziz, Somaia

    2017-01-01

    Objectives: To explore feedback processes of Western-based health professional student training curricula conducted in an Arab clinical teaching setting. Methods: This qualitative study employed document analysis of in-training evaluation reports (ITERs) used by Canadian nursing, pharmacy, respiratory therapy, paramedic, dental hygiene, and pharmacy technician programs established in Qatar. Six experiential training program coordinators were interviewed between February and May 2016 to explore how national cultural differences are perceived to affect feedback processes between students and clinical supervisors. Interviews were recorded, transcribed, and coded according to a priori cultural themes. Results: Document analysis found all programs’ ITERs outlined competency items for students to achieve. Clinical supervisors choose a response option corresponding to their judgment of student performance and may provide additional written feedback in spaces provided. Only one program required formal face-to-face feedback exchange between students and clinical supervisors. Experiential training program coordinators identified that no ITER was expressly culturally adapted, although in some instances, modifications were made for differences in scopes of practice between Canada and Qatar. Power distance was recognized by all coordinators who also identified both student and supervisor reluctance to document potentially negative feedback in ITERs. Instances of collectivism were described as more lenient student assessment by clinical supervisors of the same cultural background. Uncertainty avoidance did not appear to impact feedback processes. Conclusions: Our findings suggest that differences in specific cultural dimensions between Qatar and Canada have implications on the feedback process in experiential training which may be addressed through simple measures to accommodate communication preferences. PMID:28315858

  2. How does culture affect experiential training feedback in exported Canadian health professional curricula?

    PubMed

    Wilbur, Kerry; Mousa Bacha, Rasha; Abdelaziz, Somaia

    2017-03-17

    To explore feedback processes of Western-based health professional student training curricula conducted in an Arab clinical teaching setting. This qualitative study employed document analysis of in-training evaluation reports (ITERs) used by Canadian nursing, pharmacy, respiratory therapy, paramedic, dental hygiene, and pharmacy technician programs established in Qatar. Six experiential training program coordinators were interviewed between February and May 2016 to explore how national cultural differences are perceived to affect feedback processes between students and clinical supervisors. Interviews were recorded, transcribed, and coded according to a priori cultural themes. Document analysis found all programs' ITERs outlined competency items for students to achieve. Clinical supervisors choose a response option corresponding to their judgment of student performance and may provide additional written feedback in spaces provided. Only one program required formal face-to-face feedback exchange between students and clinical supervisors. Experiential training program coordinators identified that no ITER was expressly culturally adapted, although in some instances, modifications were made for differences in scopes of practice between Canada and Qatar.  Power distance was recognized by all coordinators who also identified both student and supervisor reluctance to document potentially negative feedback in ITERs. Instances of collectivism were described as more lenient student assessment by clinical supervisors of the same cultural background. Uncertainty avoidance did not appear to impact feedback processes. Our findings suggest that differences in specific cultural dimensions between Qatar and Canada have implications on the feedback process in experiential training which may be addressed through simple measures to accommodate communication preferences.

  3. Simulation of High Power Lasers (Preprint)

    DTIC Science & Technology

    2010-06-01

    integration, which requires communication of zonal boundary information after each inner-iteration of the Gauss-Seidel or Jacobi matrix solver. Each...experiment consisting of a supersonic (M~2.2) converging-diverging nozzle section with secondary mass injection in the nozzle expansion downstream of...consists of a section of a supersonic (M~2.2) converging-diverging slit nozzle with one large and two small orifices that inject reactants into the

  4. ENVIRONMENTAL QUALITY INFORMATION SYSTEM - EQULS® - ITER

    EPA Science Inventory

    This project consisted of an evaluation of the Environmental Quality Information System (EQuIS) software designed by Earthsoft, Inc. as an environmental data management and analysis platform for monitoring and remediation projects. In consultation with the EQuIS vendor, six pri...

  5. Iterative algorithms for tridiagonal matrices on a WSI-multiprocessor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gajski, D.D.; Sameh, A.H.; Wisniewski, J.A.

    1982-01-01

    With the rapid advances in semiconductor technology, the construction of Wafer Scale Integration (WSI)-multiprocessors consisting of a large number of processors is now feasible. We illustrate the implementation of some basic linear algebra algorithms on such multiprocessors.

  6. Progress in preparing scenarios for operation of the International Thermonuclear Experimental Reactor

    NASA Astrophysics Data System (ADS)

    Sips, A. C. C.; Giruzzi, G.; Ide, S.; Kessel, C.; Luce, T. C.; Snipes, J. A.; Stober, J. K.

    2015-02-01

    The development of operating scenarios is one of the key issues in the research for ITER which aims to achieve a fusion gain (Q) of ˜10, while producing 500 MW of fusion power for ≥300 s. The ITER Research plan proposes a success oriented schedule starting in hydrogen and helium, to be followed by a nuclear operation phase with a rapid development towards Q ˜ 10 in deuterium/tritium. The Integrated Operation Scenarios Topical Group of the International Tokamak Physics Activity initiates joint activities among worldwide institutions and experiments to prepare ITER operation. Plasma formation studies report robust plasma breakdown in devices with metal walls over a wide range of conditions, while other experiments use an inclined EC launch angle at plasma formation to mimic the conditions in ITER. Simulations of the plasma burn-through predict that at least 4 MW of Electron Cyclotron heating (EC) assist would be required in ITER. For H-modes at q95 ˜ 3, many experiments have demonstrated operation with scaled parameters for the ITER baseline scenario at ne/nGW ˜ 0.85. Most experiments, however, obtain stable discharges at H98(y,2) ˜ 1.0 only for βN = 2.0-2.2. For the rampup in ITER, early X-point formation is recommended, allowing auxiliary heating to reduce the flux consumption. A range of plasma inductance (li(3)) can be obtained from 0.65 to 1.0, with the lowest values obtained in H-mode operation. For the rampdown, the plasma should stay diverted maintaining H-mode together with a reduction of the elongation from 1.85 to 1.4. Simulations show that the proposed rampup and rampdown schemes developed since 2007 are compatible with the present ITER design for the poloidal field coils. At 13-15 MA and densities down to ne/nGW ˜ 0.5, long pulse operation (>1000 s) in ITER is possible at Q ˜ 5, useful to provide neutron fluence for Test Blanket Module assessments. ITER scenario preparation in hydrogen and helium requires high input power (>50 MW). H-mode operation in helium may be possible at input powers above 35 MW at a toroidal field of 2.65 T, for studying H-modes and ELM mitigation. In hydrogen, H-mode operation is expected to be marginal, even at 2.65 T with 60 MW of input power. Simulation code benchmark studies using hybrid and steady state scenario parameters have proved to be a very challenging and lengthy task of testing suites of codes, consisting of tens of sophisticated modules. Nevertheless, the general basis of the modelling appears sound, with substantial consistency among codes developed by different groups. For a hybrid scenario at 12 MA, the code simulations give a range for Q = 6.5-8.3, using 30 MW neutral beam injection and 20 MW ICRH. For non-inductive operation at 7-9 MA, the simulation results show more variation. At high edge pedestal pressure (Tped ˜ 7 keV), the codes predict Q = 3.3-3.8 using 33 MW NB, 20 MW EC, and 20 MW ion cyclotron to demonstrate the feasibility of steady-state operation with the day-1 heating systems in ITER. Simulations using a lower edge pedestal temperature (˜3 keV) but improved core confinement obtain Q = 5-6.5, when ECCD is concentrated at mid-radius and ˜20 MW off-axis current drive (ECCD or LHCD) is added. Several issues remain to be studied, including plasmas with dominant electron heating, mitigation of transient heat loads integrated in scenario demonstrations and (burn) control simulations in ITER scenarios.

  7. Varying face occlusion detection and iterative recovery for face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei

    2017-05-01

    In most sparse representation methods for face recognition (FR), occlusion problems are usually solved by removing the occluded part of both query samples and training samples to perform the recognition process. This practice ignores the global features of the facial image and may lead to unsatisfactory results due to the limitation of local features. Considering the aforementioned drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area of facial images, an image processing and intersection-based clustering combination method is used for occlusion FR; (2) according to an accurate occlusion map, the new integrated facial images are recovered iteratively and put into a recognition process; and (3) the effectiveness of our method in terms of recognition accuracy is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has a highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods against partial contiguous occlusion.

  8. Iterative near-term ecological forecasting: Needs, opportunities, and challenges

    USGS Publications Warehouse

    Dietze, Michael C.; Fox, Andrew; Beck-Johnson, Lindsay; Betancourt, Julio L.; Hooten, Mevin B.; Jarnevich, Catherine S.; Keitt, Timothy H.; Kenney, Melissa A.; Laney, Christine M.; Larsen, Laurel G.; Loescher, Henry W.; Lunch, Claire K.; Pijanowski, Bryan; Randerson, James T.; Read, Emily; Tredennick, Andrew T.; Vargas, Rodrigo; Weathers, Kathleen C.; White, Ethan P.

    2018-01-01

    Two foundational questions about sustainability are “How are ecosystems and the services they provide going to change in the future?” and “How do human decisions affect these trajectories?” Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  9. Iterative near-term ecological forecasting: Needs, opportunities, and challenges.

    PubMed

    Dietze, Michael C; Fox, Andrew; Beck-Johnson, Lindsay M; Betancourt, Julio L; Hooten, Mevin B; Jarnevich, Catherine S; Keitt, Timothy H; Kenney, Melissa A; Laney, Christine M; Larsen, Laurel G; Loescher, Henry W; Lunch, Claire K; Pijanowski, Bryan C; Randerson, James T; Read, Emily K; Tredennick, Andrew T; Vargas, Rodrigo; Weathers, Kathleen C; White, Ethan P

    2018-02-13

    Two foundational questions about sustainability are "How are ecosystems and the services they provide going to change in the future?" and "How do human decisions affect these trajectories?" Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  10. A novel decoding algorithm based on the hierarchical reliable strategy for SCG-LDPC codes in optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong

    2013-11-01

    An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with a reliability judgment, and can greatly reduce the number of variable nodes involved in the subsequent iteration process and accelerate the convergence rate. Simulation results for the SCG-LDPC(3969,3720) code show that the novel HRBP decoding algorithm can greatly reduce the computational load while maintaining performance compared with the traditional belief propagation (BP) algorithm. The bit error rate (BER) of the HRBP algorithm is comparable at the threshold value of 15, while in the subsequent iteration process the number of variable nodes for the HRBP algorithm can be reduced by about 70% at high signal-to-noise ratio (SNR) compared with the BP algorithm. When the threshold value is further increased, the HRBP algorithm gradually degenerates into the layered-BP algorithm, but at a BER of 10⁻⁷ and a maximal iteration number of 30, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB more than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. Therefore, the novel HRBP decoding algorithm is more suitable for optical communication systems.

  11. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, produce modified schedules quickly, and exhibit 'anytime' behavior. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. We also show the anytime characteristics of the system. These experiments were performed within the domain of Space Shuttle ground processing.
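
    This is not the GERRY system itself, but the flavour of constraint-based iterative repair can be conveyed with a small min-conflicts-style sketch: start from a flawed schedule, repeatedly pick a task involved in a violated constraint, and move it to the start time that minimizes remaining conflicts plus a (hypothetical) perturbation penalty. The resource model, constraints and weights below are toy assumptions.

        import random

        def conflicts(schedule, durations):
            """List the pairs of tasks that overlap on the single shared resource."""
            bad = []
            tasks = list(schedule)
            for i, a in enumerate(tasks):
                for b in tasks[i + 1:]:
                    if schedule[a] < schedule[b] + durations[b] and schedule[b] < schedule[a] + durations[a]:
                        bad.append((a, b))
            return bad

        def iterative_repair(initial, durations, horizon, w_perturb=0.1, max_steps=5000, seed=0):
            """Repair constraint violations while penalizing deviation from the initial schedule."""
            rng = random.Random(seed)
            sched = dict(initial)
            for _ in range(max_steps):
                bad = conflicts(sched, durations)
                if not bad:
                    return sched                       # conflict-free; 'anytime': sched is always usable
                task = rng.choice(rng.choice(bad))     # pick one task from a violated constraint
                def cost(start):
                    trial = dict(sched, **{task: start})
                    return len(conflicts(trial, durations)) + w_perturb * abs(start - initial[task])
                # Move the task to its min-conflict (and low-perturbation) start time.
                sched[task] = min(range(horizon - durations[task] + 1), key=cost)
            return sched

        durations = {"A": 3, "B": 2, "C": 4, "D": 2}
        initial = {"A": 0, "B": 1, "C": 2, "D": 3}     # flawed: everything overlaps
        print(iterative_repair(initial, durations, horizon=12))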

  12. Scheduling and rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper describes the GERRY scheduling and rescheduling system being applied to coordinate Space Shuttle Ground Processing. The system uses constraint-based iterative repair, a technique that starts with a complete but possibly flawed schedule and iteratively improves it by using constraint knowledge within repair heuristics. In this paper we explore the tradeoff between the informedness and the computational cost of several repair heuristics. We show empirically that some knowledge can greatly improve the convergence speed of a repair-based system, but that too much knowledge, such as the knowledge embodied within the MIN-CONFLICTS lookahead heuristic, can overwhelm a system and result in degraded performance.

  13. Using Negotiated Joining to Construct and Fill Open-ended Roles in Elite Culinary Groups

    PubMed Central

    Tan, Vaughn

    2015-01-01

    This qualitative study examines membership processes in groups operating in an uncertain environment that prevents them from fully predefining new members’ roles. I describe how nine elite high-end, cutting-edge culinary groups in the U.S. and Europe, ranging from innovative restaurants to culinary R&D groups, use negotiated joining—a previously undocumented process—to systematically construct and fill these emergent, open-ended roles. I show that negotiated joining is a consistently patterned, iterative process that begins with a role that both aspirant and target group explicitly understand to be provisional. This provisional role is then jointly modified and constructed by the aspirant and target group through repeated iterations of proposition, validation through trial and evaluation, and selective integration of validated role components. The initially provisional role stabilizes and the aspirant achieves membership if enough role components are validated; otherwise the negotiated joining process is abandoned. Negotiated joining allows the aspirant and target group to learn if a mutually desirable role is likely and, if so, to construct such a role. In addition, the provisional roles in negotiated joining can support absorptive capacity by allowing novel role components to enter target groups through aspirants’ efforts to construct stable roles for themselves, while the internal adjustment involved in integrating newly validated role components can have the unintended side effect of supporting adaptation by providing opportunities for the groups to use these novel role components to modify their role structure and goals to suit a changing and uncertain environment. Negotiated joining thus reveals role ambiguity’s hitherto unexamined beneficial consequences and provides a foundation for a contingency theory of new-member acquisition. PMID:26273105

  14. Appendices to the user's manual for a computer program for the emulation/simulation of a space station environmental control and life support system

    NASA Technical Reports Server (NTRS)

    Yanosy, James L.

    1988-01-01

    A user's manual for the Emulation Simulation Computer Model was published previously. The model consisted of a detailed model (emulation) of a SAWD CO2 removal subsystem which operated with much less detailed (simulation) models of a cabin, crew, and condensing and sensible heat exchangers. The purpose was to explore the utility of such an emulation/simulation combination in the design, development, and test of a piece of ARS hardware - SAWD. Extensions to this original effort are presented. The first extension is an update of the model to reflect changes in the SAWD control logic which resulted from testing; in addition, slight changes were made to the SAWD model to permit restarting and to improve the iteration technique. The second extension is the development of simulation models for additional pieces of air and water processing equipment; models are presented for the EDC, Molecular Sieve, Bosch, Sabatier, a new condensing heat exchanger, SPE, SFWES, Catalytic Oxidizer, and multifiltration. The third extension is the creation of two system simulations using these models: the first consists of one air and one water processing system, the second is a potential Space Station air revitalization system.

  15. A multiplicative regularization for force reconstruction

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2017-02-01

    Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, knowledge of a regularization parameter, which can be computed numerically using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this reason, it is of primary interest to propose a method that can proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach to provide consistent reconstructions.
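
    A minimal sketch of the idea that the amount of regularization adjusts itself during the iterations: at each step the effective penalty weight is tied to the current data residual, so no parameter has to be chosen beforehand. The Tikhonov-like penalty and the specific residual-driven update rule below are illustrative assumptions, not the authors' exact multiplicative functional.

```python
import numpy as np

def multiplicative_regularization(A, b, n_iter=50, tol=1e-8):
    """Iterative force reconstruction with a residual-driven penalty (sketch).

    Informally minimizes ||A x - b||^2 * R(x); linearizing at each step gives a
    Tikhonov-like subproblem whose weight is set by the current residual.
    """
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_iter):
        residual = A @ x - b
        lam = residual @ residual          # effective weight, updated every step
        x_new = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
        if np.linalg.norm(x_new - x) <= tol * (np.linalg.norm(x) + 1.0):
            return x_new
        x = x_new
    return x
```

    Early iterations are heavily regularized (large residual), and the penalty relaxes automatically as the reconstruction improves, which is the behaviour the abstract describes.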

  16. The impact of short-term stochastic variability in solar irradiance on optimal microgrid design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schittekatte, Tim; Stadler, Michael; Cardoso, Gonçalo

    2016-07-01

    This paper proposes a new methodology to capture the impact of fast moving clouds on utility power demand charges observed in microgrids with photovoltaic (PV) arrays, generators, and electrochemical energy storage. It consists of a statistical approach to introduce sub-hourly events in the hourly economic accounting process. The methodology is implemented in the Distributed Energy Resources Customer Adoption Model (DER-CAM), a state of the art mixed integer linear model used to optimally size DER in decentralized energy systems. Results suggest that previous iterations of DER-CAM could undersize battery capacities. The improved model depicts more accurately the economic value of PV as well as the synergistic benefits of pairing PV with storage.

  17. Particle Analysis Pitfalls

    NASA Technical Reports Server (NTRS)

    Hughes, David; Dazzo, Tony

    2007-01-01

    This viewgraph presentation reviews the use of particle analysis to assist in preparing for the 4th Hubble Space Telescope (HST) Servicing Mission, during which the Space Telescope Imaging Spectrograph (STIS) will be repaired. The particle analysis consisted of finite element mesh creation; black-body viewfactors generated using I-DEAS TMG Thermal Analysis; grey-body viewfactors calculated using the Markov method; particle distribution modeled using an iterative Monte Carlo process (time-consuming) implemented in in-house software called MASTRAM; differential analysis performed in Excel; and visualization provided by Tecplot and I-DEAS. Several tests were performed and are reviewed: a Conformal Coat Particle Study, a Card Extraction Study, a Cover Fastener Removal Particle Generation Study, and an E-Graf Vibration Particulate Study. The lessons learned during this analysis are also reviewed.

  18. Application of a repetitive process setting to design of monotonically convergent iterative learning control

    NASA Astrophysics Data System (ADS)

    Boski, Marcin; Paszke, Wojciech

    2015-11-01

    This paper deals with the problem of designing an iterative learning control algorithm for discrete linear systems using repetitive process stability theory. The resulting design produces a stabilizing output feedback controller in the time domain and a feedforward controller that guarantees monotonic convergence in the trial-to-trial domain. The results are also extended to a limited frequency range design specification. The new design procedure is formulated in terms of linear matrix inequality (LMI) representations, which guarantee the prescribed performance of the ILC scheme. A simulation example is given to illustrate the theoretical developments.
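
    The trial-to-trial learning that the paper designs via LMIs can be illustrated with the simplest ILC update law on a discrete-time linear plant: after each trial, the stored input is corrected with the tracking error recorded on the previous trial. The plant matrices A, B, C, the scalar learning gain, and the plain P-type law below are illustrative assumptions; the paper's controller combines output feedback with a feedforward term designed through LMIs.

```python
import numpy as np

def run_trial(A, B, C, u, x0):
    """One trial of x_{t+1} = A x_t + B u_t, y_t = C x_t (A: n x n, B and C: length n)."""
    x, y = x0.astype(float), []
    for u_t in u:
        y.append(C @ x)
        x = A @ x + B * u_t
    return np.array(y)

def p_type_ilc(A, B, C, y_ref, trials=30, gain=0.5):
    """P-type ILC: u_{k+1}(t) = u_k(t) + gain * e_k(t+1).

    Converges (roughly) when |1 - gain * C @ B| < 1, i.e. assumes C @ B != 0
    (relative degree one).
    """
    T = len(y_ref)
    u = np.zeros(T)
    x0 = np.zeros(A.shape[0])
    for _ in range(trials):
        e = y_ref - run_trial(A, B, C, u, x0)
        e_shift = np.zeros(T)
        e_shift[:-1] = e[1:]              # the input at t corrects the error at t+1
        u = u + gain * e_shift
    return u, np.max(np.abs(e))
```

    The LMI design in the paper can be read as a systematic way of choosing the learning and feedback gains so that this trial-to-trial contraction is guaranteed monotonically.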

  19. An Iterative Inference Procedure Applying Conditional Random Fields for Simultaneous Classification of Land Cover and Land Use

    NASA Astrophysics Data System (ADS)

    Albert, L.; Rottensteiner, F.; Heipke, C.

    2015-08-01

    Land cover and land use exhibit strong contextual dependencies. We propose a novel approach for the simultaneous classification of land cover and land use, where semantic and spatial context is considered. The image sites for land cover and land use classification form a hierarchy consisting of two layers: a land cover layer and a land use layer. We apply Conditional Random Fields (CRF) at both layers. The layers differ with respect to the image entities corresponding to the nodes, the employed features and the classes to be distinguished. In the land cover layer, the nodes represent super-pixels; in the land use layer, the nodes correspond to objects from a geospatial database. Both CRFs model spatial dependencies between neighbouring image sites. The complex semantic relations between land cover and land use are integrated in the classification process by using contextual features. We propose a new iterative inference procedure for the simultaneous classification of land cover and land use, in which the two classification tasks mutually influence each other. This helps to improve the classification accuracy for certain classes. The main idea of this approach is that semantic context helps to refine the class predictions, which, in turn, leads to more expressive context information. Thus, potentially wrong decisions can be reversed at later stages. The approach is designed for input data based on aerial images. Experiments are carried out on a test site to evaluate the performance of the proposed method. We show the effectiveness of the iterative inference procedure and demonstrate that a smaller size of the super-pixels has a positive influence on the classification result.

  20. Process control strategy for ITER central solenoid operation

    NASA Astrophysics Data System (ADS)

    Maekawa, R.; Takami, S.; Iwamoto, A.; Chang, H.-S.; Forgeas, A.; Chalifour, M.

    2016-12-01

    ITER Central Solenoid (CS) pulse operation induces significant flow disturbance in the forced-flow Supercritical Helium (SHe) cooling circuit, which could primarily impact the operation of the cold circulator (SHe centrifugal pump) in the Auxiliary Cold Box (ACB). Numerical studies using Venecia®, SUPERMAGNET and 4C have identified reverse flow at the CS module inlet due to the substantial thermal energy deposition at the inner-most winding. To assess the reliable operation of ACB-CS (the dedicated ACB for the CS), process analyses have been conducted with a dynamic process simulation model developed with the Cryogenic Process REal-time SimulaTor (C-PREST). To implement process control of the hydrodynamic instability, several strategies have been applied and their feasibility evaluated. The paper discusses control strategies to protect the centrifugal-type cold circulator/compressor and their impact on CS cooling.

  1. Modelling the Probability of Landslides Impacting Road Networks

    NASA Astrophysics Data System (ADS)

    Taylor, F. E.; Malamud, B. D.

    2012-04-01

    During a landslide triggering event, the threat of landslides blocking roads poses a risk to logistics, rescue efforts and communities dependent on those road networks. Here we present preliminary results of a stochastic model we have developed to evaluate the probability of landslides intersecting a simple road network during a landslide triggering event, and apply simple network indices to measure the state of the road network in the affected region. A 4000 x 4000 cell array with a 5 m x 5 m resolution was used, with a pre-defined simple road network laid onto it, and landslides 'randomly' dropped onto it. Landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m2. This statistical distribution was chosen based on three substantially complete triggered landslide inventories recorded in existing literature. The number of landslide areas (NL) selected for each triggered event iteration was chosen to give an average density of 1 landslide km-2, i.e. NL = 400 landslide areas chosen randomly for each iteration, and was based on several existing triggered landslide event inventories. A simple road network was chosen in a 'T' shape configuration, with one road of 1 x 4000 cells (5 m x 20 km) meeting another road of 1 x 2000 cells (5 m x 10 km). The landslide areas were then randomly 'dropped' over the road array and indices such as the location, size (ABL) and number of road blockages (NBL) recorded. This process was performed 500 times (iterations) in a Monte-Carlo type simulation. Initial results show that for a landslide triggering event with 400 landslides over a 400 km2 region, the number of road blocks per iteration, NBL, ranges from 0 to 7. The average blockage area for the 500 iterations (ĀBL) is about 3000 m2, which closely matches the value of ĀL for the triggered landslide inventories. We further find that over the 500 iterations, the probability of a given number of road blocks occurring on any given iteration, p(NBL), follows reasonably well a three-parameter inverse gamma probability density distribution as a function of NBL, with an exponential rollover (i.e., the most frequent value) at NBL = 1.3. In this paper we have begun to calculate the probability of a given number of landslides blocking roads during a triggering event, and have found that this follows an inverse-gamma distribution, similar to that found for the statistics of landslide areas resulting from triggers. As we progress to model more realistic road networks, this work will aid in both long-term and disaster management for road networks by allowing probabilistic assessment of potential road network damage during landslide triggering event scenarios of different magnitudes.
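
    A stripped-down version of this Monte Carlo experiment can be written in a few lines: sample landslide areas from a heavy-tailed distribution, drop them at random positions over a grid containing a 'T'-shaped road, and count how many intersect the road on each iteration. The circular landslide footprints and the log-normal stand-in for the three-parameter inverse-gamma area distribution (with arbitrarily chosen parameters) are simplifying assumptions; the grid, road lengths, landslide count and iteration number follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

CELL = 5.0                               # m per cell
NX = NY = 4000                           # 20 km x 20 km array
N_SLIDES = 400                           # ~1 landslide per km^2 over 400 km^2
N_ITER = 500

# 'T'-shaped road: a 20 km row met by a 10 km column (cell coordinates only)
road_x = np.concatenate([np.arange(NX), np.full(NY // 2, NX // 2)])
road_y = np.concatenate([np.full(NX, NY // 2), np.arange(NY // 2)])

def one_iteration():
    """Drop N_SLIDES circular landslides; count how many intersect the road."""
    # Log-normal stand-in for the inverse-gamma area pdf, rollover near 400 m^2
    areas = rng.lognormal(mean=np.log(400.0), sigma=1.0, size=N_SLIDES)
    radii = np.sqrt(areas / np.pi) / CELL         # radii in cells
    xs = rng.uniform(0, NX, N_SLIDES)
    ys = rng.uniform(0, NY, N_SLIDES)
    n_blocks = 0
    for x0, y0, r in zip(xs, ys, radii):
        d2 = (road_x - x0) ** 2 + (road_y - y0) ** 2
        n_blocks += bool(np.any(d2 <= r ** 2))    # this landslide blocks the road
    return n_blocks

counts = np.array([one_iteration() for _ in range(N_ITER)])
print("blocks per iteration:", counts.min(), "to", counts.max(), "mean", counts.mean())
```

    Histogramming counts over the 500 iterations gives an empirical p(NBL) that can then be compared with the fitted inverse-gamma form reported above.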

  2. Improving performances of suboptimal greedy iterative biclustering heuristics via localization.

    PubMed

    Erten, Cesim; Sözdinler, Melih

    2010-10-15

    Biclustering gene expression data is the problem of extracting submatrices of genes and conditions exhibiting significant correlation across both the rows and the columns of a data matrix of expression values. Even the simplest versions of the problem are computationally hard. Most of the proposed solutions therefore employ greedy iterative heuristics that locally optimize a suitably assigned scoring function. We provide a fast and simple pre-processing algorithm called localization that reorders the rows and columns of the input data matrix in such a way as to group correlated entries in small local neighborhoods within the matrix. The proposed localization algorithm takes its roots from effective use of graph-theoretical methods applied to problems exhibiting a similar structure to that of biclustering. In order to evaluate the effectiveness of the localization pre-processing algorithm, we focus on three representative greedy iterative heuristic methods. We show how the localization pre-processing can be incorporated into each representative algorithm to improve biclustering performance. Furthermore, we propose a simple biclustering algorithm, Random Extraction After Localization (REAL), that randomly extracts submatrices from the localization pre-processed data matrix, eliminates those with low similarity scores, and provides the rest as correlated structures representing biclusters. We compare the proposed localization pre-processing with another pre-processing alternative, non-negative matrix factorization. We show that our fast and simple localization procedure provides similar or even better results than the computationally heavy matrix factorization pre-processing with regard to H-value tests. We next demonstrate that the performances of the three representative greedy iterative heuristic methods improve with localization pre-processing when biological correlations in the form of functional enrichment and PPI verification constitute the main performance criteria. The fact that REAL, the random extraction method based on localization, performs better than the representative greedy heuristic methods under the same criteria also confirms the effectiveness of the suggested pre-processing method. Supplementary material, including code implementations in the LEDA C++ library, experimental data, and results, is available at http://code.google.com/p/biclustering/. Supplementary data are available at Bioinformatics online.

  3. A stopping criterion to halt iterations at the Richardson-Lucy deconvolution of radiographic images

    NASA Astrophysics Data System (ADS)

    Almeida, G. L.; Silvani, M. I.; Souza, E. S.; Lopes, R. T.

    2015-07-01

    Radiographic images, as any experimentally acquired images, are affected by spoiling agents which degrade their final quality. The degradation caused by agents of systematic character can be reduced by some kind of treatment such as an iterative deconvolution. This approach requires two parameters, namely the system resolution and the best number of iterations, in order to achieve the best final image. This work proposes a novel procedure to estimate the best number of iterations, which replaces the cumbersome visual inspection by a comparison of numbers. These numbers are deduced from the image histograms, taking into account the global difference G between them for two subsequent iterations. The developed algorithm, including a Richardson-Lucy deconvolution procedure, has been embodied into a Fortran program capable of plotting the 1st derivative of G as the processing progresses and of stopping it automatically when this derivative - within the data dispersion - reaches zero. The radiograph of a specially chosen object, acquired with thermal neutrons from the Argonauta research reactor at the Instituto de Engenharia Nuclear - CNEN, Rio de Janeiro, Brazil, has undergone this treatment with fair results.
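
    The stopping rule can be sketched as follows: run Richardson-Lucy iterations, histogram each iterate, track the global difference G between the histograms of two subsequent iterations, and stop when the change in G (its first derivative with respect to iteration number) falls within the noise. The Gaussian-like point-spread function passed in by the caller, the 256-bin histogram, and the simple dispersion test below are illustrative assumptions, not the Fortran implementation described in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_auto(image, psf, max_iter=200, eps=1e-12):
    """Richardson-Lucy deconvolution halted by a histogram-difference criterion."""
    image = np.asarray(image, dtype=float)
    psf_mirror = psf[::-1, ::-1]
    hist_range = (0.0, 2.0 * image.max())           # fixed binning for all iterates
    estimate = np.full_like(image, image.mean())
    hist_prev, _ = np.histogram(estimate, bins=256, range=hist_range)
    g_prev = None
    for k in range(1, max_iter + 1):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        hist, _ = np.histogram(estimate, bins=256, range=hist_range)
        g = np.abs(hist - hist_prev).sum()           # global histogram difference G
        # Stop when the change in G is within a rough dispersion estimate
        if g_prev is not None and abs(g - g_prev) <= np.sqrt(max(g_prev, 1.0)):
            return estimate, k
        hist_prev, g_prev = hist, g
    return estimate, max_iter
```

    In practice psf would be a normalized kernel whose width reflects the measured system resolution, the first of the two parameters mentioned in the abstract.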

  4. A heuristic statistical stopping rule for iterative reconstruction in emission tomography.

    PubMed

    Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D

    2013-01-01

    We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest, as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a controlled computation time.

  5. Modeling Design Iteration in Product Design and Development and Its Solution by a Novel Artificial Bee Colony Algorithm

    PubMed Central

    2014-01-01

    Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally increases product cost and delays development time, so identifying and modeling couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and a tearing approach together with an inner iteration method is used to complement the classic WTM model. In addition, the artificial bee colony (ABC) algorithm is introduced to find optimal decoupling schemes. Firstly, the tearing approach and the inner iteration method are analyzed for solving coupled task sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted for problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify the model's rationality and effectiveness. PMID:25431584

  6. Automated segmentation of three-dimensional MR brain images

    NASA Astrophysics Data System (ADS)

    Park, Jonggeun; Baek, Byungjun; Ahn, Choong-Il; Ku, Kyo Bum; Jeong, Dong Kyun; Lee, Chulhee

    2006-03-01

    Brain segmentation is a challenging problem due to the complexity of the brain. In this paper, we propose an automated brain segmentation method for 3D magnetic resonance (MR) brain images, which are represented as a sequence of 2D brain images. The proposed method consists of three steps: pre-processing, removal of non-brain regions (e.g., the skull, meninges, other organs, etc.), and spinal cord restoration. In pre-processing, we perform adaptive thresholding which takes into account the variable intensities of MR brain images corresponding to various image acquisition conditions. In the segmentation process, we iteratively apply 2D morphological operations and masking to the sequences of 2D sagittal, coronal, and axial planes in order to remove non-brain tissues. Next, the final 3D brain regions are obtained by applying an OR operation to the segmentation results of the three planes. Finally, we reconstruct the spinal cord truncated during the previous processes. Experiments are performed with fifteen 8-bit gray-scale 3D MR brain image sets. Experimental results show that the proposed algorithm is fast and provides robust, satisfactory results.

  7. Choosing order of operations to accelerate strip structure analysis in parameter range

    NASA Astrophysics Data System (ADS)

    Kuksenko, S. P.; Akhunov, R. R.; Gazizov, T. R.

    2018-05-01

    The paper considers the use of iterative methods for solving the sequence of linear algebraic systems obtained in the quasistatic analysis of strip structures with the method of moments. Based on the analysis of four strip structures, the authors have shown that additional acceleration (up to 2.21 times) of the iterative process can be obtained when solving the linear systems repeatedly, by choosing a proper order of operations and a preconditioner. The obtained results can be used to accelerate the computer-aided design of various strip structures. The choice of the order of operations to accelerate the process is quite simple and universal, and could be used not only for strip structure analysis but also for a wide range of computational problems.
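
    The speed-up in such parameter sweeps comes from reusing work across the sequence of related systems: keep one preconditioner for the whole sweep and start each solve from the previous solution. The sketch below uses an incomplete-LU preconditioner with GMRES; the specific solver, the reuse of a single preconditioner, the user-supplied build_matrix callable, and the fixed right-hand side are illustrative assumptions rather than the particular ordering of operations analysed in the paper.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spilu, LinearOperator, gmres

def sweep_solve(build_matrix, b, params):
    """Solve A(p) x = b for a sweep of parameter values p (sketch).

    build_matrix(p) returns the system matrix for parameter p. The ILU
    preconditioner is built once, from the first matrix, and each solve is
    warm-started from the previous solution.
    """
    A0 = csc_matrix(build_matrix(params[0]))
    ilu = spilu(A0)                                   # reused preconditioner
    M = LinearOperator(A0.shape, ilu.solve)
    x = np.zeros(A0.shape[0])
    solutions = []
    for p in params:
        A = csc_matrix(build_matrix(p))
        x, info = gmres(A, b, x0=x, M=M, atol=1e-10)
        if info != 0:
            raise RuntimeError(f"GMRES did not converge for parameter {p}")
        solutions.append(x.copy())
    return solutions
```

    When the matrices change slowly along the sweep, the stale preconditioner and warm starts together cut the iteration count substantially, which is the kind of gain the abstract quantifies.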

  8. Evaluation of noise limits to improve image processing in soft X-ray projection microscopy.

    PubMed

    Jamsranjav, Erdenetogtokh; Kuge, Kenichi; Ito, Atsushi; Kinjo, Yasuhito; Shiina, Tatsuo

    2017-03-03

    Soft X-ray microscopy has been developed for high-resolution imaging of hydrated biological specimens thanks to the availability of the water window region. In particular, projection-type microscopy has advantages in its wide viewing area, easy zooming function and easy extensibility to computed tomography (CT). The blur of the projection image due to Fresnel diffraction of X-rays, which eventually reduces spatial resolution, can be corrected by an iteration procedure, i.e., repetition of Fresnel and inverse Fresnel transformations. However, it was found that the correction is not effective for all images, especially for images with low contrast. In order to improve the effectiveness of image correction by computer processing, in this study we evaluated the influence of background noise on the iteration procedure through a simulation study. In the study, images of a model specimen with known morphology were used as a substitute for the chromosome images, one of the targets of our microscope. With artificial noise distributed randomly over the images, we introduced two different parameters to evaluate noise effects for each situation in which the iteration procedure was unsuccessful, and proposed an upper limit on the noise within which effective iterative correction of the chromosome images is possible. The study indicated that applying the new simulation and noise evaluation method is useful for image processing where background noise cannot be ignored relative to the specimen signal.

  9. EC power management in ITER for NTM control: the path from the commissioning phase to demonstration discharges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poli, Francesca M.; Fredrickson, Eric; Henderson, Mark A.

    Time dependent simulations that evolve consistently the magnetic equilibrium and plasma pressure profiles and the width and rotation frequency of magnetic islands under the effect of the Electron Cyclotron feedback system are used to assess whether the control of NTMs on ITER is compatible with other simultaneous functionalities of the EC system, like core heating and current profile tailoring, or sawtooth control. Furthermore, results indicate that the power needs for control can be reduced if the EC power is reserved and if pre-emptive control is used as opposed to an active search for an already developed island.

  10. ITER L-mode confinement database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaye, S.M.

    This paper describes the content of an L-mode database that has been compiled with data from Alcator C-Mod, ASDEX, DIII, DIII-D, FTU, JET, JFT-2M, JT-60, PBX-M, PDX, T-10, TEXTOR, TFTR, and Tore-Supra. The database consists of a total of 2938 entries, 1881 of which are in the L-phase while 922 are ohmically heated only (OH). Each entry contains up to 95 descriptive parameters, including global and kinetic information, machine conditioning, and configuration. The paper presents a description of the database and the variables contained therein, and it also presents global and thermal scalings along with predictions for ITER.

  11. EC power management in ITER for NTM control: the path from the commissioning phase to demonstration discharges

    DOE PAGES

    Poli, Francesca M.; Fredrickson, Eric; Henderson, Mark A.; ...

    2017-10-23

    Time dependent simulations that evolve consistently the magnetic equilibrium and plasma pressure profiles and the width and rotation frequency of magnetic islands under the effect of the Electron Cyclotron feedback system are used to assess whether the control of NTMs on ITER is compatible with other simultaneous functionalities of the EC system, like core heating and current profile tailoring, or sawtooth control. Furthermore, results indicate that the power needs for control can be reduced if the EC power is reserved and if pre-emptive control is used as opposed to an active search for an already developed island.

  12. A numerical scheme to solve unstable boundary value problems

    NASA Technical Reports Server (NTRS)

    Kalnay Derivas, E.

    1975-01-01

    A new iterative scheme for solving boundary value problems is presented. It consists of the introduction of an artificial time dependence into a modified version of the system of equations. Explicit forward integrations in time are then followed by explicit integrations backwards in time. The method converges under much more general conditions than schemes based on forward time integrations (false transient schemes). In particular, it can attain a steady-state solution of an elliptic system of equations even if that solution is unstable, in which case other iterative schemes fail to converge. The simplicity of its use makes it attractive for solving large systems of nonlinear equations.

  13. A hybrid multiview stereo algorithm for modeling urban scenes.

    PubMed

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu; Vu, Hoang-Hiep

    2013-01-01

    We present an original multiview stereo reconstruction algorithm which allows the 3D modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori). We adopt a two-step strategy consisting first of segmenting the initial mesh-based surface using a multilabel Markov Random Field-based model and second of sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout). The segmentation and sampling steps are embedded into an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms.

  14. Guidelines for Developing and Reporting Machine Learning Predictive Models in Biomedical Research: A Multidisciplinary View

    PubMed Central

    2016-01-01

    Background As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. Objective To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. Methods A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. Results The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. Conclusions A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. PMID:27986644

  15. Numerical Characterization of Piezoceramics Using Resonance Curves

    PubMed Central

    Pérez, Nicolás; Buiochi, Flávio; Brizzotti Andrade, Marco Aurélio; Adamowski, Julio Cezar

    2016-01-01

    Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods. PMID:28787875

  16. Numerical Characterization of Piezoceramics Using Resonance Curves.

    PubMed

    Pérez, Nicolás; Buiochi, Flávio; Brizzotti Andrade, Marco Aurélio; Adamowski, Julio Cezar

    2016-01-27

    Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods.
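
    The characterization strategy described above, adjusting the material constants until the numerical impedance curve matches the measured one, reduces to a nonlinear least-squares problem once a forward model is available. In the sketch below, simulate_impedance stands in for the FEM model of the disk and is an assumed, user-supplied function; the optimizer call and the log-magnitude residual are illustrative choices, not the specific algorithms reviewed in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_material_constants(simulate_impedance, freqs, z_measured, p0,
                           bounds=(-np.inf, np.inf)):
    """Fit material constants by matching simulated and measured impedance curves.

    simulate_impedance(p, freqs) -> complex impedance curve (assumed FEM model)
    p0 : initial guess for the constants (elastic, piezoelectric, dielectric)
    """
    def residual(p):
        z_model = simulate_impedance(p, freqs)
        # Relative error on the magnitude, in log scale to balance the resonances
        return np.log(np.abs(z_model)) - np.log(np.abs(z_measured))

    result = least_squares(residual, p0, bounds=bounds, method="trf")
    return result.x, result.cost
```

    Each residual evaluation requires one FEM run of the disk model, so the iteration count of the optimizer dominates the overall cost of the characterization.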

  17. Selection, periodicity and potential function for Highly Iterative Palindrome-1 (HIP1) in cyanobacterial genomes.

    PubMed

    Xu, Minli; Lawrence, Jeffrey G; Durand, Dannie

    2018-03-16

    Highly Iterated Palindrome 1 (HIP1, GCGATCGC) is hyper-abundant in most cyanobacterial genomes. In some cyanobacteria, average HIP1 abundance exceeds one motif per gene. Such high abundance suggests a significant role in cyanobacterial biology. However, 20 years of study have not revealed whether HIP1 has a function, much less what that function might be. We show that HIP1 is 15- to 300-fold over-represented in genomes analyzed. More importantly, HIP1 sites are conserved both within and between open reading frames, suggesting that their overabundance is maintained by selection rather than by continual replenishment by neutral processes, such as biased DNA repair. This evidence for selection suggests a functional role for HIP1. No evidence was found to support a functional role as a peptide or RNA motif or a role in the regulation of gene expression. Rather, we demonstrate that the distribution of HIP1 along cyanobacterial chromosomes is significantly periodic, with periods ranging from 10 to 90 kb, consistent in scale with periodicities reported for co-regulated, co-expressed and evolutionarily correlated genes. The periodicity we observe is also comparable in scale to chromosomal interaction domains previously described in other bacteria. In this context, our findings imply HIP1 functions associated with chromosome and nucleoid structure.

  18. Selection, periodicity and potential function for Highly Iterative Palindrome-1 (HIP1) in cyanobacterial genomes

    PubMed Central

    Xu, Minli; Lawrence, Jeffrey G; Durand, Dannie

    2018-01-01

    Highly Iterated Palindrome 1 (HIP1, GCGATCGC) is hyper-abundant in most cyanobacterial genomes. In some cyanobacteria, average HIP1 abundance exceeds one motif per gene. Such high abundance suggests a significant role in cyanobacterial biology. However, 20 years of study have not revealed whether HIP1 has a function, much less what that function might be. We show that HIP1 is 15- to 300-fold over-represented in genomes analyzed. More importantly, HIP1 sites are conserved both within and between open reading frames, suggesting that their overabundance is maintained by selection rather than by continual replenishment by neutral processes, such as biased DNA repair. This evidence for selection suggests a functional role for HIP1. No evidence was found to support a functional role as a peptide or RNA motif or a role in the regulation of gene expression. Rather, we demonstrate that the distribution of HIP1 along cyanobacterial chromosomes is significantly periodic, with periods ranging from 10 to 90 kb, consistent in scale with periodicities reported for co-regulated, co-expressed and evolutionarily correlated genes. The periodicity we observe is also comparable in scale to chromosomal interaction domains previously described in other bacteria. In this context, our findings imply HIP1 functions associated with chromosome and nucleoid structure. PMID:29432573

  19. Active Interaction Mapping as a tool to elucidate hierarchical functions of biological processes.

    PubMed

    Farré, Jean-Claude; Kramer, Michael; Ideker, Trey; Subramani, Suresh

    2017-07-03

    Increasingly, various 'omics data are contributing significantly to our understanding of novel biological processes, but it has not been possible to iteratively elucidate hierarchical functions in complex phenomena. We describe a general systems biology approach called Active Interaction Mapping (AI-MAP), which elucidates the hierarchy of functions for any biological process. Existing and new 'omics data sets can be iteratively added to create and improve hierarchical models which enhance our understanding of particular biological processes. The best datatypes to further improve an AI-MAP model are predicted computationally. We applied this approach to our understanding of general and selective autophagy, which are conserved in most eukaryotes, setting the stage for the broader application to other cellular processes of interest. In the particular application to autophagy-related processes, we uncovered and validated new autophagy and autophagy-related processes, expanded known autophagy processes with new components, integrated known non-autophagic processes with autophagy and predict other unexplored connections.

  20. Development of an iterative reconstruction method to overcome 2D detector low resolution limitations in MLC leaf position error detection for 3D dose verification in IMRT.

    PubMed

    Visser, R; Godart, J; Wauben, D J L; Langendijk, J A; Van't Veld, A A; Korevaar, E W

    2016-05-21

    The objective of this study was to introduce a new iterative method to reconstruct multileaf collimator (MLC) positions based on low resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a detector model and an optimizer. Expected detector response was calculated using a radiotherapy treatment plan in combination with the fluence model and detector model. MLC leaf positions were reconstructed by minimizing differences between expected and measured detector response. The iterative reconstruction method was evaluated for an Elekta SLi with 10.0 mm MLC leaves in combination with the COMPASS system and the MatriXX Evolution (IBA Dosimetry) detector with a spacing of 7.62 mm. The detector was positioned in such a way that each leaf pair of the MLC was aligned with one row of ionization chambers. Known leaf displacements were introduced in various field geometries ranging from -10.0 mm to 10.0 mm. Error detection performance was tested for MLC leaf position dependency relative to the detector position, gantry angle dependency, monitor unit dependency, and for ten clinical intensity modulated radiotherapy (IMRT) treatment beams. For one clinical head and neck IMRT treatment beam, the influence of the iterative reconstruction method on existing 3D dose reconstruction artifacts was evaluated. The described iterative reconstruction method was capable of individual MLC leaf position reconstruction with millimeter accuracy, independent of the relative detector position, within the range of clinically applied MUs for IMRT. Dose reconstruction artifacts in a clinical IMRT treatment beam were considerably reduced as compared to the current dose verification procedure. The iterative reconstruction method allows high accuracy 3D dose verification by including actual MLC leaf positions reconstructed from low resolution 2D measurements.
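
    The reconstruction loop described above, comparing an expected detector response built from a fluence model and a detector model against the measurement and adjusting the leaf position until the two agree, can be sketched as a small optimization per leaf pair. The one-dimensional Gaussian-blur detector model, the assumed blur width DET_SIGMA, and the bounded scalar optimizer below are illustrative assumptions, not the COMPASS/MatriXX models used in the study.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.ndimage import gaussian_filter1d

DET_PITCH = 7.62          # mm, ionization-chamber spacing (MatriXX)
DET_SIGMA = 4.0           # mm, assumed detector blurring

def expected_response(leaf_pos, x_fine):
    """Expected 1-D detector profile for one leaf pair (fluence + detector model)."""
    fluence = (x_fine < leaf_pos).astype(float)       # field open up to the leaf tip
    blurred = gaussian_filter1d(fluence, DET_SIGMA / (x_fine[1] - x_fine[0]))
    centers = np.arange(x_fine[0], x_fine[-1], DET_PITCH)
    return np.interp(centers, x_fine, blurred)

def reconstruct_leaf(measured, x_fine, search=(-60.0, 60.0)):
    """Find the leaf-tip position whose expected response best matches the data."""
    def cost(pos):
        return np.sum((expected_response(pos, x_fine) - measured) ** 2)
    res = minimize_scalar(cost, bounds=search, method="bounded")
    return res.x
```

    Here measured would be the row of ionization-chamber readings aligned with the leaf pair; because the model interpolates between the 7.62 mm chambers, sub-detector-pitch (millimeter-level) positions remain recoverable, which is the point made in the abstract.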

  1. Design of ITER divertor VUV spectrometer and prototype test at KSTAR tokamak

    NASA Astrophysics Data System (ADS)

    Seon, Changrae; Hong, Joohwan; Song, Inwoo; Jang, Juhyeok; Lee, Hyeonyong; An, Younghwa; Kim, Bosung; Jeon, Taemin; Park, Jaesun; Choe, Wonho; Lee, Hyeongon; Pak, Sunil; Cheon, MunSeong; Choi, Jihyeon; Kim, Hyeonseok; Biel, Wolfgang; Bernascolle, Philippe; Barnsley, Robin; O'Mullane, Martin

    2017-12-01

    Design and development of the ITER divertor VUV spectrometer have been under way since 1998, and installation is planned for 2027. The ITER divertor VUV spectrometer is currently in the detailed design phase. It is optimized for monitoring chord-integrated VUV signals from divertor plasmas, chosen to contain representative line emission from tungsten, the divertor material, and from other impurities. Impurity emission from the overall divertor plasma is collimated through the relay optics onto the entrance slit of a VUV spectrometer with a working wavelength range of 14.6-32 nm. To validate the design of the ITER divertor VUV spectrometer, two sets of VUV spectrometers have been developed and tested at the KSTAR tokamak. One set, without the field mirror, employs a survey spectrometer covering 14.6 nm to 32 nm and provides the same optical specification as the spectrometer part of the ITER divertor VUV spectrometer system. The other spectrometer, with a wavelength range of 5-25 nm, consists of a commercial spectrometer with a concave grating and relay mirrors with the same geometry as those of the ITER divertor VUV spectrometer. From tests of these prototypes, an alignment method using backward laser illumination could be verified. Furthermore, to validate the feasibility of tungsten emission measurement, tungsten powder was injected into KSTAR plasmas, and preliminary results on the evaluation of photon throughput were obtained successfully. Contribution to the Topical Issue "Atomic and Molecular Data and their Applications", edited by Gordon W.F. Drake, Jung-Sik Yoon, Daiji Kato, Grzegorz Karwasz.

  2. The two-phase method for finding a great number of eigenpairs of the symmetric or weakly non-symmetric large eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dul, F.A.; Arczewski, K.

    1994-03-01

    Although it has been stated that "an attempt to solve (very large problems) by subspace iterations seems futile", we will show that the statement is not true, especially for extremely large eigenproblems. In this paper a new two-phase subspace iteration/Rayleigh quotient/conjugate gradient method for generalized, large, symmetric eigenproblems Ax = λBx is presented. It has the ability to solve extremely large eigenproblems, N = 216,000, for example, and to find a large number of leftmost or rightmost eigenpairs, up to 1000 or more. Multiple eigenpairs, even those with multiplicity 100, can be easily found. The use of the proposed method for solving big full eigenproblems (N ≈ 10^3), as well as large weakly non-symmetric eigenproblems, has also been considered. The proposed method is fully iterative; thus the factorization of matrices is avoided. The key idea consists in joining two methods: subspace and Rayleigh quotient iterations. The systems of indefinite and almost singular linear equations (A - σB)x = By are solved by various iterative methods; the conjugate gradient method can be used without danger of breaking down due to a property that may be called "self-correction towards the eigenvector", discovered recently by us. The use of various preconditioners (SSOR and IC) has also been considered. The main features of the proposed method have been analyzed in detail. Comparisons with other methods, such as accelerated subspace iteration, Lanczos, Davidson, TLIME, TRACMN, and SRQMCG, are presented. The results of numerical tests for various physical problems (acoustics, vibrations of structures, quantum chemistry) are presented as well. 40 refs., 12 figs., 2 tabs.

  3. Low-rank Atlas Image Analyses in the Presence of Pathologies

    PubMed Central

    Liu, Xiaoxiao; Niethammer, Marc; Kwitt, Roland; Singh, Nikhil; McCormick, Matt; Aylward, Stephen

    2015-01-01

    We present a common framework for registering images to an atlas and for forming an unbiased atlas that tolerates the presence of pathologies such as tumors and traumatic brain injury lesions. This common framework is particularly useful when a sufficient number of protocol-matched scans from healthy subjects cannot be easily acquired for atlas formation and when the pathologies in a patient cause large appearance changes. Our framework combines a low-rank-plus-sparse image decomposition technique with an iterative, diffeomorphic, group-wise image registration method. At each iteration of image registration, the decomposition technique estimates a "healthy" version of each image as its low-rank component and estimates the pathologies in each image as its sparse component. The healthy version of each image is used for the next iteration of image registration. The low-rank and sparse estimates are refined as the image registrations iteratively improve. When that framework is applied to image-to-atlas registration, the low-rank image is registered to a pre-defined atlas to establish correspondence that is independent of the pathologies in the sparse component of each image. Ultimately, image-to-atlas registrations can be used to define spatial priors for tissue segmentation and to map information across subjects. When that framework is applied to unbiased atlas formation, at each iteration the average of the low-rank images from the patients is used as the atlas image for the next iteration, until convergence. Since each iteration's atlas is composed of low-rank components, it provides a population-consistent, pathology-free appearance. Evaluations of the proposed methodology are presented using synthetic data as well as simulated and clinical tumor MRI images from the brain tumor segmentation (BRATS) challenge from MICCAI 2012. PMID:26111390
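
    The decomposition at the heart of this framework separates each image (flattened into one matrix column) into a low-rank "healthy" part and a sparse pathology part. A minimal alternating sketch using singular-value thresholding and soft thresholding is shown below; the threshold choices (mu, lam) and the simple alternation are illustrative assumptions rather than the exact algorithm used in the paper.

```python
import numpy as np

def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def low_rank_plus_sparse(M, lam=None, n_iter=50):
    """Split M into L (low rank) + S (sparse) by simple alternating thresholding.

    M   : (pixels, subjects) matrix, one flattened image per column
    lam : sparsity weight; the common RPCA default 1/sqrt(max(M.shape)) is used
    """
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))
    mu = 0.25 * np.abs(M).mean()          # assumed singular-value threshold
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Low-rank update: singular-value thresholding of the residual
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(s - mu, 0.0)) @ Vt
        # Sparse update: soft thresholding of what the low-rank part misses
        S = soft_threshold(M - L, lam * mu)
    return L, S
```

    In the registration setting, the columns of L would play the role of the "healthy" images fed to the next registration iteration, while S collects the lesion-like residuals.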

  4. Performance analysis of Rogowski coils and the measurement of the total toroidal current in the ITER machine

    NASA Astrophysics Data System (ADS)

    Quercia, A.; Albanese, R.; Fresa, R.; Minucci, S.; Arshad, S.; Vayakis, G.

    2017-12-01

    The paper carries out a comprehensive study of the performances of Rogowski coils. It describes methodologies that were developed in order to assess the capabilities of the Continuous External Rogowski (CER), which measures the total toroidal current in the ITER machine. Even though the paper mainly considers the CER, the contents are general and relevant to any Rogowski sensor. The CER consists of two concentric helical coils which are wound along a complex closed path. Modelling and computational activities were performed to quantify the measurement errors, taking detailed account of the ITER environment. The geometrical complexity of the sensor is accurately accounted for and the standard model which provides the classical expression to compute the flux linkage of Rogowski sensors is quantitatively validated. Then, in order to take into account the non-ideality of the winding, a generalized expression, formally analogue to the classical one, is presented. Models to determine the worst case and the statistical measurement accuracies are hence provided. The following sources of error are considered: effect of the joints, disturbances due to external sources of field (the currents flowing in the poloidal field coils and the ferromagnetic inserts of ITER), deviations from ideal geometry, toroidal field variations, calibration, noise and integration drift. The proposed methods are applied to the measurement error of the CER, in particular in its high and low operating ranges, as prescribed by the ITER system design description documents, and during transients, which highlight the large time constant related to the shielding of the vacuum vessel. The analyses presented in the paper show that the design of the CER diagnostic is capable of achieving the requisite performance as needed for the operation of the ITER machine.
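
    For reference, the "classical expression" for the flux linkage of an ideal Rogowski sensor mentioned above depends only on the current enclosed by the winding path; a standard textbook form (with n the turn density per unit length and A the turn cross-section, both assumed uniform) is:

```latex
% Ideal Rogowski coil: linked flux and induced voltage (standard textbook form)
\Phi = \mu_0 \, n \, A \, I_{\mathrm{enc}},
\qquad
V(t) = -\frac{\mathrm{d}\Phi}{\mathrm{d}t}
     = -\mu_0 \, n \, A \, \frac{\mathrm{d}I_{\mathrm{enc}}}{\mathrm{d}t}.
```

    The generalized expression derived in the paper for non-ideal windings is, as the abstract states, formally analogous to this classical relation while accounting for deviations of the actual winding from the ideal geometry.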

  5. Overview of ASDEX Upgrade results

    NASA Astrophysics Data System (ADS)

    Zohm, H.; Adamek, J.; Angioni, C.; Antar, G.; Atanasiu, C. V.; Balden, M.; Becker, W.; Behler, K.; Behringer, K.; Bergmann, A.; Bertoncelli, T.; Bilato, R.; Bobkov, V.; Boom, J.; Bottino, A.; Brambilla, M.; Braun, F.; Brüdgam, M.; Buhler, A.; Chankin, A.; Classen, I.; Conway, G. D.; Coster, D. P.; de Marné, P.; D'Inca, R.; Drube, R.; Dux, R.; Eich, T.; Engelhardt, K.; Esposito, B.; Fahrbach, H.-U.; Fattorini, L.; Fink, J.; Fischer, R.; Flaws, A.; Foley, M.; Forest, C.; Fuchs, J. C.; Gál, K.; García Muñoz, M.; Gemisic Adamov, M.; Giannone, L.; Görler, T.; Gori, S.; da Graça, S.; Granucci, G.; Greuner, H.; Gruber, O.; Gude, A.; Günter, S.; Haas, G.; Hahn, D.; Harhausen, J.; Hauff, T.; Heinemann, B.; Herrmann, A.; Hicks, N.; Hobirk, J.; Hölzl, M.; Holtum, D.; Hopf, C.; Horton, L.; Huart, M.; Igochine, V.; Janzer, M.; Jenko, F.; Kallenbach, A.; Kálvin, S.; Kardaun, O.; Kaufmann, M.; Kick, M.; Kirk, A.; Klingshirn, H.-J.; Koscis, G.; Kollotzek, H.; Konz, C.; Krieger, K.; Kurki-Suonio, T.; Kurzan, B.; Lackner, K.; Lang, P. T.; Langer, B.; Lauber, P.; Laux, M.; Leuterer, F.; Likonen, J.; Liu, L.; Lohs, A.; Lunt, T.; Lyssoivan, A.; Maggi, C. F.; Manini, A.; Mank, K.; Manso, M.-E.; Mantsinen, M.; Maraschek, M.; Martin, P.; Mayer, M.; McCarthy, P.; McCormick, K.; Meister, H.; Meo, F.; Merkel, P.; Merkel, R.; Mertens, V.; Merz, F.; Meyer, H.; Mlynek, A.; Monaco, F.; Müller, H.-W.; Münich, M.; Murmann, H.; Neu, G.; Neu, R.; Neuhauser, J.; Nold, B.; Noterdaeme, J.-M.; Pautasso, G.; Pereverzev, G.; Poli, E.; Potzel, S.; Püschel, M.; Pütterich, T.; Pugno, R.; Raupp, G.; Reich, M.; Reiter, B.; Ribeiro, T.; Riedl, R.; Rohde, V.; Roth, J.; Rott, M.; Ryter, F.; Sandmann, W.; Santos, J.; Sassenberg, K.; Sauter, P.; Scarabosio, A.; Schall, G.; Schilling, H.-B.; Schirmer, J.; Schmid, A.; Schmid, K.; Schneider, W.; Schramm, G.; Schrittwieser, R.; Schustereder, W.; Schweinzer, J.; Schweizer, S.; Scott, B.; Seidel, U.; Sempf, M.; Serra, F.; Sertoli, M.; Siccinio, M.; Sigalov, A.; Silva, A.; Sips, A. C. C.; Speth, E.; Stäbler, A.; Stadler, R.; Steuer, K.-H.; Stober, J.; Streibl, B.; Strumberger, E.; Suttrop, W.; Tardini, G.; Tichmann, C.; Treutterer, W.; Tröster, C.; Urso, L.; Vainonen-Ahlgren, E.; Varela, P.; Vermare, L.; Volpe, F.; Wagner, D.; Wigger, C.; Wischmeier, M.; Wolfrum, E.; Würsching, E.; Yadikin, D.; Yu, Q.; Zasche, D.; Zehetbauer, T.; Zilker, M.

    2009-10-01

    ASDEX Upgrade was operated with a fully W-covered wall in 2007 and 2008. Stationary H-modes at the ITER target values and improved H-modes with H up to 1.2 were run without any boronization. The boundary conditions set by the full W wall (high enough ELM frequency, high enough central heating and low enough power density arriving at the target plates) require significant scenario development, but will apply to ITER as well. D retention has been reduced and stationary operation with saturated wall conditions has been found. Concerning confinement, impurity ion transport across the pedestal is neoclassical, explaining the strong inward pinch of high-Z impurities in between ELMs. In improved H-mode, the width of the temperature pedestal increases with heating power, consistent with a β_pol,ped^(1/2) scaling. In the area of MHD instabilities, disruption mitigation experiments using massive Ne injection reach volume averaged values of the total electron density close to those required for runaway suppression in ITER. ECRH at the q = 2 surface was successfully applied to delay density limit disruptions. The characterization of fast particle losses due to MHD has shown the importance of different loss mechanisms for NTMs, TAEs and also beta-induced Alfven eigenmodes (BAEs). Specific studies addressing the first ITER operational phase show that O1 ECRH at the HFS assists reliable low-voltage breakdown. During ramp-up, additional heating can be used to vary li to fit within the ITER range. Confinement and power threshold in He are more favourable than in H, suggesting that He operation could allow us to assess H-mode operation in the non-nuclear phase of ITER operation.

  6. Assessment of conductor degradation in the ITER CS insert coil and implications for the ITER conductors

    NASA Astrophysics Data System (ADS)

    Mitchell, N.

    2007-01-01

    Nb3Sn cable in conduit-type conductors were expected to provide an efficient way of achieving large conductor currents at high field (up to 13 T) combined with good stability to electromagnetic disturbances due to the extensive helium contact area with the strands. Although ITER model coils successfully reached their design performance (Kato et al 2001 Fusion Eng. Des. 56/57 59-70), initial indications (Mitchell 2003 Fusion Eng. Des. 66-68 971-94) that there were unexplained performance shortfalls have been confirmed. Recent conductor tests (Pasztor et al 2004 IEEE Trans. Appl. Supercond. 14 1527-30) and modelling work (Mitchell 2005 Supercond. Sci. Technol. 18 396-404) suggest that the shortfalls are due to a combination of strand bending and filament fracture under the transverse magnetic loads. Using the new model, the extensive database from the ITER CS insert coil has been reassessed. A parametric fit based on a loss of filament area and n (the exponent of the power-law fit to the electric field) combined with a more rigorous consideration of the conductor field gradient has enabled the coil behaviour to be explained much more consistently than in earlier assessments, now fitting the Nb3Sn strain scaling laws when used with measurements of the conductor operating strain, including conditions when the insert coil current (and hence operating strain) were reversed. The coil superconducting performance also shows a fatigue-type behaviour consistent with recent measurements on conductor samples (Martovetsky et al 2005 IEEE Trans. Appl. Supercond. 15 1367-70). The ITER conductor design has already been modified compared to the CS insert, to increase the margin and provide increased resistance to the degradation, by using a steel jacket to provide thermal pre-compression to reduce tensile strain levels, reducing the void fraction from 36% to 33% and increasing the non-copper material by 25%. Test results are not yet available for the new design and performance predictions at present rely on models with limited verification.

  7. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reorganized into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations with the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
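
    The core of such a solver is a preconditioned conjugate gradient loop in which the coefficient matrix of the mixed-model equations is never built explicitly; the matrix-vector product is supplied as a routine that passes over the data. The Python sketch below shows that structure with a simple Jacobi (diagonal) preconditioner; the three-step reorganization of the paper is not reproduced, and the toy system only stands in for the mixed-model equations.

        # Hedged sketch: PCG where the caller supplies the matrix-vector product
        # ("iteration on data") and a diagonal (Jacobi) preconditioner.
        import numpy as np

        def pcg(matvec, b, m_inv_diag, tol=1e-8, max_iter=1000):
            """Solve A x = b given matvec(v) = A v and the inverse diagonal of A."""
            x = np.zeros_like(b)
            r = b - matvec(x)
            z = m_inv_diag * r          # apply the preconditioner M^-1
            p = z.copy()
            rz = r @ z
            for it in range(max_iter):
                Ap = matvec(p)
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = m_inv_diag * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x, it

        # toy usage: a small SPD system standing in for the mixed-model equations
        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        x, its = pcg(lambda v: A @ v, b, 1.0 / np.diag(A))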

  8. Role of Outgassing of ITER Vacuum Vessel In-Wall Shielding Materials in Leak Detection of ITER Vacuum Vessel

    NASA Astrophysics Data System (ADS)

    Maheshwari, A.; Pathak, H. A.; Mehta, B. K.; Phull, G. S.; Laad, R.; Shaikh, M. S.; George, S.; Joshi, K.; Khan, Z.

    2017-04-01

    The ITER Vacuum Vessel (VV) is a torus-shaped, double-wall structure. The space between the double walls of the VV is filled with In-Wall Shielding (IWS) blocks and water. The main purpose of the IWS is to provide neutron shielding during ITER plasma operation and to reduce the ripple of the toroidal magnetic field (TF). Although the IWS blocks will be submerged in water between the walls of the VV, the outgassing rate (OGR) of IWS materials plays a significant role in leak detection of the ITER Vacuum Vessel. The thermal outgassing rate of a material depends critically on its surface roughness. During the leak detection process, which uses an RGA-equipped leak detector with helium as the tracer gas, there is a spill-over of mass 3 and mass 2 to mass 4 that creates a background reading. The helium background also has a contribution from hydrogen, so it is necessary to ensure a low hydrogen OGR. To achieve an effective leak test it is required to obtain a background below 1 × 10⁻⁸ mbar·l·s⁻¹, and hence the maximum outgassing rate of IWS materials should comply with the maximum outgassing rate required for hydrogen, i.e. 1 × 10⁻¹⁰ mbar·l·s⁻¹·cm⁻² at room temperature. As IWS materials are special materials developed for the ITER project, it is necessary to ensure that their outgassing rate complies with this requirement. Gases may also be introduced into the material by diffusion at the time of production. Therefore, to validate the production process of the materials as well as the manufacturing of the final product, three coupons of each IWS material have been manufactured with the same technique used in the manufacturing of the IWS blocks. Manufacturing records of these coupons have been approved by the ITER Organization (IO). Outgassing rates of these coupons have been measured at room temperature and found to be within the acceptable limit for obtaining the required helium background. On the basis of these measurements, test reports have been generated and approved by the IO. This paper describes the preparation, characteristics and cleaning procedure of the samples, the measurement system, and the outgassing rate measurements of these samples to ensure accurate leak detection.

  9. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach.

    PubMed

    Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Brock, Kristy K; Daly, Michael J; Chan, Harley; Irish, Jonathan C; Siewerdsen, Jeffrey H

    2011-04-01

    A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5 +/- 2.8) mm compared to (3.5 +/- 3.0) mm with rigid registration. A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
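
    To make the iteration structure concrete, the sketch below shows a toy 2-D Demons loop in which a single global linear CBCT-to-CT intensity map is re-estimated from the currently overlapping voxels at every iteration. The published method uses tissue-specific corrections and a full 3-D implementation, so this is only an illustration under simplifying assumptions.

        # Illustrative sketch (not the authors' implementation) of Demons with a
        # per-iteration linear intensity correction estimated from overlapping voxels.
        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates

        def demons_with_intensity_match(fixed_ct, moving_cbct, n_iter=50, sigma=2.0):
            uy = np.zeros_like(fixed_ct)
            ux = np.zeros_like(fixed_ct)
            yy, xx = np.mgrid[0:fixed_ct.shape[0], 0:fixed_ct.shape[1]].astype(float)
            gy, gx = np.gradient(fixed_ct)            # fixed-image gradient
            for _ in range(n_iter):
                warped = map_coordinates(moving_cbct, [yy + uy, xx + ux], order=1)
                # re-estimate a linear CBCT -> CT intensity map from overlapping voxels
                a, b = np.polyfit(warped.ravel(), fixed_ct.ravel(), 1)
                warped = a * warped + b
                diff = warped - fixed_ct
                denom = gx**2 + gy**2 + diff**2 + 1e-12
                ux -= gaussian_filter(diff * gx / denom, sigma)   # Demons force, smoothed
                uy -= gaussian_filter(diff * gy / denom, sigma)
            return ux, uy

        # toy usage: images that differ only in intensity scale should yield ~zero motion
        rng = np.random.default_rng(0)
        fixed = rng.random((64, 64))
        moving = 0.5 * fixed + 0.1
        ux, uy = demons_with_intensity_match(fixed, moving, n_iter=10)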

  10. Data Integration Tool: Permafrost Data Debugging

    NASA Astrophysics Data System (ADS)

    Wilcox, H.; Schaefer, K. M.; Jafarov, E. E.; Pulsifer, P. L.; Strawhacker, C.; Yarmey, L.; Basak, R.

    2017-12-01

    We developed a Data Integration Tool (DIT) to significantly speed up the time of manual processing needed to translate inconsistent, scattered historical permafrost data into files ready to ingest directly into the Global Terrestrial Network-Permafrost (GTN-P). The United States National Science Foundation funded this project through the National Snow and Ice Data Center (NSIDC) with the GTN-P to improve permafrost data access and discovery. We leverage this data to support science research and policy decisions. DIT is a workflow manager that divides data preparation and analysis into a series of steps or operations called widgets (https://github.com/PermaData/DIT). Each widget does a specific operation, such as read, multiply by a constant, sort, plot, and write data. DIT allows the user to select and order the widgets as desired to meet their specific needs, incrementally interact with and evolve the widget workflows, and save those workflows for reproducibility. Taking ideas from visual programming found in the art and design domain, debugging and iterative design principles from software engineering, and the scientific data processing and analysis power of Fortran and Python, it was written for interactive, iterative data manipulation, quality control, processing, and analysis of inconsistent data in an easily installable application. DIT was used to completely translate one dataset (133 sites) that was successfully added to GTN-P, nearly translate three datasets (270 sites), and is scheduled to translate 10 more datasets (~1000 sites) from the legacy inactive site data holdings of the Frozen Ground Data Center (FGDC). Iterative development has provided the permafrost and wider scientific community with an extendable tool designed specifically for the iterative process of translating unruly data.
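
    The widget idea can be illustrated with a small hypothetical sketch: each widget is a named operation, and a workflow is an ordered list of widgets applied to a table of site data. The function and column names below are invented for illustration and are not part of DIT.

        # Hypothetical sketch of a widget-style workflow (pandas assumed).
        import pandas as pd

        def make_data():
            # stand-in for a "read" widget; a real workflow would read a file
            return pd.DataFrame({"site_id": ["B2", "A1"], "depth_cm": [150, 320]})

        def multiply(df, column, factor):
            df = df.copy()
            df[column] = df[column] * factor
            return df

        def sort_by(df, column):
            return df.sort_values(column)

        # a workflow is an ordered list of (widget, options) pairs
        workflow = [
            (make_data, {}),
            (multiply, {"column": "depth_cm", "factor": 0.01}),   # cm -> m
            (sort_by, {"column": "site_id"}),
        ]

        data = None
        for widget, options in workflow:
            data = widget(**options) if data is None else widget(data, **options)
        print(data)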

  11. ECOMAT INC. BIOLOGICAL DENITRIFICATION PROCESS, ITER

    EPA Science Inventory

    EcoMat, Inc. of Hayward, California (EcoMat) has developed an ex situ anoxic biofilter biodenitrification (BDN) process. The process uses specific biocarriers and bacteria to treat nitrate-contaminated water and employs a patented reactor that retains biocarrier within the syste...

  12. Progress on the application of ELM control schemes to ITER scenarios from the non-active phase to DT operation

    NASA Astrophysics Data System (ADS)

    Loarte, A.; Huijsmans, G.; Futatani, S.; Baylor, L. R.; Evans, T. E.; Orlov, D. M.; Schmitz, O.; Becoulet, M.; Cahyna, P.; Gribov, Y.; Kavin, A.; Sashala Naik, A.; Campbell, D. J.; Casper, T.; Daly, E.; Frerichs, H.; Kischner, A.; Laengner, R.; Lisgo, S.; Pitts, R. A.; Saibene, G.; Wingen, A.

    2014-03-01

    Progress in the definition of the requirements for edge localized mode (ELM) control and the application of ELM control methods both for high fusion performance DT operation and non-active low-current operation in ITER is described. Evaluation of the power fluxes for low plasma current H-modes in ITER shows that uncontrolled ELMs will not lead to damage to the tungsten (W) divertor target, unlike for high-current H-modes in which divertor damage by uncontrolled ELMs is expected. Despite the lack of divertor damage at lower currents, ELM control is found to be required in ITER under these conditions to prevent an excessive contamination of the plasma by W, which could eventually lead to an increased disruptivity. Modelling with the non-linear MHD code JOREK of the physics processes determining the flow of energy from the confined plasma onto the plasma-facing components during ELMs at the ITER scale shows that the relative contribution of conductive and convective losses is intrinsically linked to the magnitude of the ELM energy loss. Modelling of the triggering of ELMs by pellet injection for DIII-D and ITER has identified the minimum pellet size required to trigger ELMs and, from this, the required fuel throughput for the application of this technique to ITER is evaluated and shown to be compatible with the installed fuelling and tritium re-processing capabilities in ITER. The evaluation of the capabilities of the ELM control coil system in ITER for ELM suppression is carried out (in the vacuum approximation) and found to have a factor of ˜2 margin in terms of coil current to achieve its design criterion, although such a margin could be substantially reduced when plasma shielding effects are taken into account. The consequences for the spatial distribution of the power fluxes at the divertor of ELM control by three-dimensional (3D) fields are evaluated and found to lead to substantial toroidal asymmetries in zones of the divertor target away from the separatrix. Therefore, specifications for the rotation of the 3D perturbation applied for ELM control in order to avoid excessive localized erosion of the ITER divertor target are derived. It is shown that a rotation frequency in excess of 1 Hz for the whole toroidally asymmetric divertor power flux pattern is required (corresponding to n Hz frequency in the variation of currents in the coils, where n is the toroidal symmetry of the perturbation applied) in order to avoid unacceptable thermal cycling of the divertor target for the highest power fluxes and worst toroidal power flux asymmetries expected. The possible use of the in-vessel vertical stability coils for ELM control as a back-up to the main ELM control systems in ITER is described and the feasibility of its application to control ELMs in low plasma current H-modes, foreseen for initial ITER operation, is evaluated and found to be viable for plasma currents up to 5-10 MA depending on modelling assumptions.

  13. Flexible Method for Developing Tactics, Techniques, and Procedures for Future Capabilities

    DTIC Science & Technology

    2009-02-01

    levels of ability, military experience, and motivation, (b) number and type of significant events, and (c) other sources of natural variability...research has developed a number of specific instruments designed to aid in this process. Second, the iterative, feed-forward nature of the method allows...FLEX method), but still lack the structured KE approach and iterative, feed-forward nature of the FLEX method. To facilitate decision making

  14. Technical Note: FreeCT_ICD: An Open Source Implementation of a Model-Based Iterative Reconstruction Method using Coordinate Descent Optimization for CT Imaging Investigations.

    PubMed

    Hoffman, John M; Noo, Frédéric; Young, Stefano; Hsieh, Scott S; McNitt-Gray, Michael

    2018-06-01

    To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational processing and storage requirements. FreeCT_ICD is an open source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially-proposed version, the rotating slices are themselves described using blobs. We have replaced this description by a unique model that relies on tri-linear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, which is a reconstruction program developed on the Linux platform using C++ libraries and the open source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner which consisted of an ACR accreditation phantom dataset and a clinical pediatric thoracic scan. For the ACR phantom, image quality was comparable to clinical reconstructions as well as reconstructions using open-source FreeCT_wFBP software. The pediatric thoracic scan also yielded acceptable results. In addition, we did not observe any deleterious impact in image quality associated with the utilization of rotating slices. These evaluations also demonstrated reasonable tradeoffs in storage requirements and computational demands. FreeCT_ICD is an open-source implementation of a model-based iterative reconstruction method that extends the capabilities of previously released open source reconstruction software and provides the ability to perform vendor-independent reconstructions of clinically acquired raw projection data. This implementation represents a reasonable tradeoff between storage and computational requirements and has demonstrated acceptable image quality in both simulated and clinical image datasets. This article is protected by copyright. All rights reserved.

  15. Development of FWIGPR, an open-source package for full-waveform inversion of common-offset GPR data

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.

    2017-12-01

    We introduce a package for full-waveform inversion (FWI) of Ground Penetrating Radar (GPR) data based on a combination of open-source programs. The FWI requires a good starting model, based on direct knowledge of field conditions or on traditional ray-based inversion methods. With a good starting model, the FWI can improve resolution of selected subsurface features. The package will be made available for general use in educational and research activities. The FWIGPR package consists of four main components: 3D to 2D data conversion, source wavelet estimation, forward modeling, and inversion. (These four components additionally require the development, by the user, of a good starting model.) A major challenge with GPR data is the unknown form of the waveform emitted by the transmitter held close to the ground surface. We apply a blind deconvolution method to estimate the source wavelet, based on a sparsity assumption about the reflectivity series of the subsurface model (Gholami and Sacchi 2012). The estimated wavelet is deconvolved from the data to obtain the sparsest reflectivity series with the fewest reflectors. The gprMax code (www.gprmax.com) is used as the forward modeling tool and the PEST parameter estimation package (www.pesthomepage.com) for the inversion. To reduce computation time, the field data are converted to an effective 2D equivalent, and the gprMax code can be run in 2D mode. In the first step, the user must create a good starting model of the data, presumably using ray-based methods. This estimated model is introduced to the FWI process as an initial model. Next, the 3D data are converted to 2D, and the user estimates the source wavelet that best fits the observed data under the sparsity assumption on the earth's response. Last, PEST runs gprMax with the initial model, calculates the misfit between the synthetic and observed data, and, using an iterative algorithm that calls gprMax several times in each iteration, finds successive models that better fit the data. To gauge whether the iterative process has arrived at a local or global minimum, the process can be repeated with a range of starting models. Tests have shown that this package can successfully improve estimates of selected subsurface model parameters for simple synthetic and real data. Ongoing research will focus on FWI of more complex scenarios.

  16. Iterative dataset optimization in automated planning: Implementation for breast and rectal cancer radiotherapy.

    PubMed

    Fan, Jiawei; Wang, Jiazhou; Zhang, Zhen; Hu, Weigang

    2017-06-01

    To develop a new automated treatment planning solution for breast and rectal cancer radiotherapy. The automated treatment planning solution developed in this study includes selection of the iteratively optimized training dataset, dose volume histogram (DVH) prediction for the organs at risk (OARs), and automatic generation of clinically acceptable treatment plans. The iteratively optimized training dataset is selected by an iterative optimization from 40 treatment plans for left-breast and rectal cancer patients who received radiation therapy. A two-dimensional kernel density estimation algorithm (denoted the two-parameter KDE), which incorporates two predictive features, was implemented to produce the predicted DVHs. Finally, 10 additional left-breast treatment plans are re-planned using the Pinnacle 3 Auto-Planning (AP) module (version 9.10, Philips Medical Systems) with the objective functions derived from the predicted DVH curves. Automatically generated re-optimized treatment plans are compared with the original manually optimized plans. By combining the iteratively optimized training dataset methodology and the two-parameter KDE prediction algorithm, our proposed automated planning strategy improves the accuracy of the DVH prediction. The automatically generated treatment plans using the objectives derived from the predicted DVHs can achieve better dose sparing for some OARs without compromising other metrics of plan quality. The proposed new automated treatment planning solution can be used to efficiently evaluate and improve the quality and consistency of the treatment plans for intensity-modulated breast and rectal cancer radiation therapy. © 2017 American Association of Physicists in Medicine.

  17. How good are the Garvey-Kelson predictions of nuclear masses?

    NASA Astrophysics Data System (ADS)

    Morales, Irving O.; López Vieyra, J. C.; Hirsch, J. G.; Frank, A.

    2009-09-01

    The Garvey-Kelson relations are used in an iterative process to predict nuclear masses in the neighborhood of nuclei with measured masses. Average errors in the predicted masses for the first three iteration shells are smaller than those obtained with the best nuclear mass models. Their quality is comparable with the Audi-Wapstra extrapolations, offering a simple and reproducible procedure for short-range mass predictions. A systematic study of the way the error grows as a function of the iteration and of the distance to the region of known masses shows that a correlation exists between the error and the residual neutron-proton interaction, produced mainly by the implicit assumption that V varies smoothly along the nuclear landscape.

  18. Review of the ITER diagnostics suite for erosion, deposition, dust and tritium measurements

    NASA Astrophysics Data System (ADS)

    Reichle, R.; Andrew, P.; Bates, P.; Bede, O.; Casal, N.; Choi, C. H.; Barnsley, R.; Damiani, C.; Bertalot, L.; Dubus, G.; Ferreol, J.; Jagannathan, G.; Kocan, M.; Leipold, F.; Lisgo, S. W.; Martin, V.; Palmer, J.; Pearce, R.; Philipps, V.; Pitts, R. A.; Pampin, R.; Passedat, G.; Puiu, A.; Suarez, A.; Shigin, P.; Shu, W.; Vayakis, G.; Veshchev, E.; Walsh, M.

    2015-08-01

    Dust and tritium inventories in the vacuum vessel have upper limits in ITER that are set by nuclear safety requirements. Erosion, migration and re-deposition of wall material together with fuel co-deposition will be largely responsible for these inventories. The diagnostic suite required to monitor these processes, along with the set of the corresponding measurement requirements, is currently under review given the recent decision by the ITER Organization to eliminate the first carbon/tungsten (C/W) divertor and begin operations with a full-W variant (Pitts et al [1]). This paper presents the result of this review as well as the status of the chosen diagnostics.

  19. Multiple solution of linear algebraic systems by an iterative method with recomputed preconditioner in the analysis of microstrip structures

    NASA Astrophysics Data System (ADS)

    Ahunov, Roman R.; Kuksenko, Sergey P.; Gazizov, Talgat R.

    2016-06-01

    The multiple solution of linear algebraic systems with a dense matrix by iterative methods is considered. To accelerate the process, recomputation of the preconditioning matrix is used. An a priori criterion for the recomputation, based on the change of the arithmetic mean of the solution times during the multiple solution, is proposed. To confirm the effectiveness of the proposed approach, numerical experiments using the iterative methods BiCGStab and CGS for four different sets of matrices on two examples of microstrip structures are carried out. For the solution of 100 linear systems, an acceleration of up to 1.6 times compared to the approach without recomputation is obtained.
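
    A hedged sketch of such a recomputation criterion is given below: an ILU preconditioner is reused across a sequence of systems and rebuilt whenever the latest solve takes longer than the arithmetic mean of the solve times accumulated since the preconditioner was last built. SciPy's BiCGStab and ILU routines stand in for the solvers used in the paper, and the drifting toy matrices are invented for illustration.

        import time
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import LinearOperator, bicgstab, spilu

        def solve_sequence(matrices, b):
            solutions, times, M = [], [], None
            for A in matrices:
                if M is None:                              # (re)build the ILU preconditioner
                    ilu = spilu(sp.csc_matrix(A))
                    M = LinearOperator(A.shape, matvec=ilu.solve)
                    times = []
                t0 = time.perf_counter()
                x, info = bicgstab(sp.csr_matrix(A), b, M=M)
                dt = time.perf_counter() - t0
                solutions.append(x)
                # a priori criterion: rebuild when the current solve time exceeds the
                # arithmetic mean of the solve times seen with this preconditioner
                if times and dt > np.mean(times):
                    M = None
                times.append(dt)
            return solutions

        # toy usage: a slowly drifting sequence of diagonally dominant systems
        rng = np.random.default_rng(1)
        base = rng.random((200, 200)) + 200 * np.eye(200)
        mats = [base + 0.05 * k * rng.random((200, 200)) for k in range(20)]
        sols = solve_sequence(mats, np.ones(200))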

  20. Iterative-method performance evaluation for multiple vectors associated with a large-scale sparse matrix

    NASA Astrophysics Data System (ADS)

    Imamura, Seigo; Ono, Kenji; Yokokawa, Mitsuo

    2016-07-01

    Ensemble computing, which is an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems associated with a common coefficient matrix. We improve the performance of iterative solvers for multiple vectors by solving them at the same time, that is, by solving for the product of the matrices. We implemented several iterative methods and compared their performance. The maximum performance on Sparc VIIIfx was 7.6 times higher than that of a naïve implementation. Finally, to deal with the different convergence processes of linear systems, we introduced a control method to eliminate the calculation of already converged vectors.
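
    The following sketch (assuming a symmetric positive definite matrix and a plain conjugate gradient method, not necessarily the solvers benchmarked in the paper) shows how several right-hand sides can be advanced together as matrix-matrix products while columns whose residuals have converged are excluded from further updates.

        import numpy as np

        def cg_multi(A, B, tol=1e-8, max_iter=500):
            """Solve A X = B column-wise with CG, skipping converged columns."""
            X = np.zeros_like(B)
            R = B - A @ X
            P = R.copy()
            rs = np.sum(R * R, axis=0)                 # squared residual norms
            for _ in range(max_iter):
                active = rs > (tol * np.linalg.norm(B, axis=0))**2
                if not active.any():
                    break
                AP = A @ P[:, active]
                alpha = rs[active] / np.sum(P[:, active] * AP, axis=0)
                X[:, active] += P[:, active] * alpha
                R[:, active] -= AP * alpha
                rs_new = np.sum(R[:, active] * R[:, active], axis=0)
                P[:, active] = R[:, active] + (rs_new / rs[active]) * P[:, active]
                rs[active] = rs_new
            return X

        # toy usage: one SPD coefficient matrix, three right-hand sides
        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        B = np.array([[1.0, 2.0, 0.5], [2.0, 1.0, 1.5]])
        X = cg_multi(A, B)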

  1. Further investigation on "A multiplicative regularization for force reconstruction"

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.

  2. Deep learning methods to guide CT image reconstruction and reduce metal artifacts

    NASA Astrophysics Data System (ADS)

    Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge

    2017-03-01

    The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.

  3. Dynamical coupling between magnetic equilibrium and transport in tokamak scenario modelling, with application to current ramps

    NASA Astrophysics Data System (ADS)

    Fable, E.; Angioni, C.; Ivanov, A. A.; Lackner, K.; Maj, O.; Medvedev, S. Yu; Pautasso, G.; Pereverzev, G. V.; Treutterer, W.; the ASDEX Upgrade Team

    2013-07-01

    The modelling of tokamak scenarios requires the simultaneous solution of both the time evolution of the plasma kinetic profiles and of the magnetic equilibrium. Their dynamical coupling involves additional complications, which are not present when the two physical problems are solved separately. Difficulties arise in maintaining consistency in the time evolution among quantities which appear in both the transport and the Grad-Shafranov equations, specifically the poloidal and toroidal magnetic fluxes as a function of each other and of the geometry. The required consistency can be obtained by means of iteration cycles, which are performed outside the equilibrium code and which can have different convergence properties depending on the chosen numerical scheme. When these external iterations are performed, the stability of the coupled system becomes a concern. In contrast, if these iterations are not performed, the coupled system is numerically stable, but can become physically inconsistent. By employing a novel scheme (Fable E et al 2012 Nucl. Fusion submitted), which ensures stability and physical consistency among the same quantities that appear in both the transport and magnetic equilibrium equations, a newly developed version of the ASTRA transport code (Pereverzev G V et al 1991 IPP Report 5/42), which is coupled to the SPIDER equilibrium code (Ivanov A A et al 2005 32nd EPS Conf. on Plasma Physics (Tarragona, 27 June-1 July) vol 29C (ECA) P-5.063), in both prescribed- and free-boundary modes is presented here for the first time. The ASTRA-SPIDER coupled system is then applied to the specific study of the modelling of controlled current ramp-up in ASDEX Upgrade discharges.

  4. Marky: a tool supporting annotation consistency in multi-user and iterative document annotation projects.

    PubMed

    Pérez-Pérez, Martín; Glez-Peña, Daniel; Fdez-Riverola, Florentino; Lourenço, Anália

    2015-02-01

    Document annotation is a key task in the development of Text Mining methods and applications. High quality annotated corpora are invaluable, but their preparation requires a considerable amount of resources and time. Although the existing annotation tools offer good user interaction interfaces to domain experts, project management and quality control abilities are still limited. Therefore, the current work introduces Marky, a new Web-based document annotation tool equipped to manage multi-user and iterative projects, and to evaluate annotation quality throughout the project life cycle. At the core, Marky is a Web application based on the open source CakePHP framework. User interface relies on HTML5 and CSS3 technologies. Rangy library assists in browser-independent implementation of common DOM range and selection tasks, and Ajax and JQuery technologies are used to enhance user-system interaction. Marky grants solid management of inter- and intra-annotator work. Most notably, its annotation tracking system supports systematic and on-demand agreement analysis and annotation amendment. Each annotator may work over documents as usual, but all the annotations made are saved by the tracking system and may be further compared. So, the project administrator is able to evaluate annotation consistency among annotators and across rounds of annotation, while annotators are able to reject or amend subsets of annotations made in previous rounds. As a side effect, the tracking system minimises resource and time consumption. Marky is a novel environment for managing multi-user and iterative document annotation projects. Compared to other tools, Marky offers a similar visually intuitive annotation experience while providing unique means to minimise annotation effort and enforce annotation quality, and therefore corpus consistency. Marky is freely available for non-commercial use at http://sing.ei.uvigo.es/marky. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. Numerical methods for solving moment equations in kinetic theory of neuronal network dynamics

    NASA Astrophysics Data System (ADS)

    Rangan, Aaditya V.; Cai, David; Tao, Louis

    2007-02-01

    Recently developed kinetic theory and related closures for neuronal network dynamics have been demonstrated to be a powerful theoretical framework for investigating coarse-grained dynamical properties of neuronal networks. The moment equations arising from the kinetic theory are a system of (1 + 1)-dimensional nonlinear partial differential equations (PDE) on a bounded domain with nonlinear boundary conditions. The PDEs themselves are self-consistently specified by parameters which are functions of the boundary values of the solution. The moment equations can be stiff in space and time. Numerical methods are presented here for efficiently and accurately solving these moment equations. The essential ingredients in our numerical methods include: (i) the system is discretized in time with an implicit Euler method within a spectral deferred correction framework, therefore, the PDEs of the kinetic theory are reduced to a sequence, in time, of boundary value problems (BVPs) with nonlinear boundary conditions; (ii) a set of auxiliary parameters is introduced to recast the original BVP with nonlinear boundary conditions as BVPs with linear boundary conditions - with additional algebraic constraints on the auxiliary parameters; (iii) a careful combination of two Newton's iterates for the nonlinear BVP with linear boundary condition, interlaced with a Newton's iterate for solving the associated algebraic constraints is constructed to achieve quadratic convergence for obtaining the solutions with self-consistent parameters. It is shown that a simple fixed-point iteration can only achieve a linear convergence for the self-consistent parameters. The practicability and efficiency of our numerical methods for solving the moment equations of the kinetic theory are illustrated with numerical examples. It is further demonstrated that the moment equations derived from the kinetic theory of neuronal network dynamics can very well capture the coarse-grained dynamical properties of integrate-and-fire neuronal networks.
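
    The contrast between the interlaced Newton iterates and a plain fixed-point iteration can be illustrated on a toy scalar self-consistency problem a = g(a); the example below is not the paper's solver, only a demonstration of linear versus quadratic convergence of the self-consistent parameter.

        import numpy as np

        g = np.cos      # stand-in for "solve the BVP, read off a boundary value"

        def fixed_point(a0, n):
            a = a0
            for _ in range(n):
                a = g(a)                # linear convergence
            return a

        def newton(a0, n, h=1e-6):
            a = a0
            for _ in range(n):
                f = a - g(a)
                fp = 1.0 - (g(a + h) - g(a - h)) / (2 * h)   # finite-difference derivative
                a -= f / fp             # quadratic convergence
            return a

        a_star = newton(1.0, 20)
        for k in (2, 4, 8):
            print(k, abs(fixed_point(1.0, k) - a_star), abs(newton(1.0, k) - a_star))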

  6. Quantum learning of classical stochastic processes: The completely positive realization problem

    NASA Astrophysics Data System (ADS)

    Monràs, Alex; Winter, Andreas

    2016-01-01

    Among several tasks in Machine Learning, an especially important one is the problem of inferring the latent variables of a system and their causal relations with the observed behavior. A paradigmatic instance of this is the task of inferring the hidden Markov model underlying a given stochastic process. This is known as the positive realization problem (PRP) [L. Benvenuti and L. Farina, IEEE Trans. Autom. Control 49(5), 651-664 (2004)] and constitutes a central problem in machine learning. The PRP and its solutions have far-reaching consequences in many areas of systems and control theory, and the PRP is nowadays an important piece in the broad field of positive systems theory. We consider the scenario where the latent variables are quantum (i.e., quantum states of a finite-dimensional system) and the system dynamics is constrained only by physical transformations on the quantum system. The observable dynamics is then described by a quantum instrument, and the task is to determine which quantum instrument, if any, yields the process at hand by iterative application. We take as a starting point the theory of quasi-realizations, whence a description of the dynamics of the process is given in terms of linear maps on state vectors and probabilities are given by linear functionals on the state vectors. This description, despite its remarkable resemblance with the hidden Markov model, or the iterated quantum instrument, is however devoid of any stochastic or quantum mechanical interpretation, as said maps fail to satisfy any positivity conditions. The completely positive realization problem then consists in determining whether an equivalent quantum mechanical description of the same process exists. We generalize some key results of stochastic realization theory, and show that the problem has deep connections with operator systems theory, giving possible insight into the lifting problem in quotient operator systems. Our results have potential applications in quantum machine learning, device-independent characterization and reverse-engineering of stochastic processes and quantum processors, and more generally, of dynamical processes with quantum memory [M. Guţă, Phys. Rev. A 83(6), 062324 (2011); M. Guţă and N. Yamamoto, e-print arXiv:1303.3771 (2013)].

  7. Data Integration Tool: From Permafrost Data Translation Research Tool to A Robust Research Application

    NASA Astrophysics Data System (ADS)

    Wilcox, H.; Schaefer, K. M.; Jafarov, E. E.; Strawhacker, C.; Pulsifer, P. L.; Thurmes, N.

    2016-12-01

    The United States National Science Foundation-funded PermaData project, led by the National Snow and Ice Data Center (NSIDC) with a team from the Global Terrestrial Network for Permafrost (GTN-P), aimed to improve permafrost data access and discovery. We developed a Data Integration Tool (DIT) to significantly speed up the time of manual processing needed to translate inconsistent, scattered historical permafrost data into files ready to ingest directly into the GTN-P. We leverage this data to support science research and policy decisions. DIT is a workflow manager that divides data preparation and analysis into a series of steps or operations called widgets. Each widget does a specific operation, such as read, multiply by a constant, sort, plot, and write data. DIT allows the user to select and order the widgets as desired to meet their specific needs. Originally it was written to capture a scientist's personal, iterative data manipulation and quality control process of visually and programmatically iterating through inconsistent input data, examining it to find problems, adding operations to address the problems, and rerunning until the data could be translated into the GTN-P standard format. Iterative development of this tool led first to a Fortran/Python hybrid and then, with consideration of users, licensing, version control, packaging, and workflow, to a publicly available, robust, usable application. Transitioning to Python allowed the use of open source frameworks for the workflow core and integration with a JavaScript graphical workflow interface. DIT is targeted to automatically handle 90% of the data processing for field scientists, modelers, and non-discipline scientists. It is available as an open source tool on GitHub, packaged for a subset of Mac, Windows, and UNIX systems as a desktop application with a graphical workflow manager. DIT was used to completely translate one dataset (133 sites) that was successfully added to GTN-P, nearly translate three datasets (270 sites), and is scheduled to translate 10 more datasets (~1000 sites) from the legacy inactive site data holdings of the Frozen Ground Data Center (FGDC). Iterative development has provided the permafrost and wider scientific community with an extendable tool designed specifically for the iterative process of translating unruly data.

  8. Some error bounds for K-iterated Gaussian recursive filters

    NASA Astrophysics Data System (ADS)

    Cuomo, Salvatore; Galletti, Ardelio; Giunta, Giulio; Marcellino, Livia

    2016-10-01

    Recursive filters (RFs) have achieved a central role in several research fields over the last few years. For example, they are used in image processing, in data assimilation and in electrocardiogram denoising. In particular, among RFs, the Gaussian RFs are an efficient computational tool for approximating Gaussian-based convolutions and are suitable for digital image processing and applications of scale-space theory. As is common knowledge, Gaussian RFs applied to signals with support in a finite domain generate distortions and artifacts, mostly localized at the boundaries. Heuristic and theoretical improvements have been proposed in the literature to deal with this issue (namely, boundary conditions). They include the case in which a Gaussian RF is applied more than once, i.e. the so-called K-iterated Gaussian RFs. In this paper, starting from a summary of the comprehensive mathematical background, we consider the case of the K-iterated first-order Gaussian RF and provide a study of its numerical stability and some component-wise theoretical error bounds.
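
    A minimal sketch of a K-iterated first-order Gaussian recursive filter is shown below: one forward and one backward sweep per iteration, repeated K times, with zero boundary values (the boundary treatment is exactly where the distortions discussed above arise). The smoothing coefficient and the boundary handling are illustrative choices, not those analysed in the paper.

        import numpy as np

        def gaussian_rf(signal, alpha, K=1):
            s = np.asarray(signal, dtype=float).copy()
            beta = 1.0 - alpha
            for _ in range(K):
                prev = 0.0                            # forward sweep, zero boundary
                for i in range(len(s)):
                    prev = alpha * prev + beta * s[i]
                    s[i] = prev
                prev = 0.0                            # backward sweep, zero boundary
                for i in range(len(s) - 1, -1, -1):
                    prev = alpha * prev + beta * s[i]
                    s[i] = prev
            return s

        spike = np.zeros(101); spike[50] = 1.0
        smoothed = gaussian_rf(spike, alpha=0.7, K=3)   # approaches a Gaussian bell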

  9. Iterative reactions of transient boronic acids enable sequential C-C bond formation

    NASA Astrophysics Data System (ADS)

    Battilocchio, Claudio; Feist, Florian; Hafner, Andreas; Simon, Meike; Tran, Duc N.; Allwood, Daniel M.; Blakemore, David C.; Ley, Steven V.

    2016-04-01

    The ability to form multiple carbon-carbon bonds in a controlled sequence and thus rapidly build molecular complexity in an iterative fashion is an important goal in modern chemical synthesis. In recent times, transition-metal-catalysed coupling reactions have dominated in the development of C-C bond forming processes. A desire to reduce the reliance on precious metals and a need to obtain products with very low levels of metal impurities has brought a renewed focus on metal-free coupling processes. Here, we report the in situ preparation of reactive allylic and benzylic boronic acids, obtained by reacting flow-generated diazo compounds with boronic acids, and their application in controlled iterative C-C bond forming reactions is described. Thus far we have shown the formation of up to three C-C bonds in a sequence including the final trapping of a reactive boronic acid species with an aldehyde to generate a range of new chemical structures.

  10. Modeling Data Containing Outliers using ARIMA Additive Outlier (ARIMA-AO)

    NASA Astrophysics Data System (ADS)

    Saleh Ahmar, Ansari; Guritno, Suryo; Abdurakhman; Rahman, Abdul; Awi; Alimuddin; Minggi, Ilham; Arif Tiro, M.; Kasim Aidid, M.; Annas, Suwardi; Utami Sutiksno, Dian; Ahmar, Dewi S.; Ahmar, Kurniawan H.; Abqary Ahmar, A.; Zaki, Ahmad; Abdullah, Dahlan; Rahim, Robbi; Nurdiyanto, Heri; Hidayat, Rahmat; Napitupulu, Darmawan; Simarmata, Janner; Kurniasih, Nuning; Andretti Abdillah, Leon; Pranolo, Andri; Haviluddin; Albra, Wahyudin; Arifin, A. Nurani M.

    2018-01-01

    The aim of this study is to discuss the detection and correction of data containing additive outliers (AO) in the ARIMA(p, d, q) model. The detection and correction of the data use an iterative procedure popularized by Box, Jenkins, and Reinsel (1994). Using this method, an ARIMA model is fitted to the data containing AO; the coefficients obtained from the iterative regression procedure are then added to the original ARIMA model. For the simulated data containing AO, the initial model is ARIMA(2,0,0) with MSE = 36.780; after detection and correction, the iteration yields an ARIMA(2,0,0) model with the regression coefficients Z_t = 0.106 + 0.204Z_{t-1} + 0.401Z_{t-2} - 329X_1(t) + 115X_2(t) + 35.9X_3(t) and MSE = 19.365. This shows an improvement in the forecasting error.
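
    A hedged sketch of one iterative AO procedure is given below: fit an ARIMA(2,0,0) model, flag the observation with the largest standardized residual as an additive outlier, add a pulse regressor for it, refit, and repeat until no residual exceeds a critical value. The statsmodels ARIMA class is assumed, and the threshold and test data are illustrative, not taken from the paper.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        def detect_additive_outliers(y, order=(2, 0, 0), crit=3.5, max_rounds=5):
            y = np.asarray(y, dtype=float)
            pulses = np.empty((len(y), 0))
            for _ in range(max_rounds):
                exog = pulses if pulses.shape[1] else None
                res = ARIMA(y, order=order, exog=exog).fit()
                z = res.resid / res.resid.std()
                t = int(np.argmax(np.abs(z)))
                if abs(z[t]) < crit:
                    break
                pulse = np.zeros((len(y), 1)); pulse[t, 0] = 1.0   # AO indicator at time t
                pulses = np.hstack([pulses, pulse])
            return res, pulses

        # toy usage: AR(2) data with one injected additive outlier
        rng = np.random.default_rng(0)
        e = rng.normal(size=200)
        y = np.zeros(200)
        for t in range(2, 200):
            y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + e[t]
        y[120] += 8.0
        res, pulses = detect_additive_outliers(y)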

  11. Iterative metal artifact reduction for x-ray computed tomography using unmatched projector/backprojector pairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hanming; Wang, Linyuan; Li, Lei

    2016-06-15

    Purpose: Metal artifact reduction (MAR) is a major problem and a challenging issue in x-ray computed tomography (CT) examinations. Iterative reconstruction from sinograms unaffected by metals shows promising potential in detail recovery. This reconstruction has been the subject of much research in recent years. However, conventional iterative reconstruction methods easily introduce new artifacts around metal implants because of incomplete data reconstruction and inconsistencies in practical data acquisition. Hence, this work aims at developing a method to suppress newly introduced artifacts and improve the image quality around metal implants for the iterative MAR scheme. Methods: The proposed method consists of two steps based on the general iterative MAR framework. An uncorrected image is initially reconstructed, and the corresponding metal trace is obtained. The iterative reconstruction method is then used to reconstruct images from the unaffected sinogram. In the reconstruction step of this work, an iterative strategy utilizing unmatched projector/backprojector pairs is used. A ramp filter is introduced into the back-projection procedure to restrain the inconsistency components in low frequencies and generate more reliable images of the regions around metals. Furthermore, a constrained total variation (TV) minimization model is also incorporated to enhance efficiency. The proposed strategy is implemented based on an iterative FBP and an alternating direction minimization (ADM) scheme, respectively. The developed algorithms are referred to as “iFBP-TV” and “TV-FADM,” respectively. Two projection-completion-based MAR methods and three iterative MAR methods are performed simultaneously for comparison. Results: The proposed method performs reasonably on both simulation and real CT-scanned datasets. This approach could reduce streak metal artifacts effectively and avoid the mentioned effects in the vicinity of the metals. The improvements are evaluated by inspecting regions of interest and by comparing the root-mean-square errors, normalized mean absolute distance, and universal quality index metrics of the images. Both iFBP-TV and TV-FADM methods outperform other counterparts in all cases. Unlike the conventional iterative methods, the proposed strategy utilizing unmatched projector/backprojector pairs shows excellent performance in detail preservation and prevention of the introduction of new artifacts. Conclusions: Qualitative and quantitative evaluations of experimental results indicate that the developed method outperforms classical MAR algorithms in suppressing streak artifacts and preserving the edge structural information of the object. In particular, structures lying close to metals can be gradually recovered because of the reduction of artifacts caused by inconsistency effects.
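
    The overall structure of an unmatched projector/backprojector iteration can be sketched as follows; here small toy matrices stand in for the forward projector A and the back-projector B (which in the actual algorithm includes the ramp filter), and a simple non-negativity clip stands in for the constrained TV step. The matrices and parameters are invented for illustration only.

        import numpy as np

        def unmatched_iteration(A, B, y, n_iter=200, step=0.5):
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x + step * (B @ (y - A @ x))   # iterative-FBP-style update
                x = np.clip(x, 0.0, None)          # simple constraint in place of TV
            return x

        rng = np.random.default_rng(0)
        A = rng.random((40, 20))                   # toy forward projector
        B = A.T / np.linalg.norm(A, 2)**2          # scaled transpose as an approximate back-projector
        x_true = np.abs(rng.random(20))
        x_rec = unmatched_iteration(A, B, A @ x_true)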

  12. Using iterative learning to improve understanding during the informed consent process in a South African psychiatric genomics study.

    PubMed

    Campbell, Megan M; Susser, Ezra; Mall, Sumaya; Mqulwana, Sibonile G; Mndini, Michael M; Ntola, Odwa A; Nagdee, Mohamed; Zingela, Zukiswa; Van Wyk, Stephanus; Stein, Dan J

    2017-01-01

    Obtaining informed consent is a great challenge in global health research. There is a need for tools that can screen for and improve potential research participants' understanding of the research study at the time of recruitment. Limited empirical research has been conducted in low and middle income countries, evaluating informed consent processes in genomics research. We sought to investigate the quality of informed consent obtained in a South African psychiatric genomics study. A Xhosa language version of the University of California, San Diego Brief Assessment of Capacity to Consent Questionnaire (UBACC) was used to screen for capacity to consent and improve understanding through iterative learning in a sample of 528 Xhosa people with schizophrenia and 528 controls. We address two questions: firstly, whether research participants' understanding of the research study improved through iterative learning; and secondly, what were predictors for better understanding of the research study at the initial screening? During screening 290 (55%) cases and 172 (33%) controls scored below the 14.5 cut-off for acceptable understanding of the research study elements, however after iterative learning only 38 (7%) cases and 13 (2.5%) controls continued to score below this cut-off. Significant variables associated with increased understanding of the consent included the psychiatric nurse recruiter conducting the consent screening, higher participant level of education, and being a control. The UBACC proved an effective tool to improve understanding of research study elements during consent, for both cases and controls. The tool holds utility for complex studies such as those involving genomics, where iterative learning can be used to make significant improvements in understanding of research study elements. The UBACC may be particularly important in groups with severe mental illness and lower education levels. Study recruiters play a significant role in managing the quality of the informed consent process.

  13. A methodology for accident analysis of fusion breeder blankets and its application to helium-cooled lead–lithium blanket

    DOE PAGES

    Panayotov, Dobromir; Poitevin, Yves; Grief, Andrew; ...

    2016-09-23

    'Fusion for Energy' (F4E) is designing, developing, and implementing the European Helium-Cooled Lead-Lithium (HCLL) and Helium-Cooled Pebble-Bed (HCPB) Test Blanket Systems (TBSs) for ITER (Nuclear Facility INB-174). Safety demonstration is an essential element for the integration of these TBSs into ITER and accident analysis is one of its critical components. A systematic approach to accident analysis has been developed under the F4E contract on TBS safety analyses. F4E technical requirements, together with Amec Foster Wheeler and INL efforts, have resulted in a comprehensive methodology for fusion breeding blanket accident analysis that addresses the specificity of the breeding blanket designs, materials, and phenomena while remaining consistent with the approach already applied to ITER accident analyses. Furthermore, the methodology phases are illustrated in the paper by its application to the EU HCLL TBS using both MELCOR and RELAP5 codes.

  14. A stepladder approach to a tokamak fusion power plant

    NASA Astrophysics Data System (ADS)

    Zohm, H.; Träuble, F.; Biel, W.; Fable, E.; Kemp, R.; Lux, H.; Siccinio, M.; Wenninger, R.

    2017-08-01

    We present an approach to design in a consistent way a stepladder connecting ITER, DEMO and an FPP, starting from an attractive FPP and then locating DEMO such that the main similarity parameters for the core scenario are constant. The approach presented suggests how to use ITER such that DEMO can be extrapolated with maximum confidence, and a development path for plasma scenarios in ITER follows from our approach, moving from the low β_N and q typical of the present Q = 10 scenario to the higher values needed for steady state. A numerical example is given, indicative of the feasibility of the approach, and it is backed up by more detailed 1.5-D calculations using the ASTRA code. We note that ideal MHD stability analysis of the DEMO operating point indicates that it is located between the no-wall and the ideal-wall β-limit, which may require active stabilization. The DEMO design could also be a pulsed fallback solution should stationary operation turn out to be impossible.

  15. Optimal control of a coupled partial and ordinary differential equations system for the assimilation of polarimetry Stokes vector measurements in tokamak free-boundary equilibrium reconstruction with application to ITER

    NASA Astrophysics Data System (ADS)

    Faugeras, Blaise; Blum, Jacques; Heumann, Holger; Boulbe, Cédric

    2017-08-01

    The modelization of polarimetry Faraday rotation measurements commonly used in tokamak plasma equilibrium reconstruction codes is an approximation to the Stokes model. This approximation is not valid for the foreseen ITER scenarios where high current and electron density plasma regimes are expected. In this work a method enabling the consistent resolution of the inverse equilibrium reconstruction problem in the framework of non-linear free-boundary equilibrium coupled to the Stokes model equation for polarimetry is provided. Using optimal control theory we derive the optimality system for this inverse problem. A sequential quadratic programming (SQP) method is proposed for its numerical resolution. Numerical experiments with noisy synthetic measurements in the ITER tokamak configuration for two test cases, the second of which is an H-mode plasma, show that the method is efficient and that the accuracy of the identification of the unknown profile functions is improved compared to the use of classical Faraday measurements.

  16. Fast and Epsilon-Optimal Discretized Pursuit Learning Automata.

    PubMed

    Zhang, JunQi; Wang, Cheng; Zhou, MengChu

    2015-10-01

    Learning automata (LA) are powerful tools for reinforcement learning. A discretized pursuit LA is the most popular one among them. During an iteration its operation consists of three basic phases: 1) selecting the next action; 2) finding the optimal estimated action; and 3) updating the state probability. However, when the number of actions is large, the learning becomes extremely slow because there are too many updates to be made at each iteration. The increased updates are mostly from phases 1 and 3. A new fast discretized pursuit LA with assured ε-optimality is proposed to perform both phases 1 and 3 with the computational complexity independent of the number of actions. Apart from its low computational complexity, it achieves faster convergence speed than the classical one when operating in stationary environments. This paper can promote the applications of LA toward the large-scale-action oriented area that requires efficient reinforcement learning tools with assured ε-optimality, fast convergence speed, and low computational complexity for each iteration.
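
    A sketch of a classical discretized pursuit learning automaton in a stationary environment is given below; the phases per iteration are sampling an action, updating its reward estimate, and moving probability mass in discrete steps toward the action with the best estimate. This follows the standard scheme rather than the accelerated method proposed in the paper, and the environment is an invented vector of reward probabilities.

        import numpy as np

        def pursuit_la(reward_probs, resolution=100, n_steps=20000, seed=0):
            rng = np.random.default_rng(seed)
            r = len(reward_probs)
            p = np.full(r, 1.0 / r)            # action probabilities
            est = np.zeros(r)                  # running-mean reward estimates
            counts = np.zeros(r)
            delta = 1.0 / (r * resolution)     # discretized step size
            for _ in range(n_steps):
                a = rng.choice(r, p=p)                          # phase 1: select action
                reward = rng.random() < reward_probs[a]
                counts[a] += 1
                est[a] += (reward - est[a]) / counts[a]
                best = int(np.argmax(est))                      # phase 2: best estimate
                if reward:                                      # phase 3: discrete update
                    p = np.maximum(p - delta, 0.0)
                    p[best] = 0.0
                    p[best] = 1.0 - p.sum()
                p /= p.sum()
            return int(np.argmax(p))

        best_action = pursuit_la([0.3, 0.5, 0.8, 0.4])          # should converge to action 2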

  17. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    NASA Astrophysics Data System (ADS)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-05-01

    The flow shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received most of the attention, while the non-permutation problem (non-PFSP with time lags) seems to have been neglected. With the aim of minimizing the makespan while satisfying the time lag constraints, efficient algorithms for the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within nearly 11% of the computational time of the traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time of the traditional GA approach. The proposed research combines the PFSP and non-PFSP with consideration of minimal and maximal time lags, which provides an interesting viewpoint for industrial implementation.
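
    The basic iterated greedy loop can be sketched as follows for the plain permutation flow shop (the time-lag constraints of the paper are omitted for brevity): remove a few jobs at random, reinsert each at the position giving the smallest makespan, and keep the new sequence if it improves. The instance data and parameters are illustrative only.

        import random

        def makespan(seq, proc):               # proc[job][machine] = processing time
            m = len(proc[0])
            c = [0.0] * m                      # completion time of the latest job on each machine
            for j in seq:
                c[0] += proc[j][0]
                for k in range(1, m):
                    c[k] = max(c[k], c[k - 1]) + proc[j][k]
            return c[-1]

        def iterated_greedy(proc, d=2, n_iter=500, seed=0):
            random.seed(seed)
            best = sorted(range(len(proc)), key=lambda j: -sum(proc[j]))   # simple seed order
            for _ in range(n_iter):
                partial = best[:]
                removed = [partial.pop(random.randrange(len(partial))) for _ in range(d)]
                for j in removed:              # greedy re-insertion at the best position
                    pos = min(range(len(partial) + 1),
                              key=lambda i: makespan(partial[:i] + [j] + partial[i:], proc))
                    partial.insert(pos, j)
                if makespan(partial, proc) < makespan(best, proc):
                    best = partial
            return best, makespan(best, proc)

        proc = [[3, 6, 2], [5, 1, 4], [2, 4, 6], [4, 3, 3], [6, 2, 1]]     # 5 jobs x 3 machines
        seq, cmax = iterated_greedy(proc)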

  18. a Weighted Closed-Form Solution for Rgb-D Data Registration

    NASA Astrophysics Data System (ADS)

    Vestena, K. M.; Dos Santos, D. R.; Oilveira, E. M., Jr.; Pavan, N. L.; Khoshelham, K.

    2016-06-01

    Existing approaches to 3D indoor mapping with RGB-D data are predominantly point-based and feature-based methods. In most cases the iterative closest point (ICP) algorithm and its variants are used for the pairwise registration process. Considering that the ICP algorithm requires a relatively accurate initial transformation and high overlap, a weighted closed-form solution for RGB-D data registration is proposed. In this solution, we weight and normalize the 3D points based on their theoretical random errors, and dual-number quaternions are used to represent the 3D rigid body motion. Basically, dual-number quaternions provide a closed-form solution by minimizing a cost function. The most important advantage of the closed-form solution is that it provides the optimal transformation in one step, it does not need good initial estimates, and it markedly decreases the demand for computer resources in contrast to the iterative method. First, our method exploits RGB information. We employ the scale-invariant feature transform (SIFT) for extracting, detecting, and matching features. It is able to detect and describe local features that are invariant to scaling and rotation. To detect and filter outliers, we use the random sample consensus (RANSAC) algorithm together with a statistical dispersion measure, the interquartile range (IQR). Afterwards, a new RGB-D loop-closure solution is implemented based on the volumetric information between pairs of point clouds and the dispersion of the random errors. Loop closure consists of recognizing when the sensor revisits some region. Finally, a globally consistent map is created to minimize the registration errors via a graph-based optimization. The effectiveness of the proposed method is demonstrated with a Kinect dataset. The experimental results show that the proposed method can properly map an indoor environment with an absolute accuracy of around 1.5% of the travelled trajectory.
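
    As an illustration of a weighted closed-form alignment of matched 3-D point pairs, the sketch below uses the weighted SVD (Kabsch) solution rather than the dual-number quaternion formulation of the paper; the weights stand in for the inverse of the per-point theoretical random errors, and the test data are invented.

        import numpy as np

        def weighted_rigid_transform(src, dst, w):
            """Return R, t minimizing sum_i w_i ||R src_i + t - dst_i||^2."""
            w = w / w.sum()
            mu_s = (w[:, None] * src).sum(axis=0)
            mu_d = (w[:, None] * dst).sum(axis=0)
            H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)    # weighted cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
            R = Vt.T @ D @ U.T
            t = mu_d - R @ mu_s
            return R, t

        # toy usage: recover a known rotation and translation from matched points
        rng = np.random.default_rng(0)
        src = rng.random((100, 3))
        th = 0.3
        R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                           [np.sin(th),  np.cos(th), 0.0],
                           [0.0, 0.0, 1.0]])
        dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
        R, t = weighted_rigid_transform(src, dst, w=np.ones(100))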

  19. Usability and feasibility of a tablet-based Decision-Support and Integrated Record-keeping (DESIRE) tool in the nurse management of hypertension in rural western Kenya.

    PubMed

    Vedanthan, Rajesh; Blank, Evan; Tuikong, Nelly; Kamano, Jemima; Misoi, Lawrence; Tulienge, Deborah; Hutchinson, Claire; Ascheim, Deborah D; Kimaiyo, Sylvester; Fuster, Valentin; Were, Martin C

    2015-03-01

    Mobile health (mHealth) applications have recently proliferated, especially in low- and middle-income countries, complementing task-redistribution strategies with clinical decision support. Relatively few studies address usability and feasibility issues that may impact success or failure of implementation, and few have been conducted for non-communicable diseases such as hypertension. To conduct iterative usability and feasibility testing of a tablet-based Decision Support and Integrated Record-keeping (DESIRE) tool, a technology intended to assist rural clinicians taking care of hypertension patients at the community level in a resource-limited setting in western Kenya. Usability testing consisted of "think aloud" exercises and "mock patient encounters" with five nurses, as well as one focus group discussion. Feasibility testing consisted of semi-structured interviews of five nurses and two members of the implementation team, and one focus group discussion with nurses. Content analysis was performed using both deductive codes and significant inductive codes. Critical incidents were identified and ranked according to severity. A cause-of-error analysis was used to develop corresponding design change suggestions. Fifty-seven critical incidents were identified in usability testing, 21 of which were unique. The cause-of-error analysis yielded 23 design change suggestions. Feasibility themes included barriers to implementation along both human and technical axes, facilitators to implementation, provider issues, patient issues and feature requests. This participatory, iterative human-centered design process revealed previously unaddressed usability and feasibility issues affecting the implementation of the DESIRE tool in western Kenya. In addition to well-known technical issues, we highlight the importance of human factors that can impact implementation of mHealth interventions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. Usability and Feasibility of a Tablet-Based Decision-Support and Integrated Record-Keeping (DESIRE) Tool in the Nurse Management of Hypertension in Rural Western Kenya

    PubMed Central

    Vedanthan, Rajesh; Blank, Evan; Tuikong, Nelly; Kamano, Jemima; Misoi, Lawrence; Tulienge, Deborah; Hutchinson, Claire; Ascheim, Deborah D.; Kimaiyo, Sylvester; Fuster, Valentin; Were, Martin C.

    2015-01-01

    Background Mobile health (mHealth) applications have recently proliferated, especially in low- and middle-income countries, complementing task-redistribution strategies with clinical decision support. Relatively few studies address usability and feasibility issues that may impact success or failure of implementation, and few have been conducted for non-communicable diseases such as hypertension. Objective To conduct iterative usability and feasibility testing of a tablet-based Decision Support and Integrated Record-keeping (DESIRE) tool, a technology intended to assist rural clinicians taking care of hypertension patients at the community level in a resource-limited setting in western Kenya. Methods Usability testing consisted of “think aloud” exercises and “mock patient encounters” with five nurses, as well as one focus group discussion. Feasibility testing consisted of semi-structured interviews of five nurses and two members of the implementation team, and one focus group discussion with nurses. Content analysis was performed using both deductive codes and significant inductive codes. Critical incidents were identified and ranked according to severity. A cause-of-error analysis was used to develop corresponding design change suggestions. Results Fifty-seven critical incidents were identified in usability testing, 21 of which were unique. The cause-of-error analysis yielded 23 design change suggestions. Feasibility themes included barriers to implementation along both human and technical axes, facilitators to implementation, provider issues, patient issues and feature requests. Conclusions This participatory, iterative human-centered design process revealed previously unaddressed usability and feasibility issues affecting the implementation of the DESIRE tool in western Kenya. In addition to well-known technical issues, we highlight the importance of human factors that can impact implementation of mHealth interventions. PMID:25612791

  1. Development of a Mobile Clinical Prediction Tool to Estimate Future Depression Severity and Guide Treatment in Primary Care: User-Centered Design.

    PubMed

    Wachtler, Caroline; Coe, Amy; Davidson, Sandra; Fletcher, Susan; Mendoza, Antonette; Sterling, Leon; Gunn, Jane

    2018-04-23

    Around the world, depression is both under- and overtreated. The diamond clinical prediction tool was developed to assist with appropriate treatment allocation by estimating the 3-month prognosis among people with current depressive symptoms. Delivering clinical prediction tools in a way that will enhance their uptake in routine clinical practice remains challenging; however, mobile apps show promise in this respect. To increase the likelihood that an app-delivered clinical prediction tool can be successfully incorporated into clinical practice, it is important to involve end users in the app design process. The aim of the study was to maximize patient engagement in an app designed to improve treatment allocation for depression. An iterative, user-centered design process was employed. Qualitative data were collected via 2 focus groups with a community sample (n=17) and 7 semistructured interviews with people with depressive symptoms. The results of the focus groups and interviews were used by the computer engineering team to modify subsequent prototypes of the app. Iterative development resulted in 3 prototypes and a final app. The areas requiring the most substantial changes following end-user input were related to the iconography used and the way that feedback was provided. In particular, communicating risk of future depressive symptoms proved difficult; these messages were consistently misinterpreted and negatively viewed and were ultimately removed. All participants felt positively about seeing their results summarized after completion of the clinical prediction tool, but there was a need for a personalized treatment recommendation made in conjunction with a consultation with a health professional. User-centered design led to valuable improvements in the content and design of an app designed to improve allocation of and engagement in depression treatment. Iterative design allowed us to develop a tool that allows users to feel hope, engage in self-reflection, and feel motivated to seek treatment. The tool is currently being evaluated in a randomized controlled trial. ©Caroline Wachtler, Amy Coe, Sandra Davidson, Susan Fletcher, Antonette Mendoza, Leon Sterling, Jane Gunn. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 23.04.2018.

  2. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory

    NASA Astrophysics Data System (ADS)

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; Niklasson, Anders M. N.; Head-Gordon, Teresa; Skylaris, Chris-Kriton

    2017-03-01

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, both of which we employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.
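
    The core of such schemes is a time-reversible propagation of the initial guess as an auxiliary dynamical variable. A minimal sketch of the undamped update is given below; the variable names and the choice omega^2 * dt^2 = 2 are illustrative assumptions, and practical implementations add a weak dissipation term.

```python
def xl_guess_update(p_prev, p_curr, x_scf, omega_dt2=2.0):
    """One time-reversible update of the auxiliary guess in an extended
    Lagrangian scheme (undamped form, sketch).

    p_prev, p_curr : auxiliary degrees of freedom at steps n-1 and n
                     (e.g. guess induced dipoles or density-matrix elements).
    x_scf          : the converged self-consistent solution at step n.
    omega_dt2      : dimensionless curvature omega^2 * dt^2; 2.0 is a common
                     stable choice for the undamped scheme.
    Returns the guess used to start the SCF/dipole iteration at step n+1.
    """
    return 2.0 * p_curr - p_prev + omega_dt2 * (x_scf - p_curr)
```

    Because the guess evolves through its own time-reversible equation of motion rather than being copied from the last converged solution, re-using previous information no longer breaks time-reversal symmetry.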

  3. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory.

    PubMed

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; Niklasson, Anders M N; Head-Gordon, Teresa; Skylaris, Chris-Kriton

    2017-03-28

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, both of which we employ in two radically distinct regimes-in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.

  4. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory

    DOE PAGES

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; ...

    2017-03-28

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, both of which we employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Furthermore, both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.

  5. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, both of which we employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Furthermore, both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.

  6. Extended Lagrangian Excited State Molecular Dynamics

    DOE PAGES

    Bjorgaard, Josiah August; Sheppard, Daniel Glen; Tretiak, Sergei; ...

    2018-01-09

    In this work, an extended Lagrangian framework for excited state molecular dynamics (XL-ESMD) using time-dependent self-consistent field theory is proposed. The formulation is a generalization of the extended Lagrangian formulations for ground state Born–Oppenheimer molecular dynamics [Phys. Rev. Lett. 2008 100, 123004]. The theory is implemented, demonstrated, and evaluated using a time-dependent semiempirical model, though it should be generally applicable to ab initio theory. The simulations show enhanced energy stability and a significantly reduced computational cost associated with the iterative solutions of both the ground state and the electronically excited states. Relaxed convergence criteria can therefore be used both for the self-consistent ground state optimization and for the iterative subspace diagonalization of the random phase approximation matrix used to calculate the excited state transitions. In conclusion, the XL-ESMD approach is expected to enable numerically efficient excited state molecular dynamics for such methods as time-dependent Hartree–Fock (TD-HF), Configuration Interactions Singles (CIS), and time-dependent density functional theory (TD-DFT).

  7. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression.

    PubMed

    Meng, Yilin; Roux, Benoît

    2015-08-11

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.
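
    For context, the self-consistent WHAM iteration that the proposed regression strategy avoids can be sketched as follows for a 1-D umbrella-sampling setup; the array layout, variable names, and convergence test are illustrative assumptions.

```python
import numpy as np

def wham_1d(hist, bias, n_samples, beta, tol=1e-7, max_iter=10000):
    """Self-consistent WHAM iteration for 1-D umbrella sampling (sketch).

    hist      : (K, M) histogram counts, window k over M bins of the order parameter.
    bias      : (K, M) umbrella bias energy of each window at the bin centers.
    n_samples : (K,) total number of samples per window.
    Returns the unbiased free energy per bin in units of kT (up to a constant);
    bins with no counts come out as +inf.
    """
    hist = np.asarray(hist, float)
    n_samples = np.asarray(n_samples, float)
    c = np.exp(-beta * np.asarray(bias, float))   # bias factors c_k(x)
    f = np.ones(hist.shape[0])                    # per-window normalization constants
    numer = hist.sum(axis=0)                      # total counts per bin
    for _ in range(max_iter):
        denom = (n_samples[:, None] * f[:, None] * c).sum(axis=0)
        p = numer / denom                         # current unbiased density estimate
        f_new = 1.0 / (c * p[None, :]).sum(axis=1)
        converged = np.max(np.abs(np.log(f_new / f))) < tol
        f = f_new
        if converged:
            break
    p = numer / (n_samples[:, None] * f[:, None] * c).sum(axis=0)
    return -np.log(p / p.sum())
```

    Each pass couples all windows through the shared density estimate, and the cost of such passes grows quickly with the dimensionality of the order-parameter grid, which is what motivates the linear-regression alternative.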

  8. Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range

    NASA Technical Reports Server (NTRS)

    Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    We prove mathematically that in order to avoid point-optimization at the sampled design points for multipoint airfoil optimization, the number of design points must be greater than the number of free-design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free-design variables. A comparison with other airfoil optimization methods is also included.

  9. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression

    PubMed Central

    2015-01-01

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost. PMID:26574437

  10. Extended Lagrangian Excited State Molecular Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bjorgaard, Josiah August; Sheppard, Daniel Glen; Tretiak, Sergei

    In this work, an extended Lagrangian framework for excited state molecular dynamics (XL-ESMD) using time-dependent self-consistent field theory is proposed. The formulation is a generalization of the extended Lagrangian formulations for ground state Born–Oppenheimer molecular dynamics [Phys. Rev. Lett. 2008 100, 123004]. The theory is implemented, demonstrated, and evaluated using a time-dependent semiempirical model, though it should be generally applicable to ab initio theory. The simulations show enhanced energy stability and a significantly reduced computational cost associated with the iterative solutions of both the ground state and the electronically excited states. Relaxed convergence criteria can therefore be used both for the self-consistent ground state optimization and for the iterative subspace diagonalization of the random phase approximation matrix used to calculate the excited state transitions. In conclusion, the XL-ESMD approach is expected to enable numerically efficient excited state molecular dynamics for such methods as time-dependent Hartree–Fock (TD-HF), Configuration Interactions Singles (CIS), and time-dependent density functional theory (TD-DFT).

  11. Extended Lagrangian Excited State Molecular Dynamics.

    PubMed

    Bjorgaard, J A; Sheppard, D; Tretiak, S; Niklasson, A M N

    2018-02-13

    An extended Lagrangian framework for excited state molecular dynamics (XL-ESMD) using time-dependent self-consistent field theory is proposed. The formulation is a generalization of the extended Lagrangian formulations for ground state Born-Oppenheimer molecular dynamics [Phys. Rev. Lett. 2008 100, 123004]. The theory is implemented, demonstrated, and evaluated using a time-dependent semiempirical model, though it should be generally applicable to ab initio theory. The simulations show enhanced energy stability and a significantly reduced computational cost associated with the iterative solutions of both the ground state and the electronically excited states. Relaxed convergence criteria can therefore be used both for the self-consistent ground state optimization and for the iterative subspace diagonalization of the random phase approximation matrix used to calculate the excited state transitions. The XL-ESMD approach is expected to enable numerically efficient excited state molecular dynamics for such methods as time-dependent Hartree-Fock (TD-HF), Configuration Interactions Singles (CIS), and time-dependent density functional theory (TD-DFT).

  12. Bridging single and multireference coupled cluster theories with universal state selective formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhaskaran-Nair, Kiran; Kowalski, Karol

    2013-05-28

    The universal state selective (USS) multireference approach is used to construct new energy functionals which offer a unique possibility of bridging single and multireference coupled cluster theories (SR/MRCC). These functionals, which can be used to develop iterative and non-iterative approaches, utilize a special form of the trial wavefunctions, which assure additive separability (or size-consistency) of the USS energies in the non-interacting subsystem limit. When the USS formalism is combined with approximate SRCC theories, the resulting formalism can be viewed as a size-consistent version of the method of moments of coupled cluster equations (MMCC) employing a MRCC trial wavefunction. Special cases of the USS formulations, which utilize single reference state specific CC (V.V. Ivanov, D.I. Lyakh, L. Adamowicz, Phys. Chem. Chem. Phys. 11, 2355 (2009)) and tailored CC (T. Kinoshita, O. Hino, R.J. Bartlett, J. Chem. Phys. 123, 074106 (2005)) expansions are also discussed.

  13. Conceptual design of ACB-CP for ITER cryogenic system

    NASA Astrophysics Data System (ADS)

    Jiang, Yongcheng; Xiong, Lianyou; Peng, Nan; Tang, Jiancheng; Liu, Liqiang; Zhang, Liang

    2012-06-01

    The ACB-CP (Auxiliary Cold Box for Cryopumps) is used to supply the cryopump system with the necessary cryogens in the ITER (International Thermonuclear Experimental Reactor) cryogenic distribution system. The conceptual design of the ACB-CP comprises thermo-hydraulic analysis, 3D structure design and strength checking. Through the thermo-hydraulic analysis, the main specifications of the process valves, pressure safety valves, pipes and heat exchangers can be determined. During the 3D structure design, the vacuum requirements, adiabatic requirements, assembly constraints and maintenance requirements have been considered in arranging the pipes, valves and other components. Strength checking has been performed to verify that the 3D design meets the strength requirements for the ACB-CP.

  14. DSMC simulation of rarefied gas flows under cooling conditions using a new iterative wall heat flux specifying technique

    NASA Astrophysics Data System (ADS)

    Akhlaghi, H.; Roohi, E.; Myong, R. S.

    2012-11-01

    Micro/nano geometries with a specified wall heat flux are widely encountered in electronic cooling and micro-/nano-fluidic sensors. We introduce a new technique to impose the desired (positive or negative) wall heat flux boundary condition in DSMC simulations. The technique is based on an iterative adjustment of the wall temperature magnitude. It is found that the proposed iterative technique has good numerical performance and can impose both positive and negative wall heat flux rates accurately. Using the present technique, rarefied gas flow through micro-/nanochannels under specified wall heat flux conditions is simulated, and unique behaviors are observed in the case of channels with cooling walls. For example, contrary to the heating process, it is observed that cooling of the micro/nanochannel walls results in small variations in the density field. Upstream thermal creep effects in the cooling process decrease the velocity slip despite the increase of the Knudsen number along the channel. Similarly, the cooling process decreases the curvature of the pressure distribution below the linear incompressible distribution. Our results indicate that flow cooling increases the mass flow rate through the channel, and vice versa.
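
    The underlying idea, adjusting the wall temperature until the sampled heat flux matches the prescribed value, can be sketched as follows; the relaxation law, the gain factor, and the run_dsmc placeholder are illustrative assumptions rather than the authors' exact update rule.

```python
def adjust_wall_temperature(run_dsmc, q_target, T_wall0=300.0, gain=0.5,
                            tol=1e-3, max_iter=50):
    """Iterative wall-temperature adjustment to impose a target wall heat flux (sketch).

    run_dsmc(T_wall) stands for one DSMC sampling pass that returns the
    time-averaged wall heat flux obtained with the given wall temperature.
    """
    T_wall = T_wall0
    for _ in range(max_iter):
        q = run_dsmc(T_wall)
        err = (q - q_target) / max(abs(q_target), 1e-12)
        if abs(err) < tol:
            break
        # If the sampled flux falls short of the target (heating case), raise the
        # wall temperature; if it overshoots (cooling case), lower it.
        T_wall *= 1.0 - gain * err
    return T_wall
```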

  15. Concurrent analysis: towards generalisable qualitative research.

    PubMed

    Snowden, Austyn; Martin, Colin R

    2011-10-01

    This study develops an original method of qualitative analysis coherent with its interpretivist principles. The objective is to increase the likelihood of achieving generalisability and so improve the chance of the findings being translated into practice. Good qualitative research depends on coherent analysis of different types of data. The limitations of existing methodologies are first discussed to justify the need for a novel approach. To illustrate this approach, primary evidence is presented using the new methodology. The primary evidence consists of a constructivist grounded theory of how mental health nurses with prescribing authority integrate prescribing into practice. This theory is built concurrently from interviews, reflective accounts and case study data from the literature. Concurrent analysis. Ten research articles and 13 semi-structured interviews were sampled purposively and then theoretically and analysed concurrently using constructivist grounded theory. A theory of the process of becoming competent in mental health nurse prescribing was generated through this process. This theory was validated by 32 practising mental health nurse prescribers as an accurate representation of their experience. The methodology generated a coherent and generalisable theory. It is therefore claimed that concurrent analysis engenders consistent and iterative treatment of different sources of qualitative data in a manageable manner. This process supports facilitation of the highest standard of qualitative research. Concurrent analysis removes the artificial delineation of relevant literature from other forms of constructed data. This gives researchers clear direction to treat qualitative data consistently raising the chances of generalisability of the findings. Raising the generalisability of qualitative research will increase its chances of informing clinical practice. © 2010 Blackwell Publishing Ltd.

  16. Sequencing the Cortical Processing of Pitch-Evoking Stimuli using EEG Analysis and Source Estimation

    PubMed Central

    Butler, Blake E.; Trainor, Laurel J.

    2012-01-01

    Cues to pitch include spectral cues that arise from tonotopic organization and temporal cues that arise from firing patterns of auditory neurons. fMRI studies suggest a common pitch center is located just beyond primary auditory cortex along the lateral aspect of Heschl’s gyrus, but little work has examined the stages of processing for the integration of pitch cues. Using electroencephalography, we recorded cortical responses to high-pass filtered iterated rippled noise (IRN) and high-pass filtered complex harmonic stimuli, which differ in temporal and spectral content. The two stimulus types were matched for pitch saliency, and a mismatch negativity (MMN) response was elicited by infrequent pitch changes. The P1 and N1 components of event-related potentials (ERPs) are thought to arise from primary and secondary auditory areas, respectively, and to result from simple feature extraction. MMN is generated in secondary auditory cortex and is thought to act on feature-integrated auditory objects. We found that peak latencies of both P1 and N1 occur later in response to IRN stimuli than to complex harmonic stimuli, but found no latency differences between stimulus types for MMN. The location of each ERP component was estimated based on iterative fitting of regional sources in the auditory cortices. The sources of both the P1 and N1 components elicited by IRN stimuli were located dorsal to those elicited by complex harmonic stimuli, whereas no differences were observed for MMN sources across stimuli. Furthermore, the MMN component was located between the P1 and N1 components, consistent with fMRI studies indicating a common pitch region in lateral Heschl’s gyrus. These results suggest that while the spectral and temporal processing of different pitch-evoking stimuli involves different cortical areas during early processing, by the time the object-related MMN response is formed, these cues have been integrated into a common representation of pitch. PMID:22740836

  17. Plasma-surface interaction in the Be/W environment: Conclusions drawn from the JET-ILW for ITER

    NASA Astrophysics Data System (ADS)

    Brezinsek, S.; JET-EFDA contributors

    2015-08-01

    The JET ITER-Like Wall experiment (JET-ILW) provides an ideal test bed to investigate plasma-surface interaction (PSI) and plasma operation with the ITER plasma-facing material selection, employing beryllium in the main chamber and tungsten in the divertor. The main PSI processes, namely (a) material erosion and migration, (b) fuel recycling and retention, and (c) impurity concentration and radiation, have been studied and compared between JET-C and JET-ILW. The current physics understanding of these key processes in the JET-ILW revealed that both the interpretation of previously obtained carbon results (JET-C) and the predictions for ITER need to be revisited. The impact of the first-wall material on the plasma was underestimated. The main observations are: (a) a low primary erosion source in H-mode plasmas and a reduction of the material migration from the main chamber to the divertor (factor 7) as well as within the divertor from plasma-facing to remote areas (factor 30 - 50). The energetic threshold for beryllium sputtering minimises the primary erosion source and inhibits multi-step re-erosion in the divertor. The physical sputtering yield of tungsten is as low as 10^-5 and is determined by beryllium ions. (b) A reduction of the long-term fuel retention (factor 10 - 20) in JET-ILW with respect to JET-C. The remaining retention is caused by implantation and co-deposition with beryllium and residual impurities. Outgassing has gained importance and impacts the recycling properties of beryllium and tungsten. (c) The low effective plasma charge (Zeff = 1.2) and the low radiation capability of beryllium reveal the bare deuterium plasma physics. Moderate nitrogen seeding, reaching Zeff = 1.6, restores in particular the confinement and the L-H threshold behaviour. ITER-compatible divertor conditions with stable semi-detachment were obtained owing to a higher density limit with the ILW. Overall, JET demonstrated successful plasma operation with the Be/W material combination, confirming its advantageous PSI behaviour and giving strong support to the ITER material selection.

  18. Why and how Mastering an Incremental and Iterative Software Development Process

    NASA Astrophysics Data System (ADS)

    Dubuc, François; Guichoux, Bernard; Cormery, Patrick; Mescam, Jean Christophe

    2004-06-01

    One of the key issues regularly mentioned in the current software crisis of the space domain is related to the software development process that must be performed while the system definition is not yet frozen. This is especially true for complex systems like launchers or space vehicles. Several more or less mature solutions are under study by EADS SPACE Transportation and are going to be presented in this paper. The basic principle is to develop the software through an iterative and incremental process instead of the classical waterfall approach, with the following advantages: - It permits systematic management and incorporation of requirements changes over the development cycle with a minimal cost. As far as possible the most dimensioning requirements are analyzed and developed in priority for validating very early the architecture concept without the details. - A software prototype is very quickly available. It improves the communication between system and software teams, as it enables checking very early and efficiently the common understanding of the system requirements. - It allows the software team to complete a whole development cycle very early, and thus to become quickly familiar with the software development environment (methodology, technology, tools...). This is particularly important when the team is new, or when the environment has changed since the previous development. Anyhow, it improves a lot the learning curve of the software team. These advantages seem very attractive, but mastering efficiently an iterative development process is not so easy and induces a lot of difficulties such as: - How to freeze one configuration of the system definition as a development baseline, while most of the system requirements are completely and naturally unstable? - How to distinguish stable/unstable and dimensioning/standard requirements? - How to plan the development of each increment? - How to link classical waterfall development milestones with an iterative approach: when should the classical reviews be performed: Software Specification Review? Preliminary Design Review? Critical Design Review? Code Review? Etc... Several solutions envisaged or already deployed by EADS SPACE Transportation will be presented, both from a methodological and technological point of view: - How the MELANIE EADS ST internal methodology improves the concurrent engineering activities between GNC, software and simulation teams in a very iterative and reactive way. - How the CMM approach can help by better formalizing Requirements Management and Planning processes. - How the Automatic Code Generation with "certified" tools (SCADE) can still dramatically shorten the development cycle. Then the presentation will conclude by showing an evaluation of the cost and planning reduction based on a pilot application by comparing figures on two similar projects: one with the classical waterfall process, the other one with an iterative and incremental approach.

  19. A Dynamic Model of the Initial Spares Support List Development Process

    DTIC Science & Technology

    1979-06-01

    Abstract not recoverable from this record; only fragments of the underlying DYNAMO model listing survive, including the parts-use-rate equation PUSER.L = (NREI.K)(QPEI)(PUSERF.K) + OPUR, where PUSER is the parts use rate, NREI the number of not-ready end items, QPEI the quantity of parts per end item, PUSERF the parts use rate factor, and OPUR the other parts use rate.

  20. The development and implementation of a decision-making capacity assessment model.

    PubMed

    Parmar, Jasneet; Brémault-Phillips, Suzette; Charles, Lesley

    2015-03-01

    Decision-making capacity assessment (DMCA) is an issue of increasing importance for older adults. Current challenges need to be explored, and potential processes and strategies considered in order to address issues of DMCA in a more coordinated manner. An iterative process was used to address issues related to DMCA. This began with recognition of challenges associated with capacity assessments (CAs) by staff at Covenant Health (CH). Review of the literature, as well as discussions with and a survey of staff at three CH sites, resulted in determination of issues related to DMCA. Development of a DMCA Model and demonstration of its feasibility followed. A process was proposed with front-end screening/problem-solving, a well-defined standard assessment, and definition of team member roles. A Capacity Assessment Care Map was formulated based on the process. Documentation was developed consisting of a Capacity Assessment Process Worksheet, Capacity Interview Worksheet, and a brochure. Interactive workshops were delivered to familiarize staff with the DMCA Model. A successful demonstration project led to implementation across all sites in the Capital Health region, and eventual provincial endorsement. Concerns identified in the survey and in the literature regarding CA were addressed through the holistic interdisciplinary approach offered by the DMCA Model.

  1. A stochastic estimation procedure for intermittently-observed semi-Markov multistate models with back transitions.

    PubMed

    Aralis, Hilary; Brookmeyer, Ron

    2017-01-01

    Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.

  2. The truncated conjugate gradient (TCG), a non-iterative/fixed-cost strategy for computing polarization in molecular dynamics: Fast evaluation of analytical forces

    NASA Astrophysics Data System (ADS)

    Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip

    2017-10-01

    In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists in truncating the conjugate gradient algorithm at a fixed predetermined order leading to a fixed computational cost and can thus be considered "non-iterative." This gives the possibility to derive analytical forces avoiding the usual energy conservation (i.e., drifts) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than that with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time step scheme and compared to timings using usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making it a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.
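
    The fixed-order truncation that makes the cost per step constant can be sketched as a conjugate gradient solve of the polarization equations stopped after a predetermined number of iterations; the operator callable, array layout, and the zero initial guess are illustrative assumptions.

```python
import numpy as np

def truncated_cg(apply_T, E, order=2):
    """Truncated conjugate gradient (TCG-n) sketch for the polarization
    equations T.mu = E, stopped after a fixed number of CG steps rather than
    at a convergence threshold, so the cost per MD step is constant.

    apply_T(x) applies the dipole interaction matrix (including self terms);
    E is the permanent field at the polarizable sites, flattened to a vector.
    """
    E = np.asarray(E, float)
    mu = np.zeros_like(E)          # zero initial guess keeps the result a fixed
    r = E - apply_T(mu)            # analytical function of E, hence differentiable
    p = r.copy()
    rs_old = r @ r
    for _ in range(order):         # exactly `order` CG iterations, no stopping test
        Tp = apply_T(p)
        alpha = rs_old / (p @ Tp)
        mu = mu + alpha * p
        r = r - alpha * Tp
        rs_new = r @ r
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return mu
```

    Because the number of steps is fixed, the resulting dipoles are an explicit function of the permanent field, which is what allows analytical forces to be derived without the drift issues of threshold-based iterative solvers.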

  3. The truncated conjugate gradient (TCG), a non-iterative/fixed-cost strategy for computing polarization in molecular dynamics: Fast evaluation of analytical forces.

    PubMed

    Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip

    2017-10-28

    In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists in truncating the conjugate gradient algorithm at a fixed predetermined order leading to a fixed computational cost and can thus be considered "non-iterative." This gives the possibility to derive analytical forces avoiding the usual energy conservation (i.e., drifts) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than that with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time step scheme and compared to timings using usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making it a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.

  4. Improving cluster-based missing value estimation of DNA microarray data.

    PubMed

    Brás, Lígia P; Menezes, José C

    2007-06-01

    We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing values (MVs) estimation in microarray data based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute) as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNN has a smaller detrimental effect on the detection of differentially expressed genes.
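
    The reuse of freshly estimated values that distinguishes the iterative variant can be sketched as follows for a genes-by-samples matrix; the distance metric, weighting, initial fill, and all names are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def iknn_impute(X, k=10, n_iter=5):
    """Iterative KNN imputation sketch for a matrix X with np.nan marking
    missing values (rows = genes, columns = samples).

    Missing entries are first filled with row means; each sweep then re-estimates
    every originally missing entry as the inverse-distance-weighted average of the
    k nearest rows, reusing the estimates produced in the previous sweep.
    """
    X = np.array(X, dtype=float)
    missing = np.isnan(X)
    row_means = np.nanmean(X, axis=1)
    filled = np.where(missing, row_means[:, None], X)   # initial rough fill
    for _ in range(n_iter):
        for i in np.where(missing.any(axis=1))[0]:
            d = np.sqrt(((filled - filled[i]) ** 2).sum(axis=1))
            d[i] = np.inf                                # exclude the row itself
            nbrs = np.argsort(d)[:k]
            w = 1.0 / (d[nbrs] + 1e-12)
            est = (w[:, None] * filled[nbrs]).sum(axis=0) / w.sum()
            filled[i, missing[i]] = est[missing[i]]      # update only missing cells
    return filled
```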

  5. Investigation of a Parabolic Iterative Solver for Three-dimensional Configurations

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Watson, Willie R.; Mani, Ramani

    2007-01-01

    A parabolic iterative solution procedure is investigated that seeks to extend the parabolic approximation used within the internal propagation module of the duct noise propagation and radiation code CDUCT-LaRC. The governing convected Helmholtz equation is split into a set of coupled equations governing propagation in the positive and negative directions. The proposed method utilizes an iterative procedure to solve the coupled equations in an attempt to account for possible reflections from internal bifurcations, impedance discontinuities, and duct terminations. A geometry consistent with the NASA Langley Curved Duct Test Rig is considered and the effects of acoustic treatment and non-anechoic termination are included. Two numerical implementations are studied and preliminary results indicate that improved accuracy in predicted amplitude and phase can be obtained for modes at a cut-off ratio of 1.7. Further predictions for modes at a cut-off ratio of 1.1 show improvement in predicted phase at the expense of increased amplitude error. Possible methods of improvement are suggested based on analytic and numerical analysis. It is hoped that coupling the parabolic iterative approach with less efficient, high fidelity finite element approaches will ultimately provide the capability to perform efficient, higher fidelity acoustic calculations within complex 3-D geometries for impedance eduction and noise propagation and radiation predictions.

  6. Predicting rotation for ITER via studies of intrinsic torque and momentum transport in DIII-D

    DOE PAGES

    Chrystal, C.; Grierson, B. A.; Staebler, G. M.; ...

    2017-03-30

    Here, experiments at the DIII-D tokamak have used dimensionless parameter scans to investigate the dependencies of intrinsic torque and momentum transport in order to inform a prediction of the rotation profile in ITER. Measurements of intrinsic torque profiles and momentum confinement time in dimensionless parameter scans of normalized gyroradius and collisionality are used to predict the amount of intrinsic rotation in the pedestal of ITER. Additional scans of Te/Ti and safety factor are used to determine the accuracy of momentum flux predictions of the quasi-linear gyrokinetic code TGLF. In these scans, applications of modulated torque are used to measure the incremental momentum diffusivity, and results are consistent with the E x B shear suppression of turbulent transport. These incremental transport measurements are also compared with the TGLF results. In order to form a prediction of the rotation profile for ITER, the pedestal prediction is used as a boundary condition to a simulation that uses TGLF to determine the transport in the core of the plasma. The predicted rotation is ≈20 krad/s in the core, lower than in many current tokamak operating scenarios. TGLF predictions show that this rotation is still significant enough to have a strong effect on confinement via E x B shear.

  7. Evaluation of the Fretting Resistance of the High Voltage Insulation on the ITER Magnet Feeder Busbars

    NASA Astrophysics Data System (ADS)

    Clayton, N.; Crouchen, M.; Evans, D.; Gung, C.-Y.; Su, M.; Devred, A.; Piccin, R.

    2017-12-01

    The high voltage (HV) insulation on the ITER magnet feeder superconducting busbars and current leads will be prepared from S-glass fabric, pre-impregnated with an epoxy resin, which is interleaved with polyimide film and wrapped onto the components and cured during feeder manufacture. The insulation architecture consists of nine half-lapped layers of glass/Kapton, which is then enveloped in a ground-screen, and two further half-lapped layers of glass pre-preg for mechanical protection. The integrity of the HV insulation is critical in order to inhibit electrical arcs within the feeders. The insulation over the entire length of the HV components (bus bar, current leads and joints) must provide a level of voltage isolation of 30 kV. In operation, the insulation on ITER busbars will be subjected to high mechanical loads, arising from Lorentz forces, and in addition will be subjected to fretting erosion against stainless steel clamps, as the pulsed nature of some magnets results in longitudinal movement of the busbar. This work was aimed at assessing the wear on, and the changes in, the electrical properties of the insulation when subjected to typical ITER operating conditions. High voltage tests demonstrated that the electrical isolation of the insulation was intact after the fretting test.

  8. Study of the tectonic evolution of the South-Eastern Alpine and Western Dinaric Foredeep by means of tomographic analysis from multichannel seismic reflection data in the Gulf of Trieste (North Adriatic Sea)

    NASA Astrophysics Data System (ADS)

    Dal Cin, Michela; Böhm, Gualtiero; Busetti, Martina; Zgur, Fabrizio

    2017-04-01

    The Gulf of Trieste (GOT) is located south of the intersection between the External Dinarides and the South-Eastern Alps. It is considered the foredeep of both orogens, and its sedimentary sequence consists of the Mesozoic-Paleogenic Carbonate Platform, the Eocene turbiditic sediments of the Flysch, the Late Oligocene-Miocenic continental to coastal units of Molassa, and the Plio-Quaternary continental and marine deposits. The area underwent multiphase tectonic activity that started in the Mesozoic, when an extensional regime, with NW-SE oriented normal faults, allowed the aggradation of the Carbonate Platform. In the Late Cretaceous-Paleogene, the Dinaric fold-thrust system gradually migrated towards SW, deflecting the Carbonate Platform E-ward. The main frontal ramp of the External Dinarides is the Karst Thrust, which extends along the eastern, rocky coastline of the GOT and separates the hanging-wall, topographically expressed by the Karst highland, from the footwall lying in the gulf. In the Oligocene-Miocene, the convergence that generated the S-ward vergent Southern Alpine orogen caused a N-ward deepening of the platform and reactivated the inherited Mesozoic and Cenozoic structures with a dextral transcurrent motion. In the last decade, a dense geophysical dataset has been acquired in the GOT: it consists of 632 km of multichannel seismic (MCS) reflection and sub-bottom profiles, which have been processed and interpreted in the time domain by OGS. The data evidenced fault systems related to the extensional Mesozoic and compressional Cenozoic phases and their reactivation with transcurrent kinematics, due to the ongoing N-ward motion of the Adria plate. The transcurrent fault systems show evidence of neotectonic activity and are often the preferential pathway along which fluids migrate from the carbonates to the seafloor. The MCS lines were used in this work to perform a tomographic analysis providing a detailed velocity model that can enhance seismic imaging, depth conversion and migration, for a deeper understanding of the tectonic evolution of the GOT. The tomographic method started from the identification of the main reflected and refracted events on common shot gathers. The related travel times were used in an iterative process that uses the SIRT (Simultaneous Iterative Reconstruction Technique) method for the evaluation of the velocity field and an algorithm, based on the principle of the minimum dispersion of the estimated reflection/refraction points, for the definition of the interface depth and geometry. The iterative process was stopped when the last model reached a minimum difference from the previous model. The time residuals were then computed to estimate the reliability of the results. The tomography provided crucial information about the structural setting of the gulf, such as a vertical displacement of the Karst Thrust greater than 1500 m.
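
    The SIRT step at the heart of the velocity-field update can be sketched as follows for a cell-based slowness model; the matrix layout, relaxation factor, and starting model are illustrative assumptions, and the actual workflow also re-traces rays and updates the interface geometry between sweeps.

```python
import numpy as np

def sirt(A, t_obs, n_iter=50, relax=0.5):
    """Minimal SIRT (Simultaneous Iterative Reconstruction Technique) sketch
    for travel-time tomography.

    A     : (n_rays, n_cells) matrix of ray-path lengths through each cell.
    t_obs : (n_rays,) picked travel times.
    Returns the cell slownesses (velocity = 1 / slowness).
    """
    A = np.asarray(A, float)
    t_obs = np.asarray(t_obs, float)
    row_sum = A.sum(axis=1)                 # total ray length per ray
    col_sum = A.sum(axis=0) + 1e-12         # total ray length per cell
    s = np.full(A.shape[1], t_obs.mean() / row_sum.mean())  # uniform starting slowness
    for _ in range(n_iter):
        resid = t_obs - A @ s               # travel-time residuals
        # Back-project all residuals simultaneously, averaging the corrections per cell.
        s = s + relax * (A.T @ (resid / row_sum)) / col_sum
    return s
```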

  9. Acceleration of linear stationary iterative processes in multiprocessor computers. II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romm, Ya.E.

    1982-05-01

    For pt. I, see Kibernetika, vol. 18, no. 1, p. 47 (1982), and Cybernetics, vol. 18, no. 1, p. 54 (1982). The paper considers a reduced system of linear algebraic equations x = Ax + b, where A = (a_ij) is a real n x n matrix and b is a real vector; the usual Euclidean norm is used. Existence and uniqueness of the solution are assumed, i.e. det(E - A) ≠ 0, where E is the unit matrix. The linear iterative process converging to x is x^(k+1) = F(x^(k)), k = 0, 1, 2, ..., where the operator F maps R^n into R^n. In considering the implementation of the iterative process (IP) in a multiprocessor system, it is assumed that the number of processors is constant (various values of this number are investigated) and, in addition, that the processors perform the elementary binary arithmetic operations of addition and multiplication; the time estimates include only the execution time of arithmetic operations. With any parallelization of an individual iteration, the execution time of the IP is proportional to the number of sequential steps k+1. The author sets the task of reducing the number of sequential steps in the IP so as to execute it in a time proportional to a value smaller than k+1. He also sets the goal of formulating a method of accelerated bit serial-parallel execution of each successive step of the IP, with, in the modification sought, a reduced number of steps executed in a time comparable to the operation time of logical elements. 6 references.
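
    As a point of reference, the sequential form of the stationary iteration discussed here is simply the fixed-point recursion below; the parallel bit-level reorganization that the paper targets is not attempted, and the convergence test is an illustrative assumption.

```python
import numpy as np

def stationary_iteration(A, b, tol=1e-10, max_iter=10000):
    """Sequential linear stationary iterative process x^(k+1) = A x^(k) + b.

    Converges to the fixed point x = Ax + b when the spectral radius of A is
    below one; returns the iterate and the number of sequential steps taken.
    """
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    x = np.zeros_like(b)
    for k in range(max_iter):
        x_new = A @ x + b          # one sequential step of the iterative process
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```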

  10. Development and validation of a new survey: Perceptions of Teaching as a Profession (PTaP)

    NASA Astrophysics Data System (ADS)

    Adams, Wendy

    2017-01-01

    To better understand the impact of efforts to train more science teachers such as the PhysTEC Project and to help with early identification of future teachers, we are developing the survey of Perceptions of Teaching as a Profession (PTaP) to measure students' views of teaching as a career, their interest in teaching and the perceived climate of physics departments towards teaching as a profession. The instrument consists of a series of statements which require a response using a 5-point Likert-scale and can be easily administered online. The survey items were drafted by a team of researchers and physics teacher candidates and then reviewed by an advisory committee of 20 physics teacher educators and practicing teachers. We conducted 27 interviews with both teacher candidates and non-teaching STEM majors. The survey was refined through an iterative process of student interviews and item clarification until all items were interpreted consistently and answered for consistent reasons. In this presentation the preliminary results from the student interviews as well as the results of item analysis and a factor analysis on 900 student responses will be shared.

  11. Joint Transmit Power Allocation and Splitting for SWIPT Aided OFDM-IDMA in Wireless Sensor Networks

    PubMed Central

    Li, Shanshan; Zhou, Xiaotian; Wang, Cheng-Xiang; Yuan, Dongfeng; Zhang, Wensheng

    2017-01-01

    In this paper, we propose to combine Orthogonal Frequency Division Multiplexing-Interleave Division Multiple Access (OFDM-IDMA) with Simultaneous Wireless Information and Power Transfer (SWIPT), resulting in a SWIPT-aided OFDM-IDMA scheme for power-limited sensor networks. In the proposed system, the Receive Node (RN) applies Power Splitting (PS) to coordinate the Energy Harvesting (EH) and Information Decoding (ID) process, where the harvested energy is utilized to guarantee that the iterative Multi-User Detection (MUD) of IDMA works with a sufficient number of iterations. Our objective is to minimize the total transmit power of the Source Node (SN), while satisfying the requirements of both minimum harvested energy and Bit Error Rate (BER) performance from the individual receive nodes. We formulate such a problem as a joint power allocation and splitting one, where the iteration number of the MUD is also taken into consideration as the key parameter affecting both the EH and ID constraints. To solve it, a sub-optimal algorithm is proposed to determine the power profile, PS ratio and iteration number of the MUD in an iterative manner. Simulation results verify that the proposed algorithm can provide significant performance improvement. PMID:28677636

  12. Joint Transmit Power Allocation and Splitting for SWIPT Aided OFDM-IDMA in Wireless Sensor Networks.

    PubMed

    Li, Shanshan; Zhou, Xiaotian; Wang, Cheng-Xiang; Yuan, Dongfeng; Zhang, Wensheng

    2017-07-04

    In this paper, we propose to combine Orthogonal Frequency Division Multiplexing-Interleave Division Multiple Access (OFDM-IDMA) with Simultaneous Wireless Information and Power Transfer (SWIPT), resulting in a SWIPT-aided OFDM-IDMA scheme for power-limited sensor networks. In the proposed system, the Receive Node (RN) applies Power Splitting (PS) to coordinate the Energy Harvesting (EH) and Information Decoding (ID) process, where the harvested energy is utilized to guarantee that the iterative Multi-User Detection (MUD) of IDMA works with a sufficient number of iterations. Our objective is to minimize the total transmit power of the Source Node (SN), while satisfying the requirements of both minimum harvested energy and Bit Error Rate (BER) performance from the individual receive nodes. We formulate such a problem as a joint power allocation and splitting one, where the iteration number of the MUD is also taken into consideration as the key parameter affecting both the EH and ID constraints. To solve it, a sub-optimal algorithm is proposed to determine the power profile, PS ratio and iteration number of the MUD in an iterative manner. Simulation results verify that the proposed algorithm can provide significant performance improvement.

  13. Pseudo-time methods for constrained optimization problems governed by PDE

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1995-01-01

    In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constrained PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at the cost of solving the analysis problem just a few times, independent of the number of design parameters. The method can be applied using single grid iterations as well as with multigrid solvers.
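
    A minimal toy illustration of marching on the design variables while only partially relaxing the state and costate (adjoint) equations at each design step. The quadratic model problem, the Richardson relaxation, and the step sizes below are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_design = 40, 3
A = 0.1 * rng.standard_normal((n_state, n_state))
A = 0.5 * (A + A.T) + n_state * np.eye(n_state)        # safely SPD "state operator"
B = rng.standard_normal((n_state, n_design))           # design-to-state coupling
u_target = rng.standard_normal(n_state)

u = np.zeros(n_state)        # state
lam = np.zeros(n_state)      # costate (adjoint)
d = np.zeros(n_design)       # design variables

tau_state = 1.0 / np.linalg.norm(A, 2)
tau_design = 0.05
for k in range(200):
    # Partially relax the state equation  A u = B d  (a few Richardson sweeps).
    for _ in range(3):
        u -= tau_state * (A @ u - B @ d)
    # Partially relax the costate equation  A^T lam = u - u_target.
    for _ in range(3):
        lam -= tau_state * (A.T @ lam - (u - u_target))
    # March on the design hypersurface using the approximate gradient B^T lam.
    d -= tau_design * (B.T @ lam)

print("final objective:", 0.5 * np.linalg.norm(u - u_target) ** 2)
```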

  14. FAST TRACK PAPER: Non-iterative multiple-attenuation methods: linear inverse solutions to non-linear inverse problems - II. BMG approximation

    NASA Astrophysics Data System (ADS)

    Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing

    2004-12-01

    The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.

  15. Enhancement of event related potentials by iterative restoration algorithms

    NASA Astrophysics Data System (ADS)

    Pomalaza-Raez, Carlos A.; McGillem, Clare D.

    1986-12-01

    An iterative procedure for the restoration of event related potentials (ERP) is proposed and implemented. The method makes use of assumed or measured statistical information about latency variations in the individual ERP components. The signal model used for the restoration algorithm consists of a time-varying linear distortion and a positivity/negativity constraint. Additional preprocessing in the form of low-pass filtering is needed in order to mitigate the effects of additive noise. Numerical results obtained with real data show clearly the presence of enhanced and regenerated components in the restored ERPs. The procedure is easy to implement, which makes it convenient when compared to other proposed techniques for the restoration of ERP signals.
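
    The sketch below is not the authors' algorithm (which uses a time-varying distortion model), but a minimal constrained Van Cittert-style iteration showing how a measured response function and a positivity constraint can enter an iterative restoration. The toy signal and kernel are assumptions.

```python
import numpy as np

def constrained_van_cittert(y, h, n_iter=100, relax=0.8, nonneg=True):
    """Iterative restoration x_{k+1} = x_k + relax*(y - h*x_k), with an
    optional positivity constraint applied after each update (illustrative)."""
    x = y.copy()
    for _ in range(n_iter):
        x = x + relax * (y - np.convolve(x, h, mode="same"))
        if nonneg:
            x = np.clip(x, 0.0, None)    # positivity constraint
    return x

# Toy example: a smeared, noisy two-peak "ERP component" signal.
t = np.arange(200)
truth = np.exp(-0.5 * ((t - 60) / 4.0) ** 2) + 0.7 * np.exp(-0.5 * ((t - 120) / 4.0) ** 2)
h = np.exp(-0.5 * (np.arange(-15, 16) / 6.0) ** 2)
h /= h.sum()                                     # low-pass "instrument" response
rng = np.random.default_rng(1)
y = np.convolve(truth, h, mode="same") + 0.01 * rng.standard_normal(t.size)
x_hat = constrained_van_cittert(y, h)
print(float(np.corrcoef(truth, x_hat)[0, 1]))    # similarity of estimate to the truth
```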

  16. E-Learning Quality Assurance: A Process-Oriented Lifecycle Model

    ERIC Educational Resources Information Center

    Abdous, M'hammed

    2009-01-01

    Purpose: The purpose of this paper is to propose a process-oriented lifecycle model for ensuring quality in e-learning development and delivery. As a dynamic and iterative process, quality assurance (QA) is intertwined with the e-learning development process. Design/methodology/approach: After reviewing the existing literature, particularly…

  17. Quantifying the buildup in extent and complexity of free exploration in mice

    PubMed Central

    Benjamini, Yoav; Fonio, Ehud; Galili, Tal; Havkin, Gregor Z.; Golani, Ilan

    2011-01-01

    To obtain a perspective on an animal's own functional world, we study its behavior in situations that allow the animal to regulate the growth rate of its behavior and provide us with the opportunity to quantify its moment-by-moment developmental dynamics. Thus, we are able to show that mouse exploratory behavior consists of sequences of repeated motion: iterative processes that increase in extent and complexity, whose presumed function is a systematic active management of input acquired during the exploration of a novel environment. We use this study to demonstrate our approach to quantifying behavior: targeting aspects of behavior that are shown to be actively managed by the animal, and using measures that are discriminative across strains and treatments and replicable across laboratories. PMID:21383149

  18. Ultrathin metasurface with high absorptance for waterborne sound

    NASA Astrophysics Data System (ADS)

    Mei, Jun; Zhang, Xiujuan; Wu, Ying

    2018-03-01

    We present a design for an acoustic metasurface which can efficiently absorb low-frequency sound energy in water. The metasurface has a simple structure and consists of only two common materials: water and silicone rubber. The optimized material and geometrical parameters of the designed metasurface are determined by an analytic formula in conjunction with an iterative process based on the retrieval method. Although the metasurface is as thin as 0.15 of the wavelength, it can absorb 99.7% of the normally incident sound wave energy. Furthermore, the metasurface maintains a substantially high absorptance over a relatively broad bandwidth, and also works well for oblique incidence with an incident angle of up to 50°.

  19. Virtual reality and gaming systems to improve walking and mobility for people with musculoskeletal and neuromuscular conditions.

    PubMed

    Deutsch, Judith E

    2009-01-01

    Improving walking for individuals with musculoskeletal and neuromuscular conditions is an important aspect of rehabilitation. The capabilities of clinicians who address these rehabilitation issues could be augmented with innovations such as virtual reality gaming-based technologies. The chapter provides an overview of virtual reality gaming-based technologies currently being developed and tested to improve motor and cognitive elements required for ambulation and mobility in different patient populations. Included as well is a detailed description of a single VR system, consisting of the rationale for development and iterative refinement of the system based on clinical science. These concepts include: neural plasticity, part-task training, whole task training, task specific training, principles of exercise and motor learning, sensorimotor integration, and visual spatial processing.

  20. Collaborating with human factors when designing an electronic textbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ratner, J.A.; Zadoks, R.I.; Attaway, S.W.

    The development of on-line engineering textbooks presents new challenges to authors to effectively integrate text and tools in an electronic environment. By incorporating human factors principles of interface design and cognitive psychology early in the design process, a team at Sandia National Laboratories was able to make the end product more usable and shorten the prototyping and editing phases. A critical issue was simultaneous development of paper and on-line versions of the textbook. In addition, interface consistency presented difficulties, with distinct goals and limitations for each medium. Many of these problems were resolved swiftly with human factors input using templates, style guides and iterative usability testing of both paper and on-line versions. Writing style continuity was also problematic with numerous authors contributing to the text.

  1. Groundwater Model Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of validation data to constrain model input parameters is shown for the second case study using a Bayesian approach known as Markov Chain Monte Carlo. The approach shows a great potential to be helpful in the validation process and in incorporating prior knowledge with new field data to derive posterior distributions for both model input and output.

  2. A Study of Morrison's Iterative Noise Removal Method. Final Report M. S. Thesis

    NASA Technical Reports Server (NTRS)

    Ioup, G. E.; Wright, K. A. R.

    1985-01-01

    Morrison's iterative noise removal method is studied by characterizing its effect upon systems of differing noise level and response function. The nature of data acquired from a linear shift invariant instrument is discussed so as to define the relationship between the input signal, the instrument response function, and the output signal. Fourier analysis is introduced, along with several pertinent theorems, as a tool to more thorough understanding of the nature of and difficulties with deconvolution. In relation to such difficulties the necessity of a noise removal process is discussed. Morrison's iterative noise removal method and the restrictions upon its application are developed. The nature of permissible response functions is discussed, as is the choice of the response functions used.
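
    As a small numerical illustration (not taken from the thesis) of the input/response/output relationship and of why direct Fourier-domain inversion is difficult, the sketch below convolves a sparse signal with a smooth response and shows that naive inverse filtering amplifies noise wherever the response spectrum is small, which motivates a noise-removal step before deconvolution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.zeros(n)
x[[60, 100, 150]] = [1.0, 0.5, 0.8]                          # input signal (sparse spikes)
h = np.exp(-0.5 * ((np.arange(n) - n // 2) / 5.0) ** 2)      # smooth instrument response
h /= h.sum()

H = np.fft.fft(np.fft.ifftshift(h))                          # response spectrum
y = np.real(np.fft.ifft(np.fft.fft(x) * H))                  # output = response * input
y_noisy = y + 1e-3 * rng.standard_normal(n)                  # additive measurement noise

x_naive = np.real(np.fft.ifft(np.fft.fft(y_noisy) / H))      # direct inverse filtering
print("max |naive estimate|:", np.abs(x_naive).max())        # huge: noise blown up where |H| ~ 0
```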

  3. Planning as an Iterative Process

    NASA Technical Reports Server (NTRS)

    Smith, David E.

    2012-01-01

    Activity planning for missions such as the Mars Exploration Rover mission presents many technical challenges, including oversubscription, consideration of time, concurrency, resources, preferences, and uncertainty. These challenges have all been addressed by the research community to varying degrees, but significant technical hurdles still remain. In addition, the integration of these capabilities into a single planning engine remains largely unaddressed. However, I argue that there is a deeper set of issues that needs to be considered, namely the integration of planning into an iterative process that begins before the goals, objectives, and preferences are fully defined. This introduces a number of technical challenges for planning, including the ability to more naturally specify and utilize constraints on the planning process, the ability to generate multiple qualitatively different plans, and the ability to provide deep explanation of plans.

  4. Realization of high quality production schedules: Structuring quality factors via iteration of user specification processes

    NASA Technical Reports Server (NTRS)

    Hamazaki, Takashi

    1992-01-01

    This paper describes an architecture for realizing high quality production schedules. Although quality is one of the most important aspects of production scheduling, it is difficult, even for a user, to specify precisely. However, it is also true that the decision as to whether a scheduler is good or bad can only be made by the user. This paper proposes the following: (1) the quality of a schedule can be represented in the form of quality factors, i.e. constraints and objectives of the domain, and their structure; (2) quality factors and their structure can be used for decision making at local decision points during the scheduling process; and (3) that they can be defined via iteration of user specification processes.

  5. 2009 Space Shuttle Probabilistic Risk Assessment Overview

    NASA Technical Reports Server (NTRS)

    Hamlin, Teri L.; Canga, Michael A.; Boyer, Roger L.; Thigpen, Eric B.

    2010-01-01

    Loss of a Space Shuttle during flight has severe consequences, including loss of a significant national asset; loss of national confidence and pride; and, most importantly, loss of human life. The Shuttle Probabilistic Risk Assessment (SPRA) is used to identify risk contributors and their significance; thus, assisting management in determining how to reduce risk. In 2006, an overview of the SPRA Iteration 2.1 was presented at PSAM 8 [1]. Like all successful PRAs, the SPRA is a living PRA and has undergone revisions since PSAM 8. The latest revision to the SPRA is Iteration 3.1, and it will not be the last as the Shuttle program progresses and more is learned. This paper discusses the SPRA scope, overall methodology, and results, as well as provides risk insights. The scope, assumptions, uncertainties, and limitations of this assessment provide risk-informed perspective to aid management's decision-making process. In addition, this paper compares the Iteration 3.1 analysis and results to the Iteration 2.1 analysis and results presented at PSAM 8.

  6. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.

    2012-06-15

    In geostatistics, most stochastic algorithms for simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly based on its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability using lower order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher order marginal probability constraints as used in multiple point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
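
    A minimal two-dimensional illustration of the iterative proportional fitting idea: a joint probability table is rescaled alternately so that its marginals match imposed targets. The geostatistical implementation in the article works with higher-order multivariate probabilities and sparse-matrix bookkeeping; the toy table below is an assumption for illustration.

```python
import numpy as np

def ipf_2d(p_init, row_marginal, col_marginal, n_iter=100, tol=1e-10):
    """Iteratively rescale a joint probability table so its marginals match
    the imposed row/column marginals (classic iterative proportional fitting)."""
    p = p_init.astype(float).copy()
    for _ in range(n_iter):
        p *= (row_marginal / p.sum(axis=1))[:, None]   # fit the row marginal
        p *= (col_marginal / p.sum(axis=0))[None, :]   # fit the column marginal
        if np.allclose(p.sum(axis=1), row_marginal, atol=tol):
            break
    return p

# Initial estimate of a 3x3 facies-pair probability table and its target marginals.
p0 = np.full((3, 3), 1.0 / 9.0)
rows = np.array([0.5, 0.3, 0.2])
cols = np.array([0.4, 0.4, 0.2])
p_fit = ipf_2d(p0, rows, cols)
print(p_fit.sum(axis=1), p_fit.sum(axis=0))   # both now match the imposed marginals
```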

  7. Non-iterative distance constraints enforcement for cloth drapes simulation

    NASA Astrophysics Data System (ADS)

    Hidajat, R. L. L. G.; Wibowo, Arifin, Z.; Suyitno

    2016-03-01

    A cloth simulation, which represents the behavior of cloth objects such as flags, tablecloths, or even garments, has applications in clothing animation for games and virtual shops. Elastically deformable models have been widely used to provide realistic and efficient simulation; however, the problem of overstretching is encountered. We introduce a new cloth simulation algorithm that replaces iterative distance constraint enforcement steps with non-iterative ones for preventing overstretching in a spring-mass system for cloth modeling. Our method is based on a simple position correction procedure applied at one end of a spring. In our experiments, we developed a rectangular cloth model which is initially at a horizontal position with one point fixed, and it is allowed to drape under its own weight. Our simulation is able to achieve a plausible cloth drape as in reality. This paper aims to demonstrate the reliability of our approach in overcoming overstretching while decreasing the computational cost of the constraint enforcement process, since the iterative procedure is eliminated.
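
    A minimal sketch of the kind of one-sided position correction described above: after the integration step, the free end of an over-stretched spring is pulled back so the spring does not exceed its rest length. The data structures and the rule of correcting only the free endpoint are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def enforce_distance_constraints(pos, springs, rest_lengths, fixed):
    """Single, non-iterative pass: for each spring, move its free endpoint so the
    spring length does not exceed the rest length (prevents over-stretching)."""
    for (i, j), rest in zip(springs, rest_lengths):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        if length > rest and length > 0.0:
            correction = (length - rest) * d / length
            if j not in fixed:
                pos[j] -= correction          # pull the free end back toward i
            elif i not in fixed:
                pos[i] += correction
    return pos

# Tiny example: two particles, the first one pinned, the second over-stretched.
pos = np.array([[0.0, 0.0], [0.0, -1.5]])
pos = enforce_distance_constraints(pos, springs=[(0, 1)], rest_lengths=[1.0], fixed={0})
print(pos[1])   # -> [0.0, -1.0], i.e. the spring is back at its rest length
```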

  8. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    2000-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
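
    A control-flow sketch of the outer/inner decoder interaction described above; turbo_half_iteration and rs_decode are hypothetical placeholder callables for illustration, not a real decoder implementation.

```python
def decode_concatenated(rx_block, turbo_half_iteration, rs_decode, max_iters=8):
    """Sketch of the outer/inner decoder interaction: the inner turbo decoder runs
    one iteration at a time and the Reed-Solomon outer decoder is tried after each;
    decoding stops early as soon as the outer code succeeds (reducing delay)."""
    soft_state = None
    for it in range(1, max_iters + 1):
        # One inner turbo iteration produces updated soft outputs.
        soft_state, soft_outputs = turbo_half_iteration(rx_block, soft_state)
        # Reliability-based outer decoding attempt on the current soft outputs.
        ok, decoded = rs_decode(soft_outputs)
        if ok:
            return decoded, it          # outer code satisfied: stop iterating early
    return decoded, max_iters           # give up after the preset maximum
```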

  9. An Interactive Concatenated Turbo Coding System

    NASA Technical Reports Server (NTRS)

    Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc

    1999-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.

  10. Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR).

    PubMed

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan

    2013-11-01

    Volume quantifications of lung nodules with multidetector computed tomography (CT) images provide useful information for monitoring nodule developments. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification with dose and slice thickness as additional variables. Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR, B: iNtuition), and analyzed for accuracy and precision. Precision was found to be generally comparable between FBP and iterative reconstruction with no statistically significant difference noted for different dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of accuracy on reconstruction algorithms, such that volumes quantified from scans of different reconstruction algorithms can be compared. The little difference found between the precision of FBP and iterative reconstructions could be a result of both iterative reconstruction's diminished noise reduction at the edge of the nodules and the loss of resolution at high noise levels with iterative reconstruction. The findings do not rule out a potential advantage of IR that might be evident in a study that uses a larger number of nodules or repeated scans.

  11. Learning Objects: A User-Centered Design Process

    ERIC Educational Resources Information Center

    Branon, Rovy F., III

    2011-01-01

    Design research systematically creates or improves processes, products, and programs through an iterative progression connecting practice and theory (Reinking, 2008; van den Akker, 2006). Developing new instructional systems design (ISD) processes through design research is necessary when new technologies emerge that challenge existing practices…

  12. Comparing Freshman and doctoral engineering students in design: mapping with a descriptive framework

    NASA Astrophysics Data System (ADS)

    Carmona Marques, P.

    2017-11-01

    This paper reports the results of a study of engineering students' approaches to an open-ended design problem. To carry this out, sketches and interviews were collected from 9 freshmen (first year) and 10 doctoral engineering students, when they designed solutions for orange squeezers. Sketches and interviews were analysed and mapped with a descriptive 'ideation framework' (IF) of the design process, to document and compare their design creativity (Carmona Marques, P., A. Silva, E. Henriques, and C. Magee. 2014. "A Descriptive Framework of the Design Process from a Dual Cognitive Engineering Perspective." International Journal of Design Creativity and Innovation 2 (3): 142-164). The results show that the designers worked in a manner largely consistent with the IF for generalisation and specialisation loops. Also, doctoral students produced more alternative solutions during the ideation process. In addition, compared to freshmen, doctoral students used the generalisation loop of the IF, working at higher levels of abstraction. The iterative nature of design is highlighted in this study - a potential contribution to decreasing the gap between the two groups in engineering education.

  13. Recommendations on basic requirements for intensive care units: structural and organizational aspects.

    PubMed

    Valentin, Andreas; Ferdinande, Patrick

    2011-10-01

    To provide guidance and recommendations for the planning or renovation of intensive care units (ICUs) with respect to the specific characteristics relevant to organizational and structural aspects of intensive care medicine. The Working Group on Quality Improvement (WGQI) of the European Society of Intensive Care Medicine (ESICM) identified the basic requirements for ICUs by a comprehensive literature search and an iterative process with several rounds of consensus finding with the participation of 47 intensive care physicians from 23 countries. The starting point of this process was an ESICM recommendation published in 1997 with the need for an updated version. The document consists of operational guidelines and design recommendations for ICUs. In the first part it covers the definition and objectives of an ICU, functional criteria, activity criteria, and the management of equipment. The second part deals with recommendations with respect to the planning process, floorplan and connections, accommodation, fire safety, central services, and the necessary communication systems. This document provides a detailed framework for the planning or renovation of ICUs based on a multinational consensus within the ESICM.

  14. An Implementation Methodology and Software Tool for an Entropy Based Engineering Model for Evolving Systems

    DTIC Science & Technology

    2003-06-01

    delivery Data Access (1980s): "What were unit sales in New England last March?" Relational databases (RDBMS), Structured Query Language (SQL), macros written in Visual Basic for Applications (VBA). Figure 20 (iteration two class diagram): a Tech OASIS export script, an import filter, a data processing method, MS Excel, and a VBA macro, linked by contains, sends-data-to, and executes relationships.

  15. Recent advances in Lanczos-based iterative methods for nonsymmetric linear systems

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Golub, Gene H.; Nachtigal, Noel M.

    1992-01-01

    In recent years, there has been a true revival of the nonsymmetric Lanczos method. On the one hand, the possible breakdowns in the classical algorithm are now better understood, and so-called look-ahead variants of the Lanczos process have been developed, which remedy this problem. On the other hand, various new Lanczos-based iterative schemes for solving nonsymmetric linear systems have been proposed. This paper gives a survey of some of these recent developments.

  16. Increasing feasibility of the field-programmable gate array implementation of an iterative image registration using a kernel-warping algorithm

    NASA Astrophysics Data System (ADS)

    Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.

    2017-09-01

    Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. Many algorithms have been proposed for this task, the most common being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for the field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system rather than a single-FPGA system is successfully developed to implement the KWA, in order to compensate for the limited hardware resources of a single FPGA and to increase the parallel processing ability and scalability of the system.

  17. Application of an improved maximum correlated kurtosis deconvolution method for fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo

    2017-08-01

    The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven to be an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period as it is updated after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iterative counts. Compared with MCKD, IMCKD has three advantages. First, without considering the prior period and the choice of the order of shift, IMCKD is more efficient and has higher robustness. Second, the resampling process is not necessary for IMCKD, which is greatly convenient for the subsequent frequency spectrum analysis and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
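
    A minimal sketch of the period-estimation step described above: take the envelope via the Hilbert transform and locate the first dominant peak of its autocorrelation. The minimum-lag guard and the synthetic test signal are assumptions for illustration.

```python
import numpy as np
from scipy.signal import hilbert

def estimate_fault_period(x, fs, min_lag_s=0.002):
    """Estimate the dominant repetition period of impulsive content in x by
    locating the first major peak of the autocorrelation of its envelope."""
    env = np.abs(hilbert(x))                                   # signal envelope
    env -= env.mean()
    ac = np.correlate(env, env, mode="full")[env.size - 1:]    # one-sided autocorrelation
    min_lag = int(min_lag_s * fs)                              # skip the zero-lag main lobe
    lag = min_lag + int(np.argmax(ac[min_lag:]))
    return lag / fs                                            # estimated period in seconds

# Synthetic test: impulses repeating every 10 ms buried in noise, sampled at 10 kHz.
fs = 10000
x = 0.05 * np.random.default_rng(2).standard_normal(fs // 2)   # 0.5 s of noise
x[::int(0.01 * fs)] += 1.0                                      # periodic fault impulses
print(estimate_fault_period(x, fs))                             # ~0.01 s
```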

  18. Combined dry plasma etching and online metrology for manufacturing highly focusing x-ray mirrors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berujon, S., E-mail: berujon@esrf.eu; Ziegler, E., E-mail: ziegler@esrf.eu; Cunha, S. da

    A new figuring station was designed and installed at the ESRF beamline BM05. It allows the figuring of mirrors within an iterative process combining the advantage of online metrology with dry etching. The complete process takes place under a vacuum environment to minimize surface contamination while non-contact surfacing tools open up the possibility of performing at-wavelength metrology and eliminating placement errors. The aim is to produce mirrors whose slopes do not deviate from the stigmatic profile by more than 0.1 µrad rms while keeping surface roughness in the acceptable limit of 0.1-0.2 nm rms. The desired elliptical mirror surface shape can be achieved in a few iterations in about a one-day time span. This paper describes some of the important aspects of the process regarding both the online metrology and the etching process.

  19. A superior edge preserving filter with a systematic analysis

    NASA Technical Reports Server (NTRS)

    Holladay, Kenneth W.; Rickman, Doug

    1991-01-01

    A new, adaptive, edge preserving filter for use in image processing is presented. It has superior performance when compared to other filters. Termed the contiguous K-average, it aggregates pixels by examining all pixels contiguous to an existing cluster and adding the pixel closest to the mean of the existing cluster. The process is iterated until K pixels are accumulated. Rather than simply comparing the visual results of processing with this operator to those of other filters, some approaches were developed which allow quantitative evaluation of how well a filter performs. Particular attention is given to the standard deviation of noise within a feature and the stability of imagery under iterative processing. Demonstrations illustrate the performance of several filters in discriminating against noise and retaining edges, the effect of filtering as a preprocessing step, and the utility of the contiguous K-average filter when used with remote sensing data.
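
    A direct, unoptimized sketch of the aggregation rule for a single pixel, under the assumption of 4-connectivity: grow a cluster from the pixel by repeatedly adding the contiguous pixel whose value is closest to the current cluster mean, then output the mean of the K accumulated pixels.

```python
import numpy as np

def contiguous_k_average(img, r, c, k=9):
    """Filter one pixel of a 2-D image with the contiguous K-average rule
    (illustrative, brute-force implementation)."""
    cluster = {(r, c)}
    total = float(img[r, c])
    while len(cluster) < k:
        # All 4-connected pixels touching the current cluster but not yet in it.
        frontier = set()
        for (i, j) in cluster:
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < img.shape[0] and 0 <= nj < img.shape[1] and (ni, nj) not in cluster:
                    frontier.add((ni, nj))
        if not frontier:
            break
        mean = total / len(cluster)
        # Add the contiguous pixel whose value is closest to the current cluster mean.
        best = min(frontier, key=lambda p: abs(float(img[p]) - mean))
        cluster.add(best)
        total += float(img[best])
    return total / len(cluster)

img = np.random.default_rng(3).integers(0, 255, size=(32, 32)).astype(float)
print(contiguous_k_average(img, 10, 10, k=9))   # filtered value for pixel (10, 10)
```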

  20. Convergence of quasiparticle self-consistent G W calculations of transition-metal monoxides

    NASA Astrophysics Data System (ADS)

    Das, Suvadip; Coulter, John E.; Manousakis, Efstratios

    2015-03-01

    Finding an accurate ab initio approach for calculating the electronic properties of transition-metal oxides has been a problem for several decades. In this paper, we investigate the electronic structure of the transition-metal monoxides MnO, CoO, and NiO in their undistorted rocksalt structure within a fully iterated quasiparticle self-consistent GW (QPscGW) scheme. We study the convergence of the QPscGW method, i.e., how the quasiparticle energy eigenvalues and wave functions converge as a function of the QPscGW iterations, and we compare the converged outputs obtained from different starting wave functions. We find that the convergence is slow and that a one-shot G0W0 calculation does not significantly improve the initial eigenvalues and states. It is important to notice that in some cases the "path" to convergence may go through energy band reordering which cannot be captured by the simple initial unperturbed Hamiltonian. When we reach a fully iterated solution, the converged density of states, band gaps, and magnetic moments of these oxides are found to be only weakly dependent on the choice of the starting wave functions and in reasonably good agreement with the experiment. Finally, this approach provides a clear picture of the interplay between the various orbitals near the Fermi level of these simple transition-metal monoxides. The results of these accurate ab initio calculations can provide input for models aiming at describing the low-energy physics in these materials.

  1. Acceleration of GPU-based Krylov solvers via data transfer reduction

    DOE PAGES

    Anzt, Hartwig; Tomov, Stanimire; Luszczek, Piotr; ...

    2015-04-08

    Krylov subspace iterative solvers are often the method of choice when solving large sparse linear systems. At the same time, hardware accelerators such as graphics processing units continue to offer significant floating point performance gains for matrix and vector computations through easy-to-use libraries of computational kernels. However, as these libraries are usually composed of a well optimized but limited set of linear algebra operations, applications that use them often fail to reduce certain data communications, and hence fail to leverage the full potential of the accelerator. In this study, we target the acceleration of Krylov subspace iterative methods for graphics processing units, and in particular the Biconjugate Gradient Stabilized solver, showing that significant improvement can be achieved by reformulating the method to reduce data communications through application-specific kernels instead of using the generic BLAS kernels, e.g. as provided by NVIDIA’s cuBLAS library, and by designing a graphics processing unit specific sparse matrix-vector product kernel that is able to more efficiently use the graphics processing unit’s computing power. Furthermore, we derive a model estimating the performance improvement, and use experimental data to validate the expected runtime savings. Finally, considering that the derived implementation achieves significantly higher performance, we assert that similar optimizations addressing algorithm structure, as well as the sparse matrix-vector product, are crucial for the subsequent development of high-performance graphics processing unit accelerated Krylov subspace iterative methods.
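
    For orientation only, the host-side sketch below runs a generic CPU BiCGSTAB solve (SciPy) on a sparse model system; it shows the kind of Krylov iteration being accelerated, not the GPU kernel fusion described in the study.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# Sparse, diagonally dominant tridiagonal system as a stand-in for a large
# application matrix (kept well conditioned so the solve converges quickly).
n = 10000
A = diags([-1.0, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

it_count = [0]
def count_iteration(xk):            # callback invoked once per BiCGSTAB iteration
    it_count[0] += 1

x, info = bicgstab(A, b, callback=count_iteration, maxiter=1000)
print("info:", info, "iterations:", it_count[0])
print("residual norm:", np.linalg.norm(b - A @ x))
```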

  2. A semi-automatic method for analysis of landscape elements using Shuttle Radar Topography Mission and Landsat ETM+ data

    NASA Astrophysics Data System (ADS)

    Ehsani, Amir Houshang; Quiel, Friedrich

    2009-02-01

    In this paper, we demonstrate artificial neural networks—self-organizing map (SOM)—as a semi-automatic method for extraction and analysis of landscape elements in the man and biosphere reserve "Eastern Carpathians". The Shuttle Radar Topography Mission (SRTM) collected data to produce generally available digital elevation models (DEM). Together with Landsat Thematic Mapper data, this provides a unique, consistent and nearly worldwide data set. To integrate the DEM with Landsat data, it was re-projected from geographic coordinates to UTM with 28.5 m spatial resolution using cubic convolution interpolation. To provide quantitative morphometric parameters, first-order (slope) and second-order derivatives of the DEM—minimum curvature, maximum curvature and cross-sectional curvature—were calculated by fitting a bivariate quadratic surface with a window size of 9×9 pixels. These surface curvatures are strongly related to landform features and geomorphological processes. Four morphometric parameters and seven Landsat-enhanced thematic mapper (ETM+) bands were used as input for the SOM algorithm. Once the network weights have been randomly initialized, different learning parameter sets, e.g. initial radius, final radius and number of iterations, were investigated. An optimal SOM with 20 classes using 1000 iterations and a final neighborhood radius of 0.05 provided a low average quantization error of 0.3394 and was used for further analysis. The effect of randomization of initial weights for optimal SOM was also studied. Feature space analysis, three-dimensional inspection and auxiliary data facilitated the assignment of semantic meaning to the output classes in terms of landform, based on morphometric analysis, and land use, based on spectral properties. Results were displayed as thematic map of landscape elements according to form, cover and slope. Spectral and morphometric signature analysis with corresponding zoom samples superimposed by contour lines were compared in detail to clarify the role of morphometric parameters to separate landscape elements. The results revealed the efficiency of SOM to integrate SRTM and Landsat data in landscape analysis. Despite the stochastic nature of SOM, the results in this particular study are not sensitive to randomization of initial weight vectors if many iterations are used. This procedure is reproducible for the same application with consistent results.
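
    A compact sketch of a one-dimensional SOM training loop on generic feature vectors (stand-ins for the 11-band morphometric/spectral pixels); the learning-rate and radius schedules below are assumptions, not the parameter sets tuned in the study.

```python
import numpy as np

def train_som(data, n_nodes=20, n_iter=1000, r0=3.0, r_final=0.05, lr0=0.5):
    """Train a 1-D self-organizing map: find the best-matching unit for each sample
    and pull it and its neighbours (within a shrinking radius) toward the sample."""
    rng = np.random.default_rng(0)
    weights = rng.random((n_nodes, data.shape[1]))
    node_idx = np.arange(n_nodes)
    for t in range(n_iter):
        frac = t / n_iter
        radius = r0 * (r_final / r0) ** frac           # exponentially shrinking radius
        lr = lr0 * (0.01 / lr0) ** frac                # decaying learning rate
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
        influence = np.exp(-((node_idx - bmu) ** 2) / (2.0 * radius ** 2))
        weights += lr * influence[:, None] * (x - weights)
    return weights

# Example: cluster 11-dimensional "pixels" (4 morphometric + 7 spectral features).
data = np.random.default_rng(1).random((5000, 11))
codebook = train_som(data, n_nodes=20, n_iter=1000)
labels = np.argmin(np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2), axis=1)
```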

  3. Solving Boltzmann and Fokker-Planck Equations Using Sparse Representation

    DTIC Science & Technology

    2011-05-31

    material science. We have computed the electronic structure of a 2D quantum dot system, and compared the efficiency with the benchmark software OCTOPUS. For...one self-consistent iteration step with 512 electrons, OCTOPUS costs 1091 sec, and selected inversion costs 9.76 sec. The algorithm exhibits

  4. Methodology for CFD Design Analysis of National Launch System Nozzle Manifold

    NASA Technical Reports Server (NTRS)

    Haire, Scot L.

    1993-01-01

    The current design environment dictates that high technology CFD (Computational Fluid Dynamics) analysis produce quality results in a timely manner if it is to be integrated into the design process. The design methodology outlined describes the CFD analysis of an NLS (National Launch System) nozzle film cooling manifold. The objective of the analysis was to obtain a qualitative estimate for the flow distribution within the manifold. A complex, 3D, multiple zone, structured grid was generated from a 3D CAD file of the geometry. A Euler solution was computed with a fully implicit compressible flow solver. Post processing consisted of full 3D color graphics and mass averaged performance. The result was a qualitative CFD solution that provided the design team with relevant information concerning the flow distribution in and performance characteristics of the film cooling manifold within an effective time frame. Also, this design methodology was the foundation for a quick turnaround CFD analysis of the next iteration in the manifold design.

  5. Efficient generation of discontinuity-preserving adaptive triangulations from range images.

    PubMed

    Garcia, Miguel Angel; Sappa, Angel Domingo

    2004-10-01

    This paper presents an efficient technique for generating adaptive triangular meshes from range images. The algorithm consists of two stages. First, a user-defined number of points is adaptively sampled from the given range image. Those points are chosen by taking into account the surface shapes represented in the range image in such a way that points tend to group in areas of high curvature and to disperse in low-variation regions. This selection process is done through a noniterative, inherently parallel algorithm in order to gain efficiency. Once the image has been subsampled, the second stage applies a two and one half-dimensional Delaunay triangulation to obtain an initial triangular mesh. To favor the preservation of surface and orientation discontinuities (jump and crease edges) present in the original range image, the aforementioned triangular mesh is iteratively modified by applying an efficient edge flipping technique. Results with real range images show accurate triangular approximations of the given range images with low processing times.

  6. Parameter Identification Of Multilayer Thermal Insulation By Inverse Problems

    NASA Astrophysics Data System (ADS)

    Nenarokomov, Aleksey V.; Alifanov, Oleg M.; Gonzalez, Vivaldo M.

    2012-07-01

    The purpose of this paper is to introduce an iterative regularization method for the research of radiative and thermal properties of materials, with further applications in the design of Thermal Control Systems (TCS) of spacecraft. In this paper the radiative and thermal properties (heat capacity, emissivity and thermal conductance) of a multilayered thermal-insulating blanket (MLI), which is a screen-vacuum thermal insulation forming part of the TCS for prospective spacecraft, are estimated. Properties of the materials under study are determined by processing temperature and heat flux measurement data based on the solution of the Inverse Heat Transfer Problem (IHTP) technique. Given are physical and mathematical models of heat transfer processes in a specimen of the multilayered thermal-insulating blanket located in the experimental facility. A mathematical formulation of the IHTP, based on the sensitivity function approach, is presented as well. The practical testing was performed for a specimen of a real MLI. This paper builds on recent research, which developed the approach suggested in [1].

  7. A comparison of representations for discrete multi-criteria decision problems

    PubMed Central

    Gettinger, Johannes; Kiesling, Elmar; Stummer, Christian; Vetschera, Rudolf

    2013-01-01

    Discrete multi-criteria decision problems with numerous Pareto-efficient solution candidates place a significant cognitive burden on the decision maker. An interactive, aspiration-based search process that iteratively progresses toward the most preferred solution can alleviate this task. In this paper, we study three ways of representing such problems in a DSS, and compare them in a laboratory experiment using subjective and objective measures of the decision process as well as solution quality and problem understanding. In addition to an immediate user evaluation, we performed a re-evaluation several weeks later. Furthermore, we consider several levels of problem complexity and user characteristics. Results indicate that different problem representations have a considerable influence on search behavior, although long-term consistency appears to remain unaffected. We also found interesting discrepancies between subjective evaluations and objective measures. Conclusions from our experiments can help designers of DSS for large multi-criteria decision problems to fit problem representations to the goals of their system and the specific task at hand. PMID:24882912

  8. Profile negotiation - A concept for integrating airborne and ground-based automation for managing arrival traffic

    NASA Technical Reports Server (NTRS)

    Green, Steven M.; Den Braven, Wim; Williams, David H.

    1991-01-01

    The profile negotiation process (PNP) concept as applied to the management of arrival traffic within the extended terminal area is presented, focusing on functional issues from the ground-based perspective. The PNP is an interactive process between an aircraft and air traffic control (ATC) which combines airborne and ground-based automation capabilities to determine conflict-free trajectories that are as close to an aircraft's preference as possible. Preliminary results from a real-time simulation study show that the controller teams are able to consistently and effectively negotiate conflict-free vertical profiles with 4D-equipped aircraft. The ability of the airborne 4D flight management system to adapt to ATC specified 4D trajectory constraints is found to be a requirement for successful execution of the PNP. It is recommended that the conventional method of cost index iteration for obtaining the minimum fuel 4D trajectory be supplemented by a method which constrains the profile speeds to those desired by ATC.

  9. Quantum supercharger library: hyper-parallelism of the Hartree-Fock method.

    PubMed

    Fernandes, Kyle D; Renison, C Alicia; Naidoo, Kevin J

    2015-07-05

    We present here a set of algorithms that completely rewrites the Hartree-Fock (HF) computations common to many legacy electronic structure packages (such as GAMESS-US, GAMESS-UK, and NWChem) into a massively parallel compute scheme that takes advantage of hardware accelerators such as Graphical Processing Units (GPUs). The HF compute algorithm is core to a library of routines that we name the Quantum Supercharger Library (QSL). We briefly evaluate the QSL's performance and report that it accelerates a HF 6-31G Self-Consistent Field (SCF) computation by up to 20 times for medium sized molecules (such as a buckyball) when compared with mature Central Processing Unit algorithms available in the legacy codes in regular use by researchers. It achieves this acceleration by massive parallelization of the one- and two-electron integrals and optimization of the SCF and Direct Inversion in the Iterative Subspace routines through the use of GPU linear algebra libraries. © 2015 Wiley Periodicals, Inc.

  10. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach

    PubMed Central

    Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J.; Stayman, J. Webster; Zbijewski, Wojciech; Brock, Kristy K.; Daly, Michael J.; Chan, Harley; Irish, Jonathan C.; Siewerdsen, Jeffrey H.

    2011-01-01

    Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values (“intensity”). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and∕or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance. PMID:21626913

  11. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali

    2011-04-15

    Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.

  12. Low dose dynamic myocardial CT perfusion using advanced iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Eck, Brendan L.; Fahmi, Rachid; Fuqua, Christopher; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.

    2015-03-01

    Dynamic myocardial CT perfusion (CTP) can provide quantitative functional information for the assessment of coronary artery disease. However, x-ray dose in dynamic CTP is high, typically from 10mSv to >20mSv. We compared the dose reduction potential of advanced iterative reconstruction, Iterative Model Reconstruction (IMR, Philips Healthcare, Cleveland, Ohio) to hybrid iterative reconstruction (iDose4) and filtered back projection (FBP). Dynamic CTP scans were obtained using a porcine model with balloon-induced ischemia in the left anterior descending coronary artery to prescribed fractional flow reserve values. High dose dynamic CTP scans were acquired at 100kVp/100mAs with effective dose of 23mSv. Low dose scans at 75mAs, 50mAs, and 25mAs were simulated by adding x-ray quantum noise and detector electronic noise to the projection space data. Images were reconstructed with FBP, iDose4, and IMR at each dose level. Image quality in static CTP images was assessed by SNR and CNR. Blood flow was obtained using a dynamic CTP analysis pipeline and blood flow image quality was assessed using flow-SNR and flow-CNR. IMR showed highest static image quality according to SNR and CNR. Blood flow in FBP was increasingly over-estimated at reduced dose. Flow was more consistent for iDose4 from 100mAs to 50mAs, but was over-estimated at 25mAs. IMR was most consistent from 100mAs to 25mAs. Static images and flow maps for 100mAs FBP, 50mAs iDose4, and 25mAs IMR showed comparable, clear ischemia, CNR, and flow-CNR values. These results suggest that IMR can enable dynamic CTP at significantly reduced dose, at 5.8mSv or 25% of the comparable 23mSv FBP protocol.

  13. A clinical reasoning model focused on clients' behaviour change with reference to physiotherapists: its multiphase development and validation.

    PubMed

    Elvén, Maria; Hochwälder, Jacek; Dean, Elizabeth; Söderlund, Anne

    2015-05-01

    A biopsychosocial approach and behaviour change strategies have long been proposed to serve as a basis for addressing current multifaceted health problems. This emphasis has implications for clinical reasoning of health professionals. This study's aim was to develop and validate a conceptual model to guide physiotherapists' clinical reasoning focused on clients' behaviour change. Phase 1 consisted of the exploration of existing research and the research team's experiences and knowledge. Phases 2a and 2b consisted of validation and refinement of the model based on input from physiotherapy students in two focus groups (n = 5 per group) and from experts in behavioural medicine (n = 9). Phase 1 generated theoretical and evidence bases for the first version of a model. Phases 2a and 2b established the validity and value of the model. The final model described clinical reasoning focused on clients' behaviour change as a cognitive, reflective, collaborative and iterative process with multiple interrelated levels that included input from the client and physiotherapist, a functional behavioural analysis of the activity-related target behaviour and the selection of strategies for behaviour change. This unique model, theory- and evidence-informed, has been developed to help physiotherapists to apply clinical reasoning systematically in the process of behaviour change with their clients.

  14. Generalized emission functions for photon emission from quark-gluon plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suryanarayana, S. V.

    The Landau-Pomeranchuk-Migdal effects on photon emission from the quark-gluon plasma have been studied as a function of photon mass, at a fixed temperature of the plasma. The integral equations for the transverse vector function $\tilde{f}(\tilde{p}_\perp)$ and the longitudinal function $\tilde{g}(\tilde{p}_\perp)$ consisting of multiple scattering effects are solved by the self-consistent iterations method and also by the variational method for the variable set $\{p_0, q_0, Q^2\}$. We considered the bremsstrahlung and the off-shell annihilation (aws) processes. We define two new dynamical scaling variables, $x_T$ and $x_L$, for the bremsstrahlung and aws processes, which are functions of the variables $p_0$, $q_0$, $Q^2$. We define four new emission functions for massive photon emission, represented by $g_T^b$, $g_T^a$, $g_L^b$, $g_L^a$, and we constructed these using the exact numerical solutions of the integral equations. These four emission functions have been parametrized by suitable simple empirical fits. Using the empirical emission functions, we calculated the imaginary part of the photon polarization tensor as a function of photon mass and energy.

  15. CONTRIBUTIONS OF CHEMICAL AND DIFFUSIVE EXCHANGE TO T1ρ DISPERSION

    PubMed Central

    Cobb, Jared Guthrie; Xie, Jingping; Gore, John C.

    2012-01-01

    Variations in local magnetic susceptibility may induce magnetic field gradients that affect the signals acquired for MR imaging. Under appropriate diffusion conditions, such fields produce effects similar to slow chemical exchange. These effects may also be found in combination with other chemical exchange processes at multiple time scales. We investigate these effects with simulations and measurements to determine their contributions to rotating frame (R1ρ) relaxation in model systems. Simulations of diffusive and chemical exchange effects on R1ρ dispersion were performed using the Bloch equations. Additionally, R1ρ dispersion was measured in suspensions of Sephadex and latex beads with varying spin locking fields at 9.4T. A novel analysis method was used to iteratively fit for apparent chemical and diffusive exchange rates with a model by Chopra et al. Single- and double-inflection points in R1ρ dispersion profiles were observed, respectively, in simulations of slow diffusive exchange alone and when combined with rapid chemical exchange. These simulations were consistent with measurements of R1ρ in latex bead suspensions and small-diameter Sephadex beads that showed single- and double-inflection points, respectively. These observations, along with measurements following changes in temperature and pH, are consistent with the combined effects of slow diffusion and rapid −OH exchange processes. PMID:22791589

  16. Iterative categorization (IC): a systematic technique for analysing qualitative data

    PubMed Central

    2016-01-01

    Abstract The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. PMID:26806155

  17. A Model For Selecting An Environmentally Responsive Trait: Evaluating Micro-scale Fitness Through UV-C Resistance and Exposure in Escherichia coli.

    NASA Astrophysics Data System (ADS)

    Schenone, D. J.; Igama, S.; Marash-Whitman, D.; Sloan, C.; Okansinski, A.; Moffet, A.; Grace, J. M.; Gentry, D.

    2015-12-01

    Experimental evolution of microorganisms in controlled microenvironments serves as a powerful tool for understanding the relationship between micro-scale microbial interactions and local- to global-scale environmental factors. In response to iterative and targeted environmental pressures, mutagenesis drives the emergence of novel phenotypes. Current methods to induce expression of these phenotypes require repetitive and time-intensive procedures and do not allow for the continuous monitoring of conditions such as optical density, pH and temperature. To address this shortcoming, an Automated Dynamic Directed Evolution Chamber is being developed. It will initially produce Escherichia coli cells with an elevated UV-C resistance phenotype and will ultimately be adapted for different organisms as well as for studying environmental effects. A useful phenotype and environmental factor for examining this relationship is UV-C resistance and exposure. In order to build a baseline for the device's operational parameters, a UV-C assay was performed on six E. coli replicates with three exposure fluxes across seven iterations. The fluxes included a 0 second exposure (control), 6 seconds at 3.3 J/m2/s and 40 seconds at 0.5 J/m2/s. After each iteration the cells were regrown and tested for UV-C resistance. We sought to quantify the increase and variability of UV-C resistance among different fluxes, and to observe changes in each replicate at each iteration in terms of variance. Under different fluxes, we observed that the 0s control showed no significant increase in resistance, while the 6s/40s fluxes showed increased resistance as the number of iterations increased. A one-million-fold increase in survivability was observed after seven iterations. Through statistical analysis using Spearman's rank correlation, the 40s exposure showed signs of more consistently increased resistance, but seven iterations was insufficient to demonstrate statistical significance; to test this further, our experiments will include more iterations. Furthermore, we plan to sequence all the replicates. As adaptation dynamics under intense UV exposure lead to a high rate of change, it would be useful to observe differences in tolerance-related and non-tolerance-related genes between the original and UV-resistant strains.

  18. Towards the low-dose characterization of beam sensitive nanostructures via implementation of sparse image acquisition in scanning transmission electron microscopy

    NASA Astrophysics Data System (ADS)

    Hwang, Sunghwan; Han, Chang Wan; Venkatakrishnan, Singanallur V.; Bouman, Charles A.; Ortalan, Volkan

    2017-04-01

    Scanning transmission electron microscopy (STEM) has been successfully utilized to investigate the atomic structure and chemistry of materials with atomic resolution. However, STEM's focused electron probe with a high current density causes electron beam damage, including radiolysis and knock-on damage, when the focused probe is exposed to electron-beam sensitive materials. Therefore, it is highly desirable to decrease the electron dose used in STEM for the investigation of biological/organic molecules, soft materials and nanomaterials in general. With the recent emergence of novel sparse signal processing theories, such as compressive sensing and model-based iterative reconstruction, possibilities of operating STEM under a sparse acquisition scheme to reduce the electron dose have opened up. In this paper, we report our recent approach to implementing sparse acquisition in STEM mode, executed by a random sparse scan and a signal processing algorithm called model-based iterative reconstruction (MBIR). In this method, a small portion, such as 5%, of randomly chosen unit sampling areas (i.e. electron probe positions), which correspond to pixels of a STEM image, within the region of interest (ROI) of the specimen is scanned with an electron probe to obtain a sparse image. Sparse images are then reconstructed using the MBIR inpainting algorithm to produce an image of the specimen at the original resolution that is consistent with an image obtained using conventional scanning methods. Experimental results for sampling down to 5% show consistency with the full STEM image acquired by the conventional scanning method. Although practical limitations of conventional STEM instruments, such as internal delays of the STEM control electronics and the continuous electron gun emission, currently hinder achieving the full potential of sparse acquisition STEM for the low-dose imaging conditions required to investigate beam-sensitive materials, the results obtained in our experiments demonstrate that sparse acquisition STEM imaging is potentially capable of reducing the electron dose by at least 20 times, expanding the frontiers of our characterization capabilities for the investigation of biological/organic molecules, polymers, soft materials and nanostructures in general.
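
    The following is a minimal, self-contained sketch (in Python/NumPy) of the two ingredients described above: a random sparse scan that visits only a fraction of the probe positions, and an iterative inpainting step that fills the unvisited pixels. The diffusion-style inpainting used here is only a simple stand-in for the MBIR algorithm of the paper, and the synthetic image and 5% sampling fraction are illustrative assumptions.

```python
import numpy as np

def sparse_scan(image, fraction=0.05, rng=None):
    """Simulate a random sparse STEM scan: keep only `fraction` of probe positions."""
    rng = np.random.default_rng(rng)
    mask = rng.random(image.shape) < fraction          # True where the probe visited
    sparse = np.where(mask, image, 0.0)
    return sparse, mask

def inpaint(sparse, mask, n_iter=500):
    """Very simple iterative (diffusion-style) inpainting; MBIR itself uses a
    statistical forward model and prior, which is not reproduced here."""
    est = sparse.copy()
    for _ in range(n_iter):
        # average of the 4-neighbourhood
        smoothed = 0.25 * (np.roll(est, 1, 0) + np.roll(est, -1, 0) +
                           np.roll(est, 1, 1) + np.roll(est, -1, 1))
        est = np.where(mask, sparse, smoothed)          # keep measured pixels fixed
    return est

# toy example: a synthetic "STEM image" with a bright disc
y, x = np.mgrid[0:128, 0:128]
img = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 20 ** 2))
sparse, mask = sparse_scan(img, fraction=0.05, rng=0)
recon = inpaint(sparse, mask)
print("RMSE:", np.sqrt(np.mean((recon - img) ** 2)))
```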

  19. Iterating between Tools to Create and Edit Visualizations.

    PubMed

    Bigelow, Alex; Drucker, Steven; Fisher, Danyel; Meyer, Miriah

    2017-01-01

    A common workflow for visualization designers begins with a generative tool, like D3 or Processing, to create the initial visualization; and proceeds to a drawing tool, like Adobe Illustrator or Inkscape, for editing and cleaning. Unfortunately, this is typically a one-way process: once a visualization is exported from the generative tool into a drawing tool, it is difficult to make further, data-driven changes. In this paper, we propose a bridge model to allow designers to bring their work back from the drawing tool to re-edit in the generative tool. Our key insight is to recast this iteration challenge as a merge problem - similar to when two people are editing a document and the changes between them need to be reconciled. We also present a specific instantiation of this model, a tool called Hanpuku, which bridges between D3 scripts and Illustrator. We show several examples of visualizations that are iteratively created using Hanpuku in order to illustrate the flexibility of the approach. We further describe several hypothetical tools that bridge between other visualization tools to emphasize the generality of the model.

  20. Dynamics of a new family of iterative processes for quadratic polynomials

    NASA Astrophysics Data System (ADS)

    Gutiérrez, J. M.; Hernández, M. A.; Romero, N.

    2010-03-01

    In this work we show the presence of the well-known Catalan numbers in the study of the convergence and the dynamical behavior of a family of iterative methods for solving nonlinear equations. In fact, we introduce a family of methods depending on a parameter m. These methods reach the order of convergence m+2 when they are applied to quadratic polynomials with different roots. Newton's and Chebyshev's methods appear as particular members of the family for m=0 and m=1, respectively. We make both analytical and graphical studies of these methods, which give rise to rational functions defined in the extended complex plane. Firstly, we prove that the coefficients of the aforementioned family of iterative processes can be written in terms of the Catalan numbers. Secondly, we make an incursion into their dynamical behavior. In fact, we show that the rational maps related to these methods can be written in terms of the entries of the Catalan triangle. Next we analyze their general convergence, including some computer plots showing the intricate structure of the universal Julia sets associated with the methods.
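
    As a small illustration of the objects involved, the sketch below computes the first Catalan numbers and runs Newton's method, which the abstract identifies as the m = 0 member of the family, on a quadratic polynomial; the general m-parameter family itself is not reproduced here.

```python
from math import comb

def catalan(n):
    """n-th Catalan number, C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

def newton_quadratic(a, b, c, z0, tol=1e-12, max_iter=100):
    """Newton's method (the m = 0 member of the family) on p(z) = a z^2 + b z + c."""
    z = z0
    for _ in range(max_iter):
        p, dp = a * z * z + b * z + c, 2 * a * z + b
        z_new = z - p / dp
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

print([catalan(n) for n in range(6)])       # 1, 1, 2, 5, 14, 42
print(newton_quadratic(1, -3, 2, z0=10.0))  # converges to the root 2
```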

  1. A finite element analysis modeling tool for solid oxide fuel cell development: coupled electrochemistry, thermal and flow analysis in MARC®

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khaleel, Mohammad A.; Lin, Zijing; Singh, Prabhakar

    2004-05-03

    A 3D simulation tool for modeling solid oxide fuel cells is described. The tool combines the versatility and efficiency of a commercial finite element analysis code, MARC®, with an in-house developed robust and flexible electrochemical (EC) module. Based upon characteristic parameters obtained experimentally and assigned by the user, the EC module calculates the current density distribution, heat generation, and fuel and oxidant species concentration, taking as input the temperature profile provided by MARC® and operating conditions such as the fuel and oxidant flow rates and the total stack output voltage or current. MARC® performs flow and thermal analyses based on the initial and boundary thermal and flow conditions and the heat generation calculated by the EC module. The main coupling between MARC® and EC is for MARC® to supply the temperature field to EC and for EC to give the heat generation profile to MARC®. The loosely coupled, iterative scheme is advantageous in terms of memory requirement, numerical stability and computational efficiency. The coupling is iterated to self-consistency for a steady-state solution. Sample results for steady states as well as the startup process for stacks with different flow designs are presented to illustrate the modeling capability and numerical performance characteristics of the simulation tool.
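
    A toy sketch of the loosely coupled, iterative scheme described above: a stand-in thermal solve maps heat generation to temperature, a stand-in EC step maps temperature back to heat generation, and the two are alternated (with under-relaxation) until self-consistent. The response functions and numbers are hypothetical placeholders, not the MARC® or EC models.

```python
import numpy as np

def thermal_solve(q):
    """Stand-in for the FEA thermal analysis: temperature rises with local heat generation."""
    T_ambient = 1000.0                       # K, hypothetical operating temperature
    return T_ambient + 50.0 * q              # crude linear response

def ec_module(T):
    """Stand-in for the electrochemical module: heat generation falls as temperature rises."""
    return 2.0 / (1.0 + 0.001 * (T - 1000.0))

# loosely coupled iteration to self-consistency
q = np.ones(10)                              # initial heat-generation profile (a.u.)
for it in range(100):
    T = thermal_solve(q)                     # thermal step: temperature from heat generation
    q_new = ec_module(T)                     # EC step: heat generation from temperature
    if np.max(np.abs(q_new - q)) < 1e-8:     # self-consistency check
        break
    q = 0.5 * q + 0.5 * q_new                # under-relaxation for stability
print(f"converged in {it + 1} iterations, T range: {T.min():.1f}-{T.max():.1f} K")
```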

  2. Fuel Burn Estimation Using Real Track Data

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano B.

    2011-01-01

    A procedure for estimating fuel burned based on actual flight track data, and drag and fuel-flow models is described. The procedure consists of estimating aircraft and wind states, lift, drag and thrust. Fuel-flow for jet aircraft is determined in terms of thrust, true airspeed and altitude as prescribed by the Base of Aircraft Data fuel-flow model. This paper provides a theoretical foundation for computing fuel-flow with most of the information derived from actual flight data. The procedure does not require an explicit model of thrust and calibrated airspeed/Mach profile which are typically needed for trajectory synthesis. To validate the fuel computation method, flight test data provided by the Federal Aviation Administration were processed. Results from this method show that fuel consumed can be estimated within 1% of the actual fuel consumed in the flight test. Next, fuel consumption was estimated with simplified lift and thrust models. Results show negligible difference with respect to the full model without simplifications. An iterative takeoff weight estimation procedure is described for estimating fuel consumption, when takeoff weight is unavailable, and for establishing fuel consumption uncertainty bounds. Finally, the suitability of using radar-based position information for fuel estimation is examined. It is shown that fuel usage could be estimated within 5.4% of the actual value using positions reported in the Airline Situation Display to Industry data with simplified models and iterative takeoff weight computation.
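
    A hedged sketch of the iterative takeoff-weight idea mentioned above: when the takeoff weight is unavailable, assume a value, integrate fuel burn along the flight, and update the takeoff weight from an assumed landing weight plus the computed fuel burn until the value stops changing. The fuel-flow law, segment data and weights below are hypothetical placeholders, not the Base of Aircraft Data model used in the paper.

```python
def fuel_burn(takeoff_weight, segments):
    """Integrate fuel burned over flight segments; the fuel-flow law here is a
    hypothetical placeholder, not the BADA fuel-flow model used in the paper."""
    w = takeoff_weight
    burned = 0.0
    for duration_s, relative_thrust in segments:
        flow = 0.3 * relative_thrust * (w / 60000.0)   # kg/s, toy law scaled by weight
        df = flow * duration_s
        burned += df
        w -= df
    return burned

def estimate_takeoff_weight(landing_weight, segments, tol=1.0, max_iter=50):
    """Fixed-point iteration: TOW = landing weight + fuel burned(TOW)."""
    tow = landing_weight
    for _ in range(max_iter):
        new_tow = landing_weight + fuel_burn(tow, segments)
        if abs(new_tow - tow) < tol:
            return new_tow
        tow = new_tow
    return tow

# toy flight: climb, cruise, descent (duration in seconds, relative thrust)
segments = [(900, 1.0), (5400, 0.6), (1200, 0.2)]
print(estimate_takeoff_weight(landing_weight=55000.0, segments=segments))
```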

  3. Erosion and deposition in the JET divertor during the second ITER-like wall campaign

    NASA Astrophysics Data System (ADS)

    Mayer, M.; Krat, S.; Baron-Wiechec, A.; Gasparyan, Yu; Heinola, K.; Koivuranta, S.; Likonen, J.; Ruset, C.; de Saint-Aubin, G.; Widdowson, A.; Contributors, JET

    2017-12-01

    Erosion of plasma-facing materials and successive transport and redeposition of eroded material are crucial processes determining the lifetime of plasma-facing components and the trapped tritium inventory in redeposited material layers. Erosion and deposition in the JET divertor were studied during the second JET ITER-like wall campaign ILW-2 in 2013-2014 by using a poloidal row of specially prepared divertor marker tiles including the tungsten bulk tile 5. The marker tiles were analyzed using elastic backscattering with 3-4.5 MeV incident protons and nuclear reaction analysis using 0.8-4.5 MeV 3He ions before and after the campaign. The erosion/deposition pattern observed during ILW-2 is qualitatively comparable to the first campaign ILW-1 in 2011-2012: deposits consist mainly of beryllium with 5-20 at.% of carbon and oxygen and small amounts of Ni and W. The highest deposition with deposited layer thicknesses up to 30 μm per campaign is still observed on the upper and horizontal parts of the inner divertor. Outer divertor tiles 5, 6, 7 and 8 are net W erosion areas. The observed D inventory is roughly comparable to the inventory observed during ILW-1. The results obtained during ILW-2 therefore confirm the positive results observed in ILW-1 with respect to reduced material deposition and hydrogen isotopes retention in the divertor.

  4. Model reduction in integrated controls-structures design

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.

    1993-01-01

    It is the objective of this paper to present a model reduction technique developed for the integrated controls-structures design of flexible structures. Integrated controls-structures design problems are typically posed as nonlinear mathematical programming problems, where the design variables consist of both structural and control parameters. In the solution process, both structural and control design variables are constantly changing; therefore, the dynamic characteristics of the structure are also changing. This presents a problem in obtaining a reduced-order model for active control design and analysis which will be valid for all design points within the design space. In other words, the frequency and number of the significant modes of the structure (modes that should be included) may vary considerably throughout the design process. This is also true as the locations and/or masses of the sensors and actuators change. Moreover, since the number of design evaluations in the integrated design process could easily run into thousands, any feasible order-reduction method should not require model reduction analysis at every design iteration. In this paper a novel and efficient technique for model reduction in the integrated controls-structures design process, which addresses these issues, is presented.

  5. 'The biggest thing is trying to live for two people': Spousal experiences of supporting decision-making participation for partners with TBI.

    PubMed

    Knox, Lucy; Douglas, Jacinta M; Bigby, Christine

    2015-01-01

    To understand how the spouses of individuals with severe TBI experience the process of supporting their partners with decision-making. This study adopted a constructivist grounded theory approach, with data consisting of in-depth interviews conducted with spouses over a 12-month period. Data were analysed through an iterative process of open and focused coding, identification of emergent categories and exploration of relationships between categories. Participants were four spouses of individuals with severe TBI (with moderate-severe disability). Spouses had shared committed relationships (marriage or domestic partnerships) for at least 4 years at initial interview. Three spouses were in relationships that had commenced following injury. Two main themes emerged from the data. The first identified the saliency of the relational space in which decision-making took place. The second revealed the complex nature of decision-making within the spousal relationship. Spouses experience decision-making as a complex multi-stage process underpinned by a number of relational factors. Increased understanding of this process can guide health professionals in their provision of support for couples in exploring decision-making participation after injury.

  6. A Block Preconditioned Conjugate Gradient-type Iterative Solver for Linear Systems in Thermal Reservoir Simulation

    NASA Astrophysics Data System (ADS)

    Betté, Srinivas; Diaz, Julio C.; Jines, William R.; Steihaug, Trond

    1986-11-01

    A preconditioned residual-norm-reducing iterative solver is described. Based on a truncated form of the generalized-conjugate-gradient method for nonsymmetric systems of linear equations, the iterative scheme is very effective for linear systems generated in reservoir simulation of thermal oil recovery processes. As a consequence of employing an adaptive implicit finite-difference scheme to solve the model equations, the number of variables per cell-block varies dynamically over the grid. The data structure allows for 5- and 9-point operators in the areal model, 5-point in the cross-sectional model, and 7- and 11-point operators in the three-dimensional model. Block-diagonal-scaling of the linear system, done prior to iteration, is found to have a significant effect on the rate of convergence. Block-incomplete-LU-decomposition (BILU) and block-symmetric-Gauss-Seidel (BSGS) methods, which result in no fill-in, are used as preconditioning procedures. A full factorization is done on the well terms, and the cells are ordered in a manner which minimizes the fill-in in the well-column due to this factorization. The convergence criterion for the linear (inner) iteration is linked to that of the nonlinear (Newton) iteration, thereby enhancing the efficiency of the computation. The algorithm, with both BILU and BSGS preconditioners, is evaluated in the context of a variety of thermal simulation problems. The solver is robust and can be used with little or no user intervention.
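
    The solver described above is a truncated generalized-conjugate-gradient method for nonsymmetric systems with BILU/BSGS preconditioning; the sketch below illustrates only the general idea on a small symmetric positive-definite system, using standard preconditioned conjugate gradients with a block-Jacobi (block-diagonal) preconditioner to show the role of block-diagonal scaling. It is not the reservoir-simulation solver itself.

```python
import numpy as np

def block_jacobi_preconditioner(A, block_size):
    """Invert the diagonal blocks of A; applying M^{-1} is block-diagonal scaling."""
    n = A.shape[0]
    inv_blocks = []
    for i in range(0, n, block_size):
        j = min(i + block_size, n)
        inv_blocks.append(np.linalg.inv(A[i:j, i:j]))
    def apply(r):
        z = np.empty_like(r)
        for k, i in enumerate(range(0, n, block_size)):
            j = min(i + block_size, n)
            z[i:j] = inv_blocks[k] @ r[i:j]
        return z
    return apply

def pcg(A, b, precond, tol=1e-10, max_iter=200):
    """Standard preconditioned conjugate gradients (SPD case, for illustration only)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# small SPD test system
rng = np.random.default_rng(1)
B = rng.standard_normal((12, 12))
A = B @ B.T + 12 * np.eye(12)
b = rng.standard_normal(12)
x = pcg(A, b, block_jacobi_preconditioner(A, block_size=3))
print("residual:", np.linalg.norm(A @ x - b))
```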

  7. Numerical simulation and comparison of nonlinear self-focusing based on iteration and ray tracing

    NASA Astrophysics Data System (ADS)

    Li, Xiaotong; Chen, Hao; Wang, Weiwei; Ruan, Wangchao; Zhang, Luwei; Cen, Zhaofeng

    2017-05-01

    Self-focusing is observed in nonlinear materials owing to the interaction between laser and matter as the laser beam propagates. Numerical simulation strategies such as the beam propagation method (BPM), based on the nonlinear Schrödinger equation, and ray tracing based on Fermat's principle have been applied to simulate the self-focusing process. In this paper we present an iterative nonlinear ray tracing method in which the nonlinear material is cut into many slices, just as in the existing approaches, but instead of the paraxial approximation and split-step Fourier transform, a large number of sampled real rays are traced step by step through the system, with the refractive index and laser intensity updated by iteration. In this process a smoothing treatment is employed to generate a laser density distribution at each slice to decrease the error caused by under-sampling. The characteristic of this method is that the nonlinear refractive indices of the points on the current slice are calculated by iteration, so as to solve the problem of unknown parameters in the material caused by the causal relationship between laser intensity and nonlinear refractive index. Compared with the beam propagation method, this algorithm is more suitable for engineering applications, has lower time complexity, and can numerically simulate the self-focusing process in systems that include both linear and nonlinear optical media. If the sampled rays are traced with their complex amplitudes and light paths or phases, it will be possible to simulate the superposition effects of different beams. At the end of the paper, the advantages and disadvantages of this algorithm are discussed.

  8. Key achievements in elementary R&D on water-cooled solid breeder blanket for ITER test blanket module in JAERI

    NASA Astrophysics Data System (ADS)

    Suzuki, S.; Enoeda, M.; Hatano, T.; Hirose, T.; Hayashi, K.; Tanigawa, H.; Ochiai, K.; Nishitani, T.; Tobita, K.; Akiba, M.

    2006-02-01

    This paper presents the significant progress made in the research and development (R&D) of key technologies on the water-cooled solid breeder blanket for the ITER test blanket modules in JAERI. Development of module fabrication technology, bonding technology of armours, measurement of thermo-mechanical properties of pebble beds, neutronics studies on a blanket module mockup and tritium release behaviour from a Li2TiO3 pebble bed under neutron-pulsed operation conditions are summarized. With the improvement of the heat treatment process for blanket module fabrication, a fine-grained microstructure of F82H can be obtained by homogenizing it at 1150 °C followed by normalizing it at 930 °C after the hot isostatic pressing process. Moreover, a promising bonding process for a tungsten armour and an F82H structural material was developed using a solid-state bonding method based on uniaxial hot compression without any artificial compliant layer. As a result of high heat flux tests of F82H first wall mockups, it has been confirmed that a fatigue lifetime correlation, which was developed for the ITER divertor, can be made applicable for the F82H first wall mockup. As for R&D on the breeder material, Li2TiO3, the effect of compression loads on effective thermal conductivity of pebble beds has been clarified for the Li2TiO3 pebble bed. The tritium breeding ratio of a simulated multi-layer blanket structure has successfully been measured using 14 MeV neutrons with an accuracy of 10%. The tritium release rate from the Li2TiO3 pebble has also been successfully measured with pulsed neutron irradiation, which simulates ITER operation.

  9. Propagation-based x-ray phase contrast imaging using an iterative phase diversity technique

    NASA Astrophysics Data System (ADS)

    Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.

    2018-03-01

    Through the use of a phase diversity technique, we demonstrate a near-field in-line x-ray phase contrast algorithm that provides improved object reconstruction when compared to our previous iterative methods for a homogeneous sample. Like our previous methods, the new technique uses the sample refractive index distribution during the reconstruction process. The technique complements existing monochromatic and polychromatic methods and is useful in situations where experimental phase contrast data is affected by noise.

  10. Micromagnetic Simulation of Thermal Effects in Magnetic Nanostructures

    DTIC Science & Technology

    2003-01-01

    Thermal effects in NiFe magnetic nano-elements are calculated. With decreasing size of magnetic nanostructures, thermal effects become increasingly important. The thermal field is assumed to be a Gaussian random process with zero mean, $\langle H_{th}(t)\rangle = 0$, and correlations $\langle H_{th,i}(t)\,H_{th,j}(t')\rangle$ of white-noise form. The optimal path is characterized by the vanishing of the component of the energy gradient perpendicular to the path, $\nabla E(M^{(k)}) - [\nabla E(M^{(k)})\cdot t]\,t = 0$ for $k = 1,\dots,m$, and can be found using an iterative scheme in which each iteration step updates the path.

  11. Convergence of Proximal Iteratively Reweighted Nuclear Norm Algorithm for Image Processing.

    PubMed

    Sun, Tao; Jiang, Hao; Cheng, Lizhi

    2017-08-25

    The nonsmooth and nonconvex regularization has many applications in imaging science and machine learning research due to its excellent recovery performance. A proximal iteratively reweighted nuclear norm algorithm has been proposed for the nonsmooth and nonconvex matrix minimizations. In this paper, we aim to investigate the convergence of the algorithm. With the Kurdyka-Łojasiewicz property, we prove the algorithm globally converges to a critical point of the objective function. The numerical results presented in this paper coincide with our theoretical findings.
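
    A minimal sketch of one plausible proximal iteratively reweighted nuclear norm scheme, here applied to low-rank matrix denoising: weights are recomputed from the previous iterate's singular values and a weighted singular-value-thresholding (proximal) step is taken. The specific objective, weight rule and step sizes of the paper may differ; this is an illustrative assumption.

```python
import numpy as np

def weighted_svt(X, weights, tau):
    """Weighted singular value thresholding: prox of the weighted nuclear norm
    (valid when the weights are non-decreasing in the singular-value index)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau * weights, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

def irnn_denoise(Y, lam=1.0, eps=1e-2, n_iter=30):
    """Iteratively reweighted nuclear norm denoising of a matrix Y:
    minimize (1/2)||X - Y||_F^2 + lam * sum_i w_i * sigma_i(X),
    with weights w_i = 1 / (sigma_i(X_prev) + eps) recomputed each iteration."""
    X = Y.copy()
    for _ in range(n_iter):
        s_prev = np.linalg.svd(X, compute_uv=False)
        weights = 1.0 / (s_prev + eps)          # small singular values are penalized more
        X = weighted_svt(Y, weights, lam)       # proximal (weighted SVT) step
    return X

# toy example: noisy low-rank matrix
rng = np.random.default_rng(0)
L = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 40))
Y = L + 0.1 * rng.standard_normal((40, 40))
X = irnn_denoise(Y, lam=0.5)
print("rank estimate:", np.sum(np.linalg.svd(X, compute_uv=False) > 1e-6))
```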

  12. Iterative repair for scheduling and rescheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Deale, Michael

    1991-01-01

    An iterative repair search method is described called constraint-based simulated annealing. Simulated annealing is a hill climbing search technique capable of escaping local minima. The utility of the constraint-based framework is shown by comparing search performance with and without the constraint framework on a suite of randomly generated problems. Results are also shown of applying the technique to the NASA Space Shuttle ground processing problem. These experiments show that the search method scales to complex, real world problems and exhibits interesting anytime behavior.
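
    A toy sketch of iterative repair with simulated annealing: a schedule is repeatedly perturbed by moving one task, and a move is accepted either when it reduces the number of constraint violations or, otherwise, with Boltzmann probability. The conflict measure and task data are illustrative and do not reproduce the constraint framework of the paper.

```python
import math, random

def conflicts(schedule, durations):
    """Count overlaps between consecutive tasks (in start-time order) on one shared resource."""
    c = 0
    items = sorted(schedule.items(), key=lambda kv: kv[1])
    for (t1, s1), (t2, s2) in zip(items, items[1:]):
        if s1 + durations[t1] > s2:          # task t1 still running when t2 starts
            c += 1
    return c

def anneal_repair(durations, horizon=100, T0=5.0, cooling=0.995, steps=5000, seed=0):
    random.seed(seed)
    schedule = {t: random.randint(0, horizon) for t in durations}
    cost = conflicts(schedule, durations)
    T = T0
    for _ in range(steps):
        t = random.choice(list(durations))                 # pick a task to repair
        old = schedule[t]
        schedule[t] = random.randint(0, horizon)           # propose a new start time
        new_cost = conflicts(schedule, durations)
        # accept improvements always, worsenings with Boltzmann probability
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / T):
            cost = new_cost
        else:
            schedule[t] = old
        T *= cooling
        if cost == 0:
            break
    return schedule, cost

durations = {f"task{i}": 8 for i in range(8)}
schedule, cost = anneal_repair(durations)
print("remaining conflicts:", cost)
```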

  13. Multigrid-based reconstruction algorithm for quantitative photoacoustic tomography

    PubMed Central

    Li, Shengfu; Montcel, Bruno; Yuan, Zhen; Liu, Wanyu; Vray, Didier

    2015-01-01

    This paper proposes a multigrid inversion framework for quantitative photoacoustic tomography reconstruction. The forward model of optical fluence distribution and the inverse problem are solved at multiple resolutions. A fixed-point iteration scheme is formulated for each resolution and used as a cost function. The simulated and experimental results for quantitative photoacoustic tomography reconstruction show that the proposed multigrid inversion can dramatically reduce the required number of iterations for the optimization process without loss of reliability in the results. PMID:26203371

  14. Inferring the demographic history from DNA sequences: An importance sampling approach based on non-homogeneous processes.

    PubMed

    Ait Kaci Azzou, S; Larribe, F; Froda, S

    2016-10-01

    In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.
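
    The sketch below shows one standard way (Lewis-Shedler thinning) to generate event and waiting times of a non-homogeneous Poisson process with a piecewise-constant rate, which is the kind of process mentioned above; how the rates are tied to the coalescent and population-size model of the paper is not reproduced.

```python
import numpy as np

def sample_nhpp_piecewise(breakpoints, rates, t_max, rng=None):
    """Event times of a non-homogeneous Poisson process whose rate is piecewise
    constant: rate = rates[i] on [breakpoints[i], breakpoints[i+1]).
    Uses Lewis-Shedler thinning against the maximum rate."""
    rng = np.random.default_rng(rng)
    lam_max = max(rates)
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)      # candidate from a homogeneous process
        if t >= t_max:
            break
        i = np.searchsorted(breakpoints, t, side="right") - 1
        if rng.random() < rates[i] / lam_max:    # thinning: accept with prob lambda(t)/lambda_max
            events.append(t)
    return np.array(events)

# piecewise-constant rate: 0.5 on [0,10), 3.0 on [10,20), 1.0 on [20,30)
breakpoints = np.array([0.0, 10.0, 20.0])
rates = [0.5, 3.0, 1.0]
times = sample_nhpp_piecewise(breakpoints, rates, t_max=30.0, rng=42)
print(len(times), "events; first waiting times:", np.diff(times)[:5])
```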

  15. The development and preliminary testing of a multimedia patient-provider survivorship communication module for breast cancer survivors.

    PubMed

    Wen, Kuang-Yi; Miller, Suzanne M; Stanton, Annette L; Fleisher, Linda; Morra, Marion E; Jorge, Alexandra; Diefenbach, Michael A; Ropka, Mary E; Marcus, Alfred C

    2012-08-01

    This paper describes the development of a theory-guided and evidence-based multimedia training module to facilitate breast cancer survivors' preparedness for effective communication with their health care providers after active treatment. The iterative developmental process used included: (1) theory and evidence-based content development and vetting; (2) user testing; (3) usability testing; and (4) participant module utilization. Formative evaluation of the training module prototype occurred through user testing (n = 12), resulting in modification of the content and layout. Usability testing (n = 10) was employed to improve module functionality. Preliminary web usage data (n = 256, mean age = 53, 94.5% White, 75% college graduate and above) showed that 59% of the participants accessed the communication module, for an average of 7 min per login. The iterative developmental process was informative in enhancing the relevance of the communication module. Preliminary web usage results demonstrate the potential feasibility of such a program. Our study demonstrates survivors' openness to the use of a web-based communication skills training module and outlines a systematic iterative user and interface program development and testing process, which can serve as a prototype for others considering such an approach. Copyright © 2012. Published by Elsevier Ireland Ltd.

  16. The development and preliminary testing of a multimedia patient–provider survivorship communication module for breast cancer survivors

    PubMed Central

    Wen, Kuang-Yi; Miller, Suzanne M.; Stanton, Annette L.; Fleisher, Linda; Morra, Marion E.; Jorge, Alexandra; Diefenbach, Michael A.; Ropka, Mary E.; Marcus, Alfred C.

    2012-01-01

    Objective This paper describes the development of a theory-guided and evidence-based multimedia training module to facilitate breast cancer survivors’ preparedness for effective communication with their health care providers after active treatment. Methods The iterative developmental process used included: (1) theory and evidence-based content development and vetting; (2) user testing; (3) usability testing; and (4) participant module utilization. Results Formative evaluation of the training module prototype occurred through user testing (n = 12), resulting in modification of the content and layout. Usability testing (n = 10) was employed to improve module functionality. Preliminary web usage data (n = 256, mean age = 53, 94.5% White, 75% college graduate and above) showed that 59% of the participants accessed the communication module, for an average of 7 min per login. Conclusion The iterative developmental process was informative in enhancing the relevance of the communication module. Preliminary web usage results demonstrate the potential feasibility of such a program. Practice implications Our study demonstrates survivors’ openness to the use of a web-based communication skills training module and outlines a systematic iterative user and interface program development and testing process, which can serve as a prototype for others considering such an approach. PMID:22770812

  17. Learning to Teach Elementary Science Through Iterative Cycles of Enactment in Culturally and Linguistically Diverse Contexts

    NASA Astrophysics Data System (ADS)

    Bottoms, SueAnn I.; Ciechanowski, Kathryn M.; Hartman, Brian

    2015-12-01

    Iterative cycles of enactment embedded in culturally and linguistically diverse contexts provide rich opportunities for preservice teachers (PSTs) to enact core practices of science. This study is situated in the larger Families Involved in Sociocultural Teaching and Science, Technology, Engineering and Mathematics (FIESTAS) project, which weaves together cycles of enactment, core practices in science education and culturally relevant pedagogies. The theoretical foundation draws upon situated learning theory and communities of practice. Using video analysis by PSTs and course artifacts, the authors studied how the iterative process of these cycles guided PSTs development as teachers of elementary science. Findings demonstrate how PSTs were drawing on resources to inform practice, purposefully noticing their practice, renegotiating their roles in teaching, and reconsidering "professional blindness" through cultural practice.

  18. A law of the iterated logarithm for Grenander’s estimator

    PubMed Central

    Dümbgen, Lutz; Wellner, Jon A.; Wolff, Malcolm

    2016-01-01

    In this note we prove the following law of the iterated logarithm for the Grenander estimator of a monotone decreasing density: If f(t0) > 0, f′(t0) < 0, and f′ is continuous in a neighborhood of t0, then $\limsup_{n\to\infty}\left(\tfrac{n}{2\log\log n}\right)^{1/3}\big(\hat f_n(t_0)-f(t_0)\big)=\left|f(t_0)f'(t_0)/2\right|^{1/3}\,2M$ almost surely, where $M\equiv\sup_{g\in\mathcal G}T_g=(3/4)^{1/3}$ and $T_g\equiv\operatorname{arg\,max}_u\{g(u)-u^2\}$; here $\mathcal G$ is the two-sided Strassen limit set on $\mathbb R$. The proof relies on laws of the iterated logarithm for local empirical processes, Groeneboom's switching relation, and properties of Strassen's limit set analogous to distributional properties of Brownian motion. PMID:28042197

  19. An iterative approach to region growing using associative memories

    NASA Technical Reports Server (NTRS)

    Snyder, W. E.; Cowart, A.

    1983-01-01

    Region growing is often given as a classical example of the recursive control structures used in image processing, which are awkward to implement in hardware when the intent is segmentation of an image at raster-scan rates. It is addressed here in light of the postulate that any computation which can be performed recursively can be performed easily and efficiently by iteration coupled with association. Attention is given to an algorithm and hardware structure able to perform region labeling iteratively at scan rates. Every pixel is individually labeled with an identifier which signifies the region to which it belongs. Difficulties otherwise requiring recursion are handled by maintaining an equivalence table in hardware, transparent to the computer which reads the labeled pixels. A simulation of the associative memory has demonstrated its effectiveness.
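
    A software sketch of the same idea: a single raster scan assigns provisional region labels, and an equivalence (union-find) table, which the paper maintains in hardware, merges labels that turn out to belong to one region; a second pass resolves the equivalences.

```python
import numpy as np

def label_regions(binary):
    """Raster-scan connected-component labeling (4-connectivity) with an
    equivalence table implemented as union-find."""
    labels = np.zeros(binary.shape, dtype=int)
    parent = [0]                                   # index 0 is the background label

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]          # path compression
            a = parent[a]
        return a

    next_label = 1
    rows, cols = binary.shape
    for r in range(rows):
        for c in range(cols):
            if not binary[r, c]:
                continue
            up = labels[r - 1, c] if r > 0 else 0
            left = labels[r, c - 1] if c > 0 else 0
            if up == 0 and left == 0:              # start a new region
                parent.append(next_label)
                labels[r, c] = next_label
                next_label += 1
            elif up and left:                      # both neighbours labeled: record equivalence
                labels[r, c] = min(up, left)
                parent[find(max(up, left))] = find(min(up, left))
            else:
                labels[r, c] = up or left
    for r in range(rows):                          # second pass: resolve equivalences
        for c in range(cols):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels

img = np.array([[1, 1, 0, 0, 1],
                [0, 1, 0, 1, 1],
                [0, 0, 0, 0, 0],
                [1, 0, 1, 1, 0]])
print(label_regions(img))
```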

  20. Nonlinear BCJR equalizer for suppression of intrachannel nonlinearities in 40 Gb/s optical communications systems.

    PubMed

    Djordjevic, Ivan B; Vasic, Bane

    2006-05-29

    A maximum a posteriori probability (MAP) symbol decoding supplemented with iterative decoding is proposed as an effective means for suppression of intrachannel nonlinearities. The MAP detector, based on the Bahl-Cocke-Jelinek-Raviv algorithm, operates on the channel trellis, a dynamical model of intersymbol interference, and provides soft-decision outputs processed further in an iterative decoder. A dramatic performance improvement is demonstrated. The main reason is that the conventional maximum-likelihood sequence detector based on the Viterbi algorithm provides hard-decision outputs only, hence preventing soft iterative decoding. The proposed scheme operates very well in the presence of strong intrachannel intersymbol interference, when other advanced forward error correction schemes fail, and it is also suitable for 40 Gb/s upgrades over existing 10 Gb/s infrastructure.

  1. Noniterative Multireference Coupled Cluster Methods on Heterogeneous CPU-GPU Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhaskaran-Nair, Kiran; Ma, Wenjing; Krishnamoorthy, Sriram

    2013-04-09

    A novel parallel algorithm for non-iterative multireference coupled cluster (MRCC) theories, which merges the recently introduced reference-level parallelism (RLP) [K. Bhaskaran-Nair, J. Brabec, E. Aprà, H.J.J. van Dam, J. Pittner, K. Kowalski, J. Chem. Phys. 137, 094112 (2012)] with the possibility of accelerating numerical calculations using graphics processing units (GPUs), is presented. We discuss the performance of this algorithm on the example of the MRCCSD(T) method (iterative singles and doubles and perturbative triples), where the corrections due to triples are added to the diagonal elements of the MRCCSD (iterative singles and doubles) effective Hamiltonian matrix. The performance of the combined RLP/GPU algorithm is illustrated on the example of the Brillouin-Wigner (BW) and Mukherjee (Mk) state-specific MRCCSD(T) formulations.

  2. SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, P; Mao, T; Gong, S

    2016-06-15

    Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, due to the sparsifiable feature of most CT images under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) using image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in the CT image contains a relatively uniform distribution of CT number. This general knowledge is incorporated into the proposed reconstruction using an image segmentation technique to generate a piecewise constant template from the first-pass low-quality CT image reconstructed with an analytical algorithm. The template image is applied as an initial value in the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by overall 40%. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and a faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).

  3. Universal Design and Multimethod Approaches to Item Review

    ERIC Educational Resources Information Center

    Johnstone, Christopher J.; Thompson, Sandra J.; Bottsford-Miller, Nicole A.; Thurlow, Martha L.

    2008-01-01

    Test items undergo multiple iterations of review before states and vendors deem them acceptable to be placed in a live statewide assessment. This article reviews three approaches that can add validity evidence to states' item review processes. The first process is a structured sensitivity review process that focuses on universal design…

  4. Twin-Screw Extruder Development for the ITER Pellet Injection System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meitner, Steven J; Baylor, Larry R; Combs, Stephen Kirk

    The ITER pellet injection system is comprised of devices to form and accelerate pellets, and will be connected to inner wall guide tubes for fueling and outer wall guide tubes for ELM pacing. An extruder will provide a stream of solid hydrogen isotopes to a secondary section, where pellets are cut and accelerated with a gas gun into the plasma. The ITER pellet injection system is required to provide a plasma fueling rate of 120 Pa-m3/s (900 mbar-L/s) and durations of up to 3000 s. The fueling pellets will be injected at a rate up to 10 Hz and pellets used to trigger ELMs will be injected at higher rates up to 20 Hz. A twin-screw extruder for the ITER pellet injection system is under development at the Oak Ridge National Laboratory. A one-fifth ITER scale prototype has been built and has demonstrated the production of a continuous solid deuterium extrusion. The 27 mm diameter, intermeshed, counter-rotating extruder screws are rotated at a rate up to ≈5 rpm. Deuterium gas is pre-cooled, then liquefied and solidified in separate extruder barrels. The precooler consists of a deuterium gas filled copper coil suspended in a separate stainless steel vessel containing liquid nitrogen. The liquefier is comprised of a copper barrel connected to a Cryomech AL330 cryocooler, which has a machined helical groove surrounded by a copper jacket, through which the pre-cooled deuterium condenses. The lower extruder barrel is connected to a Cryomech GB-37 cryocooler to solidify the deuterium (at ≈15 K) before it is forced through the extruder die. The die forms the extrusion to a 3 mm x 4 mm rectangular cross section. Design improvements have been made to the pre-cooler and liquefier heat exchangers and to limit the loss of extrusion through gaps in the screws. This paper will describe the design improvements for the next iteration of the extruder prototype.

  5. Comparison of different filter methods for data assimilation in the unsaturated zone

    NASA Astrophysics Data System (ADS)

    Lange, Natascha; Berkhahn, Simon; Erdal, Daniel; Neuweiler, Insa

    2016-04-01

    The unsaturated zone is an important compartment, which plays a role for the division of terrestrial water fluxes into surface runoff, groundwater recharge and evapotranspiration. For data assimilation in coupled systems it is therefore important to have a good representation of the unsaturated zone in the model. Flow processes in the unsaturated zone have all the typical features of flow in porous media: Processes can have long memory and as observations are scarce, hydraulic model parameters cannot be determined easily. However, they are important for the quality of model predictions. On top of that, the established flow models are highly non-linear. For these reasons, the use of the popular Ensemble Kalman filter as a data assimilation method to estimate state and parameters in unsaturated zone models could be questioned. With respect to the long process memory in the subsurface, it has been suggested that iterative filters and smoothers may be more suitable for parameter estimation in unsaturated media. We test the performance of different iterative filters and smoothers for data assimilation with a focus on parameter updates in the unsaturated zone. In particular we compare the Iterative Ensemble Kalman Filter and Smoother as introduced by Bocquet and Sakov (2013) as well as the Confirming Ensemble Kalman Filter and the modified Restart Ensemble Kalman Filter proposed by Song et al. (2014) to the original Ensemble Kalman Filter (Evensen, 2009). This is done with simple test cases generated numerically. We consider also test examples with layering structure, as a layering structure is often found in natural soils. We assume that observations are water content, obtained from TDR probes or other observation methods sampling relatively small volumes. Particularly in larger data assimilation frameworks, a reasonable balance between computational effort and quality of results has to be found. Therefore, we compare computational costs of the different methods as well as the quality of open loop model predictions and the estimated parameters. Bocquet, M. and P. Sakov, 2013: Joint state and parameter estimation with an iterative ensemble Kalman smoother, Nonlinear Processes in Geophysics 20(5): 803-818. Evensen, G., 2009: Data assimilation: The ensemble Kalman filter. Springer Science & Business Media. Song, X.H., L.S. Shi, M. Ye, J.Z. Yang and I.M. Navon, 2014: Numerical comparison of iterative ensemble Kalman filters for unsaturated flow inverse modeling. Vadose Zone Journal 13(2), 10.2136/vzj2013.05.0083.
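
    For reference, the sketch below shows a single stochastic EnKF analysis step with a joint state-parameter (augmented-state) update, i.e. the baseline against which the iterative filters and smoothers above are compared; the toy observation operator and numbers are illustrative assumptions, not the cited implementations.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_err_std, rng=None):
    """One stochastic EnKF analysis step.
    ensemble: (n_ens, n_state) array of augmented state+parameter vectors."""
    rng = np.random.default_rng(rng)
    n_ens = ensemble.shape[0]
    predicted = np.array([obs_operator(m) for m in ensemble])      # (n_ens, n_obs)

    A = ensemble - ensemble.mean(axis=0)                           # state anomalies
    Y = predicted - predicted.mean(axis=0)                         # predicted-observation anomalies
    P_xy = A.T @ Y / (n_ens - 1)
    P_yy = Y.T @ Y / (n_ens - 1) + obs_err_std**2 * np.eye(Y.shape[1])
    K = P_xy @ np.linalg.inv(P_yy)                                 # Kalman gain

    # perturbed observations (stochastic EnKF)
    obs_pert = obs + obs_err_std * rng.standard_normal((n_ens, len(obs)))
    return ensemble + (obs_pert - predicted) @ K.T

# toy example: state = [water content, log hydraulic parameter], observe water content only
rng = np.random.default_rng(3)
ens = np.column_stack([0.30 + 0.05 * rng.standard_normal(50),
                       -2.0 + 0.5 * rng.standard_normal(50)])
updated = enkf_update(ens, obs=np.array([0.35]), obs_operator=lambda m: m[:1],
                      obs_err_std=0.01, rng=4)
print("posterior mean:", updated.mean(axis=0))
```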

  6. Self consistent MHD modeling of the solar wind from coronal holes with distinct geometries

    NASA Technical Reports Server (NTRS)

    Stewart, G. A.; Bravo, S.

    1995-01-01

    Utilizing an iterative scheme, a self-consistent axisymmetric MHD model for the solar wind has been developed. We use this model to evaluate the properties of the solar wind issuing from the open polar coronal hole regions of the Sun, during solar minimum. We explore the variation of solar wind parameters across the extent of the hole and we investigate how these variations are affected by the geometry of the hole and the strength of the field at the coronal base.

  7. ELM-induced transient tungsten melting in the JET divertor

    NASA Astrophysics Data System (ADS)

    Coenen, J. W.; Arnoux, G.; Bazylev, B.; Matthews, G. F.; Autricque, A.; Balboa, I.; Clever, M.; Dejarnac, R.; Coffey, I.; Corre, Y.; Devaux, S.; Frassinetti, L.; Gauthier, E.; Horacek, J.; Jachmich, S.; Komm, M.; Knaup, M.; Krieger, K.; Marsen, S.; Meigs, A.; Mertens, Ph.; Pitts, R. A.; Puetterich, T.; Rack, M.; Stamp, M.; Sergienko, G.; Tamain, P.; Thompson, V.; Contributors, JET-EFDA

    2015-02-01

    The original goals of the JET ITER-like wall included the study of the impact of an all W divertor on plasma operation (Coenen et al 2013 Nucl. Fusion 53 073043) and fuel retention (Brezinsek et al 2013 Nucl. Fusion 53 083023). ITER has recently decided to install a full-tungsten (W) divertor from the start of operations. One of the key inputs required in support of this decision was the study of the possibility of W melting and melt splashing during transients. Damage of this type can lead to modifications of surface topology which could lead to higher disruption frequency or compromise subsequent plasma operation. Although every effort will be made to avoid leading edges, ITER plasma stored energies are sufficient that transients can drive shallow melting on the top surfaces of components. JET is able to produce ELMs large enough to allow access to transient melting in a regime of relevance to ITER. Transient W melt experiments were performed in JET using a dedicated divertor module and a sequence of IP = 3.0 MA/BT = 2.9 T H-mode pulses with an input power of PIN = 23 MW, a stored energy of ˜6 MJ and regular type I ELMs at ΔWELM = 0.3 MJ and fELM ˜ 30 Hz. By moving the outer strike point onto a dedicated leading edge in the W divertor the base temperature was raised within ˜1 s to a level allowing transient, ELM-driven melting during the subsequent 0.5 s. Such ELMs (δW ˜ 300 kJ per ELM) are comparable to mitigated ELMs expected in ITER (Pitts et al 2011 J. Nucl. Mater. 415 (Suppl.) S957-64). Although significant material losses in terms of ejections into the plasma were not observed, there is indirect evidence that some small droplets (˜80 µm) were released. Almost 1 mm (˜6 mm3) of W was moved by ˜150 ELMs within 7 subsequent discharges. The impact on the main plasma parameters was minor and no disruptions occurred. The W-melt gradually moved along the leading edge towards the high-field side, driven by j × B forces. The evaporation rate determined from spectroscopy is 100 times less than expected from steady state melting and is thus consistent only with transient melting during the individual ELMs. Analysis of IR data and spectroscopy together with modelling using the MEMOS code Bazylev et al 2009 J. Nucl. Mater. 390-391 810-13 point to transient melting as the main process. 3D MEMOS simulations on the consequences of multiple ELMs on damage of tungsten castellated armour have been performed. These experiments provide the first experimental evidence for the absence of significant melt splashing at transient events resembling mitigated ELMs on ITER and establish a key experimental benchmark for the MEMOS code.

  8. An iterative sinogram gap-filling method with object- and scanner-dedicated discrete cosine transform (DCT)-domain filters for high resolution PET scanners.

    PubMed

    Kim, Kwangdon; Lee, Kisung; Lee, Hakjae; Joo, Sungkwan; Kang, Jungwon

    2018-01-01

    We aimed to develop a gap-filling algorithm, in particular the filter mask design method of the algorithm, which optimizes the filter for the imaging object through an adaptive and iterative process rather than by manual means. Two numerical phantoms (Shepp-Logan and Jaszczak) were used for sinogram generation. The algorithm works iteratively, not only in the gap-filling iteration but also in the mask generation, to identify the object-dedicated low-frequency area in the DCT domain that is to be preserved. We redefine the low-frequency-preserving region of the filter mask at every gap-filling iteration, and the region converges toward the characteristics of the original image in the DCT domain. The previous DCT2 mask for each phantom case had been manually well optimized, and the results show little difference from the reference image and sinogram. We observed little or no difference between the results of the manually optimized DCT2 algorithm and those of the proposed algorithm. The proposed algorithm works well for various types of scanning object and shows results comparable to those of the manually optimized DCT2 algorithm without perfect or full information about the imaging object.
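
    A minimal sketch of the generic DCT-domain gap-filling loop: transform the sinogram, keep only a low-frequency region defined by a mask, transform back, and re-impose the measured bins, repeating for a fixed number of iterations. The fixed low-frequency mask used here is a simplification; the paper's contribution is precisely to re-estimate an object-dedicated mask at every iteration, which is not reproduced.

```python
import numpy as np
from scipy.fft import dctn, idctn

def gap_fill(sinogram, gap_mask, keep_fraction=0.15, n_iter=100):
    """Iterative DCT-domain gap filling.
    gap_mask: True where detector bins are missing.
    keep_fraction: size of the preserved low-frequency square in the DCT domain."""
    rows, cols = sinogram.shape
    lowpass = np.zeros((rows, cols), dtype=bool)
    lowpass[:int(rows * keep_fraction), :int(cols * keep_fraction)] = True

    estimate = sinogram.copy()
    estimate[gap_mask] = 0.0
    for _ in range(n_iter):
        coeffs = dctn(estimate, norm="ortho")
        coeffs[~lowpass] = 0.0                      # keep only low-frequency content
        smooth = idctn(coeffs, norm="ortho")
        estimate[gap_mask] = smooth[gap_mask]       # fill gaps from the smooth estimate
        estimate[~gap_mask] = sinogram[~gap_mask]   # re-impose measured bins
    return estimate

# toy sinogram with a dead-detector gap
theta = np.linspace(0, np.pi, 180)[:, None]
s = np.linspace(-1, 1, 128)[None, :]
sino = np.maximum(0.0, 1 - (s - 0.2 * np.cos(theta))**2) * 10
gap = np.zeros_like(sino, dtype=bool)
gap[:, 60:64] = True                                # missing detector channels
filled = gap_fill(sino, gap)
print("gap RMSE:", np.sqrt(np.mean((filled[gap] - sino[gap])**2)))
```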

  9. Multigrid techniques for nonlinear eigenvalue problems: Solutions of a nonlinear Schroedinger eigenvalue problem in 2D and 3D

    NASA Technical Reports Server (NTRS)

    Costiner, Sorin; Taasan, Shlomo

    1994-01-01

    This paper presents multigrid (MG) techniques for nonlinear eigenvalue problems (EPs) and emphasizes an MG algorithm for a nonlinear Schrodinger EP. The algorithm overcomes the difficulties of such problems by combining the following techniques: an MG projection coupled with backrotations for separation of solutions and treatment of difficulties related to clusters of close and equal eigenvalues; MG subspace continuation techniques for treatment of the nonlinearity; and a simultaneous MG treatment of the eigenvectors together with the nonlinearity and the global constraints. The simultaneous MG techniques reduce the large number of self-consistent iterations to only a few, or one, MG simultaneous iteration, and keep the solutions in a neighborhood where the algorithm converges fast.

  10. Projection matrix acquisition for cone-beam computed tomography iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Shi, Wenlong; Zhang, Caixin; Gao, Zongzhao

    2017-02-01

    Computation of the projection matrix is an essential and time-consuming part of computed tomography (CT) iterative reconstruction. In this article a novel calculation algorithm for the three-dimensional (3D) projection matrix is proposed to quickly acquire the matrix for cone-beam CT (CBCT). The CT volume to be reconstructed is treated as consisting of three orthogonal sets of equally spaced, parallel planes rather than of individual voxels. After obtaining the intersections of the rays with the surfaces of the voxels, the intersection coordinates are compared with the voxel vertices to obtain the index values of the voxels that the ray traverses. Without considering the slope of the ray with respect to each voxel, only the positions of two points need to be compared. Finally, computer simulation is used to verify the effectiveness of the algorithm.
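
    A generic Siddon-style sketch of the plane-intersection idea: the ray is intersected with the three orthogonal sets of equally spaced planes, the parametric intersection points are merged and sorted, and the midpoint of each resulting segment identifies the traversed voxel. This illustrates the geometry only and is not the specific comparison-based algorithm of the paper.

```python
import numpy as np

def ray_voxel_intersections(p0, p1, n_vox, vox_size):
    """Intersect the ray p0 -> p1 with the three orthogonal sets of equally spaced
    planes bounding an n_vox^3 grid of spacing vox_size (grid corner at the origin).
    Returns (voxel index, intersection length) pairs."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    alphas = [0.0, 1.0]
    for axis in range(3):
        if abs(d[axis]) > 1e-12:
            planes = np.arange(n_vox + 1) * vox_size          # plane positions on this axis
            a = (planes - p0[axis]) / d[axis]                  # parametric intersections
            alphas.extend(a[(a > 0.0) & (a < 1.0)])
    alphas = np.unique(np.clip(alphas, 0.0, 1.0))
    segments = []
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d                         # midpoint identifies the voxel
        idx = np.floor(mid / vox_size).astype(int)
        if np.all(idx >= 0) and np.all(idx < n_vox):
            length = (a1 - a0) * np.linalg.norm(d)
            segments.append((tuple(idx), length))
    return segments

# ray through a 4x4x4 grid of 1 mm voxels
for voxel, length in ray_voxel_intersections([-1, 0.5, 0.5], [5, 3.5, 0.5], 4, 1.0):
    print(voxel, round(length, 3))
```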

  11. Static shape of an acoustically levitated drop with wave-drop interaction

    NASA Astrophysics Data System (ADS)

    Lee, C. P.; Anilkumar, A. V.; Wang, T. G.

    1994-11-01

    The static shape of a drop levitated and flattened by an acoustic standing wave field in air is calculated, requiring self-consistency between the drop shape and the wave. The wave is calculated for a given shape using the boundary integral method. From the resulting radiation stress on the drop surface, the shape is determined by solving the Young-Laplace equation, completing an iteration cycle. The iteration is continued until both the shape and the wave converge. Of particular interest are the shapes of large drops that sustain equilibrium, beyond a certain degree of flattening, by becoming more flattened at a decreasing sound pressure level. The predictions for flattening versus acoustic radiation stress, for drops of different sizes, compare favorably with experimental data.

  12. Intelligent model-based OPC

    NASA Astrophysics Data System (ADS)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.

    2006-03-01

    Optical proximity correction is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. For model-based optical proximity correction, a lithographic model to predict the edge position (contour) of patterns on the wafer after lithographic processing is needed. Generally, segmentation of edges is performed prior to the correction. Pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initial drawn position, assisted by the lithographic model, to finally settle on the proper positions. When the correction converges, the intensity predicted by the model at every target point hits the model-specific threshold value. Several iterations are required to achieve convergence, and the computation time increases with the number of required iterations. An artificial neural network is an information-processing paradigm inspired by biological nervous systems, such as how the brain processes information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool that is able to capture and represent complex input/output relationships. The network can accurately predict the behavior of a system via the learning procedure. A radial basis function network, a variant of the artificial neural network, is an efficient function approximator. In this paper, a radial basis function network was used to build a mapping from the segment characteristics to the edge shift from the drawn position. This network can provide a good initial guess for each segment on which OPC is carried out. The good initial guess reduces the number of required iterations. Consequently, cycle time can be shortened effectively. The optimization of the radial basis function network for this system was performed with a genetic algorithm, which is an artificially intelligent optimization method with a high probability of finding the global optimum. From preliminary results, the required iterations were reduced from 5 to 2 for a simple dumbbell-shape layout.
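
    A minimal sketch of a Gaussian radial basis function network fitted by linear least squares to map segment features to a predicted edge shift. The feature names and training data are hypothetical, and the genetic-algorithm optimization of the network described in the paper is omitted.

```python
import numpy as np

class RBFNetwork:
    """Gaussian RBF network: y(x) = sum_j w_j * exp(-||x - c_j||^2 / (2 s^2))."""
    def __init__(self, centers, width):
        self.centers = np.asarray(centers, float)
        self.width = width

    def _design(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, y):
        Phi = self._design(np.asarray(X, float))
        self.weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear least squares
        return self

    def predict(self, X):
        return self._design(np.asarray(X, float)) @ self.weights

# hypothetical training data: segment features (local line width, spacing to the
# nearest neighbour) -> OPC edge shift obtained from previous corrections
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 0.3 * np.sin(3 * X[:, 0]) - 0.2 * X[:, 1] + 0.01 * rng.standard_normal(200)

centers = X[rng.choice(len(X), size=20, replace=False)]   # centers picked from the data
model = RBFNetwork(centers, width=0.3).fit(X, y)
print("training RMSE:", np.sqrt(np.mean((model.predict(X) - y) ** 2)))
```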

  13. Accelerated iterative beam angle selection in IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bangert, Mark, E-mail: m.bangert@dkfz.de; Unkelbach, Jan

    2016-03-15

    Purpose: Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n − 1) beams. The best beam orientation is identified in a time consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. Methods: We suggest to select candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Results: Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. With regard to the clinical delivery of noncoplanar IMRT treatments, we could show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. Conclusions: We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.

  14. Accelerated iterative beam angle selection in IMRT.

    PubMed

    Bangert, Mark; Unkelbach, Jan

    2016-03-01

    Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n - 1) beams. The best beam orientation is identified in a time consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. We suggest to select candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. With regard to the clinical delivery of noncoplanar IMRT treatments, we could show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.
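
    The following sketch, under stated assumptions, illustrates the general idea of the accelerated selection: each candidate beam is scored with a cheap surrogate (here, a few projected-gradient iterations on a simple nonnegative least-squares fluence problem) inside a greedy beam-by-beam selection loop. The dose-influence matrices, the quadratic objective, and the random usage data are illustrative and do not reproduce the authors' clinical FMO formulation.

        import numpy as np

        def surrogate_score(ensemble, d_target, n_iter=5):
            # Surrogate for the FMO objective: only a few projected-gradient steps on
            # 0.5*||D x - d_target||^2 with x >= 0, instead of solving FMO to convergence.
            D = np.hstack(ensemble)                            # voxels x beamlets for this trial ensemble
            x = np.zeros(D.shape[1])
            step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-12)   # safe gradient step length
            for _ in range(n_iter):
                g = D.T @ (D @ x - d_target)                   # gradient of the quadratic objective
                x = np.maximum(x - step * g, 0.0)              # project onto nonnegative fluence
            r = D @ x - d_target
            return 0.5 * float(r @ r)

        def greedy_bas(candidates, d_target, n_beams):
            # Add one beam per outer iteration: the candidate with the lowest surrogate score.
            selected, ensemble = [], []
            for _ in range(n_beams):
                avail = [i for i in range(len(candidates)) if i not in selected]
                scores = [surrogate_score(ensemble + [candidates[i]], d_target) for i in avail]
                best = avail[int(np.argmin(scores))]
                selected.append(best)
                ensemble.append(candidates[best])
            return selected

        # Toy usage: 12 candidate beams with random dose-influence matrices (200 voxels, 10 beamlets each).
        rng = np.random.default_rng(0)
        candidates = [rng.random((200, 10)) for _ in range(12)]
        d_target = rng.random(200)
        print(greedy_bas(candidates, d_target, n_beams=4))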

  15. Wishful Thinking? Inside the Black Box of Exposure Assessment.

    PubMed

    Money, Annemarie; Robinson, Christine; Agius, Raymond; de Vocht, Frank

    2016-05-01

    Decision-making processes used by experts when undertaking occupational exposure assessment are relatively unknown, but it is often assumed that there is a common underlying method that experts employ. However, differences in training and experience of assessors make it unlikely that one general method for expert assessment would exist. Therefore, there are concerns about formalizing, validating, and comparing expert estimates within and between studies that are difficult, if not impossible, to characterize. Heuristics, on the other hand (the processes involved in decision making), have been extensively studied. Heuristics are deployed by everyone as short-cuts to make the often complex process of decision-making simpler, quicker, and less burdensome. Experts' assessments are often subject to various simplifying heuristics as a way to reach a decision in the absence of sufficient data. Therefore, investigating the underlying heuristics or decision-making processes involved may help to shed light on the 'black box' of exposure assessment. A mixed method study was conducted utilizing both a web-based exposure assessment exercise incorporating quantitative and semiqualitative elements of data collection, and qualitative semi-structured interviews with exposure assessors. Qualitative data were analyzed using thematic analysis. Twenty-five experts completed the web-based exposure assessment exercise and 8 of these 25 were randomly selected to participate in the follow-up interview. Familiar key themes relating to the exposure assessment exercise emerged: 'intensity'; 'probability'; 'agent'; 'process'; and 'duration' of exposure. However, an important aspect of the detailed follow-up interviews revealed a lack of structure and order with which participants described their decision making. Participants mostly described some form of an iterative process, heavily relying on the anchoring and adjustment heuristic, which differed between experts. In spite of having undertaken comparable training (in occupational hygiene or exposure assessment), experts use different methods to assess exposure. Decision making appears to be an iterative process with heavy reliance on the key heuristic of anchoring and adjustment. Using multiple experts to assess exposure while providing some form of anchoring scenario to build from, and additional training in understanding the impact of simple heuristics on the process of decision making, is likely to produce a more methodical approach to assessment, thereby improving consistency and transparency in expert exposure assessment. © The Author 2016. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  16. Assessment of Preconditioner for a USM3D Hierarchical Adaptive Nonlinear Method (HANIM) (Invited)

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2016-01-01

    Enhancements to the previously reported mixed-element USM3D Hierarchical Adaptive Nonlinear Iteration Method (HANIM) framework have been made to further improve robustness, efficiency, and accuracy of computational fluid dynamic simulations. The key enhancements include a multi-color line-implicit preconditioner, a discretely consistent symmetry boundary condition, and a line-mapping method for the turbulence source term discretization. The USM3D iterative convergence for the turbulent flows is assessed on four configurations. The configurations include a two-dimensional (2D) bump-in-channel, the 2D NACA 0012 airfoil, a three-dimensional (3D) bump-in-channel, and a 3D hemisphere cylinder. The Reynolds Averaged Navier Stokes (RANS) solutions have been obtained using a Spalart-Allmaras turbulence model and families of uniformly refined nested grids. Two types of HANIM solutions using line- and point-implicit preconditioners have been computed. Additional solutions using the point-implicit preconditioner alone (PA) method that broadly represents the baseline solver technology have also been computed. The line-implicit HANIM shows superior iterative convergence in most cases with progressively increasing benefits on finer grids.

  17. Time Dependent Predictive Modeling of DIII-D ITER Baseline Scenario using Predictive TRANSP

    NASA Astrophysics Data System (ADS)

    Grierson, B. A.; Andre, R. G.; Budny, R. V.; Solomon, W. M.; Yuan, X.; Candy, J.; Pinsker, R. I.; Staebler, G. M.; Holland, C.; Rafiq, T.

    2015-11-01

    ITER baseline scenario discharges on DIII-D are modeled with TGLF and MMM for the transition from combined ECH (3.3 MW) + NBI (2.8 MW) heating to NBI-only (3.0 MW) heating while maintaining βN = 2.0, predicting temperature, density, and rotation for comparison with experimental measurements. These models capture the reduction of confinement associated with direct electron heating (H98y2 = 0.89 vs. 1.0), consistent with stiff electron transport. Reasonable agreement between experimental and modeled temperature profiles is achieved for both heating methods, whereas density and momentum predictions differ significantly. Transport fluxes from TGLF indicate that on DIII-D the electron energy flux has reached a transition from low-k to high-k turbulence, with stiffer high-k transport that inhibits an increase in core electron stored energy with additional electron heating. Projections to ITER also indicate high electron stiffness. Supported by US DOE DE-AC02-09CH11466, DE-FC02-04ER54698, DE-FG02-07ER54917, DE-FG02-92-ER54141.

  18. Qualification of a cyanate ester epoxy blend supplied by Japanese industry for the ITER TF coil insulation

    NASA Astrophysics Data System (ADS)

    Prokopec, R.; Humer, K.; Fillunger, H.; Maix, R. K.; Weber, H. W.; Knaster, J.; Savary, F.

    2012-06-01

    Over the last few years, two cyanate ester epoxy blends supplied by European and US industry have been successfully qualified for the ITER TF coil insulation. The results of the qualification of a third CE blend, supplied by Industrial Summit Technology (IST, Japan), are presented in this paper. Sets of test samples were fabricated under exactly the same conditions as used before. The reinforcement of the composite consists of wrapped R-glass/polyimide tapes, which are vacuum-pressure impregnated with the resin. The mechanical properties of this material were characterized prior to and after reactor irradiation to a fast neutron fluence of 2×10²² m⁻² (E > 0.1 MeV), i.e. twice the ITER design fluence. Static and dynamic tensile as well as static short-beam shear tests were carried out at 77 K. In addition, stress-strain relations were recorded to determine the Young's modulus at room temperature and at 77 K. The results are compared in detail with the previously qualified materials from other suppliers.

  19. RMP ELM Suppression in DIII-D Plasmas with ITER Similar Shapes and Collisionalities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, T.E.; Fenstermacher, M. E.; Moyer, R.A.

    2008-01-01

    Large Type-I edge localized modes (ELMs) are completely eliminated with small n = 3 resonant magnetic perturbations (RMP) in low average triangularity (δ = 0.26) plasmas and in ITER similar shaped (ISS) plasmas (δ = 0.53) with ITER-relevant collisionalities ν*e ≈ 0.2. Significant differences in the RMP requirements and in the properties of the ELM-suppressed plasmas are found when comparing the two triangularities. In ISS plasmas, the current required to suppress ELMs is approximately 25% higher than in low average triangularity plasmas. It is also found that the width of the resonant q95 window required for ELM suppression is smaller in ISS plasmas than in low average triangularity plasmas. An analysis of the positions and widths of resonant magnetic islands across the pedestal region, in the absence of resonant field screening or a self-consistent plasma response, indicates that differences in the shape of the q profile may explain the need for higher RMP coil currents during ELM suppression in ISS plasmas. Changes in the pedestal profiles are compared for each plasma shape as well as with changes in the injected neutral beam power and the RMP amplitude. Implications of these results are discussed in terms of requirements for optimal ELM control coil designs and for establishing the physics basis needed in order to scale this approach to future burning plasma devices such as ITER.

  20. Anderson acceleration and application to the three-temperature energy equations

    NASA Astrophysics Data System (ADS)

    An, Hengbin; Jia, Xiaowei; Walker, Homer F.

    2017-10-01

    The Anderson acceleration method is an algorithm for accelerating the convergence of fixed-point iterations, including the Picard method. Anderson acceleration was first proposed in 1965 and, for some years, has been used successfully to accelerate the convergence of self-consistent field iterations in electronic-structure computations. Recently, the method has attracted growing attention in other application areas and among numerical analysts. Compared with a Newton-like method, an advantage of Anderson acceleration is that there is no need to form the Jacobian matrix. Thus the method is easy to implement. In this paper, an Anderson-accelerated Picard method is employed to solve the three-temperature energy equations, which are a type of strongly nonlinear radiation-diffusion equation. Two strategies are used to improve the robustness of the Anderson acceleration method. One strategy is to adjust the iterates when necessary to satisfy the physical constraint. Another strategy is to monitor and, if necessary, reduce the matrix condition number of the least-squares problem in the Anderson-acceleration implementation so that numerical stability can be guaranteed. Numerical results show that the Anderson-accelerated Picard method can solve the three-temperature energy equations efficiently. Compared with the Picard method without acceleration, Anderson acceleration can reduce the number of iterations by at least half. A comparison between a Jacobian-free Newton-Krylov method, the Picard method, and the Anderson-accelerated Picard method is conducted in this paper.
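
    A minimal dense-algebra sketch of Anderson acceleration applied to a generic fixed-point (Picard) iteration x = g(x) is given below. The window size, the unregularized least-squares solve, and the cosine test problem are illustrative choices; the safeguards described in the abstract (enforcing physical constraints and monitoring the condition number of the least-squares problem) are not included.

        import numpy as np

        def anderson_picard(g, x0, m=5, tol=1e-10, max_iter=200):
            # Anderson-accelerated fixed-point iteration for x = g(x): mix the last few
            # residuals f_k = g(x_k) - x_k through a least-squares problem instead of
            # taking the plain Picard step x_{k+1} = g(x_k).
            x = np.asarray(x0, float)
            fx = g(x)
            X_hist, F_hist = [x], [fx - x]                   # iterates and residuals
            for _ in range(max_iter):
                if np.linalg.norm(F_hist[-1]) < tol:
                    return x
                mk = min(m, len(F_hist) - 1)
                if mk == 0:
                    x = fx                                   # first step: plain Picard update
                else:
                    dF = np.column_stack([F_hist[-i] - F_hist[-i - 1] for i in range(1, mk + 1)])
                    dG = np.column_stack([(X_hist[-i] + F_hist[-i]) - (X_hist[-i - 1] + F_hist[-i - 1])
                                          for i in range(1, mk + 1)])
                    gamma = np.linalg.lstsq(dF, F_hist[-1], rcond=None)[0]
                    x = (x + F_hist[-1]) - dG @ gamma        # mixed (accelerated) update
                fx = g(x)
                X_hist.append(x)
                F_hist.append(fx - x)
                X_hist, F_hist = X_hist[-(m + 2):], F_hist[-(m + 2):]   # keep a short history window
            return x

        # Toy usage: a scalar contraction standing in for one Picard sweep of the
        # discretized three-temperature equations; its fixed point is about 0.739085.
        print(anderson_picard(lambda x: np.cos(x), np.array([1.0])))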
