Sample records for involves iterative cycles

  1. Learning to Teach Elementary Science through Iterative Cycles of Enactment in Culturally and Linguistically Diverse Contexts

    ERIC Educational Resources Information Center

    Bottoms, SueAnn I.; Ciechanowski, Kathryn M.; Hartman, Brian

    2015-01-01

    Iterative cycles of enactment embedded in culturally and linguistically diverse contexts provide rich opportunities for preservice teachers (PSTs) to enact core practices of science. This study is situated in the larger Families Involved in Sociocultural Teaching and Science, Technology, Engineering and Mathematics (FIESTAS) project, which weaves…

  2. Learning to Teach Elementary Science Through Iterative Cycles of Enactment in Culturally and Linguistically Diverse Contexts

    NASA Astrophysics Data System (ADS)

    Bottoms, SueAnn I.; Ciechanowski, Kathryn M.; Hartman, Brian

    2015-12-01

    Iterative cycles of enactment embedded in culturally and linguistically diverse contexts provide rich opportunities for preservice teachers (PSTs) to enact core practices of science. This study is situated in the larger Families Involved in Sociocultural Teaching and Science, Technology, Engineering and Mathematics (FIESTAS) project, which weaves together cycles of enactment, core practices in science education and culturally relevant pedagogies. The theoretical foundation draws upon situated learning theory and communities of practice. Using video analysis by PSTs and course artifacts, the authors studied how the iterative process of these cycles guided PSTs' development as teachers of elementary science. Findings demonstrate how PSTs were drawing on resources to inform practice, purposefully noticing their practice, renegotiating their roles in teaching, and reconsidering "professional blindness" through cultural practice.

  3. Developing a Virtual Physics World

    ERIC Educational Resources Information Center

    Wegener, Margaret; McIntyre, Timothy J.; McGrath, Dominic; Savage, Craig M.; Williamson, Michael

    2012-01-01

    In this article, the successful implementation of a development cycle for a physics teaching package based on game-like virtual reality software is reported. The cycle involved several iterations of evaluating students' use of the package followed by instructional and software development. The evaluation used a variety of techniques, including…

  4. FENTON-DRIVEN CHEMICAL REGENERATION OF MTBE-SPENT GAC

    EPA Science Inventory

    Methyl tert-butyl ether (MTBE)-spent granular activated carbon (GAC) was chemically regenerated utilizing the Fenton mechanism. Two successive GAC regeneration cycles were performed involving iterative adsorption and oxidation processes: MTBE was adsorbed to the GAC, oxidized, r...

  5. Challenges and status of ITER conductor production

    NASA Astrophysics Data System (ADS)

    Devred, A.; Backbier, I.; Bessette, D.; Bevillard, G.; Gardner, M.; Jong, C.; Lillaz, F.; Mitchell, N.; Romano, G.; Vostner, A.

    2014-04-01

    Taking over from the Large Hadron Collider (LHC) at CERN, ITER has become the largest project in applied superconductivity. In addition to its technical complexity, ITER is also a management challenge, as it relies on an unprecedented collaboration of seven partners, representing more than half of the world population, who provide 90% of the components as in-kind contributions. The ITER magnet system is one of the most sophisticated superconducting magnet systems ever designed, with an enormous stored energy of 51 GJ. It involves six of the ITER partners. The coils are wound from cable-in-conduit conductors (CICCs) made up of superconducting and copper strands assembled into a multistage cable, inserted into a conduit of butt-welded austenitic steel tubes. The conductors for the toroidal field (TF) and central solenoid (CS) coils require about 600 t of Nb3Sn strands, while the poloidal field (PF), correction coil (CC) and busbar conductors need around 275 t of Nb-Ti strands. The required amount of Nb3Sn strands far exceeds pre-existing industrial capacity and has called for a significant worldwide production scale-up. The TF conductors are the first ITER components to be mass produced and are more than 50% complete. During its lifetime, the CS coil will have to sustain several tens of thousands of electromagnetic (EM) cycles to high current and field conditions, far beyond anything a large Nb3Sn coil has ever experienced. Following a comprehensive R&D program, a technical solution has been found for the CS conductor which ensures stable performance under EM and thermal cycling. Production of the PF, CC and busbar conductors is also underway. After an introduction to the ITER project and magnet system, we describe the ITER conductor procurements and the quality assurance/quality control programs that have been implemented to ensure production uniformity across numerous suppliers. Then, we provide examples of technical challenges that have been encountered and we present the status of ITER conductor production worldwide.

  6. Sometimes "Newton's Method" Always "Cycles"

    ERIC Educational Resources Information Center

    Latulippe, Joe; Switkes, Jennifer

    2012-01-01

    Are there functions for which Newton's method cycles for all non-trivial initial guesses? We construct and solve a differential equation whose solution is a real-valued function that two-cycles under Newton iteration. Higher-order cycles of Newton's method iterates are explored in the complex plane using complex powers of "x." We find a class of…
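
    The abstract's question has a well-known concrete answer worth seeing in code: for f(x) = sqrt(|x|), a single Newton step maps x to -x, so the iteration two-cycles from every non-zero starting guess. The sketch below is a standard illustration of this behavior, not necessarily the construction used in the paper.

```python
import math

def f(x):
    # f(x) = sqrt(|x|): a standard example for which Newton's method
    # two-cycles from every non-zero starting point.
    return math.sqrt(abs(x))

def df(x):
    # Derivative of sqrt(|x|) for x != 0.
    return math.copysign(1.0, x) / (2.0 * math.sqrt(abs(x)))

def newton_step(x):
    return x - f(x) / df(x)

# Algebraically x - f(x)/f'(x) = x - 2|x|*sign(x) = -x, so the iterates
# hop between x0 and -x0 forever instead of converging to the root at 0.
x = 2.5
for _ in range(4):
    x = newton_step(x)
    print(x)
```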

  7. Researchers Apply Lesson Study: A Cycle of Lesson Planning, Implementation, and Revision

    ERIC Educational Resources Information Center

    Regan, Kelley S.; Evmenova, Anya S.; Kurz, Leigh Ann; Hughes, Melissa D.; Sacco, Donna; Ahn, Soo Y.; MacVittie, Nichole; Good, Kevin; Boykin, Andrea; Schwartzer, Jessica; Chirinos, David S.

    2016-01-01

    Scripted lesson plans and/or professional development alone may not be sufficient to encourage teachers to reflect on the quality of their teaching and improve their teaching. One learning tool that teachers may use to improve their teaching is Lesson Study (LS). LS is a collaborative process involving educators, based on concepts of iteration and…

  8. Control software for two dimensional airfoil tests using a self-streamlining flexible walled transonic test section

    NASA Technical Reports Server (NTRS)

    Wolf, S. W. D.; Goodyer, M. J.

    1982-01-01

    Operation of the Transonic Self-Streamlining Wind Tunnel (TSWT) involved on-line data acquisition with automatic wall adjustment. A tunnel run consisted of streamlining the walls from known starting contours in iterative steps and acquiring model data. Each run performs what is described as a streamlining cycle. The associated software is presented.

  9. A Holistic Approach to Systems Development

    NASA Technical Reports Server (NTRS)

    Wong, Douglas T.

    2008-01-01

    Introduces a Holistic and Iterative Design Process: a continuous process, but one that can be loosely divided into four stages, with more effort spent early in the design. It is human-centered and multidisciplinary, with an emphasis on life-cycle cost and extensive use of modeling, simulation, mockups, human subjects, and proven technologies. Human-centered design doesn't mean the human factors discipline is the most important; disciplines that should be involved in the design include subsystem vendors, configuration management, operations research, manufacturing engineering, simulation/modeling, cost engineering, hardware engineering, software engineering, test and evaluation, human factors, electromagnetic compatibility, integrated logistics support, reliability/maintainability/availability, safety engineering, test equipment, training systems, design-to-cost, life-cycle cost, application engineering, etc.

  10. Iterative LQG Controller Design Through Closed-Loop Identification

    NASA Technical Reports Server (NTRS)

    Hsiao, Min-Hung; Huang, Jen-Kuang; Cox, David E.

    1996-01-01

    This paper presents an iterative Linear Quadratic Gaussian (LQG) controller design approach for a linear stochastic system with an uncertain open-loop model and unknown noise statistics. This approach consists of closed-loop identification and controller redesign cycles. In each cycle, the closed-loop identification method is used to identify an open-loop model and a steady-state Kalman filter gain from closed-loop input/output test data obtained by using a feedback LQG controller designed from the previous cycle. Then the identified open-loop model is used to redesign the state feedback. The state feedback and the identified Kalman filter gain are used to form an updated LQG controller for the next cycle. This iterative process continues until the updated controller converges. The proposed controller design is demonstrated by numerical simulations and experiments on a highly unstable large-gap magnetic suspension system.
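
    A drastically simplified, scalar version of this identify-and-redesign cycle can be sketched as follows. The plant, noise levels, and cost weights are all hypothetical, the Kalman filter stage is omitted, and plain least squares stands in for the paper's closed-loop identification method; the point is only the structure of the loop.

```python
import random

def lqr_gain(a, b, q=1.0, r=1.0, iters=200):
    # Scalar discrete-time Riccati iteration, then the state-feedback gain.
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

def run_closed_loop(a, b, k, n=2000, dither=1.0, noise=0.1):
    # Simulate the true plant x+ = a*x + b*u + w under u = -k*x + dither.
    x, data = 0.0, []
    for _ in range(n):
        u = -k * x + dither * random.gauss(0.0, 1.0)
        x_next = a * x + b * u + noise * random.gauss(0.0, 1.0)
        data.append((x, u, x_next))
        x = x_next
    return data

def identify(data):
    # Least-squares fit of x+ = a*x + b*u via 2x2 normal equations;
    # the dither keeps x and u from being perfectly collinear.
    sxx = sum(x * x for x, u, y in data)
    sxu = sum(x * u for x, u, y in data)
    suu = sum(u * u for x, u, y in data)
    sxy = sum(x * y for x, u, y in data)
    suy = sum(u * y for x, u, y in data)
    det = sxx * suu - sxu * sxu
    return (sxy * suu - suy * sxu) / det, (suy * sxx - sxy * sxu) / det

random.seed(0)
a_true, b_true = 1.2, 1.0            # hypothetical open-loop-unstable plant
k = lqr_gain(1.0, 1.0)               # initial controller from a crude model
for _ in range(5):                   # identification / redesign cycles
    data = run_closed_loop(a_true, b_true, k)
    a_hat, b_hat = identify(data)
    k_new = lqr_gain(a_hat, b_hat)
    if abs(k_new - k) < 1e-3:        # stop once the controller converges
        break
    k = k_new
print(a_hat, b_hat, k)
```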

  11. Using Iterative Plan-Do-Study-Act Cycles to Improve Teaching Pedagogy.

    PubMed

    Murray, Elizabeth J

    2018-01-15

    Most students entering nursing programs today are members of Generation Y or the Millennial generation, and they learn differently than previous generations. Nurse educators must consider implementing innovative teaching strategies that appeal to the newest generation of learners. The Plan-Do-Study-Act cycle is a framework that can be helpful when planning, assessing, and continually improving teaching pedagogy. This article describes the use of iterative Plan-Do-Study-Act cycles to implement a change in teaching pedagogy.

  12. Social and Personal Factors in Semantic Infusion Projects

    NASA Astrophysics Data System (ADS)

    West, P.; Fox, P. A.; McGuinness, D. L.

    2009-12-01

    As part of our semantic data framework activities across multiple, diverse disciplines we required the involvement of domain scientists, computer scientists, software engineers, data managers, and often, social scientists. This involvement from a cross-section of disciplines turns out to be a social exercise as much as it is a technical and methodical activity. Each member of the team is used to different modes of working, expectations, vocabularies, levels of participation, and incentive and reward systems. We will examine the part that both roles and personal responsibilities play in the development of semantic infusion projects, and how an iterative development cycle can contribute to the successful completion of such a project.

  13. Modeling and Simulation of a Parametrically Resonant Micromirror With Duty-Cycled Excitation.

    PubMed

    Shahid, Wajiha; Qiu, Zhen; Duan, Xiyu; Li, Haijun; Wang, Thomas D; Oldham, Kenn R

    2014-12-01

    High frequency large scanning angle electrostatically actuated microelectromechanical systems (MEMS) mirrors are used in a variety of applications involving fast optical scanning. A 1-D parametrically resonant torsional micromirror for use in biomedical imaging is analyzed here with respect to operation by duty-cycled square waves. Duty-cycled square wave excitation can have significant advantages for practical mirror regulation and/or control. The mirror's nonlinear dynamics under such excitation is analyzed in a Hill's equation form. This form is used to predict stability regions (the voltage-frequency relationship) of parametric resonance behavior over large scanning angles using iterative approximations for nonlinear capacitance behavior of the mirror. Numerical simulations are also performed to obtain the mirror's frequency response over several voltages for various duty cycles. Frequency sweeps, stability results, and duty cycle trends from both analytical and simulation methods are compared with experimental results. Both analytical models and simulations show good agreement with experimental results over the range of duty cycled excitations tested. This paper discusses the implications of changing amplitude and phase with duty cycle for robust open-loop operation and future closed-loop operating strategies.
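
    The square-wave (duty-cycled) case of Hill's equation, sometimes called the Meissner equation, is simple enough to analyze exactly: over each constant-stiffness interval the state propagates by a closed-form 2x2 matrix, and the trace of the product over one period decides stability. The sketch below uses hypothetical stiffness numbers, not the mirror's actual nonlinear capacitance model.

```python
import math

def segment_matrix(k, d):
    # Exact state-transition matrix over time d for x'' + k*x = 0, k > 0:
    # a rotation in (x, x') phase space at angular frequency w = sqrt(k).
    w = math.sqrt(k)
    return [[math.cos(w * d), math.sin(w * d) / w],
            [-w * math.sin(w * d), math.cos(w * d)]]

def matmul(A, B):
    return [[sum(A[i][m] * B[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

def monodromy_trace(k_on, k_off, period, duty):
    # One modulation period: stiffness k_on for duty*period, k_off for the rest.
    M1 = segment_matrix(k_on, duty * period)
    M2 = segment_matrix(k_off, (1.0 - duty) * period)
    M = matmul(M2, M1)
    return M[0][0] + M[1][1]

# Floquet theory: the solution stays bounded iff |trace| <= 2.
w0 = math.pi                       # natural frequency (hypothetical units)
# Modulation period 1 is half the natural period 2, i.e. the principal
# parametric resonance condition, so modulating the stiffness destabilizes:
tr_resonant = monodromy_trace((1.2 * w0) ** 2, (0.8 * w0) ** 2, 1.0, 0.5)
tr_detuned = monodromy_trace(4.0, 4.0, 1.0, 0.5)   # no modulation at all
print(abs(tr_resonant) > 2.0, abs(tr_detuned) > 2.0)  # True False
```

Sweeping `k_on`/`k_off` and `period` over a grid of this trace reproduces the familiar stability tongues (the voltage-frequency relationship) that the paper maps for the micromirror.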

  14. Iterative management of heat early warning systems in a changing climate.

    PubMed

    Hess, Jeremy J; Ebi, Kristie L

    2016-10-01

    Extreme heat is a leading weather-related cause of morbidity and mortality, with heat exposure becoming more widespread, frequent, and intense as climates change. The use of heat early warning and response systems (HEWSs) that integrate weather forecasts with risk assessment, communication, and reduction activities is increasingly widespread. HEWSs are frequently touted as an adaptation to climate change, but little attention has been paid to the question of how best to ensure effectiveness of HEWSs as climates change further. In this paper, we discuss findings showing that HEWSs satisfy the tenets of an intervention that facilitates adaptation, but climate change poses challenges infrequently addressed in heat action plans, particularly changes in the onset, duration, and intensity of dangerously warm temperatures, and changes over time in the relationships between temperature and health outcomes. Iterative management should be central to a HEWS, and iteration cycles should be of 5 years or less. Climate change adaptation and implementation science research frameworks can be used to identify HEWS modifications to improve their effectiveness as temperature continues to rise, incorporating scientific insights and new understanding of effective interventions. We conclude that, at a minimum, iterative management activities should involve planned reassessment at least every 5 years of hazard distribution, population-level vulnerability, and HEWS effectiveness. © 2016 New York Academy of Sciences.

  15. Global strength assessment in oblique waves of a large gas carrier ship, based on a non-linear iterative method

    NASA Astrophysics Data System (ADS)

    Domnisoru, L.; Modiga, A.; Gasparotti, C.

    2016-08-01

    In ship design, the first step of the hull structural assessment is the longitudinal strength analysis, with head wave equivalent loads prescribed by the ships' classification societies' rules. This paper presents an enhancement of the longitudinal strength analysis, considering the general case of oblique quasi-static equivalent waves, based on our own non-linear iterative procedure and in-house program. The numerical approach is developed for mono-hull ships, without restrictions on 3D-hull offset lines non-linearities, and involves three interlinked iterative cycles on the floating, pitch and roll trim equilibrium conditions. Besides the ship-wave equilibrium parameters, the ship's girder wave-induced loads are obtained. As a numerical study case we have considered a large liquefied petroleum gas (LPG) carrier. The numerical results for the large LPG carrier are compared with the statistical design values from several ships' classification societies' rules. This study makes it possible to obtain the oblique wave conditions that induce the maximum loads in the large LPG ship's girder. The numerical results of this study point out that the non-linear iterative approach is necessary for the computation of the extreme loads induced by oblique waves, ensuring better accuracy of the large LPG ship's longitudinal strength assessment.
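
    The innermost of those interlinked loops, the floating (heave) balance, can be illustrated with a deliberately tiny sketch: bisection on the buoyancy-weight residual for a hypothetical box-shaped hull in still water. The paper's procedure additionally iterates the pitch and roll trim equilibria against an oblique wave profile, which this omits.

```python
def draft_for_equilibrium(mass_kg, length_m, beam_m, rho=1025.0, depth=20.0):
    # Bisection on the heave residual: buoyant mass minus ship mass.
    # For a box hull, displaced volume at draft T is simply L * B * T.
    def residual(draft):
        return rho * length_m * beam_m * draft - mass_kg

    lo, hi = 0.0, depth
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical LPG-carrier-scale numbers: 50,000 t on a 200 m x 30 m box hull.
draft = draft_for_equilibrium(50e6, 200.0, 30.0)
print(draft)   # analytically mass / (rho * L * B), roughly 8.13 m
```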

  16. Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems

    DOE PAGES

    Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.; ...

    2018-04-30

    The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.
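
    The idea is easy to demonstrate on a toy fission-matrix power iteration: seeding the source iteration with a better initial guess reduces the number of inactive cycles needed before the source converges. The matrix and numbers below are hypothetical; real Shift/Denovo calculations obtain the seed from a deterministic SPN or SN solve rather than from a perturbed converged answer.

```python
def power_iteration_cycles(A, s, tol=1e-8, max_cycles=10_000):
    # Source iteration: s <- A s / k, counting the "inactive" cycles needed
    # before the fission source stops changing (L1 norm below tol).
    n = len(A)
    for cycle in range(1, max_cycles + 1):
        t = [sum(A[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(t)                      # normalise the source to sum to 1
        t = [x / k for x in t]
        if sum(abs(t[i] - s[i]) for i in range(n)) < tol:
            return cycle, t
        s = t
    return max_cycles, s

# A toy 3-region "fission matrix" (hypothetical numbers, symmetric,
# coupled so that the true source is peaked in the middle region).
A = [[2.0, 1.0, 0.1],
     [1.0, 3.0, 1.0],
     [0.1, 1.0, 2.0]]

flat = [1 / 3, 1 / 3, 1 / 3]            # standard flat initial guess
n_flat, s_star = power_iteration_cycles(A, flat)

# Sourcerer-style seeding: start from a cheap approximation of the
# converged source (here, the converged answer blended with flat).
guess = [0.9 * x + 0.1 / 3 for x in s_star]
guess = [x / sum(guess) for x in guess]
n_seeded, _ = power_iteration_cycles(A, guess)
print(n_flat, n_seeded)                 # the seeded run needs fewer cycles
```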

  17. Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.

    The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.

  18. Experiments on water detritiation and cryogenic distillation at TLK; Impact on ITER fuel cycle subsystems interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cristescu, I.; Cristescu, I. R.; Doerr, L.

    2008-07-15

    The ITER Isotope Separation System (ISS) and Water Detritiation System (WDS) should be integrated in order to reduce potential chronic tritium emissions from the ISS. This is achieved by routing the top (protium) product from the ISS to a feed point near the bottom end of the WDS Liquid Phase Catalytic Exchange (LPCE) column. This provides an additional barrier against ISS emissions and should mitigate the memory effects due to process parameter fluctuations in the ISS. To support the research activities needed to characterize the performances of various components for WDS and ISS processes under various working conditions and configurations as needed for ITER design, an experimental facility called TRENTA, representative of the ITER WDS and ISS protium separation column, has been commissioned and is in operation at TLK. The experimental program on the TRENTA facility is conducted to provide the necessary design data related to the relevant ITER operating modes. The operational availability and performance of the ISS-WDS have an impact on ITER fuel cycle subsystems, with consequences for the design integration. Preliminary experimental data from the TRENTA facility are presented. (authors)

  19. Integrating Low-Cost Rapid Usability Testing into Agile System Development of Healthcare IT: A Methodological Perspective.

    PubMed

    Kushniruk, Andre W; Borycki, Elizabeth M

    2015-01-01

    The development of more usable and effective healthcare information systems has become a critical issue. In the software industry, methodologies such as agile and iterative development processes have emerged to produce more effective and usable systems. These approaches highlight focusing on user needs and promote iterative and flexible development practices. Evaluation and testing of iterative agile development cycles is considered an important part of the agile methodology and of iterative processes for system design and re-design. However, the issue of how to effectively integrate usability testing methods into rapid and flexible agile design cycles has yet to be fully explored. In this paper we describe our application of an approach known as low-cost rapid usability testing as it has been applied within agile system development in healthcare. The advantages of the integrative approach are described, along with current methodological considerations.

  20. Using Rapid Prototyping to Design a Smoking Cessation Website with End-Users.

    PubMed

    Ronquillo, Charlene; Currie, Leanne; Rowsell, Derek; Phillips, J Craig

    2016-01-01

    Rapid prototyping is an iterative approach to design involving cycles of prototype building, review by end-users and refinement, and can be a valuable tool in user-centered website design. Informed by various user-centered approaches, we used rapid prototyping as a tool to collaborate with users in building a peer-support focused smoking-cessation website for gay men living with HIV. Rapid prototyping was effective in eliciting feedback on the needs of this group of potential end-users from a smoking cessation website.

  1. Modeling and Simulation of a Parametrically Resonant Micromirror With Duty-Cycled Excitation

    PubMed Central

    Shahid, Wajiha; Qiu, Zhen; Duan, Xiyu; Li, Haijun; Wang, Thomas D.; Oldham, Kenn R.

    2014-01-01

    High frequency large scanning angle electrostatically actuated microelectromechanical systems (MEMS) mirrors are used in a variety of applications involving fast optical scanning. A 1-D parametrically resonant torsional micromirror for use in biomedical imaging is analyzed here with respect to operation by duty-cycled square waves. Duty-cycled square wave excitation can have significant advantages for practical mirror regulation and/or control. The mirror’s nonlinear dynamics under such excitation is analyzed in a Hill’s equation form. This form is used to predict stability regions (the voltage-frequency relationship) of parametric resonance behavior over large scanning angles using iterative approximations for nonlinear capacitance behavior of the mirror. Numerical simulations are also performed to obtain the mirror’s frequency response over several voltages for various duty cycles. Frequency sweeps, stability results, and duty cycle trends from both analytical and simulation methods are compared with experimental results. Both analytical models and simulations show good agreement with experimental results over the range of duty cycled excitations tested. This paper discusses the implications of changing amplitude and phase with duty cycle for robust open-loop operation and future closed-loop operating strategies. PMID:25506188

  2. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among them a nearly linear increase per iteration in the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.
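
    The relaxation scheme itself can be sketched independently of any reactor physics. Below, a noisy tally whose standard error scales like 1/sqrt(histories) stands in for the Monte Carlo solver, the number of histories grows linearly per step, and the relaxation factor alpha_i = 1/i shrinks, so the final estimate averages over all cycles. All numbers are hypothetical.

```python
import random

random.seed(42)
true_power = [0.2, 0.5, 0.3]            # hypothetical converged distribution

def mc_estimate(n_histories):
    # Stand-in for a Monte Carlo power tally: the statistical error
    # shrinks like 1/sqrt(number of histories).
    sigma = 1.0 / n_histories ** 0.5
    est = [p + random.gauss(0.0, sigma) for p in true_power]
    s = sum(est)
    return [x / s for x in est]

power = [1 / 3, 1 / 3, 1 / 3]           # flat starting guess
n0 = 100
for i in range(1, 21):
    n_i = n0 * i                        # histories grow linearly per step
    alpha = 1.0 / i                     # relaxation factor shrinks per step
    sample = mc_estimate(n_i)
    # Under-relaxed update of the power distribution that would be fed to
    # the thermal-hydraulics side: power <- (1-alpha)*power + alpha*sample.
    power = [(1 - alpha) * p + alpha * s for p, s in zip(power, sample)]

err = max(abs(p - t) for p, t in zip(power, true_power))
print(power, err)
```

With alpha_i = 1/i every cycle's tally ends up with equal final weight, which is why the estimate keeps improving instead of being dominated by the noisy early cycles.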

  3. Optical implementation of inner product neural associative memory

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor)

    1995-01-01

    An optical implementation of an inner-product neural associative memory is realized with a first spatial light modulator for entering an initial two-dimensional N-tuple vector and for entering a thresholded output vector image after each iteration until convergence is reached, and a second spatial light modulator for entering M weighted vectors of inner-product scalars multiplied with each of the M stored vectors, where the inner-product scalars are produced by multiplication of the initial input vector in the first iterative cycle (and thresholded vectors in subsequent iterative cycles) with each of the M stored vectors, and the weighted vectors are produced by multiplication of the scalars with corresponding ones of the stored vectors. A Hughes liquid crystal light valve is used for the dual function of summing the weighted vectors and thresholding the sum vector. The thresholded vector is then entered through the first spatial light modulator for reiteration of the process cycle until convergence is reached.
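
    Ignoring the optics, the underlying algorithm is a Hopfield-style inner-product memory that is easy to sketch: correlate the input with each stored vector, weight the stored vectors by those scalars, sum, threshold, and repeat until the thresholded vector stops changing. The 16-element bipolar patterns below are hypothetical stand-ins for the two-dimensional images, chosen orthogonal so they do not interfere.

```python
def sign(v):
    return 1 if v >= 0 else -1

# Two stored bipolar patterns (orthogonal: their inner product is zero).
p1 = [1] * 8 + [-1] * 8
p2 = [1, -1] * 8
stored = [p1, p2]

def recall_step(x):
    # Inner-product stage: one scalar per stored vector (the optical
    # correlations), then the weighted sum of stored vectors, then the
    # thresholding stage performed by the light valve.
    scalars = [sum(a * b for a, b in zip(m, x)) for m in stored]
    summed = [sum(s * m[i] for s, m in zip(scalars, stored))
              for i in range(len(x))]
    return [sign(v) for v in summed]

def recall(x, max_iters=10):
    # Re-enter the thresholded vector each cycle until convergence.
    for _ in range(max_iters):
        nxt = recall_step(x)
        if nxt == x:
            return x
        x = nxt
    return x

corrupted = list(p1)
corrupted[0] = -corrupted[0]    # flip two elements of a stored pattern
corrupted[5] = -corrupted[5]
print(recall(corrupted) == p1)  # True
```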

  4. Reducing Design Cycle Time and Cost Through Process Resequencing

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    2004-01-01

    In today's competitive environment, companies are under enormous pressure to reduce the time and cost of their design cycle. One method for reducing both time and cost is to develop an understanding of the flow of the design processes and the effects of the iterative subcycles that are found in complex design projects. Once these aspects are understood, the design manager can make decisions that take advantage of decomposition, concurrent engineering, and parallel processing techniques to reduce the total time and the total cost of the design cycle. One software tool that can aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). The DeMAID software minimizes the feedback couplings that create iterative subcycles, groups processes into iterative subcycles, and decomposes the subcycles into a hierarchical structure. The real benefits of producing the best design in the least time and at a minimum cost are obtained from sequencing the processes in the subcycles.
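
    The core bookkeeping behind such a tool can be sketched with a design structure matrix: count the couplings whose producer is scheduled after its consumer (each one is a feedback that forces an iterative subcycle), then search for the ordering that minimizes the count. The five-process dependency matrix below is hypothetical, and brute force stands in for DeMAID's knowledge-based heuristics.

```python
from itertools import permutations

# dep[i][j] = 1 means process i consumes output of process j
# (a hypothetical 5-process design cycle containing feedback loops).
dep = [[0, 0, 1, 0, 0],
       [1, 0, 0, 0, 1],
       [0, 1, 0, 0, 0],
       [0, 0, 1, 0, 0],
       [0, 0, 0, 1, 0]]

def feedback_count(order):
    # A coupling j -> i is a feedback (and creates an iterative subcycle)
    # whenever producer j is scheduled after consumer i.
    pos = {p: k for k, p in enumerate(order)}
    return sum(1 for i in range(len(dep)) for j in range(len(dep))
               if dep[i][j] and pos[j] > pos[i])

# Small enough to enumerate every ordering exhaustively.
best = min(permutations(range(5)), key=feedback_count)
print(best, feedback_count(best), feedback_count((0, 1, 2, 3, 4)))
```

Because the dependency graph here is one strongly connected component, no ordering can reach zero feedbacks; resequencing only minimizes them.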

  5. Structural materials by powder HIP for fusion reactors

    NASA Astrophysics Data System (ADS)

    Dellis, C.; Le Marois, G.; van Osch, E. V.

    1998-10-01

    Tokamak blankets have complex shapes and geometries with double curvature and embedded cooling channels. Usual manufacturing techniques such as forging, bending and welding generate very complex fabrication routes. Hot Isostatic Pressing (HIP) is a versatile and flexible fabrication technique that has a broad range of commercial applications. Powder HIP appears to be one of the most suitable techniques for the manufacturing of such complex shape components as fusion reactor modules. During the HIP cycle, the powder is consolidated and porosity in the material disappears. This involves a variation of 30% in the volume of the component. These deformations are not isotropic due to temperature gradients in the part and the stiffness of the canister. This paper discusses the following points: (i) availability of a manufacturing process by powder HIP of 316LN stainless steel (ITER modules) and F82H martensitic steel (ITER Test Module and DEMO blanket) with properties equivalent to the forged material; (ii) availability of powerful modelling techniques to simulate the densification of powder during the HIP cycle, and to control the deformation of components during consolidation by improving the canister design; (iii) the material database needed for simulation of the HIP process, and the optimisation of canister geometry; (iv) irradiation behaviour of powder HIP materials from preliminary results.

  6. Improvement of tritium accountancy technology for ITER fuel cycle safety enhancement

    NASA Astrophysics Data System (ADS)

    O'hira, S.; Hayashi, T.; Nakamura, H.; Kobayashi, K.; Tadokoro, T.; Nakamura, H.; Itoh, T.; Yamanishi, T.; Kawamura, Y.; Iwai, Y.; Arita, T.; Maruyama, T.; Kakuta, T.; Konishi, S.; Enoeda, M.; Yamada, M.; Suzuki, T.; Nishi, M.; Nagashima, T.; Ohta, M.

    2000-03-01

    In order to improve the safe handling and control of tritium for the ITER fuel cycle, effective in situ tritium accounting methods have been developed at the Tritium Process Laboratory in the Japan Atomic Energy Research Institute under one of the ITER-EDA R&D tasks. The remote and multilocation analysis of process gases by an application of laser Raman spectroscopy, developed and tested here, could provide a measurement of hydrogen isotope gases with a detection limit of 0.3 kPa and analytical periods of 120 s. An in situ tritium inventory measurement by application of a 'self-assaying' storage bed with 25 g tritium capacity could provide a measurement with the required detection limit of less than 1%, and a design proof of a bed with 100 g tritium capacity.

  7. Systematic development of input-quantum-limited fluoroscopic imagers based on active-matrix flat-panel technology

    NASA Astrophysics Data System (ADS)

    Antonuk, Larry E.; Zhao, Qihua; Su, Zhong; Yamamoto, Jin; El-Mohri, Youcef; Li, Yixin; Wang, Yi; Sawant, Amit R.

    2004-05-01

    The development of fluoroscopic imagers exhibiting performance that is primarily limited by the noise of the incident x-ray quanta, even at very low exposures, remains a highly desirable objective for active matrix flat-panel technology. Previous theoretical and empirical studies have indicated that promising strategies for achieving this goal include the development of array designs incorporating improved optical collection fill factors, pixel-level amplifiers, or very high-gain photoconductors. Our group is pursuing all three strategies, and this paper describes progress toward the systematic development of array designs involving the last approach. The research involved the iterative fabrication and evaluation of a series of prototype imagers incorporating a promising high-gain photoconductive material, mercuric iodide (HgI2). Over many cycles of photoconductor deposition and array evaluation, improvements in a variety of properties have been observed and remaining fundamental challenges have become apparent. For example, process compatibility between the deposited HgI2 and the arrays has been greatly improved, while preserving efficient, prompt signal extraction. As a result, x-ray sensitivities within a factor of two of the nominal limit associated with the single-crystal form of HgI2 have been observed at relatively low electric fields (~0.1 to 0.6 V/μm) for some iterations. In addition, for a number of iterations, performance targets for dark current stability and range of linearity have been met or exceeded. However, spotting of the array, due to localized chemical reactions, is still a concern. Moreover, the dark current, uniformity of pixel response, and degree of charge trapping, though markedly improved for some iterations, require further optimization. Furthermore, achieving the desired performance for all properties simultaneously remains an important goal. In this paper, a broad overview of the progress of the research will be presented, remaining challenges in the development of this photoconductive material will be outlined, and prospects for further improvement will be discussed.

  8. Flat Tile Armour Cooled by Hypervapotron Tube: a Possible Technology for ITER

    NASA Astrophysics Data System (ADS)

    Schlosser, J.; Escourbiac, F.; Merola, M.; Schedler, B.; Bayetti, P.; Missirlian, M.; Mitteau, R.; Robin-Vastra, I.

    Carbon fibre composite (CFC) flat tile armours for actively cooled plasma facing components (PFCs) are an important challenge for controlled fusion machines. Flat tile concepts, water cooled by tubes, were studied, developed, tested and finally operated with success in Tore Supra. The components were designed for 10 MW/m2, and mock-ups were successfully fatigue tested at 15 MW/m2 for 1000 cycles. For ITER, a tube-in-tile concept was developed, and mock-ups sustained up to 25 MW/m2 for 1000 cycles without failure. Recently, flat tile armoured mock-ups cooled by a hypervapotron tube successfully sustained a cascade failure test under a mean heat flux of 10 MW/m2, but with a doubling of the heat flux on some tiles to simulate missing tiles (500 cycles). These encouraging results have led to a reconsideration of the limits of the flat tile concept when cooled by a hypervapotron (HV) tube. New tests are now scheduled to investigate these limits with regard to the ITER requirements. Experimental evidence for the concept could be gained in Tore Supra by installing a new limiter into the machine.

  9. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
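    The ordering problem described above can be sketched as a small permutation-based genetic algorithm. The sketch below is not DeMAID's actual algorithm or cost model; the coupling matrix, operators, and parameters are illustrative assumptions, with fitness taken simply as the number of feedback couplings (data flowing from a later process back to an earlier one) that force iteration.

```python
import random

# Hypothetical coupling matrix: couples[i][j] = 1 if process i feeds data to
# process j. Feedback couplings (data flowing backwards in the execution
# order) force iteration, so fitness counts couplings that point from a later
# to an earlier position in the ordering.
def feedback_cost(order, couples):
    pos = {p: k for k, p in enumerate(order)}
    n = len(order)
    return sum(couples[i][j] for i in range(n) for j in range(n)
               if couples[i][j] and pos[i] > pos[j])

def order_crossover(a, b):
    # Standard OX: copy a slice from parent a, fill the rest in parent b's order.
    n = len(a)
    lo, hi = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[lo:hi] = a[lo:hi]
    fill = [g for g in b if g not in child[lo:hi]]
    k = 0
    for idx in list(range(0, lo)) + list(range(hi, n)):
        child[idx] = fill[k]
        k += 1
    return child

def ga_order(couples, pop_size=40, generations=200, seed=1):
    random.seed(seed)
    n = len(couples)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: feedback_cost(o, couples))
        elite = pop[:pop_size // 2]        # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            c = order_crossover(a, b)
            if random.random() < 0.2:      # swap mutation
                i, j = random.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = elite + children
    best = min(pop, key=lambda o: feedback_cost(o, couples))
    return best, feedback_cost(best, couples)
```

For a chain of processes in which each feeds the next, the GA recovers the forward ordering, which has no feedback couplings at all.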

  10. Cyclic Game Dynamics Driven by Iterated Reasoning

    PubMed Central

    Frey, Seth; Goldstone, Robert L.

    2013-01-01

    Recent theories from complexity science argue that complex dynamics are ubiquitous in social and economic systems. These claims emerge from the analysis of individually simple agents whose collective behavior is surprisingly complicated. However, economists have argued that iterated reasoning (what you think I think you think) will suppress complex dynamics by stabilizing or accelerating convergence to Nash equilibrium. We report stable and efficient periodic behavior in human groups playing the Mod Game, a multi-player game similar to Rock-Paper-Scissors. The game rewards subjects for thinking exactly one step ahead of others in their group. Groups that play this game exhibit cycles that are inconsistent with any fixed-point solution concept. These cycles are driven by a “hopping” behavior that is consistent with other accounts of iterated reasoning: agents are constrained to about two steps of iterated reasoning and learn an additional one-half step with each session. If higher-order reasoning can be complicit in complex emergent dynamics, then cyclic and chaotic patterns may be endogenous features of real-world social and economic systems. PMID:23441191

  11. An iterative solver for the 3D Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Belonosov, Mikhail; Dmitriev, Maxim; Kostin, Victor; Neklyudov, Dmitry; Tcheverda, Vladimir

    2017-09-01

    We develop a frequency-domain iterative solver for numerical simulation of acoustic waves in 3D heterogeneous media. It is based on the application of a unique preconditioner to the Helmholtz equation that ensures convergence for Krylov subspace iteration methods. Effective inversion of the preconditioner involves the Fast Fourier Transform (FFT) and numerical solution of a series of boundary value problems for ordinary differential equations. Matrix-by-vector multiplication for iterative inversion of the preconditioned matrix involves inversion of the preconditioner and pointwise multiplication of grid functions. Our solver has been verified by benchmarking against exact solutions and a time-domain solver.

  12. Systematic review of the application of the plan–do–study–act method to improve quality in healthcare

    PubMed Central

    Taylor, Michael J; McNicholas, Chris; Nicolay, Chris; Darzi, Ara; Bell, Derek; Reed, Julie E

    2014-01-01

    Background Plan–do–study–act (PDSA) cycles provide a structure for iterative testing of changes to improve quality of systems. The method is widely accepted in healthcare improvement; however there is little overarching evaluation of how the method is applied. This paper proposes a theoretical framework for assessing the quality of application of PDSA cycles and explores the consistency with which the method has been applied in peer-reviewed literature against this framework. Methods NHS Evidence and Cochrane databases were searched by three independent reviewers. Empirical studies were included that reported application of the PDSA method in healthcare. Application of PDSA cycles was assessed against key features of the method, including documentation characteristics, use of iterative cycles, prediction-based testing of change, initial small-scale testing and use of data over time. Results 73 of 409 individual articles identified met the inclusion criteria. Of the 73 articles, 47 documented PDSA cycles in sufficient detail for full analysis against the whole framework. Many of these studies reported application of the PDSA method that failed to accord with primary features of the method. Less than 20% (14/73) fully documented the application of a sequence of iterative cycles. Furthermore, a lack of adherence to the notion of small-scale change is apparent and only 15% (7/47) reported the use of quantitative data at monthly or more frequent data intervals to inform progression of cycles. Discussion To progress the development of the science of improvement, a greater understanding of the use of improvement methods, including PDSA, is essential to draw reliable conclusions about their effectiveness. This would be supported by the development of systematic and rigorous standards for the application and reporting of PDSAs. PMID:24025320

  13. Subsonic panel method for designing wing surfaces from pressure distribution

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.; Hawk, J. D.

    1983-01-01

    An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.

  14. A simplified method for elastic-plastic-creep structural analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1984-01-01

    A simplified inelastic analysis computer program (ANSYPM) was developed for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects are calculated on the basis of stress relaxation at constant strain, creep at constant stress or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.

  15. A simplified method for elastic-plastic-creep structural analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1985-01-01

    A simplified inelastic analysis computer program (ANSYPM) was developed for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects are calculated on the basis of stress relaxation at constant strain, creep at constant stress or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.

  16. Thermal fatigue testing of a diffusion-bonded beryllium divertor mock-up under ITER-relevant conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Youchison, D.L.; Watson, R.D.; McDonald, J.M.

    Thermal response and thermal fatigue tests of four 5-mm-thick beryllium tiles on a Russian Federation International Thermonuclear Experimental Reactor (ITER)-relevant divertor mock-up were completed on the electron beam test system at Sandia National Laboratories. Thermal response tests were performed on the tiles to an absorbed heat flux of 5 MW/m2 and surface temperatures near 300°C using 1.4 MPa water at 5 m/s flow velocity and an inlet temperature of 8 to 15°C. One tile was exposed to incrementally increasing heat fluxes up to 9.5 MW/m2 and surface temperatures up to 690°C before debonding at 10 MW/m2. A second tile debonded in 25 to 30 cycles at <0.5 MW/m2. However, a third tile debonded after 9200 thermal fatigue cycles at 5 MW/m2, while another debonded after 6800 cycles. Posttest surface analysis indicated that fatigue failure occurred in the intermetallic layers between the beryllium and copper. No fatigue cracking of the bulk beryllium was observed. It appears that microcracks growing at the diffusion bond produced the observed gradual temperature increases during thermal cycling. These experiments indicate that diffusion-bonded beryllium tiles can survive several thousand thermal cycles under ITER-relevant conditions. However, the reliability of the diffusion-bonded joint remains a serious issue. 17 refs., 25 figs., 6 tabs.

  17. ISS Double-Gimbaled CMG Subsystem Simulation Using the Agile Development Method

    NASA Technical Reports Server (NTRS)

    Inampudi, Ravi

    2016-01-01

    This paper presents an evolutionary approach to simulating a cluster of 4 Control Moment Gyros (CMG) on the International Space Station (ISS) using a common sense approach (the agile development method) for concurrent mathematical modeling and simulation of the CMG subsystem. This simulation is part of the Training Systems for the 21st Century simulator, which will provide training for crew members, instructors, and flight controllers. The basic idea of how the CMGs on the space station are used for its non-propulsive attitude control is briefly explained to set up the context for simulating a CMG subsystem. Next, the different reference frames and the detailed equations of motion (EOM) for multiple double-gimbal variable-speed control moment gyroscopes (DGVs) are presented. Fixing some of the terms in the EOM yields the special-case EOM for the ISS's double-gimbaled fixed-speed CMGs. CMG simulation development using the agile development method is presented, in which the customer's requirements and solutions evolve through iterative analysis, design, coding, unit testing and acceptance testing. At the end of each iteration, the set of features implemented in that iteration is demonstrated to the flight controllers, creating a short feedback loop and supporting adaptive development cycles. The unified modeling language (UML) tool is used to illustrate the user stories, class designs and sequence diagrams. This incremental approach to mathematical modeling and simulation of the CMG subsystem involved the development team and the customer early on, thus improving the quality of the working CMG system in each iteration and helping the team to accurately predict the cost, schedule and delivery of the software.

  18. Evaluation of the cryogenic mechanical properties of the insulation material for ITER Feeder superconducting joint

    NASA Astrophysics Data System (ADS)

    Wu, Zhixiong; Huang, Rongjin; Huang, ChuanJun; Yang, Yanfang; Huang, Xiongyi; Li, Laifeng

    2017-12-01

    Glass-fiber reinforced plastic (GFRP) fabricated by the vacuum bag process was selected as the high voltage electrical insulation and mechanical support for the superconducting joints and the current leads of the ITER Feeder system. To evaluate the cryogenic mechanical properties of the GFRP, properties such as the short beam strength (SBS), the tensile strength and the fatigue fracture strength after 30,000 cycles were measured at 77 K in this study. The results demonstrated that the GFRP met the design requirements of ITER.

  19. In-pile testing of ITER first wall mock-ups at relevant thermal loading conditions

    NASA Astrophysics Data System (ADS)

    Litunovsky, N.; Gervash, A.; Lorenzetto, P.; Mazul, I.; Melder, R.

    2009-04-01

    The paper describes the experimental technique and preliminary results of thermal fatigue testing of ITER first wall (FW) water-cooled mock-ups inside the core of the RBT-6 experimental fission reactor (RIAR, Dimitrovgrad, Russia). This experiment provided the simultaneous effects of neutron fluence and thermal cycling damage on the mock-ups. A PC-controlled high-temperature graphite ohmic heater was used to apply a cyclic thermal load to the mock-up surfaces. The experiment lasted for 309 effective irradiation days, with a final damage level (CuCrZr) of 1 dpa in the mock-ups. About 3700 thermal cycles with a heat flux of 0.4-0.5 MW/m2 onto the mock-ups were realized before the heater failed. Irradiation was then continued in a non-cycling mode.

  20. Composition of web services using Markov decision processes and dynamic programming.

    PubMed

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows that a WSC problem involving a set of 100,000 individual Web services, in which a valid composition requires the selection of 1,000 services from the available set, can be solved in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
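    The policy iteration algorithm named above can be illustrated on a toy problem. The 4-state chain MDP below is a hypothetical stand-in, not the paper's WSC formulation; it interleaves iterative policy evaluation with greedy policy improvement until the policy is stable.

```python
GAMMA = 0.9
STATES = range(4)
ACTIONS = ("advance", "stay")

def step(s, a):
    # Deterministic transition model: returns (next_state, reward).
    # Reaching the final state 3 pays 1; state 3 is absorbing.
    if a == "advance" and s < 3:
        return s + 1, (1.0 if s + 1 == 3 else 0.0)
    return s, 0.0

def policy_iteration(tol=1e-9):
    policy = {s: "stay" for s in STATES}
    v = [0.0] * len(STATES)
    while True:
        # Iterative policy evaluation: sweep until the value function settles.
        while True:
            delta = 0.0
            for s in STATES:
                s2, r = step(s, policy[s])
                new = r + GAMMA * v[s2]
                delta = max(delta, abs(new - v[s]))
                v[s] = new
            if delta < tol:
                break
        # Policy improvement: act greedily with respect to the current values.
        stable = True
        for s in STATES:
            best = max(ACTIONS,
                       key=lambda a: step(s, a)[1] + GAMMA * v[step(s, a)[0]])
            if best != policy[s]:
                policy[s] = best
                stable = False
        if stable:
            return policy, v
```

For this chain the method converges to the policy that always advances, with the value of each state discounted geometrically back from the rewarding transition.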

  1. Railway track geometry degradation due to differential settlement of ballast/subgrade - Numerical prediction by an iterative procedure

    NASA Astrophysics Data System (ADS)

    Nielsen, Jens C. O.; Li, Xin

    2018-01-01

    An iterative procedure for numerical prediction of long-term degradation of railway track geometry (longitudinal level) due to accumulated differential settlement of ballast/subgrade is presented. The procedure is based on a time-domain model of dynamic vehicle-track interaction to calculate the contact loads between sleepers and ballast in the short-term, which are then used in an empirical model to determine the settlement of ballast/subgrade below each sleeper in the long-term. The number of load cycles (wheel passages) accounted for in each iteration step is determined by an adaptive step length given by a maximum settlement increment. To reduce the computational effort for the simulations of dynamic vehicle-track interaction, complex-valued modal synthesis with a truncated modal set is applied for the linear subset of the discretely supported track model with non-proportional spatial distribution of viscous damping. Gravity loads and state-dependent vehicle, track and wheel-rail contact conditions are accounted for as external loads on the modal model, including situations involving loss of (and recovered) wheel-rail contact, impact between hanging sleeper and ballast, and/or a prescribed variation of non-linear track support stiffness properties along the track model. The procedure is demonstrated by calculating the degradation of longitudinal level over time as initiated by a prescribed initial local rail irregularity (dipped welded rail joint).

  2. Plasma facing components: a conceptual design strategy for the first wall in FAST tokamak

    NASA Astrophysics Data System (ADS)

    Labate, C.; Di Gironimo, G.; Renno, F.

    2015-09-01

    Satellite tokamaks are conceived with the main purpose of developing new or alternative ITER- and DEMO-relevant technologies, able to contribute to resolving the pending issues of plasma operation. In particular, the design of plasma facing components, i.e. the first wall (FW) and the divertor, is highly critical for physical, topological and thermo-structural reasons. In this context the design of the FW for the FAST fusion plant, whose operational range is close to that of ITER, takes place. In line with the mission of experimental satellites, the FW design strategy presented in this paper relies on a series of innovative design choices and proposals, with particular attention to the typical key points of plasma facing component design. Such an approach, which takes into account the physical constraints and functional requirements to be fulfilled, marks a clear borderline with the FW solution adopted in ITER in terms of basic ideas, manufacturing aspects, remote maintenance procedure, manifold management, cooling cycle and support system configuration.

  3. A computationally efficient approach for isolating satellite phase fractional cycle biases based on Kalman filter

    NASA Astrophysics Data System (ADS)

    Xiao, Guorui; Mayer, Michael; Heck, Bernhard; Sui, Lifen; Cong, Mingri

    2017-04-01

    Integer ambiguity resolution (AR) can significantly shorten the convergence time and improve the accuracy of Precise Point Positioning (PPP). Phase fractional cycle biases (FCB) originating from satellites destroy the integer nature of carrier phase ambiguities. To isolate the satellite FCB, observations from a global reference network are required. Firstly, float ambiguities containing FCBs are obtained by PPP processing. Secondly, the least squares method (LSM) is adopted to recover FCBs from all the float ambiguities. Finally, the estimated FCB products can be applied by the user to achieve PPP-AR. During the estimation of FCB, the LSM step can be very time-consuming, considering the large number of observations from hundreds of stations and thousands of epochs. In addition, iterations are required to deal with the one-cycle inconsistency among observations. Since the integer ambiguities are derived by directly rounding float ambiguities, the one-cycle inconsistency arises whenever the fractional parts of float ambiguities exceed the rounding boundary (e.g., 0.5 and -0.5). The iterations of LSM and the large number of observations require a long time to finish the estimation. Consequently, only a sparse global network containing a limited number of stations was processed in previous research. In this paper, we propose to isolate the FCB based on a Kalman filter. The large number of observations is handled epoch-by-epoch, which significantly reduces the dimension of the involved matrix and accelerates the computation. In addition, it is also suitable for real-time applications. As for the one-cycle inconsistency, a pre-elimination method is developed to avoid iterating the whole process. According to the analysis of the derived satellite FCB products, we find that both wide-lane (WL) and narrow-lane (NL) FCB are very stable over time (WL FCB over several days and NL FCB over tens of minutes). The stability implies that the satellite FCB can be removed by previous estimation. After subtraction of the satellite FCB, the receiver FCB can be determined. Theoretically, the receiver FCBs derived from different satellite observations should be the same for a single station. Thereby, the one-cycle inconsistency among satellites can be detected and eliminated by adjusting the corresponding receiver FCB. Here, stations can be handled individually to obtain "clean" FCB observations. In an experiment, 24 h observations from 200 stations are processed to estimate GPS FCB. The process finishes in one hour using a personal computer. The estimated WL FCB has a good consistency with existing WL FCB products (e.g., CNES, WHU-SGG). All differences are within ± 0.1 cycles, which indicates the correctness of the proposed approach. For NL FCB, all differences are within ± 0.2 cycles. Concerning the NL wavelength (10.7 cm), the slightly worse NL FCB may be ascribed to different PPP processing strategies. The state-based approach of the Kalman filter also allows for a more realistic modeling of stochastic parameters, which will be investigated in future research.
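    The epoch-by-epoch recursion can be reduced to a scalar sketch. The filter below assumes a single satellite whose FCB is observed directly in cycles once the receiver FCB has been removed; the real estimator is a joint multi-station Kalman filter, so the model, the noise values, and the `wrap` pre-elimination step are illustrative assumptions only.

```python
def wrap(x):
    # Map a value in cycles into (-0.5, 0.5].
    return x - round(x)

def kalman_fcb(observations, q=1e-6, r=1e-2):
    # q: process noise (the FCB drifts very slowly); r: observation noise.
    # Both variances are in cycles^2 and are illustrative.
    est, var = observations[0], r
    for z in observations[1:]:
        var += q                        # predict: FCB modeled as near-constant
        innov = wrap(z - est)           # wrapped innovation absorbs cycle jumps
        gain = var / (var + r)
        est = wrap(est + gain * innov)  # update, kept within (-0.5, 0.5]
        var *= 1.0 - gain
    return est, var
```

Wrapping the innovation into (-0.5, 0.5] mirrors the pre-elimination of one-cycle inconsistencies: an observation that jumps by a whole cycle (e.g. -0.69 instead of 0.31) is pulled back before it can bias the update.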

  4. Participatory design in the development of the wheelchair convoy system

    PubMed Central

    Sharma, Vinod; Simpson, Richard C; LoPresti, Edmund F; Mostowy, Casimir; Olson, Joseph; Puhlman, Jeremy; Hayashi, Steve; Cooper, Rory A; Konarski, Ed; Kerley, Barry

    2008-01-01

    Background In long-term care environments, residents who have severe mobility deficits are typically transported by having another person push the individual in a manual wheelchair. This practice is inefficient and encourages staff to hurry to complete the process, thereby setting the stage for unsafe practices. Furthermore, the time involved in assembling multiple individuals with disabilities often deters their participation in group activities. Methods The Wheelchair Convoy System (WCS) is being developed to allow a single caregiver to move multiple individuals without removing them from their wheelchairs. The WCS will consist of a processor and a flexible cord linking each wheelchair to the wheelchair in front of it. A Participatory Design approach – in which several iterations of design, fabrication and evaluation are used to elicit feedback from users – was used. Results An iterative cycle of development and evaluation was followed through five prototypes of the device. The third and fourth prototypes were evaluated in unmanned field trials at J. Iverson Riddle Development Center. The prototypes were used to form a convoy of three wheelchairs that successfully completed a series of navigation tasks. Conclusion A Participatory Design approach to the project allowed the design of the WCS to quickly evolve towards a viable solution. The design that emerged by the end of the fifth development cycle bore little resemblance to the initial design, but successfully met the project's design criteria. Additional development and testing are planned to further refine the system. PMID:18171465

  5. Development of the Nuclear-Electronic Orbital Approach and Applications to Ionic Liquids and Tunneling Processes

    DTIC Science & Technology

    2010-02-24

    electronic Schrödinger equation. In previous grant cycles, we implemented the NEO approach at the Hartree-Fock (NEO-HF), configuration interaction... electronic and nuclear molecular orbitals. The resulting electronic and nuclear Hartree-Fock-Roothaan equations are solved iteratively until self... directly into the standard Hartree-Fock-Roothaan equations, which are solved iteratively to self-consistency. The density matrix representation

  6. Conjugate gradient coupled with multigrid for an indefinite problem

    NASA Technical Reports Server (NTRS)

    Gozani, J.; Nachshon, A.; Turkel, E.

    1984-01-01

    An iterative algorithm for the Helmholtz equation is presented. The scheme is based on the preconditioned conjugate gradient method applied to the normal equations. The preconditioner is one cycle of a multigrid method for the discrete Laplacian. The smoothing algorithm is red-black Gauss-Seidel, constructed so that it is a symmetric operator. The total number of iterations needed by the algorithm is independent of h. By varying the number of grids, the number of iterations depends only weakly on k when k^3·h^2 is held constant. Comparisons with an SSOR preconditioner are presented.
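    The outer Krylov structure of such a scheme can be sketched with conjugate gradients on the normal equations (CGNR) for a small 1D Helmholtz matrix. The multigrid preconditioner described in the abstract is omitted here, and the matrix, grid size, and tolerances are illustrative assumptions; the sketch only shows why CG applies at all: A^T A is symmetric positive definite even though the Helmholtz operator itself is indefinite.

```python
def helmholtz_matrix(n, k, h):
    # Discrete 1D Helmholtz operator u'' + k^2 u with Dirichlet boundaries:
    # tridiagonal and, for the wavenumbers of interest, indefinite.
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = -2.0 / (h * h) + k * k
        if i > 0:
            A[i][i - 1] = 1.0 / (h * h)
        if i < n - 1:
            A[i][i + 1] = 1.0 / (h * h)
    return A

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def cgnr(A, b, iters=200, tol=1e-10):
    # Conjugate gradients applied to the normal equations A^T A x = A^T b.
    At = [list(col) for col in zip(*A)]
    x = [0.0] * len(b)
    r = list(b)                  # residual of the original system, b - A x
    z = matvec(At, r)            # residual of the normal equations
    p = list(z)
    zz = sum(v * v for v in z)
    for _ in range(iters):
        Ap = matvec(A, p)
        denom = sum(v * v for v in Ap)
        if denom == 0.0:
            break
        alpha = zz / denom
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        z = matvec(At, r)
        zz_new = sum(v * v for v in z)
        if zz_new < tol * tol:
            break
        p = [zi + (zz_new / zz) * pi for zi, pi in zip(z, p)]
        zz = zz_new
    return x, max(abs(v) for v in r)
```

Squaring the operator also squares its condition number, which is exactly why a good preconditioner, such as the multigrid Laplacian cycle of the abstract, matters in practice.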

  7. Insights into Global Health Practice from the Agile Software Development Movement

    PubMed Central

    Flood, David; Chary, Anita; Austad, Kirsten; Diaz, Anne Kraemer; García, Pablo; Martinez, Boris; Canú, Waleska López; Rohloff, Peter

    2016-01-01

    Global health practitioners may feel frustration that current models of global health research, delivery, and implementation are overly focused on specific interventions, slow to provide health services in the field, and relatively ill-equipped to adapt to local contexts. Adapting design principles from the agile software development movement, we propose an analogous approach to designing global health programs that emphasizes tight integration between research and implementation, early involvement of ground-level health workers and program beneficiaries, and rapid cycles of iterative program improvement. Using examples from our own fieldwork, we illustrate the potential of ‘agile global health’ and reflect on the limitations, trade-offs, and implications of this approach. PMID:27134081

  8. User-Centered Design Practices to Redesign a Nursing e-Chart in Line with the Nursing Process.

    PubMed

    Schachner, María B; Recondo, Francisco J; González, Zulma A; Sommer, Janine A; Stanziola, Enrique; Gassino, Fernando D; Simón, Mariana; López, Gastón E; Benítez, Sonia E

    2016-01-01

    Following the user-centered design (UCD) practices in use at Hospital Italiano of Buenos Aires, the nursing e-chart user interface was redesigned to improve the quality of nursing process records, based on an adapted Virginia Henderson theoretical model and on patient safety standards that fulfil Joint Commission accreditation requirements. UCD practices were applied as standardized and recommended for usability evaluation of electronic medical records. These practices yielded a series of prototypes over 5 iterative cycles of incremental improvement, achieving the usability goals; the resulting interface was used and perceived as satisfactory by general care nurses. Nurses' involvement allowed a balance between their needs and institutional requirements.

  9. Insights into Global Health Practice from the Agile Software Development Movement.

    PubMed

    Flood, David; Chary, Anita; Austad, Kirsten; Diaz, Anne Kraemer; García, Pablo; Martinez, Boris; Canú, Waleska López; Rohloff, Peter

    2016-01-01

    Global health practitioners may feel frustration that current models of global health research, delivery, and implementation are overly focused on specific interventions, slow to provide health services in the field, and relatively ill-equipped to adapt to local contexts. Adapting design principles from the agile software development movement, we propose an analogous approach to designing global health programs that emphasizes tight integration between research and implementation, early involvement of ground-level health workers and program beneficiaries, and rapid cycles of iterative program improvement. Using examples from our own fieldwork, we illustrate the potential of 'agile global health' and reflect on the limitations, trade-offs, and implications of this approach.

  10. Composition of Web Services Using Markov Decision Processes and Dynamic Programming

    PubMed Central

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows that a WSC problem involving a set of 100,000 individual Web services, in which a valid composition requires the selection of 1,000 services from the available set, can be solved in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. PMID:25874247

  11. Multigrid methods in structural mechanics

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Bigelow, C. A.; Taasan, S.; Hussaini, M. Y.

    1986-01-01

    Although the application of multigrid methods to the equations of elasticity has been suggested, few such applications have been reported in the literature. In the present work, multigrid techniques are applied to the finite element analysis of a simply supported Bernoulli-Euler beam, and various aspects of the multigrid algorithm are studied and explained in detail. In this study, six grid levels were used to model half the beam. With linear prolongation and sequential ordering, the multigrid algorithm yielded results which were of machine accuracy with work equivalent to 200 standard Gauss-Seidel iterations on the fine grid. Also with linear prolongation and sequential ordering, the V(1,n) cycle with n greater than 2 yielded better convergence rates than the V(n,1) cycle. The restriction and prolongation operators were derived based on energy principles. Conserving energy during the inter-grid transfers required that the prolongation operator be the transpose of the restriction operator, and led to improved convergence rates. With energy-conserving prolongation and sequential ordering, the multigrid algorithm yielded results of machine accuracy with a work equivalent to 45 Gauss-Seidel iterations on the fine grid. The red-black ordering of relaxations yielded solutions of machine accuracy in a single V(1,1) cycle, which required work equivalent to about 4 iterations on the finest grid level.
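    The V-cycle structure discussed above can be illustrated on the simpler 1D Poisson model problem rather than the Bernoulli-Euler beam. In this sketch (an assumption-laden toy, not the authors' finite element setup), prolongation is linear interpolation and restriction is full weighting, i.e. the scaled transpose of prolongation, mirroring the energy-conserving pairing the authors found important.

```python
def gauss_seidel(u, f, h, sweeps):
    # Lexicographic Gauss-Seidel smoothing for -u'' = f with Dirichlet ends.
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2.0 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def restrict(r):
    # Full weighting: the (scaled) transpose of linear interpolation.
    return [0.0] + [0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
                    for i in range(1, (len(r) - 1) // 2)] + [0.0]

def prolong(e):
    # Linear interpolation from the coarse grid to the fine grid.
    out = [0.0] * (2 * (len(e) - 1) + 1)
    for i in range(len(e)):
        out[2 * i] = e[i]
    for i in range(len(e) - 1):
        out[2 * i + 1] = 0.5 * (e[i] + e[i + 1])
    return out

def v_cycle(u, f, h, pre=1, post=1):
    if len(u) <= 3:                      # coarsest grid: relax to convergence
        return gauss_seidel(u, f, h, 50)
    u = gauss_seidel(u, f, h, pre)       # pre-smoothing
    rc = restrict(residual(u, f, h))
    ec = v_cycle([0.0] * len(rc), rc, 2.0 * h, pre, post)
    e = prolong(ec)                      # coarse-grid correction
    u = [ui + ei for ui, ei in zip(u, e)]
    return gauss_seidel(u, f, h, post)   # post-smoothing
```

On -u'' = 1 with u(0) = u(1) = 0 (exact solution x(1-x)/2, which the second-order stencil reproduces exactly at the nodes), repeated V(1,1) cycles drive the error to near machine accuracy.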

  12. Property Changes of Cyanate Ester/Epoxy Insulation Systems Caused by an ITER-Like Double Impregnation and by Reactor Irradiation

    NASA Astrophysics Data System (ADS)

    Prokopec, R.; Humer, K.; Fillunger, H.; Maix, R. K.; Weber, H. W.

    2010-04-01

    Because of the double pancake design of the ITER TF coils, the insulation will be applied in several steps. As a consequence, the conductor insulation as well as the pancake insulation will undergo multiple heat cycles in addition to the initial curing cycle. In particular the properties of the organic resin may be influenced, since its heat resistance is limited. Two identical types of sample consisting of wrapped R-glass/Kapton layers and vacuum impregnated with a cyanate ester/epoxy blend were prepared. The build-up of the reinforcement was identical for both insulation systems; however, one system was fabricated in two steps. In the first step only one half of the reinforcing layers was impregnated and cured. Afterwards the remaining layers were wrapped onto the already cured system, before the resulting system was impregnated and cured again. The mechanical properties were characterized prior to and after irradiation to fast neutron fluences of 1 and 2×10²² m⁻² (E > 0.1 MeV) in tension and interlaminar shear at 77 K. In order to simulate the pulsed operation of ITER, tension-tension fatigue measurements were performed in the load controlled mode. The results do not show any evidence for reduced mechanical strength caused by the additional heat cycle.

  13. Performance improvement of robots using a learning control scheme

    NASA Technical Reports Server (NTRS)

    Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.

    1987-01-01

    Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are also repeated in every cycle of the operation. An off-line learning control scheme is used here to modify the command function so as to produce smaller errors in the next operation. The learning scheme is based on a knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically considering a second order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included in the iteration scheme. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors in one attempt. The scheme is then applied to a computer model of a robot system similar to PUMA 560. Improved performance of the robot is shown by considering various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.
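
    The cycle-to-cycle learning update described above can be sketched in a few lines. For brevity this is a P-type scheme on a first-order plant with invented parameters and gains; the paper analyzes a second-order servosystem and shows that including the error rate accelerates convergence, which this bare scheme omits.

```python
# P-type iterative learning control sketch: after each task execution,
# the stored command is corrected using the error from the last cycle.
a, b = 0.3, 0.5            # first-order plant: y[t+1] = a*y[t] + b*u[t]
T = 50
yd = [t / T for t in range(T + 1)]      # desired ramp trajectory

def run_cycle(u):
    y = [0.0]
    for t in range(T):
        y.append(a * y[t] + b * u[t])
    return y

u = [0.0] * T
L = 1.0 / b                # learning gain (convergent since |1 - L*b| < 1)
errors = []
for _ in range(5):         # repeated executions of the same task
    y = run_cycle(u)
    e = [yd[t] - y[t] for t in range(T + 1)]
    errors.append(max(abs(v) for v in e))
    u = [u[t] + L * e[t + 1] for t in range(T)]   # off-line command update
print(errors)
```

    The peak tracking error shrinks on every repetition of the task, which is the convergence property the abstract derives conditions for.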

  14. Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.

    2004-01-01

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial and error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then yields either a new local optimum or an increase in the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive as dimensionality increases. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.
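
    The database-plus-surrogate loop described above can be sketched in one dimension. The objective function, sample counts, and quadratic surrogate below are invented stand-ins for the high-fidelity simulations and richer approximation models discussed in the abstract.

```python
import numpy as np

# Surrogate-assisted search sketch: fit a cheap model to the database,
# optimize the model, evaluate the expensive truth at the proposed
# design, and grow the database for the next iteration.
def expensive(x):                     # stand-in for a costly simulation
    return (x - 0.3) ** 2 + 0.1 * x ** 3

xs = list(np.linspace(-1.0, 1.0, 5))  # initial design database
ys = [expensive(v) for v in xs]

grid = np.linspace(-1.0, 1.0, 401)
for _ in range(6):
    c = np.polyfit(xs, ys, 2)                      # quadratic surrogate
    x_new = grid[np.argmin(np.polyval(c, grid))]   # optimize the surrogate
    xs.append(float(x_new))                        # evaluate the truth and
    ys.append(expensive(float(x_new)))             # refine the database

best = xs[int(np.argmin(ys))]
print(best)
```

    Each pass either improves the best known design or sharpens the surrogate near the optimum, mirroring the iteration pattern the abstract describes.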

  15. Overview of the hydraulic characteristics of the ITER Central Solenoid Model Coil conductors after 15 years of test campaigns

    NASA Astrophysics Data System (ADS)

    Brighenti, A.; Bonifetto, R.; Isono, T.; Kawano, K.; Russo, G.; Savoldi, L.; Zanino, R.

    2017-12-01

    The ITER Central Solenoid Model Coil (CSMC) is a superconducting magnet, layer-wound two-in-hand using Nb3Sn cable-in-conduit conductors (CICCs) with the central channel typical of ITER magnets, cooled with supercritical He (SHe) at ∼4.5 K and 0.5 MPa, and operated for approximately 15 years at the National Institutes for Quantum and Radiological Science and Technology in Naka, Japan. The aim of this work is to give an overview of the issues related to the hydraulic performance of the three different CICCs used in the CSMC, based on the extensive experimental database put together during the past 15 years. The measured hydraulic characteristics are compared across the different test campaigns, and also to those from tests of short conductor samples when available. It is shown that the hydraulic performance of the CSMC conductors did not change significantly over the sequence of test campaigns, with more than 50 cycles up to 46 kA and 8 cooldown/warmup cycles from 300 K to 4.5 K. The capability of the correlations typically used to predict the friction factor of the SHe for the design and analysis of ITER-like CICCs is also assessed.

  16. A novel iterative scheme and its application to differential equations.

    PubMed

    Khan, Yasir; Naeem, F.; Šmarda, Zdeněk

    2014-01-01

    The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find unnecessary calculations of the Lagrange multiplier in the former and repeated calculations in each iteration of the latter. Several examples are given to verify the reliability and efficiency of the method.

  17. Iterative methods for mixed finite element equations

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.

    1985-01-01

    Iterative strategies for the solution of the indefinite systems of equations arising from the mixed finite element method are investigated in this paper, with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived, which is then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant metric iterations, which do not involve updating the preconditioner, and variable metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.
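
    One classical instance of preconditioning an indefinite saddle-point system with its displacement block is the Uzawa iteration, sketched below on an invented toy system (this is not the Hu-Washizu discretization or the authors' constant/variable metric algorithms).

```python
import numpy as np

# Uzawa iteration on a tiny saddle-point system
#     [ A  B^T ] [ x ]   [ f ]
#     [ B   0  ] [lam] = [ g ]
# Each pass does a solve with the (definite) block A, then a multiplier
# update driven by the constraint residual. Matrices are invented toys.
A = np.diag([2.0, 3.0])
B = np.array([[1.0, 1.0]])
f = np.array([1.0, 1.0])
g = np.array([0.5])

lam = np.zeros(1)
omega = 1.0          # converges for omega < 2 / lambda_max(B A^-1 B^T) = 2.4
for _ in range(60):
    x = np.linalg.solve(A, f - B.T @ lam)    # "displacement" solve
    lam = lam + omega * (B @ x - g)          # constraint-residual update
print(x, lam)
```

    For this system the exact solution is x = [0.3, 0.2], lam = [0.4], and the iteration contracts geometrically toward it.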

  18. Novel Framework for Reduced Order Modeling of Aero-engine Components

    NASA Astrophysics Data System (ADS)

    Safi, Ali

    The present study focuses on the popular dynamic reduction methods used in the design of complex assemblies (millions of degrees of freedom) where numerous iterations are involved to achieve the final design. Aerospace manufacturers such as Rolls Royce and Pratt & Whitney are actively seeking techniques that reduce computational time while maintaining accuracy of the models. This involves modal analysis of components with complex geometries to determine the dynamic behavior due to non-linearity and complicated loading conditions. In such a case the sub-structuring and dynamic reduction techniques prove to be an efficient tool to reduce design cycle time. The components whose designs are finalized can be dynamically reduced to mass and stiffness matrices at the boundary nodes in the assembly. These matrices conserve the dynamics of the component in the assembly, and thus avoid repeated calculations during the analysis runs for design modification of other components. This thesis presents a novel framework in terms of modeling and meshing of any complex structure, in this case an aero-engine casing. In this study the effect of meshing techniques on the run time is highlighted. The modal analysis is carried out using an extremely fine mesh to ensure all minor details in the structure are captured correctly in the Finite Element (FE) model. This is used as the reference model, to compare against the results of the reduced model. The study also shows the conditions/criteria under which dynamic reduction can be implemented effectively, proving the accuracy of the Craig-Bampton (C.B.) method and the limitations of Static Condensation. The study highlights the longer runtime needed to produce the reduced matrices of components compared to the overall runtime of the complete unreduced model. Once the components are reduced, however, the assembly run time is significantly shorter. Hence the decision to use Component Mode Synthesis (CMS) is to be taken judiciously, considering the number of iterations that may be required during the design cycle.

  19. Analysis of supercritical CO₂ cycle control strategies and dynamic response for Generation IV Reactors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moisseytsev, A.; Sienicki, J. J.

    2011-04-12

    The analysis of specific control strategies and dynamic behavior of the supercritical carbon dioxide (S-CO₂) Brayton cycle has been extended to the two reactor types selected for continued development under the Generation IV Nuclear Energy Systems Initiative; namely, the Very High Temperature Reactor (VHTR) and the Sodium-Cooled Fast Reactor (SFR). Direct application of the standard S-CO₂ recompression cycle to the VHTR was found to be challenging because of the mismatch in the temperature drop of the He gaseous reactor coolant through the He-to-CO₂ reactor heat exchanger (RHX) versus the temperature rise of the CO₂ through the RHX. The reference VHTR features a large temperature drop of 450 C between the assumed core outlet and inlet temperatures of 850 and 400 C, respectively. This large temperature difference is an essential feature of the VHTR, enabling a lower He flow rate that reduces the required core velocities and pressure drop. In contrast, the standard recompression S-CO₂ cycle wants to operate with a temperature rise through the RHX of about 150 C, reflecting the temperature drop as the CO₂ expands from 20 MPa to 7.4 MPa in the turbine and the fact that the cycle is highly recuperated such that the CO₂ entering the RHX is effectively preheated. Because of this mismatch, direct application of the standard recompression cycle results in a relatively poor cycle efficiency of 44.9%. However, two approaches have been identified by which the S-CO₂ cycle can be successfully adapted to the VHTR and the benefits of the S-CO₂ cycle, especially a significant gain in cycle efficiency, can be realized. The first approach involves the use of three separate cascaded S-CO₂ cycles. Each S-CO₂ cycle is coupled to the VHTR through its own He-to-CO₂ RHX in which the He temperature is reduced by 150 C. The three cascaded cycles have efficiencies of 54, 50, and 44%, respectively, resulting in a net cycle efficiency of 49.3%. The other approach involves reducing the minimum cycle pressure significantly below the critical pressure such that the temperature drop in the turbine is increased while the minimum cycle temperature is maintained above the critical temperature to prevent the formation of a liquid phase. The latter approach also involves the addition of a precooler and a third compressor before the main compressor to retain the benefits of compression near the critical point with the main compressor. For a minimum cycle pressure of 1 MPa, a cycle efficiency of 49.5% is achieved. Either approach opens up the door to applying the S-CO₂ cycle to the VHTR. In contrast, the SFR system typically has a core outlet-inlet temperature difference of about 150 C such that the standard recompression cycle is ideally suited for direct application to the SFR. The ANL Plant Dynamics Code has been modified for application to the VHTR and SFR when the reactor side dynamic behavior is calculated with another system level computer code such as SAS4A/SASSYS-1 in the SFR case. The key modification involves modeling heat exchange in the RHX, accepting time dependent tabular input from the reactor code, and generating time dependent tabular input to the reactor code such that both the reactor and S-CO₂ cycle sides can be calculated in a convergent iterative scheme. This approach retains the modeling benefits provided by the detailed reactor system level code and can be applied to any reactor system type incorporating a S-CO₂ cycle. This approach was applied to the particular calculation of a scram scenario for a SFR in which the main and intermediate sodium pumps are not tripped and the generator is not disconnected from the electrical grid in order to enhance heat removal from the reactor system, thereby enhancing the cooldown rate of the Na-to-CO₂ RHX. The reactor side is calculated with SAS4A/SASSYS-1 while the S-CO₂ cycle is calculated with the Plant Dynamics Code with a number of iterations over a timescale of 500 seconds. It is found that the RHX undergoes a maximum cooldown rate of ≈ -0.3 C/s. The Plant Dynamics Code was also modified to decrease its running time by replacing the compressible flow form of the momentum equation with an incompressible flow equation for use inside of the cooler or recuperators, where the CO₂ has a compressibility similar to that of a liquid. Appendices provide a quasi-static control strategy for a SFR as well as the self-adaptive linear function fitting algorithm developed to produce the tabular data for input to the reactor code and Plant Dynamics Code from the detailed output of the other code.

  20. Developing a Multi-Year Learning Progression for Carbon Cycling in Socio-Ecological Systems

    ERIC Educational Resources Information Center

    Mohan, Lindsey; Chen, Jing; Anderson, Charles W.

    2009-01-01

    This study reports on our steps toward achieving a conceptually coherent and empirically validated learning progression for carbon cycling in socio-ecological systems. It describes an iterative process of designing and analyzing assessment and interview data from students in upper elementary through high school. The product of our development…

  1. Iterating between lessons on concepts and procedures can improve mathematics knowledge.

    PubMed

    Rittle-Johnson, Bethany; Koedinger, Kenneth

    2009-09-01

    Knowledge of concepts and procedures seems to develop in an iterative fashion, with increases in one type of knowledge leading to increases in the other type of knowledge. This suggests that iterating between lessons on concepts and procedures may improve learning. The purpose of the current study was to evaluate the instructional benefits of an iterative lesson sequence compared to a concepts-before-procedures sequence for students learning decimal place-value concepts and arithmetic procedures. In two classroom experiments, sixth-grade students from two schools participated (N=77 and 26). Students completed six decimal lessons on an intelligent-tutoring system. In the iterative condition, lessons cycled between concept and procedure lessons. In the concepts-first condition, all concept lessons were presented before introducing the procedure lessons. In both experiments, students in the iterative condition gained more knowledge of arithmetic procedures, including the ability to transfer the procedures to problems with novel features. Knowledge of concepts was fairly comparable across conditions. Finally, pre-test knowledge of one type predicted gains in knowledge of the other type across experiments. An iterative sequencing of lessons seems to facilitate learning and transfer, particularly of mathematical procedures. The findings support an iterative perspective for the development of knowledge of concepts and procedures.

  2. The tug-of-war: fidelity versus adaptation throughout the health promotion program life cycle.

    PubMed

    Bopp, Melissa; Saunders, Ruth P; Lattimore, Diana

    2013-06-01

    Researchers across multiple fields have described the iterative and nonlinear phases of the translational research process from program development to dissemination. This process can be conceptualized within a "program life cycle" framework that includes overlapping and nonlinear phases: development, adoption, implementation, maintenance, sustainability or termination, and dissemination or diffusion, characterized by tensions between fidelity to the original plan and adaptation for the setting and population. In this article, we describe the life cycle (phases) for research-based health promotion programs, the key influences at each phase, and the issues related to the tug-of-war between fidelity and adaptation throughout the process using a fictionalized case study based on our previous research. This article suggests the importance of reconceptualizing intervention design, involving stakeholders, and monitoring fidelity and adaptation throughout all phases to maintain implementation fidelity and completeness. Intervention fidelity should be based on causal mechanisms to ensure effectiveness, while allowing for appropriate adaption to ensure maximum implementation and sustainability. Recommendations for future interventions include considering the determinants of implementation including contextual factors at each phase, the roles of stakeholders, and the importance of developing a rigorous, adaptive, and flexible definition of implementation fidelity and completeness.

  3. Manufacturing and testing of a prototypical divertor vertical target for ITER

    NASA Astrophysics Data System (ADS)

    Merola, M.; Plöchl, L.; Chappuis, Ph; Escourbiac, F.; Grattarola, M.; Smid, I.; Tivey, R.; Vieider, G.

    2000-12-01

    After an extensive R&D activity, a medium-scale divertor vertical target prototype has been manufactured by the EU Home Team. This component contains all the main features of the corresponding ITER divertor design and consists of two units with one cooling channel each, assembled together and having an overall length and width of about 600 and 50 mm, respectively. The upper part of the prototype has a tungsten macro-brush armour, whereas the lower part is covered by CFC monoblocks. A number of joining techniques were required to manufacture this component, as well as an appreciable effort in the development of suitable non-destructive testing methods. The component was high heat flux tested in the FE200 electron beam facility at Le Creusot, France. It endured 100 cycles at 5 MW/m², 1000 cycles at 10 MW/m² and more than 1000 cycles at 15-20 MW/m². The final critical heat flux test reached a value in excess of 30 MW/m².

  4. Multivariable frequency domain identification via 2-norm minimization

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    1992-01-01

    The author develops a computational approach to multivariable frequency domain identification, based on 2-norm minimization. In particular, a Gauss-Newton (GN) iteration is developed to minimize the 2-norm of the error between frequency domain data and a matrix fraction transfer function estimate. To improve the global performance of the optimization algorithm, the GN iteration is initialized using the solution to a particular sequentially reweighted least squares problem, denoted as the SK iteration. The least squares problems which arise from both the SK and GN iterations are shown to involve sparse matrices with identical block structure. A sparse matrix QR factorization method is developed to exploit the special block structure, and to efficiently compute the least squares solution. A numerical example involving the identification of a multiple-input multiple-output (MIMO) plant having 286 unknown parameters is given to illustrate the effectiveness of the algorithm.
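
    The Gauss-Newton normal-equation step at the core of the method can be shown on a one-parameter curve fit. The exponential model, data, and starting guess below are invented; the paper's problem is a multivariable matrix-fraction transfer-function fit with a sparse block-structured Jacobian, but the step has the same form.

```python
import numpy as np

# Gauss-Newton sketch: minimize ||r(a)||_2 where r_i = y_i - exp(-a*x_i).
x = np.linspace(0.0, 1.0, 21)
y = np.exp(-2.0 * x)               # synthetic data; "true" parameter a = 2

a = 0.5                            # rough initial guess
for _ in range(20):
    r = y - np.exp(-a * x)         # residual vector r(a)
    J = x * np.exp(-a * x)         # Jacobian dr/da (a single column here)
    a -= (J @ r) / (J @ J)         # GN step: -(J^T J)^{-1} J^T r
print(a)
```

    With many parameters the scalar division becomes a linear solve with JᵀJ, which is where the sparse QR factorization described in the abstract pays off.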

  5. Uniform convergence of multigrid V-cycle iterations for indefinite and nonsymmetric problems

    NASA Technical Reports Server (NTRS)

    Bramble, James H.; Kwak, Do Y.; Pasciak, Joseph E.

    1993-01-01

    In this paper, we present an analysis of a multigrid method for nonsymmetric and/or indefinite elliptic problems. In this multigrid method various types of smoothers may be used. One type of smoother which we consider is defined in terms of an associated symmetric problem and includes point and line Jacobi and Gauss-Seidel iterations. We also study smoothers based entirely on the original operator. One is based on the normal form, that is, the product of the operator and its transpose. Other smoothers studied include point and line Jacobi and Gauss-Seidel. We show that the uniform estimates for symmetric positive definite problems carry over to these algorithms. More precisely, the multigrid iteration for the nonsymmetric and/or indefinite problem is shown to converge at a uniform rate provided that the coarsest grid in the multilevel iteration is sufficiently fine (but not depending on the number of multigrid levels).

  6. Metformin inhibits cell cycle progression of B-cell chronic lymphocytic leukemia cells.

    PubMed

    Bruno, Silvia; Ledda, Bernardetta; Tenca, Claudya; Ravera, Silvia; Orengo, Anna Maria; Mazzarello, Andrea Nicola; Pesenti, Elisa; Casciaro, Salvatore; Racchi, Omar; Ghiotto, Fabio; Marini, Cecilia; Sambuceti, Gianmario; DeCensi, Andrea; Fais, Franco

    2015-09-08

    B-cell chronic lymphocytic leukemia (CLL) was believed to result from clonal accumulation of resting apoptosis-resistant malignant B lymphocytes. However, it became increasingly clear that CLL cells undergo, during their life, iterative cycles of re-activation and subsequent clonal expansion. Drugs interfering with CLL cell cycle entry would be greatly beneficial in the treatment of this disease. 1,1-Dimethylbiguanide hydrochloride (metformin), the most widely prescribed oral hypoglycemic agent, inexpensive and well tolerated, has recently received increased attention for its potential antitumor activity. We wondered whether metformin has apoptotic and anti-proliferative activity on leukemic cells derived from CLL patients. Metformin was administered in vitro either to quiescent cells or during CLL cell activation stimuli, provided by classical co-culturing with CD40L-expressing fibroblasts. At doses that were totally ineffective on normal lymphocytes, metformin induced apoptosis of quiescent CLL cells and inhibition of cell cycle entry when CLL were stimulated by CD40-CD40L ligation. This cytostatic effect was accompanied by decreased expression of survival- and proliferation-associated proteins, inhibition of signaling pathways involved in CLL disease progression and decreased intracellular glucose available for glycolysis. In drug combination experiments, metformin lowered the apoptotic threshold and potentiated the cytotoxic effects of classical and novel antitumor molecules. Our results indicate that, while CLL cells after stimulation are in the process of building their full survival and cycling armamentarium, the presence of metformin affects this process.

  7. Enabling Incremental Iterative Development at Scale: Quality Attribute Refinement and Allocation in Practice

    DTIC Science & Technology

    2015-06-01

    abstract constraints along six dimensions for expansion: user, actions, data, business rules, interfaces, and quality attributes [Gottesdiener 2010]...relevant open source systems. For example, the CONNECT and HADOOP Distributed File System (HDFS) projects have many user stories that deal with...Iteration Zero involves architecture planning before writing any code. An overly long Iteration Zero is equivalent to the dysfunctional "Big Up-Front

  8. Improvements in surface singularity analysis and design methods. [applicable to airfoils

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.

    1979-01-01

    The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.

  9. Standardized verification of fuel cycle modeling

    DOE PAGES

    Feng, B.; Dixon, B.; Sunny, E.; ...

    2016-04-05

    A nuclear fuel cycle systems modeling and code-to-code comparison effort was coordinated across multiple national laboratories to verify the tools needed to perform fuel cycle analyses of the transition from a once-through nuclear fuel cycle to a sustainable potential future fuel cycle. For this verification study, a simplified example transition scenario was developed to serve as a test case for the four systems codes involved (DYMOND, VISION, ORION, and MARKAL), each used by a different laboratory participant. In addition, all participants produced spreadsheet solutions for the test case to check all the mass flows and reactor/facility profiles on a year-by-year basis throughout the simulation period. The test case specifications describe a transition from the current US fleet of light water reactors to a future fleet of sodium-cooled fast reactors that continuously recycle transuranic elements as fuel. After several initial coordinated modeling and calculation attempts, it was revealed that most of the differences in code results were not due to different code algorithms or calculation approaches, but due to different interpretations of the input specifications among the analysts. Therefore, the specifications for the test case itself were iteratively updated to remove ambiguity and to help calibrate interpretations. In addition, a few corrections and modifications were made to the codes as well, which led to excellent agreement between all codes and spreadsheets for this test case. Although no fuel cycle transition analysis codes matched the spreadsheet results exactly, all remaining differences in the results were due to fundamental differences in code structure and/or were thoroughly explained. As a result, the specifications and example results are provided so that they can be used to verify additional codes in the future for such fuel cycle transition scenarios.

  10. APRN Usability Testing of a Tailored Computer-Mediated Health Communication Program

    PubMed Central

    Lin, Carolyn A.; Neafsey, Patricia J.; Anderson, Elizabeth

    2010-01-01

    This study tested the usability of a touch-screen enabled “Personal Education Program” (PEP) with Advanced Practice Registered Nurses (APRN). The PEP is designed to enhance medication adherence and reduce adverse self-medication behaviors in older adults with hypertension. An iterative research process was employed, which involved the use of: (1) pre-trial focus groups to guide the design of system information architecture, (2) two different cycles of think-aloud trials to test the software interface, and (3) post-trial focus groups to gather feedback on the think-aloud studies. Results from this iterative usability testing process were utilized to systematically modify and improve the three PEP prototype versions—the pilot, Prototype-1 and Prototype-2. Findings contrasting the two separate think-aloud trials showed that APRN users rated the PEP system usability, system information and system-use satisfaction at a moderately high level between trials. In addition, errors using the interface were reduced by 76 percent and the interface time was reduced by 18.5 percent between the two trials. The usability testing processes employed in this study ensured an interface design adapted to APRNs' needs and preferences to allow them to effectively utilize the computer-mediated health-communication technology in a clinical setting. PMID:19940619

  11. Integrating data from an online diabetes prevention program into an electronic health record and clinical workflow, a design phase usability study.

    PubMed

    Mishuris, Rebecca Grochow; Yoder, Jordan; Wilson, Dan; Mann, Devin

    2016-07-11

    Health information is increasingly being digitally stored and exchanged. The public is regularly collecting and storing health-related data on their own electronic devices and in the cloud. Diabetes prevention is an increasingly important preventive health measure, and diet and exercise are key components of this. Patients are turning to online programs to help them lose weight. Despite primary care physicians being important in patients' weight loss success, there is no exchange of information between the primary care provider (PCP) and these online weight loss programs. There is an emerging opportunity to integrate this data directly into the electronic health record (EHR), but little is known about what information to share or how to share it most effectively. This study aims to characterize the preferences of providers concerning the integration of externally generated lifestyle modification data into a primary care EHR workflow. We performed a qualitative study using two rounds of semi-structured interviews with primary care providers. We used an iterative design process involving primary care providers, health information technology software developers, and health services researchers to develop the interface. Using grounded-theory thematic analysis, four themes emerged from the interviews: 1) barriers to establishing healthy lifestyles, 2) features of a lifestyle modification program, 3) reporting of outcomes to the primary care provider, and 4) integration with primary care. These themes guided the rapid-cycle agile design process of an interface that brings data from an online diabetes prevention program into the primary care EHR workflow. The integration of external health-related data into the EHR must be embedded into the provider workflow in order to be useful to the provider and beneficial for the patient. Accomplishing this requires evaluation of that clinical workflow during software design.
The development of this novel interface used rapid cycle iterative design, early involvement by providers, and usability testing methodology. This provides a framework for how to integrate external data into provider workflow in efficient and effective ways. There is now the potential to realize the importance of having this data available in the clinical setting for patient engagement and health outcomes.

  12. To Demonstrate an Integrated Solution for Plasma-Material Interfaces Compatible with an Optimized Core Plasma

    NASA Astrophysics Data System (ADS)

    Goldston, Robert; Brooks, Jeffrey; Hubbard, Amanda; Leonard, Anthony; Lipschultz, Bruce; Maingi, Rajesh; Ulrickson, Michael; Whyte, Dennis

    2009-11-01

    The plasma facing components in a Demo reactor will face much more extreme boundary plasma conditions and operating requirements than any present or planned experiment. These include 1) Power density a factor of four or more greater than in ITER, 2) Continuous operation resulting in annual energy and particle throughput 100-200 times larger than ITER, 3) Elevated surface operating temperature for efficient electricity production, 4) Tritium fuel cycle control for safety and breeding requirements, and 5) Steady state plasma confinement and control. Consistent with ReNeW Thrust 12, design options are being explored for a new moderate-scale facility to assess core-edge interaction issues and solutions. Key desired features include high power density, sufficient pulse length and duty cycle, elevated wall temperature, steady-state control of an optimized core plasma, and flexibility in changing boundary components as well as access for comprehensive measurements.

  13. Application Guide for Heat Recovery Incinerators.

    DTIC Science & Technology

    1986-02-01

of the absorption cycle to vaporize the refrigerant, typically aqueous ammonia. The refrigerant then follows the typical refrigeration cycle...this third level of iteration, the information gathered in level II should be updated if necessary and verified. Use the NCEL survey method (see...and quantity of the solid waste can be determined by applying procedures set forth in Appendix B. For level III, NCEL has developed a survey method

  14. Cycle 24 COS FUV Internal/External Wavelength Scale Monitor

    NASA Astrophysics Data System (ADS)

    Fischer, William J.

    2018-02-01

    We report on the monitoring of the COS FUV wavelength scale zero-points during Cycle 24 in program 14855. Select cenwaves were monitored for all FUV gratings at Lifetime Position 3. The target and cenwaves have remained the same since Cycle 21, with a change only to the target acquisition sequence. All measured offsets are within the error goals, although the G140L cenwaves show offsets at the short-wavelength end of segment A that are approaching the tolerance. This behavior will be closely monitored in subsequent iterations of the program.

  15. Solving Differential Equations Using Modified Picard Iteration

    ERIC Educational Resources Information Center

    Robin, W. A.

    2010-01-01

    Many classes of differential equations are shown to be open to solution through a method involving a combination of a direct integration approach with suitably modified Picard iterative procedures. The classes of differential equations considered include typical initial value, boundary value and eigenvalue problems arising in physics and…
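The direct-integration Picard scheme described in this abstract can be sketched numerically: each iterate is obtained by integrating the right-hand side evaluated at the previous iterate. The following NumPy sketch is an illustrative reconstruction, not the paper's method; the grid, test equation and iteration count are assumptions chosen for the example.

```python
import numpy as np

def picard_solve(f, y0, t, n_iter=25):
    """Approximate y' = f(t, y), y(t[0]) = y0 by Picard iteration:
    y_{k+1}(t) = y0 + integral from t[0] to t of f(s, y_k(s)) ds,
    with the integral evaluated by the cumulative trapezoidal rule."""
    y = np.full_like(t, y0, dtype=float)
    for _ in range(n_iter):
        g = np.asarray(f(t, y), dtype=float)
        steps = 0.5 * (g[1:] + g[:-1]) * np.diff(t)   # trapezoid on each cell
        y = y0 + np.concatenate(([0.0], np.cumsum(steps)))
    return y

# Initial value problem y' = y, y(0) = 1, whose exact solution is e^t
t = np.linspace(0.0, 1.0, 201)
y = picard_solve(lambda t, y: y, 1.0, t)
```

For this test problem each Picard iterate adds one term of the exponential series, so after 25 iterations the error is dominated by the trapezoidal quadrature, not by the iteration.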

  16. Accurate Micro-Tool Manufacturing by Iterative Pulsed-Laser Ablation

    NASA Astrophysics Data System (ADS)

    Warhanek, Maximilian; Mayr, Josef; Dörig, Christian; Wegener, Konrad

    2017-12-01

Iterative processing solutions, which include multiple cycles of material removal and measurement, can achieve higher geometric accuracy by compensating for most deviations that manifest directly on the workpiece. The remaining error sources are the measurement uncertainty and the repeatability of the material-removal process, including clamping errors. Because it involves no processing forces, process fluids or wear, pulsed-laser ablation offers high repeatability and can be performed directly on a measuring machine. This work exploits that possibility by implementing an iterative, laser-based correction process for profile deviations registered directly on an optical measurement machine. This enables efficient iterative processing that is precise, applicable to all tool materials including diamond, and free of clamping errors. The concept is proven by a prototype implementation on an industrial tool measurement machine and a nanosecond fibre laser. A number of measurements are performed on both the machine and the processed workpieces. Results show production deviations within a 2 μm diameter tolerance.

  17. D4Z - a new renumbering for iterative solution of ground-water flow and solute- transport equations

    USGS Publications Warehouse

    Kipp, K.L.; Russell, T.F.; Otto, J.S.

    1992-01-01

    D4 zig-zag (D4Z) is a new renumbering scheme for producing a reduced matrix to be solved by an incomplete LU preconditioned, restarted conjugate-gradient iterative solver. By renumbering alternate diagonals in a zig-zag fashion, a very low sensitivity of convergence rate to renumbering direction is obtained. For two demonstration problems involving groundwater flow and solute transport, iteration counts are related to condition numbers and spectra of the reduced matrices.

  18. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  19. Mixed Material Plasma-Surface Interactions in ITER: Recent Results from the PISCES Group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tynan, George R.; Baldwin, Matthew; Doerner, Russell

This paper summarizes recent PISCES studies focused on the effects associated with mixed species plasmas that are similar in composition to what one might expect in ITER. Formation of nanometer-scale whisker-like features occurs in W surfaces exposed to pure He and mixed D/He plasmas and appears to be associated with the formation of nanometer-scale He bubbles in the W surface. Studies of Be-W alloy formation in Be-seeded D plasmas suggest that this process may be important in ITER all-metal wall operational scenarios. Studies also suggest that BeD formation via chemical sputtering of Be walls may be an important first wall erosion mechanism. D retention in ITER mixed materials has also been studied. The D release behavior from beryllium co-deposits does not appear to be a diffusion-dominated process, but instead is consistent with thermal release from a number of variable trapping energy sites. As a result, the amount of tritium remaining in co-deposits in ITER after baking will be determined by the maximum temperature achieved, rather than by the duration of the baking cycle.

  20. CuCrZr alloy microstructure and mechanical properties after hot isostatic pressing bonding cycles

    NASA Astrophysics Data System (ADS)

    Frayssines, P.-E.; Gentzbittel, J.-M.; Guilloud, A.; Bucci, P.; Soreau, T.; Francois, N.; Primaux, F.; Heikkinen, S.; Zacchia, F.; Eaton, R.; Barabash, V.; Mitteau, R.

    2014-04-01

ITER first wall (FW) panels are a layered structure made of the following three materials: 316L(N) austenitic stainless steel, CuCrZr alloy and beryllium. Two hot isostatic pressing (HIP) cycles are included in the reference fabrication route to bond these materials together for the normal heat flux design supplied by the European Union (EU). This reference fabrication route ensures sufficiently good mechanical properties for the materials and joints, which fulfil the ITER mechanical specifications, but often results in a coarse grain size for the CuCrZr alloy, which is not favourable, especially for the thermal creep properties of the FW panels. To limit the abnormal grain growth of CuCrZr and make the ITER FW fabrication route more reliable, a study began in 2010 in the EU in the framework of an ITER task agreement. Two material fabrication approaches have been investigated. The first was dedicated to the fabrication of solid CuCrZr alloy in close collaboration with an industrial copper alloys manufacturer. The second was the manufacturing of CuCrZr alloy using the powder metallurgy (PM) route and HIP consolidation. This paper presents the main mechanical and microstructural results associated with the two CuCrZr approaches mentioned above. The mechanical properties of solid CuCrZr, PM CuCrZr and joints (solid CuCrZr/solid CuCrZr, solid CuCrZr/316L(N) and PM CuCrZr/316L(N)) are also presented.

  1. Generic mission planning concepts for space astronomy missions

    NASA Technical Reports Server (NTRS)

    Guffin, O. T.; Onken, J. F.

    1993-01-01

    The past two decades have seen the rapid development of space astronomy, both manned and unmanned, and the concurrent proliferation of the operational concepts and software that have been produced to support each individual project. Having been involved in four of these missions since the '70's and three yet to fly in the present decade, the authors believe it is time to step back and evaluate this body of experience from a macro-systems point of view to determine the potential for generic mission planning concepts that could be applied to future missions. This paper presents an organized evaluation of astronomy mission planning functions, functional flows, iteration cycles, replanning activities, and the requirements that drive individual concepts to specific solutions. The conclusions drawn from this exercise are then used to propose a generic concept that could support multiple missions.

  2. Dynamical coupling between magnetic equilibrium and transport in tokamak scenario modelling, with application to current ramps

    NASA Astrophysics Data System (ADS)

    Fable, E.; Angioni, C.; Ivanov, A. A.; Lackner, K.; Maj, O.; Medvedev, S. Yu; Pautasso, G.; Pereverzev, G. V.; Treutterer, W.; the ASDEX Upgrade Team

    2013-07-01

    The modelling of tokamak scenarios requires the simultaneous solution of both the time evolution of the plasma kinetic profiles and of the magnetic equilibrium. Their dynamical coupling involves additional complications, which are not present when the two physical problems are solved separately. Difficulties arise in maintaining consistency in the time evolution among quantities which appear in both the transport and the Grad-Shafranov equations, specifically the poloidal and toroidal magnetic fluxes as a function of each other and of the geometry. The required consistency can be obtained by means of iteration cycles, which are performed outside the equilibrium code and which can have different convergence properties depending on the chosen numerical scheme. When these external iterations are performed, the stability of the coupled system becomes a concern. In contrast, if these iterations are not performed, the coupled system is numerically stable, but can become physically inconsistent. By employing a novel scheme (Fable E et al 2012 Nucl. Fusion submitted), which ensures stability and physical consistency among the same quantities that appear in both the transport and magnetic equilibrium equations, a newly developed version of the ASTRA transport code (Pereverzev G V et al 1991 IPP Report 5/42), which is coupled to the SPIDER equilibrium code (Ivanov A A et al 2005 32nd EPS Conf. on Plasma Physics (Tarragona, 27 June-1 July) vol 29C (ECA) P-5.063), in both prescribed- and free-boundary modes is presented here for the first time. The ASTRA-SPIDER coupled system is then applied to the specific study of the modelling of controlled current ramp-up in ASDEX Upgrade discharges.

  3. AC loss, interstrand resistance and mechanical properties of prototype EU DEMO TF conductors up to 30 000 load cycles

    NASA Astrophysics Data System (ADS)

    Yagotintsev, K.; Nijhuis, A.

    2018-07-01

Two prototype Nb3Sn cable-in-conduit conductors (CICCs) were designed and manufactured for the toroidal field (TF) magnet system of the envisaged European DEMO fusion reactor. The AC loss, contact resistance and mechanical properties of the two sample conductors were tested in the Twente Cryogenic Cable Press under cyclic load up to 30 000 cycles. Though both conductors were designed to operate at 82 kA in a background magnetic field of 13.6 T, they reflect different approaches to the magnet winding pack assembly: the first is based on react-and-wind technology, while the second uses the more common wind-and-react technology. Each conductor was first tested for AC loss in virgin condition without handling. The impact of the Lorentz load during magnet operation was simulated using the cable press, in which each conductor specimen was subjected to transverse cyclic load up to 30 000 cycles in a liquid helium bath at 4.2 K. A summary of results for AC loss, contact resistance, conductor deformation, mechanical heat production and conductor stiffness evolution during load cycling is presented. Both conductors showed similar mechanical behaviour but quite different AC loss. In comparison with previously tested ITER TF conductors, both DEMO TF conductors possess very low contact resistance, resulting in high coupling loss. At the same time, load cycling has limited impact on the properties of the DEMO TF conductors in comparison with the ITER TF conductors.

  4. Guided Iterative Substructure Search (GI-SSS) - A New Trick for an Old Dog.

    PubMed

    Weskamp, Nils

    2016-07-01

    Substructure search (SSS) is a fundamental technique supported by various chemical information systems. Many users apply it in an iterative manner: they modify their queries to shape the composition of the retrieved hit sets according to their needs. We propose and evaluate two heuristic extensions of SSS aimed at simplifying these iterative query modifications by collecting additional information during query processing and visualizing this information in an intuitive way. This gives the user a convenient feedback on how certain changes to the query would affect the retrieved hit set and reduces the number of trial-and-error cycles needed to generate an optimal search result. The proposed heuristics are simple, yet surprisingly effective and can be easily added to existing SSS implementations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Iterative and multigrid methods in the finite element solution of incompressible and turbulent fluid flow

    NASA Astrophysics Data System (ADS)

    Lavery, N.; Taylor, C.

    1999-07-01

Multigrid and iterative methods are used to reduce the solution time of the matrix equations which arise from the finite element (FE) discretisation of the time-independent equations of motion of an incompressible fluid in turbulent motion. Incompressible flow is solved by using the method of reduced interpolation for the pressure to satisfy the Brezzi-Babuska condition. The k-l model is used to complete the turbulence closure problem. The non-symmetric iterative matrix methods examined are the methods of least-squares conjugate gradient (LSCG), biconjugate gradient (BCG), conjugate gradient squared (CGS), and stabilised biconjugate gradient (BiCGSTAB). The multigrid algorithm applied is based on the FAS algorithm of Brandt, and uses two and three levels of grids with a V-cycling schedule. These methods are all compared to the non-symmetric frontal solver.
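The non-symmetric Krylov solvers this abstract compares are standard algorithms. As an illustration, here is a bare-bones BiCGSTAB (van der Vorst's stabilised biconjugate gradient) in NumPy, applied to a small nonsymmetric system; the matrix and right-hand side are invented for the example and have nothing to do with the paper's FE matrices.

```python
import numpy as np

def bicgstab(A, b, x0=None, tol=1e-10, maxiter=500):
    """Bare-bones BiCGSTAB for Ax = b with nonsymmetric A."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x
    r_hat = r.copy()                       # fixed shadow residual
    rho = alpha = omega = 1.0
    v = np.zeros(n)
    p = np.zeros(n)
    for _ in range(maxiter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r_hat @ v)
        s = r - alpha * v
        t = A @ s
        omega = (t @ s) / (t @ t)
        x += alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x

# Small nonsymmetric, diagonally dominant test system
A = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
x = bicgstab(A, b)
```

Production codes (and presumably the paper's solver) add breakdown safeguards and preconditioning; this sketch shows only the core recurrence.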

  6. Why and how Mastering an Incremental and Iterative Software Development Process

    NASA Astrophysics Data System (ADS)

    Dubuc, François; Guichoux, Bernard; Cormery, Patrick; Mescam, Jean Christophe

    2004-06-01

    One of the key issues regularly mentioned in the current software crisis of the space domain is related to the software development process that must be performed while the system definition is not yet frozen. This is especially true for complex systems like launchers or space vehicles.Several more or less mature solutions are under study by EADS SPACE Transportation and are going to be presented in this paper. The basic principle is to develop the software through an iterative and incremental process instead of the classical waterfall approach, with the following advantages:- It permits systematic management and incorporation of requirements changes over the development cycle with a minimal cost. As far as possible the most dimensioning requirements are analyzed and developed in priority for validating very early the architecture concept without the details.- A software prototype is very quickly available. It improves the communication between system and software teams, as it enables to check very early and efficiently the common understanding of the system requirements.- It allows the software team to complete a whole development cycle very early, and thus to become quickly familiar with the software development environment (methodology, technology, tools...). This is particularly important when the team is new, or when the environment has changed since the previous development. 
Anyhow, it greatly improves the learning curve of the software team. These advantages seem very attractive, but mastering an iterative development process efficiently is not so easy and raises many difficulties, such as: - How to freeze one configuration of the system definition as a development baseline, while most of the system requirements are completely and naturally unstable? - How to distinguish stable/unstable and dimensioning/standard requirements? - How to plan the development of each increment? - How to link classical waterfall development milestones with an iterative approach: when should the classical reviews be performed: Software Specification Review? Preliminary Design Review? Critical Design Review? Code Review? Etc. Several solutions envisaged or already deployed by EADS SPACE Transportation will be presented, both from a methodological and a technological point of view: - How the MELANIE EADS ST internal methodology improves the concurrent engineering activities between GNC, software and simulation teams in a very iterative and reactive way. - How the CMM approach can help by better formalizing the Requirements Management and Planning processes. - How Automatic Code Generation with "certified" tools (SCADE) can further shorten the development cycle dramatically. The presentation concludes with an evaluation of the cost and schedule reduction, based on a pilot application, comparing figures from two similar projects: one following the classical waterfall process, the other an iterative and incremental approach.

  7. ITER-FEAT operation

    NASA Astrophysics Data System (ADS)

    Shimomura, Y.; Aymar, R.; Chuyanov, V. A.; Huguet, M.; Matsumoto, H.; Mizoguchi, T.; Murakami, Y.; Polevoi, A. R.; Shimada, M.; ITER Joint Central Team; ITER Home Teams

    2001-03-01

    ITER is planned to be the first fusion experimental reactor in the world operating for research in physics and engineering. The first ten years of operation will be devoted primarily to physics issues at low neutron fluence and the following ten years of operation to engineering testing at higher fluence. ITER can accommodate various plasma configurations and plasma operation modes, such as inductive high Q modes, long pulse hybrid modes and non-inductive steady state modes, with large ranges of plasma current, density, beta and fusion power, and with various heating and current drive methods. This flexibility will provide an advantage for coping with uncertainties in the physics database, in studying burning plasmas, in introducing advanced features and in optimizing the plasma performance for the different programme objectives. Remote sites will be able to participate in the ITER experiment. This concept will provide an advantage not only in operating ITER for 24 hours a day but also in involving the worldwide fusion community and in promoting scientific competition among the ITER Parties.

  8. Metallographic autopsies of full-scale ITER prototype cable-in-conduit conductors after full testing in SULTAN: 1. The mechanical role of copper strands in a CICC

    DOE PAGES

    Sanabria, Carlos; Lee, Peter J.; Starch, William; ...

    2015-06-22

Cables made with Nb3Sn-based superconductor strands will provide the 13 T maximum peak magnetic field of the ITER Central Solenoid (CS) coils, and they must survive up to 60,000 electromagnetic cycles. Accordingly, prototype designs of CS cable-in-conduit conductors (CICCs) were electromagnetically tested over multiple magnetic field cycles and warm-up-cool-down scenarios in the SULTAN facility at CRPP. We report here a post mortem metallographic analysis of two CS CICC prototypes which exhibited some rate of irreversible performance degradation during cycling. The standard ITER CS CICC cable design uses a combination of superconducting and Cu strands, and because the Lorentz force on a strand is proportional to the transport current in the strand, removing the copper strands (while increasing the Cu:SC ratio of the superconducting strands) was proposed as one way of reducing the strand load. In this study we compare the two alternative CICCs, with and without Cu strands, keeping in mind that the degradation after the SULTAN test was lower for the CICC without Cu strands. The post mortem metallographic evaluation revealed that the overall strand transverse movement was 20% lower in the CICC without Cu strands and that fewer tensile filament fractures were found, both indications of an overall reduction in high tensile strain regions. Furthermore, the Cu strands in the mixed cable design (with higher degradation) helped reduce the contact stresses on the high pressure side of the CICC, but in either case, the strain reduction mechanisms were not enough to suppress cyclic degradation. Advantages and disadvantages of each conductor design are discussed with the aim of understanding the sources of the degradation.

  9. Constructing Easily Iterated Functions with Interesting Properties

    ERIC Educational Resources Information Center

    Sprows, David J.

    2009-01-01

    A number of schools have recently introduced new courses dealing with various aspects of iteration theory or at least have found ways of including topics such as chaos and fractals in existing courses. In this note, we will consider a family of functions whose members are especially well suited to illustrate many of the concepts involved in these…

  10. Embodied Design: Constructing Means for Constructing Meaning

    ERIC Educational Resources Information Center

    Abrahamson, Dor

    2009-01-01

    Design-based research studies are conducted as iterative implementation-analysis-modification cycles, in which emerging theoretical models and pedagogically plausible activities are reciprocally tuned toward each other as a means of investigating conjectures pertaining to mechanisms underlying content teaching and learning. Yet this approach, even…

  11. Control of particle and power exhaust in pellet fuelled ITER DT scenarios employing integrated models

    NASA Astrophysics Data System (ADS)

    Wiesen, S.; Köchl, F.; Belo, P.; Kotov, V.; Loarte, A.; Parail, V.; Corrigan, G.; Garzotti, L.; Harting, D.

    2017-07-01

The integrated model JINTRAC is employed to assess the dynamic density evolution of the ITER baseline scenario when fuelled by discrete pellets. The consequences on the core confinement properties, α-particle heating due to fusion and the effect on the ITER divertor operation, taking into account the material limitations on the target heat loads, are discussed within the integrated model. Using the model one can observe that stable but cyclical operational regimes can be achieved for a pellet-fuelled ITER ELMy H-mode scenario with Q = 10 maintaining partially detached conditions in the divertor. It is shown that the level of divertor detachment is inversely correlated with the core plasma density due to α-particle heating, and thus depends on the density evolution cycle imposed by pellet ablations. The power crossing the separatrix to be dissipated depends on the enhancement of the transport in the pedestal region being linked with the pressure gradient evolution after pellet injection. The fuelling efficacy of the deposited pellet material is strongly dependent on the E × B plasmoid drift. It is concluded that integrated models like JINTRAC, if validated and supported by realistic physics constraints, may help to establish suitable control schemes of particle and power exhaust in burning ITER DT-plasma scenarios.

  12. Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation

    NASA Astrophysics Data System (ADS)

    Litaker, Eric T.

    1994-12-01

The axisymmetric heat equation, resulting from a point-source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
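The ingredients this abstract combines (Gauss-Seidel relaxation, intergrid transfer operators, multigrid V-cycles) can be sketched on a simpler model problem. The following 1D Poisson example with zero Dirichlet boundaries is an illustrative reconstruction of the generic V-cycle, not the paper's axisymmetric FVE discretization; grid sizes and sweep counts are assumptions.

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps):
    """Lexicographic Gauss-Seidel for -u'' = f, zero Dirichlet boundaries."""
    n = len(u)
    for _ in range(sweeps):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            u[i] = 0.5 * (left + right + h * h * f[i])
    return u

def residual(u, f, h):
    up = np.concatenate(([0.0], u, [0.0]))   # pad with boundary values
    return f - (2.0 * up[1:-1] - up[:-2] - up[2:]) / (h * h)

def v_cycle(u, f, h):
    if len(u) == 1:                          # coarsest grid: exact solve
        u[0] = h * h * f[0] / 2.0
        return u
    gauss_seidel(u, f, h, 2)                 # pre-smoothing
    r = residual(u, f, h)
    # full-weighting restriction onto the coarse grid
    rc = 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
    # linear-interpolation prolongation of the coarse-grid correction
    e = np.zeros_like(u)
    e[1::2] = ec
    e[0::2] = 0.5 * (np.concatenate(([0.0], ec)) + np.concatenate((ec, [0.0])))
    u += e                                   # coarse-grid correction
    gauss_seidel(u, f, h, 2)                 # post-smoothing
    return u

# Model problem: -u'' = pi^2 sin(pi x), exact solution u = sin(pi x)
n = 63
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
```

Each V-cycle reduces the algebraic residual by a roughly grid-independent factor, which is the computational saving the abstract refers to; plain Gauss-Seidel alone would need a number of sweeps growing with the grid size.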

  13. Estimation of the tritium retention in ITER tungsten divertor target using macroscopic rate equations simulations

    NASA Astrophysics Data System (ADS)

    Hodille, E. A.; Bernard, E.; Markelj, S.; Mougenot, J.; Becquart, C. S.; Bisson, R.; Grisolia, C.

    2017-12-01

Based on macroscopic rate equation simulations of tritium migration in an actively cooled tungsten (W) plasma facing component (PFC) using the code MHIMS (migration of hydrogen isotopes in metals), an estimation has been made of the tritium retention in the ITER W divertor target during a non-uniform exponential distribution of particle fluxes. Two grades of materials are considered to be exposed to tritium ions: an undamaged W and a damaged W exposed to fast fusion neutrons. Due to the strong temperature gradient in the PFC, the impact of the Soret effect on tritium retention is also evaluated for both cases. Thanks to the simulation, the evolutions of the tritium retention and the tritium migration depth are obtained as a function of the implanted flux and the number of cycles. From these evolutions, extrapolation laws are built to estimate the number of cycles needed for tritium to permeate from the implantation zone to the cooled surface and to quantify the corresponding retention of tritium throughout the W PFC.

  14. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    NASA Astrophysics Data System (ADS)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles, including both type I and type II, are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines cooperation gain and channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
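A girth-4 cycle in the Tanner graph of a parity-check matrix H exists exactly when two rows of H share ones in at least two columns, which can be read off the off-diagonal entries of H·Hᵀ. The following check is a generic illustration of that criterion (the example matrices are invented; the paper's type I/II classification for quasi-cyclic exponent matrices is not reproduced here).

```python
import numpy as np

def has_girth4_cycle(H):
    """True iff the Tanner graph of binary parity-check matrix H contains
    a length-4 cycle, i.e. some pair of rows shares ones in >= 2 columns,
    i.e. (H H^T) has an off-diagonal entry >= 2."""
    M = H @ H.T
    np.fill_diagonal(M, 0)
    return bool(M.max() >= 2)

H_bad = np.array([[1, 1, 0, 1],
                  [1, 1, 1, 0],
                  [0, 0, 1, 1]])   # rows 0 and 1 overlap in columns 0 and 1
H_good = np.array([[1, 1, 0, 0],
                   [0, 1, 1, 0],
                   [1, 0, 0, 1]])  # every row pair overlaps in at most one column
```

Cancelling girth-4 cycles, as the joint design in the paper does, matters because short cycles degrade the independence assumptions underlying iterative belief-propagation decoding.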

  15. Rapid prototyping strategy for a surgical data warehouse.

    PubMed

    Tang, S-T; Huang, Y-F; Hsiao, M-L; Yang, S-H; Young, S-T

    2003-01-01

    Healthcare processes typically generate an enormous volume of patient information. This information largely represents unexploited knowledge, since current hospital operational systems (e.g., HIS, RIS) are not suitable for knowledge exploitation. Data warehousing provides an attractive method for solving these problems, but the process is very complicated. This study presents a novel strategy for effectively implementing a healthcare data warehouse. This study adopted the rapid prototyping (RP) method, which involves intensive interactions. System developers and users were closely linked throughout the life cycle of the system development. The presence of iterative RP loops meant that the system requirements were increasingly integrated and problems were gradually solved, such that the prototype system evolved into the final operational system. The results were analyzed by monitoring the series of iterative RP loops. First a definite workflow for ensuring data completeness was established, taking a patient-oriented viewpoint when collecting the data. Subsequently the system architecture was determined for data retrieval, storage, and manipulation. This architecture also clarifies the relationships among the novel system and legacy systems. Finally, a graphic user interface for data presentation was implemented. Our results clearly demonstrate the potential for adopting an RP strategy in the successful establishment of a healthcare data warehouse. The strategy can be modified and expanded to provide new services or support new application domains. The design patterns and modular architecture used in the framework will be useful in solving problems in different healthcare domains.

  16. The heat removal capability of actively cooled plasma-facing components for the ITER divertor

    NASA Astrophysics Data System (ADS)

    Missirlian, M.; Richou, M.; Riccardi, B.; Gavila, P.; Loarer, T.; Constans, S.

    2011-12-01

    Non-destructive examination followed by high-heat-flux testing was performed for different small- and medium-scale mock-ups; this included the most recent developments related to actively cooled tungsten (W) or carbon fibre composite (CFC) armoured plasma-facing components. In particular, the heat-removal capability of these mock-ups manufactured by European companies with all the main features of the ITER divertor design was investigated both after manufacturing and after thermal cycling up to 20 MW m-2. Compliance with ITER requirements was explored in terms of bonding quality, heat flux performances and operational compatibility. The main results show an overall good heat-removal capability after the manufacturing process independent of the armour-to-heat sink bonding technology and promising behaviour with respect to thermal fatigue lifetime under heat flux up to 20 MW m-2 for the CFC-armoured tiles and 15 MW m-2 for the W-armoured tiles, respectively.

  17. Approximate techniques of structural reanalysis

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lowder, H. E.

    1974-01-01

    A study is made of two approximate techniques for structural reanalysis: Taylor series expansions of response variables in terms of design variables, and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of the Taylor series approximation in an iterative process. For the reduced basis, a normalized set of vectors is chosen consisting of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as a first (initial) estimate in an iterative process can lead to significant improvements in accuracy, even with one iteration cycle, thereby extending the range of applicability of the reanalysis technique. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques for a wide range of variations in the design variables.
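The modified reanalysis idea can be illustrated with a scalar stand-in for the stiffness equation (the single-variable setup and all values here are hypothetical, not from the paper): a first-order Taylor estimate seeds an iterative refinement that uses the original design's stiffness as a preconditioner.

```python
# Illustrative sketch (scalar stand-in for a stiffness matrix; numbers are
# hypothetical): reanalysis of a perturbed design k_new, using a first-order
# Taylor estimate as the initial guess of an iterative refinement that is
# preconditioned by the original design's stiffness k0.
def reanalyze(k_new, k0, f, n_iter):
    u0 = f / k0                       # exact solution for the original design
    du_dk = -f / k0**2                # first-order design sensitivity
    u = u0 + du_dk * (k_new - k0)     # Taylor-series initial estimate
    for _ in range(n_iter):
        u += (f - k_new * u) / k0     # residual correction via original k0
    return u

u1 = reanalyze(2.4, 2.0, 1.0, 1)      # one iteration cycle
u5 = reanalyze(2.4, 2.0, 1.0, 5)
exact = 1.0 / 2.4
```

Each cycle multiplies the error by (1 - k_new/k0), so even a single iteration improves noticeably on the raw Taylor estimate, mirroring the abstract's observation.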

  18. A preconditioner for the finite element computation of incompressible, nonlinear elastic deformations

    NASA Astrophysics Data System (ADS)

    Whiteley, J. P.

    2017-10-01

    Large, incompressible elastic deformations are governed by a system of nonlinear partial differential equations. The finite element discretisation of these partial differential equations yields a system of nonlinear algebraic equations that are usually solved using Newton's method. On each iteration of Newton's method, a linear system must be solved. We exploit the structure of the Jacobian matrix to propose a preconditioner, comprising two steps. The first step is the solution of a relatively small, symmetric, positive definite linear system using the preconditioned conjugate gradient method. This is followed by a small number of multigrid V-cycles for a larger linear system. Through the use of exemplar elastic deformations, the preconditioner is demonstrated to facilitate the iterative solution of the linear systems arising. The number of GMRES iterations required has only a very weak dependence on the number of degrees of freedom of the linear systems.
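As a minimal sketch of this overall structure (a hypothetical 2x2 algebraic system, not the paper's elasticity equations, and a simple Jacobi-preconditioned inner iteration in place of the paper's two-step PCG/multigrid preconditioner), here is a Newton outer loop whose inner linear solve is itself iterative:

```python
# Inexact-Newton sketch: the inner linear system J*d = -F is solved by a few
# Jacobi-preconditioned Richardson sweeps rather than a direct solve,
# mimicking the role of a preconditioned iterative inner solver.
def F(x, y):
    # Hypothetical nonlinear system with root (1, 2)
    return (x**2 + y - 3.0, x + y**2 - 5.0)

def jacobian(x, y):
    return ((2.0 * x, 1.0), (1.0, 2.0 * y))

def inner_solve(J, r, sweeps=50):
    d = [0.0, 0.0]
    for _ in range(sweeps):
        res0 = r[0] - (J[0][0] * d[0] + J[0][1] * d[1])
        res1 = r[1] - (J[1][0] * d[0] + J[1][1] * d[1])
        d[0] += res0 / J[0][0]   # diagonal (Jacobi) preconditioner
        d[1] += res1 / J[1][1]
    return d

x, y = 1.0, 1.0                          # initial guess
for _ in range(20):                      # outer Newton iterations
    f0, f1 = F(x, y)
    d = inner_solve(jacobian(x, y), (-f0, -f1))
    x, y = x + d[0], y + d[1]
```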

  19. The Iterative Research Cycle: Process-Based Model Evaluation

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2014-12-01

    The ever-increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex physics-based models that simulate a myriad of processes at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. In this talk I will give an overview of our latest research on process-based model calibration and evaluation. This approach, rooted in Bayesian theory, uses summary metrics of the calibration data rather than the data itself to help detect which component(s) of the model is (are) malfunctioning and in need of improvement. A few case studies involving hydrologic and geophysical models will be used to demonstrate the proposed methodology.

  20. How can information systems provide support to nurses' hand hygiene performance? Using gamification and indoor location to improve hand hygiene awareness and reduce hospital infections.

    PubMed

    Marques, Rita; Gregório, João; Pinheiro, Fernando; Póvoa, Pedro; da Silva, Miguel Mira; Lapão, Luís Velez

    2017-01-31

    Hospital-acquired infections are still amongst the major problems health systems are facing. Their occurrence can lead to higher morbidity and mortality rates, increased length of hospital stay, and higher costs for both hospital and patients. Performing hand hygiene is a simple and inexpensive prevention measure, but healthcare workers' compliance with it is often far from ideal. To raise awareness regarding hand hygiene compliance, individual behaviour change and performance optimization, we aimed to develop a gamification solution that collects data and provides real-time feedback accurately in a fun and engaging way. A Design Science Research Methodology (DSRM) was used to conduct this work. DSRM is useful for studying the link between research and professional practices by designing, implementing and evaluating artifacts that address a specific need. It follows a development cycle (or iteration) composed of six activities. Two work iterations were performed applying gamification components, each using a different indoor location technology. Preliminary experiments, simulations and field studies were performed in an Intensive Care Unit (ICU) of a Portuguese tertiary hospital. Nurses working in this ICU formed a focus group during the research, participating in several sessions across the implementation process. Nurses enjoyed the concept and considered that it provides a unique opportunity to receive feedback regarding their performance. Tests of the indoor location technology applied in the first iteration showed an unacceptable lack of accuracy in distance estimation. Using a proximity-based technique, it was possible to identify the sequence of positions, but the beacons behaved unstably. In the second work iteration, a different indoor location technology was explored, but it did not work properly, so the solution as a whole (gamification application included) could not be tested.
Combining automated monitoring systems with gamification seems to be an innovative and promising approach, based on the results achieved so far. Involving nurses in the project from the beginning allowed us to align the solution with their needs. Despite strong evolution in recent years, indoor location technologies are still not ready to be applied in healthcare settings such as nursing wards.

  1. Forecasting of the electrical actuators condition using stator’s current signals

    NASA Astrophysics Data System (ADS)

    Kruglova, T. N.; Yaroshenko, I. V.; Rabotalov, N. N.; Melnikov, M. A.

    2017-02-01

    This article describes a forecasting method for electrical actuators that combines Fourier transformation and neural network techniques. The method finds the values of the diagnostic functions at each operating cycle and estimates the number of operating cycles remaining before the BLDC actuator fails. To forecast the condition of the actuator, we propose a hierarchical neural network structure that reduces the training time of the neural network and improves estimation accuracy.

  2. Air Asset to Mission Assignment for Dynamic High-Threat Environments in Real-Time

    DTIC Science & Technology

    2015-03-01

    39 Initial Distribution List 41 viii List of Figures Figure 2.1 Joint Air Tasking Cycle (JCS 2014). An iterative 120-hour cycle for planners within the...minutes of on-station time, or "playtime", with a total of two GBU-16 laser-guided bombs (LGB) and an Advanced Targeting Forward Looking Infrared (ATFLIR...probability of survival against the SA-2 and SA-3 systems, respectively. A GBU-16 LGB has no standoff capability and 90%, 60%, and 70% probability of

  3. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper discusses the method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator, as explained by Takeshi Amemiya [1]. In the present paper, a modified Wald test statistic due to Engle, Robert [6] is proposed for testing nonlinear hypotheses using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses, using an iterative NLLS estimator based on nonlinear studentized residuals, is also proposed. In addition, an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained the methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors, and studied the problem of heteroscedasticity with reference to nonlinear regression models with a suitable illustration. William Grene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
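The iterative NLLS estimator underlying such tests can be sketched as a Gauss-Newton loop for a single-parameter model (the exponential model and the noise-free data below are hypothetical, chosen so the truth theta = 0.5 is recoverable; the test statistics themselves are not reproduced):

```python
import math

# Gauss-Newton sketch of an iterative NLLS estimator for the hypothetical
# model y = exp(theta * x). Data are generated without noise from
# theta = 0.5, so the iteration should converge to that value.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * x) for x in xs]

theta = 0.0                                            # starting value
for _ in range(25):
    J = [x * math.exp(theta * x) for x in xs]          # Jacobian entries
    r = [y - math.exp(theta * x) for x, y in zip(xs, ys)]  # residuals
    theta += sum(j * e for j, e in zip(J, r)) / sum(j * j for j in J)
```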

  4. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize the neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently in each iteration, the newly designed weight for each point is determined by the source solution of the last iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
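The flavor of iterative re-weighted sparse imaging can be sketched with a plain FOCUSS-style iteration on a hypothetical underdetermined system (one sensor, three sources); CMOSS differs in that each weight also folds in the neighboring points' previous solution, which is not reproduced here.

```python
# FOCUSS-style iteratively re-weighted minimum-norm solution of a single
# underdetermined equation a . x = b (hypothetical gain vector and datum,
# generated by the sparse truth x = [0, 0, 2]). Each weight is the square
# of that point's previous value, so small components shrink toward zero
# and the solution becomes sparse while still satisfying a . x = b.
a = [1.0, 2.0, 3.0]
b = 6.0

x = [ai * b / sum(aj * aj for aj in a) for ai in a]    # plain minimum-norm start
for _ in range(25):
    w = [xi * xi for xi in x]                          # re-weighting step
    denom = sum(wi * ai * ai for wi, ai in zip(w, a))
    x = [wi * ai * b / denom for wi, ai in zip(w, a)]  # weighted minimum norm
```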

  5. Improving Defense Acquisition Management and Policy Through a Life-Cycle Affordability Framework

    DTIC Science & Technology

    2014-02-04

    substrates based on gender, culture, and propensity. Four Design a neurofeedback-based training program that will produce changes in neuronal substrates...Validate the training program by iterating Step 3 until the desired behavioral outcome is achieved. Confirm that the neurofeedback creates desired

  6. The Reflective Teacher Leader: An Action Research Model

    ERIC Educational Resources Information Center

    Furtado, Leena; Anderson, Dawnette

    2012-01-01

    This study presents four teacher reflections from action research projects ranging from kindergarten to adult school improvements. A teacher leadership matrix guided participants to connect teaching and learning theory to best practices by exploring uncharted territory within an iterative cycle of research and action. Teachers developed the…

  7. A Least-Squares Commutator in the Iterative Subspace Method for Accelerating Self-Consistent Field Convergence.

    PubMed

    Li, Haichen; Yaron, David J

    2016-11-08

    A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing the norm of a linear combination of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations, and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
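For context, the DIIS step that LCIIS modifies can be sketched for just two stored error vectors, where the constrained least-squares problem has a closed form (the error vectors below are hypothetical stand-ins for SCF error vectors):

```python
# DIIS-style extrapolation sketch: find coefficients with c1 + c2 = 1
# minimizing ||c1*e1 + c2*e2||. With two error vectors the Lagrange
# system reduces to a closed form in the Gram entries B_ij = e_i . e_j.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

e1 = [1.0, 0.2]        # hypothetical error vectors of two past iterates
e2 = [-0.8, 0.3]

b11, b22, b12 = dot(e1, e1), dot(e2, e2), dot(e1, e2)
c1 = (b22 - b12) / (b11 - 2.0 * b12 + b22)
c2 = 1.0 - c1

combined = [c1 * u + c2 * v for u, v in zip(e1, e2)]   # extrapolated error
```

The combined error is never larger than either input error, which is what makes the extrapolated density matrix a better next iterate.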

  8. Iterative solution of the inverse Cauchy problem for an elliptic equation by the conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Vasil'ev, V. I.; Kardashevsky, A. M.; Popov, V. V.; Prokopev, G. A.

    2017-10-01

    This article presents the results of a computational experiment carried out using a finite-difference method for solving the inverse Cauchy problem for a two-dimensional elliptic equation. The computational algorithm involves an iterative determination of the missing boundary condition from the overdetermination condition using the conjugate gradient method. Calculations on examples with exact solutions, as well as with the additional condition specified with random errors, are presented. The results showed the high efficiency of the iterative conjugate gradient method for the numerical solution of this problem.
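The conjugate gradient kernel at the heart of such an algorithm can be sketched in a few lines; the 3x3 symmetric positive definite system below is a hypothetical stand-in for the discretized problem.

```python
# Plain conjugate gradient method for A x = b with A symmetric positive
# definite. In exact arithmetic CG converges in at most n steps.
def matvec(A, v):
    return [sum(a * vj for a, vj in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-24):
    x = [0.0] * len(b)
    r = b[:]                       # residual b - A x for x = 0
    p = r[:]
    rs = dot(r, r)
    for _ in range(len(b)):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]  # hypothetical SPD matrix
b = [1.0, 2.0, 3.0]
x = conjugate_gradient(A, b)
```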

  9. A Kronecker product splitting preconditioner for two-dimensional space-fractional diffusion equations

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Lv, Wen; Zhang, Tongtong

    2018-05-01

    We study preconditioned iterative methods for the linear system arising in the numerical discretization of a two-dimensional space-fractional diffusion equation. Our approach is based on a formulation of the discrete problem that is shown to be the sum of two Kronecker products. By making use of an alternating Kronecker product splitting iteration technique we establish a class of fixed-point iteration methods. Theoretical analysis shows that the new method converges to the unique solution of the linear system. Moreover, the optimal choice of the involved iteration parameters and the corresponding asymptotic convergence rate are computed exactly when the eigenvalues of the system matrix are all real. The basic iteration is accelerated by a Krylov subspace method such as GMRES. The corresponding preconditioner has a Kronecker product structure and requires at each iteration the solution of a set of discrete one-dimensional fractional diffusion equations. We use structure-preserving approximations to the discrete one-dimensional fractional diffusion operators in the action of the preconditioning matrix. Numerical examples are presented to illustrate the effectiveness of this approach.
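The alternating splitting iteration has the same shape as classical ADI; a scalar stand-in (hypothetical numbers, with the two Kronecker factors replaced by scalars h1 and h2) shows the two half-steps and convergence to the solution of (h1 + h2) x = b.

```python
import math

# Scalar sketch of an alternating splitting (ADI-type) iteration for
# (h1 + h2) x = b. Each sweep treats one term implicitly and the other
# explicitly; alpha = sqrt(h1 * h2) is the classical optimal shift
# parameter in this scalar setting.
h1, h2, b = 3.0, 2.0, 10.0
alpha = math.sqrt(h1 * h2)

x = 0.0
for _ in range(30):
    x_half = ((alpha - h2) * x + b) / (alpha + h1)   # first half-step
    x = ((alpha - h1) * x_half + b) / (alpha + h2)   # second half-step
```

In the matrix method each half-step is a set of one-dimensional solves; the scalar version keeps only the splitting structure and the fixed point.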

  10. Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers

    NASA Technical Reports Server (NTRS)

    Guru Prasad, K.; Kane, J. H.

    1992-01-01

    The computational advantages associated with the utilization of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach, employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis, can be competitive with, if not superior to, approaches involving direct solvers.

  11. Saturated mutagenesis of ketoisovalerate decarboxylase V461 enabled specific synthesis of 1-pentanol via the ketoacid elongation cycle.

    PubMed

    Chen, Grey S; Siao, Siang Wun; Shen, Claire R

    2017-09-12

    Iterative ketoacid elongation has been an essential tool in engineering artificial metabolism, in particular the synthetic alcohols. However, precise control of product specificity is still greatly challenged by the substrate promiscuity of the ketoacid decarboxylase, which unselectively hijacks ketoacid intermediates from the elongation cycle along with the target ketoacid. In this work, preferential tuning of the Lactococcus lactis ketoisovalerate decarboxylase (Kivd) specificity toward 1-pentanol synthesis was achieved via saturated mutagenesis of the key residue V461 followed by screening of the resulting alcohol spectrum. Substitution of V461 with the small and polar amino acid glycine or serine significantly improved the Kivd selectivity toward the 1-pentanol precursor 2-ketocaproate by lowering its catalytic efficiency for the upstream ketoacids 2-ketobutyrate and 2-ketovalerate. Conversely, replacing V461 with bulky or charged side chains had severely adverse effects. Increasing the supply of acetyl-CoA, the unit of iterative addition, by acetate feeding further drove 2-ketoacid flux into the elongation cycle and enhanced 1-pentanol productivity. The Kivd V461G variant enabled a 1-pentanol production specificity of around 90% of the total alcohol content with or without oleyl alcohol extraction. This work adds insight into the selectivity of the Kivd active site.

  12. Development of an Integrated Waste Plan for Chalk River Laboratories - 13376

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, L.

    2013-07-01

    To further its Strategic Planning, Atomic Energy of Canada Limited (AECL) required an effective approach to developing a fully integrated waste plan for its Chalk River Laboratories (CRL) site. Production of the first Integrated Waste Plan (IWP) for Chalk River was a substantial task involving representatives from each of the major internal stakeholders. Since then, a second revision has been produced and a third is underway. The IWP remains an Interim IWP until all gaps have been resolved and all pathways are at an acceptable level of detail. Full completion will involve a number of iterations, typically annually for up to six years. The end result of completing this process is a comprehensive document and supporting information that includes: - An Integrated Waste Plan document summarizing the entire waste management picture in one place; - Details of all the wastes required to be managed, including volume and timings by waste stream; - Detailed waste stream pathway maps for the whole life-cycle for each waste stream to be managed, from pre-generation planning through to final disposition; and - Critical decision points, i.e. decisions that need to be made and the timings by when they need to be made. A waste inventory has been constructed that serves as the master reference inventory of all waste that has been or is committed to be managed at CRL. In the past, only the waste in storage had been effectively captured, and future predictions of wastes requiring management were not available in one place. The IWP has also provided a detailed baseline plan at the current level of refinement. Waste flow maps for all identified waste streams, covering the full waste life cycle through to disposition, have been constructed. The maps identify areas requiring further development, and show the complexities and inter-relationships between waste streams.
Knowledge of these inter-dependencies is necessary in order to perform effective options studies for enabling facilities that may be necessary for multiple related waste streams. The next step is to engage external stakeholders in the optioneering work required to provide enhanced confidence that the path forward identified within future iterations of the IWP will be acceptable to all. (authors)

  13. Design of Chemistry Teacher Education Course on Nature of Science

    ERIC Educational Resources Information Center

    Vesterinen, Veli-Matti; Aksela, Maija

    2013-01-01

    To enhance students' understanding of nature of science (NOS), teachers need adequate pedagogical content knowledge related to NOS. The educational design research study presented here describes the design and development of a pre-service chemistry teacher education course on NOS instruction. The study documents two iterative cycles of…

  14. Developing Preservice Elementary Teachers' Knowledge and Practices through Modeling-Centered Scientific Inquiry

    ERIC Educational Resources Information Center

    Schwarz, Christina

    2009-01-01

    Preservice elementary teachers face many challenges in learning how to teach science effectively, such as engaging students in science, organizing instruction, and developing a productive learning community. This paper reports on several iterative cycles of design-based research aimed at fostering preservice teachers' principled reasoning around…

  15. Cultivating Critical Mindsets in the Digital Information Age: Teaching Meaningful Web Evaluation

    ERIC Educational Resources Information Center

    Johnson, Angela Kwasnik

    2017-01-01

    This dissertation examines the use of dialogic discussion to improve young adolescents' ability to critically evaluate web sites. An intervention unit comprised three iterations of an instructional cycle in which students independently annotated web sites about controversial issues and discussed the reliability of those sites in dialogic…

  16. Classroom Strategies to Make Sense and Persevere

    ERIC Educational Resources Information Center

    Wilburne, Jane M.; Wildmann, Tara; Morret, Michael; Stipanovic, Julie

    2014-01-01

    Three mid-level mathematics teachers (grades 7 and 8) and a university mathematics educator formed a year-long professional learning community. The objective was to collectively look at how they were promoting the Standards for Mathematical Practice (SMP) (CCSSI 2010) in their classes. The monthly discussions followed an iterative cycle in which…

  17. Safe Surgery Trainer Project Management Plan (PMP), Version 1.0

    DTIC Science & Technology

    2014-05-30

    Methodology including SCRUM (see http://en.wikipedia.org/wiki/Scrum_(management) for more info). Although this...Agile method similar to Scrum. The internal development team works on a minor iteration cycle that begins/ends on Wednesday. At the beginning of

  18. Developing an Action Concept Inventory

    ERIC Educational Resources Information Center

    McGinness, Lachlan P.; Savage, C. M.

    2016-01-01

    We report on progress towards the development of an Action Concept Inventory (ACI), a test that measures student understanding of action principles in introductory mechanics and optics. The ACI also covers key concepts of many-paths quantum mechanics, from which classical action physics arises. We used a multistage iterative development cycle for…

  19. Theoretical analysis for the mechanical behavior caused by an electromagnetic cycle in ITER Nb3Sn cable-in-conduit conductors

    NASA Astrophysics Data System (ADS)

    Yue, Donghua; Zhang, Xingyi; Zhou, You-He

    2018-02-01

    The central solenoid (CS) is one of the key components of the International Thermonuclear Experimental Reactor (ITER) tokamak and is often considered the heart of this fusion reactor. The solenoid will be built using Nb3Sn cable-in-conduit conductors (CICC) capable of generating a 13 T magnetic field. In order to assess the performance of the Nb3Sn CICC under nearly ITER conditions, many short samples have been evaluated at the SULTAN test facility (background magnetic field of 10.85 T, with a uniform length of 400 mm at 1% homogeneity) at the Centre de Recherches en Physique des Plasmas (CRPP). It is found that the samples with pseudo-long twist pitch (including baseline specimens) show a significant degradation in the current-sharing temperature (Tcs), while the qualification tests of all short twist pitch (STP) samples show no degradation under electromagnetic cycling and even exhibit an increase of Tcs. This behavior was perfectly reproduced in the coil experiments at the central solenoid model coil (CSMC) facility last year. In this paper, the complex structure of the Nb3Sn CICC is simplified into a wire rope consisting of six petals and a cooling spiral. An analytical formula for the Tcs behavior as a function of the axial strain of the cable is presented. Based on this, the effects of twist pitch, axial and transverse stiffness, thermal mismatch, cycling number, magnetic distribution, etc., on the axial strain are discussed systematically. The calculated Tcs behavior with cycle number shows consistency with the previous experimental results both qualitatively and quantitatively. Lastly, we focus on the relationship between Tcs and the axial strain of the cable, and conclude that the Tcs behavior caused by electromagnetic cycles is determined by the cable's axial strain. Once the cable is in compression, the compressive strain and its accumulation lead to Tcs degradation. 
The experimental observation of the Tcs enhancement in the CS STP samples should be considered as a contribution of the shorter length of the high field zone in SULTAN and CSMC devices, as well as the tight cable structure.

  20. From synthesis to function via iterative assembly of N-methyliminodiacetic acid boronate building blocks.

    PubMed

    Li, Junqi; Grillo, Anthony S; Burke, Martin D

    2015-08-18

    The study and optimization of small molecule function is often impeded by the time-intensive and specialist-dependent process that is typically used to make such compounds. In contrast, general and automated platforms have been developed for making peptides, oligonucleotides, and increasingly oligosaccharides, where synthesis is simplified to iterative applications of the same reactions. Inspired by the way natural products are biosynthesized via the iterative assembly of a defined set of building blocks, we developed a platform for small molecule synthesis involving the iterative coupling of haloboronic acids protected as the corresponding N-methyliminodiacetic acid (MIDA) boronates. Here we summarize our efforts thus far to develop this platform into a generalized and automated approach for small molecule synthesis. We and others have employed this approach to access many polyene-based compounds, including the polyene motifs found in >75% of all polyene natural products. This platform further allowed us to derivatize amphotericin B, the powerful and resistance-evasive but also highly toxic last line of defense in treating systemic fungal infections, and thereby understand its mechanism of action. This synthesis-enabled mechanistic understanding has led us to develop less toxic derivatives currently under evaluation as improved antifungal agents. To access more Csp(3)-containing small molecules, we gained a stereocontrolled entry into chiral, non-racemic α-boryl aldehydes through the discovery of a chiral derivative of MIDA. These α-boryl aldehydes are versatile intermediates for the synthesis of many Csp(3) boronate building blocks that are otherwise difficult to access. In addition, we demonstrated the utility of these types of building blocks in accessing pharmaceutically relevant targets via an iterative Csp(3) cross-coupling cycle. 
We have further expanded the scope of the platform to include stereochemically complex macrocyclic and polycyclic molecules using a linear-to-cyclized strategy, in which Csp(3) boronate building blocks are iteratively assembled into linear precursors that are then cyclized into the cyclic frameworks found in many natural products and natural product-like structures. Enabled by the serendipitous discovery of a catch-and-release protocol for generally purifying MIDA boronate intermediates, the platform has been automated. The synthesis of 14 distinct classes of small molecules, including pharmaceuticals, materials components, and polycyclic natural products, has been achieved using this new synthesis machine. It is anticipated that the scope of small molecules accessible by this platform will continue to expand via further developments in building block synthesis, Csp(3) cross-coupling methodologies, and cyclization strategies. Achieving these goals will enable the more generalized synthesis of small molecules and thereby help shift the rate-limiting step in small molecule science from synthesis to function.

  1. Nested Krylov methods and preserving the orthogonality

    NASA Technical Reports Server (NTRS)

    Desturler, Eric; Fokkema, Diederik R.

    1993-01-01

    Recently the GMRESR inner-outer iteration scheme for the solution of linear systems of equations was proposed by Van der Vorst and Vuik. Similar methods have been proposed by Axelsson and Vassilevski and by Saad (FGMRES). The outer iteration is GCR, which minimizes the residual over a given set of direction vectors. The inner iteration is GMRES, which at each step computes a new direction vector by approximately solving the residual equation. However, the optimality of the approximation over the space of outer search directions is ignored in the inner GMRES iteration. This leads to suboptimal corrections to the solution in the outer iteration, as components of the outer iteration directions may reenter the inner iteration process. Therefore we propose to preserve the orthogonality relations of GCR in the inner GMRES iteration. This gives optimal corrections; however, it involves working with a singular, non-symmetric operator. We discuss some important properties, and we show by experiments that, in terms of matrix-vector products, this modification (almost) always leads to better convergence. However, because we do more orthogonalizations, it does not always give improved performance in CPU time. Furthermore, we discuss efficient implementations as well as the truncation possibilities of the outer GCR process. The experimental results indicate that for such methods it is advantageous to preserve the orthogonality in the inner iteration. Of course we can also use iteration schemes other than GMRES as the inner method; methods with short recurrences like BiCGSTAB are of interest.

  2. Evaluating the iterative development of VR/AR human factors tools for manual work.

    PubMed

    Liston, Paul M; Kay, Alison; Cromie, Sam; Leva, Chiara; D'Cruz, Mirabelle; Patel, Harshada; Langley, Alyson; Sharples, Sarah; Aromaa, Susanna

    2012-01-01

    This paper outlines the approach taken to iteratively evaluate a set of VR/AR (virtual reality / augmented reality) applications for five different manual-work domains - terrestrial spacecraft assembly, assembly-line design, remote maintenance of trains, maintenance of nuclear reactors, and large-machine assembly process design - and examines the evaluation data for evidence of the effectiveness of the evaluation framework as well as the benefits to the development process of feedback from iterative evaluation. ManuVAR is an EU-funded research project working to develop an innovative technology platform and a framework to support high-value, high-knowledge manual work throughout the product lifecycle. The results of this study demonstrate the iterative improvements achieved throughout the design cycles, observable through the trending of the quantitative results from three successive trials of the applications and the investigation of the qualitative interview findings. The paper discusses the limitations of evaluation in complex, multi-disciplinary development projects and finds evidence of the effectiveness of the particular set of complementary evaluation methods, incorporating a common inquiry structure, used for the evaluation - particularly in facilitating triangulation of the data.

  3. Convergence of an iterative procedure for large-scale static analysis of structural components

    NASA Technical Reports Server (NTRS)

    Austin, F.; Ojalvo, I. U.

    1976-01-01

    The paper proves convergence of an iterative procedure for calculating the deflections of built-up component structures that can be represented as a dominant, relatively stiff primary structure and a less stiff secondary structure; the latter may be composed of one or more substructures that are not connected to one another but are all connected to the primary structure. The iteration begins by estimating the deformation of the primary structure in the absence of the secondary structure, on the assumption that all mechanical loads are applied directly to the primary structure. The j-th iterate of the primary-structure deflections at the interface is imposed on the secondary structure, and the boundary loads required to produce these deflections are computed. The cycle is completed by applying the interface reaction to the primary structure and computing its updated deflections. It is shown that the mathematical condition for convergence of this procedure is that the maximum eigenvalue of the equation relating primary-structure deflection to imposed secondary-structure deflection be less than unity, which corresponds to the physical requirement that the secondary structure be more flexible at the interface boundary.
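    The convergence argument can be illustrated with a small fixed-point sketch, where a hypothetical matrix G stands in for the operator relating primary-structure deflection to the imposed secondary-structure deflection:

```python
import numpy as np

def coupled_deflection_iteration(G, u0, tol=1e-12, max_iter=1000):
    """Iterate u_{j+1} = u0 + G @ u_j; this converges exactly when the
    largest eigenvalue magnitude (spectral radius) of G is below unity."""
    u = u0.copy()
    for _ in range(max_iter):
        u_next = u0 + G @ u
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u = u_next
    raise RuntimeError("no convergence: spectral radius of G likely >= 1")
```

    For a contractive G the iterates approach (I - G)^{-1} u0, the coupled solution; a spectral radius at or above unity makes the updates grow instead.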

  4. Radioactivity measurements of ITER materials using the TFTR D-T neutron field

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Abdou, M. A.; Barnes, C. W.; Kugel, H. W.

    1994-06-01

    The availability of high D-T fusion neutron yields at TFTR has provided a useful opportunity to directly measure D-T neutron-induced radioactivity in a realistic tokamak fusion reactor environment for materials of vital interest to ITER. These measurements are valuable for characterizing radioactivity in various ITER candidate materials, for validating complex neutron transport calculations, and for meeting fusion reactor licensing requirements. The radioactivity measurements at TFTR involve potential ITER materials including stainless steel 316, vanadium, titanium, chromium, silicon, iron, cobalt, nickel, molybdenum, aluminum, copper, zinc, zirconium, niobium, and tungsten. Small samples of these materials were irradiated close to the plasma and just outside the vacuum vessel wall of TFTR, locations of different neutron energy spectra. Saturation activities for both threshold and capture reactions were measured. Data from dosimetric reactions have been used to obtain preliminary neutron energy spectra. Spectra from the first wall were compared to calculations from ITER and to measurements from accelerator-based tests.

  5. Using Minimum-Surface Bodies for Iteration Space Partitioning

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A number of known techniques for improving cache performance in scientific computations involve reordering the iteration space. Some of these reorderings can be viewed as coverings of the iteration space by sets with a good surface-to-volume ratio. Using such sets reduces the number of cache misses in computations of local operators whose domain is the iteration space. We study coverings of iteration spaces represented by structured and unstructured grids. For structured grids we introduce a covering based on successive minima tiles of the interference lattice of the grid. We show that this covering has a good surface-to-volume ratio, and we present a computer experiment showing the actual reduction in cache misses achieved by using these tiles. For unstructured grids no cache-efficient covering can be guaranteed. We present a triangulation of a 3-dimensional cube such that any local operator on the corresponding grid incurs a significantly larger number of cache misses than a similar operator on a structured grid.
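    The cache benefit of compact coverings can be illustrated with ordinary 2-D loop tiling, a simpler relative of the interference-lattice tiles studied in the paper (the tile size and 5-point stencil are illustrative):

```python
def tiled_laplacian(a, tile=32):
    """Apply a 5-point Laplacian over a 2-D grid, visiting the interior
    in tile-by-tile order so each tile's data stays cache-resident."""
    n, m = len(a), len(a[0])
    out = [[0.0] * m for _ in range(n)]
    for i0 in range(1, n - 1, tile):
        for j0 in range(1, m - 1, tile):
            for i in range(i0, min(i0 + tile, n - 1)):
                for j in range(j0, min(j0 + tile, m - 1)):
                    out[i][j] = (a[i-1][j] + a[i+1][j] +
                                 a[i][j-1] + a[i][j+1] - 4.0 * a[i][j])
    return out
```

    Each tile touches roughly tile² interior points but only O(tile) points of neighboring tiles - the surface-to-volume ratio that governs the cache-miss count.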

  6. Decision support for water quality management of contaminants of emerging concern.

    PubMed

    Fischer, Astrid; Ter Laak, Thomas; Bronders, Jan; Desmet, Nele; Christoffels, Ekkehard; van Wezel, Annemarie; van der Hoek, Jan Peter

    2017-05-15

    Water authorities and drinking water companies are challenged with the question of whether, where and how to abate contaminants of emerging concern in the urban water cycle. The most effective strategy under given conditions is often unclear to these stakeholders, as it requires insight into several aspects of the contaminants, such as their sources, properties and mitigation options. Furthermore, the various parties in the urban water cycle are not always aware of each other's requirements and priorities. Processes to set priorities and come to agreements are lacking, hampering the articulation and implementation of possible solutions. To support decision makers with this task, a decision support system was developed to serve as a point of departure for bringing the relevant stakeholders together and finding common ground. The decision support system was developed iteratively in stages. Stakeholders were interviewed and a decision support system prototype was developed. Subsequently, this prototype was evaluated by the stakeholders and adjusted accordingly. The iterative process led to a final system focused on the management of contaminants of emerging concern within the urban water cycle - from wastewater, surface water and groundwater to drinking water - that suggests mitigation methods beyond technical solutions. Possible wastewater and drinking water treatment techniques, in combination with decentralised and non-technical methods, were taken into account in an integrated way. The system contains background information on contaminants of emerging concern, such as physical/chemical characteristics, toxicity, legislative frameworks and water cycle entrance pathways, together with a database of associated possible mitigation methods. Monitoring data can be uploaded to assess environmental and human health risks in a specific water system. The developed system was received with great interest by potential users and implemented in an international water cycle network.

  7. Transport synthetic acceleration with opposing reflecting boundary conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zika, M.R.; Adams, M.L.

    2000-02-01

    The transport synthetic acceleration (TSA) scheme is extended to problems with opposing reflecting boundary conditions. This synthetic method employs a simplified transport operator as its low-order approximation. A procedure is developed that allows the use of the conjugate gradient (CG) method to solve the resulting low-order system of equations. Several well-known transport iteration algorithms are cast in a linear algebraic form to show their equivalence to standard iterative techniques. Source iteration in the presence of opposing reflecting boundary conditions is shown to be equivalent to a (poorly) preconditioned stationary Richardson iteration, with the preconditioner defined by the method of iterating on the incident fluxes on the reflecting boundaries. The TSA method (and any synthetic method) amounts to a further preconditioning of the Richardson iteration. The presence of opposing reflecting boundary conditions requires special consideration when developing a procedure to realize the CG method for the proposed system of equations. The CG iteration may be applied only to symmetric positive definite matrices; this condition requires the algebraic elimination of the boundary angular corrections from the low-order equations. As a consequence of this elimination, evaluating the action of the resulting matrix on an arbitrary vector involves two transport sweeps and a transmission iteration. Results of applying the acceleration scheme to a simple test problem are presented.
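    The linear-algebraic view of source iteration can be sketched generically; here the callback standing in for M⁻¹ is a placeholder for whatever low-order (synthetic) operator is chosen as the preconditioner:

```python
import numpy as np

def preconditioned_richardson(A, b, apply_M_inv, tol=1e-10, max_iter=500):
    """Stationary Richardson iteration x <- x + M^{-1}(b - A x).
    A better preconditioner M (e.g. a synthetic low-order operator)
    shrinks the spectral radius of I - M^{-1} A and speeds convergence."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        x = x + apply_M_inv(r)
    return x
```

    A poor preconditioner, as with plain source iteration, leaves I - M⁻¹A close to the identity and convergence slow; TSA's low-order transport operator is precisely a cheaper M that pushes that spectral radius down.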

  8. Adaptation of Organisms by Resonance of RNA Transcription with the Cellular Redox Cycle

    NASA Technical Reports Server (NTRS)

    Stolc, Viktor

    2012-01-01

    Sequence variation in organisms differs across the genome, and the majority of mutations are caused by oxidation, yet its origin is not fully understood. It has also been shown that the reduction-oxidation reaction cycle is the fundamental biochemical cycle coordinating the timing of all biochemical processes in the cell, including energy production, DNA replication, and RNA transcription. It is shown that the temporal resonance of transcriptome biosynthesis with the oscillating binary state of the reduction-oxidation reaction cycle serves as a basis for non-random sequence variation at specific genome-wide coordinates that change faster than by accumulation of chance mutations. This work demonstrates evidence for a universal, persistent and iterative feedback mechanism between the environment and heredity, whereby acquired variation between cell divisions can outweigh inherited variation.

  9. A sparse matrix algorithm on the Boolean vector machine

    NASA Technical Reports Server (NTRS)

    Wagner, Robert A.; Patrick, Merrell L.

    1988-01-01

    VLSI technology is being used to implement a prototype Boolean Vector Machine (BVM), a large network of very small processors with equally small memories that operate in SIMD mode, use bit-serial arithmetic, and communicate via a cube-connected-cycles network. The BVM's bit-serial arithmetic and the small memories of the individual processors are noted to compromise the system's effectiveness in large numerical problem applications. Attention is presently given to the implementation of a basic matrix-vector iteration algorithm for sparse matrices on the BVM, in order to generate over 1 billion useful floating-point operations/sec for this iteration algorithm. The algorithm is expressed in a novel language designated 'BVM'.

  10. Two-Level Hierarchical FEM Method for Modeling Passive Microwave Devices

    NASA Astrophysics Data System (ADS)

    Polstyanko, Sergey V.; Lee, Jin-Fa

    1998-03-01

    In recent years multigrid methods have been proven to be very efficient for solving large systems of linear equations resulting from the discretization of positive definite differential equations by either the finite difference method or the h-version of the finite element method. In this paper an iterative method of the multiple level type is proposed for solving systems of algebraic equations which arise from the p-version of the finite element analysis applied to indefinite problems. A two-level V-cycle algorithm has been implemented and studied with a Gauss-Seidel iterative scheme used as a smoother. The convergence of the method has been investigated, and numerical results for a number of numerical examples are presented.
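    A two-level V-cycle of the kind studied can be sketched on a model problem; the prolongation matrix P and the exact coarse-grid solve below are simplifying assumptions, not the paper's p-version hierarchy:

```python
import numpy as np

def gauss_seidel(A, b, x, sweeps=3):
    """A few Gauss-Seidel sweeps, used as the smoother."""
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

def two_level_cycle(A, b, x, P):
    """One two-level V-cycle: pre-smooth, coarse-grid correction with the
    Galerkin coarse operator P^T A P (solved exactly here), post-smooth."""
    x = gauss_seidel(A, b, x)
    r = b - A @ x
    Ac = P.T @ A @ P
    x = x + P @ np.linalg.solve(Ac, P.T @ r)
    return gauss_seidel(A, b, x)
```

    Repeating the cycle drives the residual down rapidly for, e.g., a 1-D Poisson matrix with linear interpolation as P, because the smoother damps oscillatory error while the coarse correction removes the smooth components.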

  11. Uzawa-type iterative method with parareal preconditioner for a parabolic optimal control problem

    NASA Astrophysics Data System (ADS)

    Lapin, A.; Romanenko, A.

    2016-11-01

    The article deals with an optimal control problem with a parabolic equation as the state problem. There are point-wise constraints on the state and control functions. The objective functional involves an observation given in the domain at each moment in time. Conditions for the convergence of the Uzawa-type iterative method are given. A parareal method is used to apply the inverse of the preconditioner. The results of calculations are presented.
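    The structure of such Uzawa-type iterations can be sketched on a generic saddle-point system; the operators below are placeholders, and the actual method accelerates the solves with a parareal preconditioner:

```python
import numpy as np

def uzawa(A, B, f, g, tau=1.0, tol=1e-10, max_iter=2000):
    """Uzawa iteration for the saddle-point system
        A x + B^T lam = f,   B x = g.
    Each step solves the state equation for x, then takes a gradient
    step on the multiplier lam to enforce the constraint."""
    lam = np.zeros(B.shape[0])
    x = np.zeros(A.shape[0])
    for _ in range(max_iter):
        x = np.linalg.solve(A, f - B.T @ lam)   # state solve
        lam_new = lam + tau * (B @ x - g)       # multiplier update
        if np.linalg.norm(lam_new - lam) < tol:
            return x, lam_new
        lam = lam_new
    return x, lam
```

    The multiplier update is a gradient step on the Schur complement; it converges for step sizes 0 < tau < 2 / ||B A^{-1} B^T||.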

  12. Developing a Benchmark Tool for Sustainable Consumption: An Iterative Process

    ERIC Educational Resources Information Center

    Heiskanen, E.; Timonen, P.; Nissinen, A.; Gronroos, J.; Honkanen, A.; Katajajuuri, J. -M.; Kettunen, J.; Kurppa, S.; Makinen, T.; Seppala, J.; Silvenius, F.; Virtanen, Y.; Voutilainen, P.

    2007-01-01

    This article presents the development process of a consumer-oriented, illustrative benchmarking tool enabling consumers to use the results of environmental life cycle assessment (LCA) to make informed decisions. LCA provides a wealth of information on the environmental impacts of products, but its results are very difficult to present concisely…

  13. Agile informatics: application of agile project management to the development of a personal health application.

    PubMed

    Chung, Jeanhee; Pankey, Evan; Norris, Ryan J

    2007-10-11

    We describe the application of the Agile method - a short-iteration-cycle, user-responsive, measurable software development approach - to the project management of a modular personal health record, iHealthSpace, to be deployed to the patients and providers of a large academic primary care practice.

  14. Three-dimensional focus of attention for iterative cone-beam micro-CT reconstruction

    NASA Astrophysics Data System (ADS)

    Benson, T. M.; Gregor, J.

    2006-09-01

    Three-dimensional iterative reconstruction of high-resolution, circular orbit cone-beam x-ray CT data is often considered impractical due to the demand for vast amounts of computer cycles and associated memory. In this paper, we show that the computational burden can be reduced by limiting the reconstruction to a small, well-defined portion of the image volume. We first discuss using the support region defined by the set of voxels covered by all of the projection views. We then present a data-driven preprocessing technique called focus of attention that heuristically separates both image and projection data into object and background before reconstruction, thereby further reducing the reconstruction region of interest. We present experimental results for both methods based on mouse data and a parallelized implementation of the SIRT algorithm. The computational savings associated with the support region are substantial. However, the results for focus of attention are even more impressive in that only about one quarter of the computer cycles and memory are needed compared with reconstruction of the entire image volume. The image quality is not compromised by either method.

  15. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE PAGES

    Grout, Ray; Kolla, Hemanth; Minion, Michael; ...

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
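    The stopping logic of the proposed resilience strategy, iterating until the residual is small relative to the first sweep's residual and changing slowly, can be sketched generically; `sweep` and `residual` are placeholders for the SDC correction sweep and its residual norm:

```python
def resilient_sweeps(sweep, residual, u, max_sweeps=50,
                     rel_tol=1e-10, stall_ratio=0.9):
    """Keep applying correction sweeps until the residual is tiny
    relative to the first sweep's residual, or stops improving."""
    u = sweep(u)
    r_first = r_prev = residual(u)
    for _ in range(max_sweeps - 1):
        u = sweep(u)
        r = residual(u)
        converged = r <= rel_tol * r_first
        stalled = r >= stall_ratio * r_prev   # changing slowly
        if converged or stalled:
            break
        r_prev = r
    return u
```

    A soft fault inflates the residual, so the `converged` test fails and extra sweeps are taken automatically until the normal contraction resumes - the "extra correction iterations" of the strategy.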

  16. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray; Kolla, Hemanth; Minion, Michael

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  18. How can students contribute? A qualitative study of active student involvement in development of technological learning material for clinical skills training.

    PubMed

    Haraldseid, Cecilie; Friberg, Febe; Aase, Karina

    2016-01-01

    Policy initiatives and a growing body of literature within higher education both call for students to become more involved in creating their own learning. However, there is a lack of studies in undergraduate nursing education that actively involve students in developing such learning material, with descriptions of the students' roles in these interactive processes. This was an explorative qualitative study using data from focus group interviews, field notes and student notes; the data were subjected to qualitative content analysis. Active student involvement through an iterative process identified five learning needs that are especially important to the students: clarification of learning expectations, help to recognize the bigger picture, stimulation of interaction, creation of structure, and provision of context-specific content. Involving students iteratively during the development of new technological learning material enhances the identification of learning needs that matter to them. The use of student and teacher knowledge through an adapted co-design process is the optimal level of that involvement.

  19. Efficiency Analysis of the Parallel Implementation of the SIMPLE Algorithm on Multiprocessor Computers

    NASA Astrophysics Data System (ADS)

    Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.

    2017-12-01

    This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.

  20. Random mutagenesis by error-prone pol plasmid replication in Escherichia coli.

    PubMed

    Alexander, David L; Lilly, Joshua; Hernandez, Jaime; Romsdahl, Jillian; Troll, Christopher J; Camps, Manel

    2014-01-01

    Directed evolution is an approach that mimics natural evolution in the laboratory with the goal of modifying existing enzymatic activities or of generating new ones. The identification of mutants with desired properties involves the generation of genetic diversity coupled with a functional selection or screen. Genetic diversity can be generated using PCR or using in vivo methods such as chemical mutagenesis or error-prone replication of the desired sequence in a mutator strain. In vivo mutagenesis methods facilitate iterative selection because they do not require cloning, but generally produce a low mutation density with mutations not restricted to specific genes or areas within a gene. For this reason, this approach is typically used to generate new biochemical properties when large numbers of mutants can be screened or selected. Here we describe protocols for an advanced in vivo mutagenesis method that is based on error-prone replication of a ColE1 plasmid bearing the gene of interest. Compared to other in vivo mutagenesis methods, this plasmid-targeted approach allows increased mutation loads and facilitates iterative selection approaches. We also describe the mutation spectrum for this mutagenesis methodology in detail, and, using cycle 3 GFP as a target for mutagenesis, we illustrate the phenotypic diversity that can be generated using our method. In sum, error-prone Pol I replication is a mutagenesis method that is ideally suited for the evolution of new biochemical activities when a functional selection is available.

  1. Genome-to-Watershed Predictive Understanding of Terrestrial Environments

    NASA Astrophysics Data System (ADS)

    Hubbard, S. S.; Agarwal, D.; Banfield, J. F.; Beller, H. R.; Brodie, E.; Long, P.; Nico, P. S.; Steefel, C. I.; Tokunaga, T. K.; Williams, K. H.

    2014-12-01

    Although terrestrial environments play a critical role in cycling water, greenhouse gases, and other life-critical elements, the complexity of interactions among component microbes, plants, minerals, migrating fluids and dissolved constituents hinders predictive understanding of system behavior. The 'Sustainable Systems 2.0' project is developing genome-to-watershed-scale predictive capabilities to quantify how the microbiome affects biogeochemical watershed functioning, how watershed-scale hydro-biogeochemical processes affect microbial functioning, and how these interactions co-evolve with climate and land-use changes. Development of such predictive capabilities is critical for guiding the optimal management of water resources, contaminant remediation, carbon stabilization, and agricultural sustainability - now and under global change. Initial investigations are focused on floodplains in the Colorado River Basin, and include iterative model development, experiments and observations with an early emphasis on subsurface aspects. Field experiments include local-scale experiments at Rifle, CO, to quantify spatiotemporal metabolic and geochemical responses to O2 and nitrate amendments, as well as floodplain-scale monitoring to quantify genomic and biogeochemical responses to natural hydrological perturbations. Information obtained from such experiments is represented within GEWaSC, a Genome-Enabled Watershed Simulation Capability, which is being developed to allow mechanistic interrogation of how genomic information stored in a subsurface microbiome affects biogeochemical cycling. This presentation will describe the genome-to-watershed-scale approach as well as early highlights of the project.
Highlights include: first insights into the diversity of the subsurface microbiome and the metabolic roles of organisms involved in subsurface nitrogen, sulfur, hydrogen and carbon cycling; the extreme variability of subsurface DOC and hydrological controls on carbon and nitrogen cycling; geophysical identification of floodplain hotspots that are useful for model parameterization; and a GEWaSC demonstration of how incorporating identified microbial metabolic processes improves prediction of larger-system biogeochemical behavior.

  2. Comparison between iteration schemes for three-dimensional coordinate-transformed saturated-unsaturated flow model

    NASA Astrophysics Data System (ADS)

    An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu

    2012-11-01

    Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient on an individual-iteration basis, but it converges more slowly than the Newton method. The Newton method converges faster, but it is more complex to implement and consumes more CPU resources per iteration than the Picard method. The comparison of the two methods in finite-element models (FEM) for saturated-unsaturated flow has been well evaluated in previous studies. However, the two iteration methods might exhibit different behavior in a coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM, because the Newton method there requires the evaluation of a 19-point stencil matrix, and the formation of a 19-point stencil is quite a complex and laborious procedure. Instead, the Newton-Krylov method calculates the matrix-vector product, which can be easily approximated by differencing the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for the coordinate-transformed FDM; however, it involves the additional cost of one such approximation at each Krylov iteration. In this paper, we evaluated the efficiency and robustness of three iteration methods - the Picard, Newton, and Newton-Krylov methods - for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
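    The matrix-free product at the heart of the Newton-Krylov approach, approximating J(u)·v by differencing the nonlinear residual, can be sketched as follows; `F` is a stand-in for the discretized Richards residual function:

```python
import numpy as np

def jacobian_vector_product(F, u, v, eps=1e-7):
    """Matrix-free approximation J(u) @ v ~= (F(u + eps*v) - F(u)) / eps,
    avoiding explicit assembly of the 19-point stencil matrix."""
    return (F(u + eps * v) - F(u)) / eps
```

    Each Krylov iteration then costs one extra evaluation of F, which is exactly the additional-approximation cost noted above.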

  3. New design of cable-in-conduit conductor for application in future fusion reactors

    NASA Astrophysics Data System (ADS)

    Qin, Jinggang; Wu, Yu; Li, Jiangang; Liu, Fang; Dai, Chao; Shi, Yi; Liu, Huajun; Mao, Zhehua; Nijhuis, Arend; Zhou, Chao; Yagotintsev, Konstantin A.; Lubkemann, Ruben; Anvar, V. A.; Devred, Arnaud

    2017-11-01

    The China Fusion Engineering Test Reactor (CFETR) is a new tokamak device whose magnet system includes toroidal field, central solenoid (CS) and poloidal field coils. The main goal is to build a fusion engineering tokamak reactor with about 1 GW of fusion power and self-sufficiency via the blanket. In order to reach this high performance, the magnetic field target is 15 T. However, the huge electromagnetic load caused by the high field and current poses a risk of conductor degradation under cycling. A conductor with a short-twist-pitch (STP) design has large stiffness, which enables a significant performance improvement under load and thermal cycling. But the STP design has a notable disadvantage: it can easily cause severe strand indentation during cabling. The indentation can reduce the strand performance, especially under high-load cycling. In order to overcome this disadvantage, a new design is proposed. Its main characteristic is an updated layout in the triplet. The triplet is made of two Nb3Sn strands and one soft copper strand. The two Nb3Sn strands have a long twist pitch and are cabled first; the copper strand is then wound around the two superconducting strands (CWS) with a shorter twist pitch. The subsequent cable-stage layouts and twist pitches are similar to those of the ITER CS conductor with the STP design. One short conductor sample, of a scale similar to the ITER CS, was manufactured and tested with the Twente Cable Press to investigate the mechanical properties and AC loss, with internal inspection by destructive examination. The results are compared to the STP conductor (ITER CS and CFETR CSMC) tests. They show that the new conductor design has similar stiffness but much lower strand indentation than the STP design, and therefore potential for application in future fusion reactors.

  4. PREFACE: Progress in the ITER Physics Basis

    NASA Astrophysics Data System (ADS)

    Ikeda, K.

    2007-06-01

    I would firstly like to congratulate all who have contributed to the preparation of the `Progress in the ITER Physics Basis' (PIPB) on its publication and express my deep appreciation of the hard work and commitment of the many scientists involved. With the signing of the ITER Joint Implementing Agreement in November 2006, the ITER Members have now established the framework for construction of the project, and the ITER Organization has begun work at Cadarache. The review of recent progress in the physics basis for burning plasma experiments encompassed by the PIPB will be a valuable resource for the project and, in particular, for the current Design Review. The ITER design has been derived from a physics basis developed through experimental, modelling and theoretical work on the properties of tokamak plasmas and, in particular, on studies of burning plasma physics. The `ITER Physics Basis' (IPB), published in 1999, has been the reference for the projection methodologies for the design of ITER, but the IPB also highlighted several key issues which needed to be resolved to provide a robust basis for ITER operation. In the intervening period scientists of the ITER Participant Teams have addressed these issues intensively. The International Tokamak Physics Activity (ITPA) has provided an excellent forum for scientists involved in these studies, focusing their work on the high priority physics issues for ITER. Significant progress has been made in many of the issues identified in the IPB and this progress is discussed in depth in the PIPB. In this respect, the publication of the PIPB symbolizes the strong interest and enthusiasm of the plasma physics community for the success of the ITER project, which we all recognize as one of the great scientific challenges of the 21st century. I wish to emphasize my appreciation of the work of the ITPA Coordinating Committee members, who are listed below. 
Their support and encouragement for the preparation of the PIPB were fundamental to its completion. I am pleased to witness the extensive collaborations, the excellent working relationships and the free exchange of views that have been developed among scientists working on magnetic fusion, and I would particularly like to acknowledge the importance which they assign to ITER in their research. This close collaboration and the spirit of free discussion will be essential to the success of ITER. Finally, the PIPB identifies issues which remain in the projection of burning plasma performance to the ITER scale and in the control of burning plasmas. Continued R&D is therefore called for to reduce the uncertainties associated with these issues and to ensure the efficient operation and exploitation of ITER. It is important that the international fusion community maintains a high level of collaboration in the future to address these issues and to prepare the physics basis for ITER operation. ITPA Coordination Committee R. Stambaugh (Chair of ITPA CC, General Atomics, USA) D.J. Campbell (Previous Chair of ITPA CC, European Fusion Development Agreement—Close Support Unit, ITER Organization) M. Shimada (Co-Chair of ITPA CC, ITER Organization) R. Aymar (ITER International Team, CERN) V. Chuyanov (ITER Organization) J.H. Han (Korea Basic Science Institute, Korea) Y. Huo (Zhengzhou University, China) Y.S. Hwang (Seoul National University, Korea) N. Ivanov (Kurchatov Institute, Russia) Y. Kamada (Japan Atomic Energy Agency, Naka, Japan) P.K. Kaw (Institute for Plasma Research, India) S. Konovalov (Kurchatov Institute, Russia) M. Kwon (National Fusion Research Center, Korea) J. Li (Academy of Science, Institute of Plasma Physics, China) S. Mirnov (TRINITI, Russia) Y. Nakamura (National Institute for Fusion Studies, Japan) H. Ninomiya (Japan Atomic Energy Agency, Naka, Japan) E. Oktay (Department of Energy, USA) J. Pamela (European Fusion Development Agreement—Close Support Unit) C.
Pan (Southwestern Institute of Physics, China) F. Romanelli (Ente per le Nuove tecnologie, l'Energia e l'Ambiente, Italy and European Fusion Development Agreement—Close Support Unit) N. Sauthoff (Princeton Plasma Physics Laboratory, USA and Oak Ridge National Laboratories, USA) Y. Saxena (Institute for Plasma Research, India) Y. Shimomura (ITER Organization) R. Singh (Institute for Plasma Research, India) S. Takamura (Nagoya University, Japan) K. Toi (National Institute for Fusion Studies, Japan) M. Wakatani (Kyoto University, Japan (deceased)) H. Zohm (Max-Planck-Institut für Plasmaphysik, Garching, Germany)

  5. Approximate Solution of Time-Fractional Advection-Dispersion Equation via Fractional Variational Iteration Method

    PubMed Central

    İbiş, Birol

    2014-01-01

This paper aims to obtain the approximate solution of the time-fractional advection-dispersion equation (FADE), involving Jumarie's modification of the Riemann-Liouville derivative, by the fractional variational iteration method (FVIM). FVIM provides an analytical approximate solution in the form of a convergent series. Some examples are given, and the results indicate that the FVIM is highly accurate, efficient, and convenient for solving time FADEs. PMID:24578662
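    A short sketch may help fix ideas. The fractional version is beyond a few lines, but the classical (integer-order) variational iteration method that FVIM generalizes can be demonstrated on the test problem u'(t) = -u(t), u(0) = 1: with Lagrange multiplier λ = -1, the correction functional is u_{n+1}(t) = u_n(t) - ∫₀ᵗ (u_n'(s) + u_n(s)) ds, and each iterate extends the Taylor series of exp(-t) by one term. The polynomial representation below is illustrative and not taken from the paper.

```python
# Classical variational iteration method (VIM) for u'(t) = -u(t), u(0) = 1.
# Polynomials are coefficient lists [c0, c1, ...] representing sum(c_k * t**k).
import math

def deriv(p):                      # d/dt of a polynomial
    return [i * c for i, c in enumerate(p)][1:] or [0.0]

def integ(p):                      # antiderivative with zero constant term
    return [0.0] + [c / (i + 1) for i, c in enumerate(p)]

def add(p, q):
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def evaluate(p, t):
    return sum(c * t ** i for i, c in enumerate(p))

u = [1.0]                          # u_0(t) = 1, the initial condition
for _ in range(10):
    residual = add(deriv(u), u)    # u_n' + u_n
    u = add(u, [-c for c in integ(residual)])

print(evaluate(u, 1.0))            # close to exp(-1) ~ 0.3678794
```

    Ten corrections give the degree-10 Taylor partial sum of exp(-t), matching exp(-1) to about eight digits at t = 1.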

  6. Shift of a limit cycle in biology: From pathological to physiological homeostasia*

    NASA Astrophysics Data System (ADS)

    Claude, Daniel

    1995-03-01

Biological systems may show homeostatic behaviors similar to those of forced dynamic systems with a stable limit cycle. For a large class of dynamic systems, it is shown that a pathological limit cycle can never be shifted onto the physiological limit cycle by means of a control with the desired periodicity. It follows that the only possibility is to reduce the dimensions of the residual limit cycle as much as possible. Moreover, some information can be given about the structure of feedback laws that would allow such a shift. Because a physiological limit cycle generally cannot be recovered from a pathological one, a physiological behavior might seem unattainable and any hope of therapeutics abandoned. This motivates the locking concept, which permits system parameters to change and provides the basis for an adaptive, iterative control that allows a step-by-step approach and finally reaches the physiological limit cycle.

  7. Development of laser-based techniques for in situ characterization of the first wall in ITER and future fusion devices

    NASA Astrophysics Data System (ADS)

    Philipps, V.; Malaquias, A.; Hakola, A.; Karhunen, J.; Maddaluno, G.; Almaviva, S.; Caneve, L.; Colao, F.; Fortuna, E.; Gasior, P.; Kubkowska, M.; Czarnecka, A.; Laan, M.; Lissovski, A.; Paris, P.; van der Meiden, H. J.; Petersson, P.; Rubel, M.; Huber, A.; Zlobinski, M.; Schweer, B.; Gierse, N.; Xiao, Q.; Sergienko, G.

    2013-09-01

    Analysis and understanding of wall erosion, material transport and fuel retention are among the most important tasks for ITER and future devices, since these questions determine largely the lifetime and availability of the fusion reactor. These data are also of extreme value to improve the understanding and validate the models of the in vessel build-up of the T inventory in ITER and future D-T devices. So far, research in these areas is largely supported by post-mortem analysis of wall tiles. However, access to samples will be very much restricted in the next-generation devices (such as ITER, JT-60SA, W7-X, etc) with actively cooled plasma-facing components (PFC) and increasing duty cycle. This has motivated the development of methods to measure the deposition of material and retention of plasma fuel on the walls of fusion devices in situ, without removal of PFC samples. For this purpose, laser-based methods are the most promising candidates. Their feasibility has been assessed in a cooperative undertaking in various European associations under EFDA coordination. Different laser techniques have been explored both under laboratory and tokamak conditions with the emphasis to develop a conceptual design for a laser-based wall diagnostic which is integrated into an ITER port plug, aiming to characterize in situ relevant parts of the inner wall, the upper region of the inner divertor, part of the dome and the upper X-point region.

  8. Iterative near-term ecological forecasting: Needs, opportunities, and challenges

    USGS Publications Warehouse

    Dietze, Michael C.; Fox, Andrew; Beck-Johnson, Lindsay; Betancourt, Julio L.; Hooten, Mevin B.; Jarnevich, Catherine S.; Keitt, Timothy H.; Kenney, Melissa A.; Laney, Christine M.; Larsen, Laurel G.; Loescher, Henry W.; Lunch, Claire K.; Pijanowski, Bryan; Randerson, James T.; Read, Emily; Tredennick, Andrew T.; Vargas, Rodrigo; Weathers, Kathleen C.; White, Ethan P.

    2018-01-01

    Two foundational questions about sustainability are “How are ecosystems and the services they provide going to change in the future?” and “How do human decisions affect these trajectories?” Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  9. Iterative near-term ecological forecasting: Needs, opportunities, and challenges.

    PubMed

    Dietze, Michael C; Fox, Andrew; Beck-Johnson, Lindsay M; Betancourt, Julio L; Hooten, Mevin B; Jarnevich, Catherine S; Keitt, Timothy H; Kenney, Melissa A; Laney, Christine M; Larsen, Laurel G; Loescher, Henry W; Lunch, Claire K; Pijanowski, Bryan C; Randerson, James T; Read, Emily K; Tredennick, Andrew T; Vargas, Rodrigo; Weathers, Kathleen C; White, Ethan P

    2018-02-13

    Two foundational questions about sustainability are "How are ecosystems and the services they provide going to change in the future?" and "How do human decisions affect these trajectories?" Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.
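    The forecast-analysis loop described here follows the same pattern as sequential data assimilation: make a prediction, observe, update, repeat. A minimal scalar Kalman-style sketch, with all noise levels and observations invented for illustration:

```python
# Minimal iterative forecast-update cycle (scalar Kalman-style filter):
# forecast the state, compare with a new observation, update, repeat.
# All numbers (noise variances, the observation series) are illustrative.
def forecast_update_cycle(observations, x0=0.0, p0=10.0, q=0.5, r=2.0):
    x, p = x0, p0                  # state estimate and its variance
    history = []
    for y in observations:
        # Forecast step: persistence model; uncertainty grows by process noise q.
        x_f, p_f = x, p + q
        # Update step: weight forecast vs. observation by the Kalman gain.
        k = p_f / (p_f + r)
        x = x_f + k * (y - x_f)
        p = (1.0 - k) * p_f
        history.append((x, p))
    return history

hist = forecast_update_cycle([3.1, 2.9, 3.2, 3.0, 3.1, 2.8])
print(hist[-1])  # estimate approaches ~3 and variance shrinks
```

    Each pass through the loop is one "iterative near-term forecast": a prediction is issued, scored against new data, and corrected before the next cycle.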

  10. Application of linear multifrequency-grey acceleration to preconditioned Krylov iterations for thermal radiation transport

    DOE PAGES

    Till, Andrew T.; Warsa, James S.; Morel, Jim E.

    2018-06-15

The thermal radiative transfer (TRT) equations comprise a radiation equation coupled to the material internal energy equation. Linearization of these equations produces effective, thermally-redistributed scattering through absorption-reemission. In this paper, we investigate the effectiveness and efficiency of Linear-Multi-Frequency-Grey (LMFG) acceleration that has been reformulated for use as a preconditioner to Krylov iterative solution methods. We introduce two general frameworks, the scalar flux formulation (SFF) and the absorption rate formulation (ARF), and investigate their iterative properties in the absence and presence of true scattering. SFF has a group-dependent state size but may be formulated without inner iterations in the presence of scattering, while ARF has a group-independent state size but requires inner iterations when scattering is present. We compare and evaluate the computational cost and efficiency of LMFG applied to these two formulations using a direct solver for the preconditioners. Finally, this work is novel because the use of LMFG for the radiation transport equation, in conjunction with Krylov methods, involves special considerations not required for radiation diffusion.
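    The specific LMFG operators are beyond a short example, but the underlying pattern, an acceleration scheme recast as a preconditioner so that a cheap approximate solve of the dominant part of the operator drives down the iteration count, can be sketched generically. The sketch below uses a preconditioned Richardson iteration (not a Krylov method) on an invented tridiagonal-plus-perturbation system; nothing in it is taken from the paper's TRT formulation.

```python
# Acceleration-as-preconditioning sketch: solve A x = b where A = T + E,
# with T tridiagonal (cheap to invert via the Thomas algorithm) and E a small
# rank-one coupling term. Compare plain Richardson with T-preconditioned
# Richardson. All matrices are illustrative.
n = 40

def matvec(x):
    # A x with A = tridiag(-1, 2.1, -1) plus a small rank-one coupling term
    y = [2.1 * x[i] for i in range(n)]
    for i in range(n):
        if i > 0: y[i] -= x[i - 1]
        if i < n - 1: y[i] -= x[i + 1]
    s = sum(x)
    return [y[i] + 0.05 * s / n for i in range(n)]

def tridiag_solve(b):
    # Thomas algorithm for T = tridiag(-1, 2.1, -1)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = -1.0 / 2.1, b[0] / 2.1
    for i in range(1, n):
        m = 2.1 + c[i - 1]
        c[i] = -1.0 / m
        d[i] = (b[i] + d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def solve(preconditioned, tol=1e-8, max_iter=100000, omega=0.45):
    b = [1.0] * n
    x = [0.0] * n
    for it in range(1, max_iter + 1):
        r = [bi - yi for bi, yi in zip(b, matvec(x))]
        if max(abs(ri) for ri in r) < tol:
            return it
        step = tridiag_solve(r) if preconditioned else [omega * ri for ri in r]
        x = [xi + si for xi, si in zip(x, step)]
    return max_iter

iters_plain, iters_precond = solve(False), solve(True)
print(iters_plain, iters_precond)  # preconditioning cuts the iteration count
```

    The same idea carries over to Krylov methods: the approximate solve replaces the fixed-point update with a preconditioner application inside, e.g., GMRES or CG.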

  11. Participatory Knowledge Mobilisation: An Emerging Model for International Translational Research in Education

    ERIC Educational Resources Information Center

    Jones, Sarah-Louise; Procter, Richard; Younie, Sarah

    2015-01-01

    Research alone does not inform practice, rather a process of knowledge translation is required to enable research findings to become meaningful for practitioners in their contextual settings. However, the translational process needs to be an iterative cycle so that the practice itself can be reflected upon and thereby inform the ongoing research…

  12. Enhancement Process of Didactic Strategies in a Degree Course for Pre-Service Teachers

    ERIC Educational Resources Information Center

    Garcias, Adolfina Pérez; Marín, Victoria I.

    2017-01-01

    This paper presents a study on the enhancement of didactic strategies based on the idea of personal learning environments (PLE). It was conducted through three iterative cycles during three consecutive academic years according to the phases of design-based research applied to teaching in a university course for pre-service teachers in the…

  13. Using Performance Tasks to Improve Quantitative Reasoning in an Introductory Mathematics Course

    ERIC Educational Resources Information Center

    Kruse, Gerald; Drews, David

    2013-01-01

A full-cycle assessment of our efforts to improve quantitative reasoning in an introductory math course is described. Our initial iteration substituted more open-ended performance tasks for the active learning projects that had previously been used. Using a quasi-experimental design, we compared multiple sections of the same course and found non-significant…

  14. An application generator for rapid prototyping of Ada real-time control software

    NASA Technical Reports Server (NTRS)

    Johnson, Jim; Biglari, Haik; Lehman, Larry

    1990-01-01

    The need to increase engineering productivity and decrease software life cycle costs in real-time system development establishes a motivation for a method of rapid prototyping. The design by iterative rapid prototyping technique is described. A tool which facilitates such a design methodology for the generation of embedded control software is described.

  15. Training Final Year Students in Data Presentation Skills with an Iterative Report-Feedback Cycle

    ERIC Educational Resources Information Center

    Verkade, Heather

    2015-01-01

    Although practical laboratory activities are often considered the linchpin of science education, asking students to produce many large practical reports can be problematic. Practical reports require diverse skills, and therefore do not focus the students' attention on any one skill where specific skills need to be enhanced. They are also…

  16. Development and tests of molybdenum armored copper components for MITICA ion source

    NASA Astrophysics Data System (ADS)

    Pavei, Mauro; Böswirth, Bernd; Greuner, Henri; Marcuzzi, Diego; Rizzolo, Andrea; Valente, Matteo

    2016-02-01

In order to prevent detrimental material erosion of components impinged by back-streaming positive D or H ions in the Megavolt ITER Injector and Concept Advancement (MITICA) beam source, a solution based on the explosion bonding technique has been identified for producing a 1 mm thick molybdenum armour layer on a copper substrate, compatible with ITER requirements. Prototypes have recently been manufactured and tested in the high heat flux test facility GLADIS (Garching Large Divertor Sample Test Facility) to check the capability of the molybdenum-copper interface to withstand several thermal shock cycles at high power density. This paper presents both the numerical fluid-dynamic analyses of the prototypes simulating the test conditions in GLADIS and the experimental results.

  17. Development and tests of molybdenum armored copper components for MITICA ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavei, Mauro, E-mail: mauro.pavei@igi.cnr.it; Marcuzzi, Diego; Rizzolo, Andrea

    2016-02-15

In order to prevent detrimental material erosion of components impinged by back-streaming positive D or H ions in the Megavolt ITER Injector and Concept Advancement (MITICA) beam source, a solution based on the explosion bonding technique has been identified for producing a 1 mm thick molybdenum armour layer on a copper substrate, compatible with ITER requirements. Prototypes have recently been manufactured and tested in the high heat flux test facility GLADIS (Garching Large Divertor Sample Test Facility) to check the capability of the molybdenum-copper interface to withstand several thermal shock cycles at high power density. This paper presents both the numerical fluid-dynamic analyses of the prototypes simulating the test conditions in GLADIS and the experimental results.

  18. Development and tests of molybdenum armored copper components for MITICA ion source.

    PubMed

    Pavei, Mauro; Böswirth, Bernd; Greuner, Henri; Marcuzzi, Diego; Rizzolo, Andrea; Valente, Matteo

    2016-02-01

In order to prevent detrimental material erosion of components impinged by back-streaming positive D or H ions in the Megavolt ITER Injector and Concept Advancement (MITICA) beam source, a solution based on the explosion bonding technique has been identified for producing a 1 mm thick molybdenum armour layer on a copper substrate, compatible with ITER requirements. Prototypes have recently been manufactured and tested in the high heat flux test facility GLADIS (Garching Large Divertor Sample Test Facility) to check the capability of the molybdenum-copper interface to withstand several thermal shock cycles at high power density. This paper presents both the numerical fluid-dynamic analyses of the prototypes simulating the test conditions in GLADIS and the experimental results.

  19. Static shape of an acoustically levitated drop with wave-drop interaction

    NASA Astrophysics Data System (ADS)

    Lee, C. P.; Anilkumar, A. V.; Wang, T. G.

    1994-11-01

    The static shape of a drop levitated and flattened by an acoustic standing wave field in air is calculated, requiring self-consistency between the drop shape and the wave. The wave is calculated for a given shape using the boundary integral method. From the resulting radiation stress on the drop surface, the shape is determined by solving the Young-Laplace equation, completing an iteration cycle. The iteration is continued until both the shape and the wave converge. Of particular interest are the shapes of large drops that sustain equilibrium, beyond a certain degree of flattening, by becoming more flattened at a decreasing sound pressure level. The predictions for flattening versus acoustic radiation stress, for drops of different sizes, compare favorably with experimental data.
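    The shape-wave loop is an instance of generic self-consistent (fixed-point) iteration between two coupled quantities: compute one from the other, then the other from the first, until both stop changing. A toy sketch with invented algebraic stand-ins for the "shape" and "wave" updates:

```python
# Generic self-consistent iteration between two coupled quantities, in the
# spirit of the shape <-> wave cycle. The coupled maps below are toy
# stand-ins, not the acoustic levitation physics of the paper.
import math

def update_wave(shape):      # "wave" computed for the current "shape"
    return 1.0 / (1.0 + shape * shape)

def update_shape(wave):      # "shape" from the stress implied by the "wave"
    return 0.5 + 0.3 * math.cos(wave)

def self_consistent(tol=1e-12, max_iter=1000):
    shape, wave = 1.0, 0.0   # initial guess
    for it in range(max_iter):
        new_wave = update_wave(shape)
        new_shape = update_shape(new_wave)
        if abs(new_wave - wave) < tol and abs(new_shape - shape) < tol:
            return new_shape, new_wave, it
        shape, wave = new_shape, new_wave
    raise RuntimeError("no convergence")

shape, wave, iters = self_consistent()
print(shape, wave, iters)    # both quantities mutually consistent
```

    In the paper each "update" is itself expensive (a boundary integral solve and a Young-Laplace solve), but the outer convergence logic is exactly this loop.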

  20. Genome scale engineering techniques for metabolic engineering.

    PubMed

    Liu, Rongming; Bassalo, Marcelo C; Zeitoun, Ramsey I; Gill, Ryan T

    2015-11-01

    Metabolic engineering has expanded from a focus on designs requiring a small number of genetic modifications to increasingly complex designs driven by advances in genome-scale engineering technologies. Metabolic engineering has been generally defined by the use of iterative cycles of rational genome modifications, strain analysis and characterization, and a synthesis step that fuels additional hypothesis generation. This cycle mirrors the Design-Build-Test-Learn cycle followed throughout various engineering fields that has recently become a defining aspect of synthetic biology. This review will attempt to summarize recent genome-scale design, build, test, and learn technologies and relate their use to a range of metabolic engineering applications. Copyright © 2015 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  1. Adjoint-Based Methodology for Time-Dependent Optimization

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2008-01-01

    This paper presents a discrete adjoint method for a broad class of time-dependent optimization problems. The time-dependent adjoint equations are derived in terms of the discrete residual of an arbitrary finite volume scheme which approximates unsteady conservation law equations. Although only the 2-D unsteady Euler equations are considered in the present analysis, this time-dependent adjoint method is applicable to the 3-D unsteady Reynolds-averaged Navier-Stokes equations with minor modifications. The discrete adjoint operators involving the derivatives of the discrete residual and the cost functional with respect to the flow variables are computed using a complex-variable approach, which provides discrete consistency and drastically reduces the implementation and debugging cycle. The implementation of the time-dependent adjoint method is validated by comparing the sensitivity derivative with that obtained by forward mode differentiation. Our numerical results show that O(10) optimization iterations of the steepest descent method are needed to reduce the objective functional by 3-6 orders of magnitude for test problems considered.
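    The complex-variable approach mentioned is, in essence, complex-step differentiation: evaluating f(x + ih) and taking Im f(x + ih)/h yields the derivative without subtractive cancellation, so h can be made extremely small. A minimal sketch on an invented test function:

```python
# Complex-step differentiation: evaluate f at a complexly perturbed point and
# take the imaginary part. Unlike finite differences there is no subtractive
# cancellation, so the step can be as small as 1e-30.
import cmath
import math

def complex_step_derivative(f, x, h=1e-30):
    # Perturb along the imaginary axis; the imaginary part carries f'(x).
    return f(complex(x, h)).imag / h

f = lambda z: cmath.exp(z) * cmath.sin(z)   # invented test function
x = 0.7
exact = math.exp(x) * (math.sin(x) + math.cos(x))
approx = complex_step_derivative(f, x)
print(approx, exact)                        # agree to machine precision
```

    Applied to a discrete residual, this gives derivatives consistent with the discretization itself, which is why it is attractive for building discrete adjoint operators.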

  2. The development of Drink Less: an alcohol reduction smartphone app for excessive drinkers.

    PubMed

    Garnett, Claire; Crane, David; West, Robert; Brown, Jamie; Michie, Susan

    2018-05-04

    Excessive alcohol consumption poses a serious problem for public health. Digital behavior change interventions have the potential to help users reduce their drinking. In accordance with Open Science principles, this paper describes the development of a smartphone app to help individuals who drink excessively to reduce their alcohol consumption. Following the UK Medical Research Council's guidance and the Multiphase Optimization Strategy, development consisted of two phases: (i) selection of intervention components and (ii) design and development work to implement the chosen components into modules to be evaluated further for inclusion in the app. Phase 1 involved a scoping literature review, expert consensus study and content analysis of existing alcohol apps. Findings were integrated within a broad model of behavior change (Capability, Opportunity, Motivation-Behavior). Phase 2 involved a highly iterative process and used the "Person-Based" approach to promote engagement. From Phase 1, five intervention components were selected: (i) Normative Feedback, (ii) Cognitive Bias Re-training, (iii) Self-monitoring and Feedback, (iv) Action Planning, and (v) Identity Change. Phase 2 indicated that each of these components presented different challenges for implementation as app modules; all required multiple iterations and design changes to arrive at versions that would be suitable for inclusion in a subsequent evaluation study. The development of the Drink Less app involved a thorough process of component identification with a scoping literature review, expert consensus, and review of other apps. Translation of the components into app modules required a highly iterative process involving user testing and design modification.

  3. The Sustainability Cycle and Loop: models for a more unified understanding of sustainability.

    PubMed

    Hay, Laura; Duffy, Alex; Whitfield, R I

    2014-01-15

    In spite of the considerable research on sustainability, reports suggest that we are barely any closer to a more sustainable society. As such, there is an urgent need to improve the effectiveness of human efforts towards sustainability. A clearer and more unified understanding of sustainability among different people and sectors could help to facilitate this. This paper presents the results of an inductive literature investigation, aiming to develop models to explain the nature of sustainability in the Earth system, and how humans can effectively strive for it. The major contributions are two general and complementary models, that may be applied in any context to provide a common basis for understanding sustainability: the Sustainability Cycle (S-Cycle), and the Sustainability Loop (S-Loop). Literature spanning multiple sectors is examined from the perspective of three concepts, emerging as significant in relation to our aim. Systems are shown to provide the context for human action towards sustainability, and the nature of the Earth system and its sub-systems is explored. Activities are outlined as a fundamental target that humans need to sustain, since they produce the entities both needed and desired by society. The basic behaviour of activities operating in the Earth system is outlined. Finally, knowledge is positioned as the driver of human action towards sustainability, and the key components of knowledge involved are examined. The S-Cycle and S-Loop models are developed via a process of induction from the reviewed literature. The S-Cycle describes the operation of activities in a system from the perspective of sustainability. The sustainability of activities in a system depends upon the availability of resources, and the availability of resources depends upon the rate that activities consume and produce them. Humans may intervene in these dynamics via an iterative process of interpretation and action, described in the S-Loop model. 
The models are briefly applied to a system described in the literature. It is shown that the S-Loop may be used to guide efforts towards sustainability in a particular system of interest, by prescribing the basic activities involved. The S-Cycle may be applied in complement to the S-Loop, to support the interpretation of the activity behaviour described in the latter. Given their general nature, the models provide the basis for a more unified understanding of sustainability. It is hoped that their use may go some way towards improving the effectiveness of human action towards sustainability. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Computation of optimal output-feedback compensators for linear time-invariant systems

    NASA Technical Reports Server (NTRS)

    Platzman, L. K.

    1972-01-01

The control of linear time-invariant systems with respect to a quadratic performance criterion was considered, subject to the constraint that the control vector be a constant linear transformation of the output vector. The optimal feedback matrix, f*, was selected to optimize the expected performance, given the covariance of the initial state. It is first shown that the expected performance criterion can be expressed as the ratio of two multinomials in the elements of f. This expression provides the basis for a feasible method of determining f* in the case of single-input single-output systems. A number of iterative algorithms are then proposed for the calculation of f* for multiple-input, multiple-output systems. For two of these, monotone convergence is proved, but they involve the solution of nonlinear matrix equations at each iteration. Another is proposed involving the solution of Lyapunov equations at each iteration, and the gradual increase of the magnitude of a penalty function. Experience with this algorithm will be needed to determine whether or not it does, indeed, possess desirable convergence properties, and whether it can be used to determine the globally optimal f*.
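    One of the building blocks mentioned, solving a Lyapunov equation at each iteration, can itself be illustrated with a simple fixed-point scheme: for a Schur-stable A (spectral radius below one), the discrete Lyapunov equation P = AᵀPA + Q defines a contraction in P, and plain iteration converges. The 2x2 matrices below are invented for illustration:

```python
# Fixed-point solution of the discrete Lyapunov equation P = A^T P A + Q.
# For a Schur-stable A the map P -> A^T P A + Q is a contraction.
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def lyapunov_fixed_point(A, Q, tol=1e-12, max_iter=10000):
    P = [[0.0, 0.0], [0.0, 0.0]]
    At = transpose(A)
    for _ in range(max_iter):
        P_next = mat_add(mat_mul(At, mat_mul(P, A)), Q)
        diff = max(abs(P_next[i][j] - P[i][j])
                   for i in range(2) for j in range(2))
        if diff < tol:
            return P_next
        P = P_next
    raise RuntimeError("no convergence")

A = [[0.5, 0.1], [0.0, 0.8]]     # stable: eigenvalues 0.5 and 0.8
Q = [[1.0, 0.0], [0.0, 1.0]]
P = lyapunov_fixed_point(A, Q)
residual = mat_add(mat_mul(transpose(A), mat_mul(P, A)), Q)
print(P)                         # residual equals P at convergence
```

    In practice a direct Bartels-Stewart-type solver would be used inside each outer iteration; the fixed-point form only shows why a solution exists and is unique for stable A.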

  5. Solving Upwind-Biased Discretizations: Defect-Correction Iterations

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    1999-01-01

This paper considers defect-correction solvers for a second order upwind-biased discretization of the 2D convection equation. The following important features are reported: (1) The asymptotic convergence rate is about 0.5 per defect-correction iteration. (2) If the operators involved in defect-correction iterations have different approximation order, then the initial convergence rates may be very slow. The number of iterations required to get into the asymptotic convergence regime might grow on fine grids as a negative power of h. In the case of a second order target operator and a first order driver operator, this number of iterations is roughly proportional to h^(-1/3). (3) If both the operators have the second approximation order, the defect-correction solver demonstrates the asymptotic convergence rate after three iterations at most. The same three iterations are required to converge the algebraic error below the truncation error level. A novel comprehensive half-space Fourier mode analysis (which can also take into account the influence of discretized outflow boundary conditions) for the defect-correction method is developed. This analysis explains many phenomena observed in solving non-elliptic equations and provides a close prediction of the actual solution behavior. It predicts the convergence rate for each iteration and the asymptotic convergence rate. As a result of this analysis, a new very efficient adaptive multigrid algorithm solving the discrete problem to within a given accuracy is proposed. Numerical simulations confirm the accuracy of the analysis and the efficiency of the proposed algorithm. The results of the numerical tests are reported.
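    The defect-correction pattern itself is compact: a cheap, easily inverted "driver" operator L1 repeatedly corrects the residual (defect) of the accurate "target" operator L2 via L1 u_{k+1} = L1 u_k + (f - L2 u_k). The sketch below applies this to an invented small linear system, with the driver taken as the lower-triangular part of the target, so each correction is a forward substitution; it is not the upwind-biased convection discretization analyzed in the paper.

```python
# Defect-correction iteration: an easily invertible "driver" operator L1
# corrects the residual (defect) of the accurate "target" operator L2:
#     L1 u_{k+1} = L1 u_k + (f - L2 u_k)
# Here L2 is a small diagonally dominant matrix and L1 its lower-triangular
# part, solvable by forward substitution; both are toy stand-ins.
n = 5
L2 = [[4.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)
       for j in range(n)] for i in range(n)]
L1 = [[L2[i][j] if j <= i else 0.0 for j in range(n)] for i in range(n)]

def apply_mat(M, x):
    return [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]

def forward_solve(L, b):           # solve L y = b, L lower triangular
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    return y

f = [1.0] * n
u = [0.0] * n
for k in range(60):
    defect = [fi - ri for fi, ri in zip(f, apply_mat(L2, u))]
    u = [ui + ci for ui, ci in zip(u, forward_solve(L1, defect))]

residual = max(abs(fi - ri) for fi, ri in zip(f, apply_mat(L2, u)))
print(residual)   # tiny: the defect correction has converged
```

    With this particular choice of driver the iteration reduces to Gauss-Seidel; a lower-order upwind discretization playing the driver role, as in the paper, follows the same update formula.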

  6. Staying on the Journey: Maintaining a Change Momentum with PB4L "School-Wide"

    ERIC Educational Resources Information Center

    Boyd, Sally

    2016-01-01

    How do schools maintain momentum with change and enter new cycles of growth when they are attempting to do things differently? This article draws on a two-year evaluation of the "Positive Behaviour for Learning School-Wide" initiative to identify key factors that enabled schools to engage in a long-term and iterative change process.…

  7. An Ecological Exploration of Young Children's Digital Play: Framing Children's Social Experiences with Technologies in Early Childhood

    ERIC Educational Resources Information Center

    Arnott, Lorna

    2016-01-01

    This article outlines an ecological framework for describing children's social experiences during digital play. It presents evidence from a study that explored how 3- to 5-year-old children negotiated their social experiences as they used technologies in preschool. Utilising a systematic and iterative cycle of data collection and analysis,…

  8. Bi-Level Integrated System Synthesis (BLISS)

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Agte, Jeremy S.; Sandusky, Robert R., Jr.

    1998-01-01

    BLISS is a method for optimization of engineering systems by decomposition. It separates the system level optimization, having a relatively small number of design variables, from the potentially numerous subsystem optimizations that may each have a large number of local design variables. The subsystem optimizations are autonomous and may be conducted concurrently. Subsystem and system optimizations alternate, linked by sensitivity data, producing a design improvement in each iteration. Starting from a best guess initial design, the method improves that design in iterative cycles, each cycle comprised of two steps. In step one, the system level variables are frozen and the improvement is achieved by separate, concurrent, and autonomous optimizations in the local variable subdomains. In step two, further improvement is sought in the space of the system level variables. Optimum sensitivity data link the second step to the first. The method prototype was implemented using MATLAB and iSIGHT programming software and tested on a simplified, conceptual level supersonic business jet design, and a detailed design of an electronic device. Satisfactory convergence and favorable agreement with the benchmark results were observed. Modularity of the method is intended to fit the human organization and map well on the computing technology of concurrent processing.
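    The two-step cycle can be sketched on a toy problem: freeze the system-level variable, let each subsystem optimize its local variable independently, then re-optimize the system-level variable, and repeat. The separable quadratic below, with closed-form subproblems and no sensitivity data, is invented for illustration and is only in the spirit of BLISS:

```python
# Sketch of a BLISS-style two-step cycle on a toy quadratic objective
#   F(z, x1, x2) = (z - 1)**2 + (x1 - z)**2 + (x2 + z)**2,
# where z is the system-level variable and x1, x2 are local subsystem
# variables. Each subproblem is solved in closed form; all illustrative.
def subsystem_1(z):           # step 1a: minimize over x1 with z frozen
    return z

def subsystem_2(z):           # step 1b: minimize over x2 with z frozen
    return -z

def system_level(x1, x2):     # step 2: minimize over z with x1, x2 frozen
    return (1.0 + x1 - x2) / 3.0

def objective(z, x1, x2):
    return (z - 1.0) ** 2 + (x1 - z) ** 2 + (x2 + z) ** 2

z, x1, x2 = 0.0, 0.0, 0.0     # best-guess initial design
for cycle in range(80):
    x1, x2 = subsystem_1(z), subsystem_2(z)   # concurrent local optimizations
    z = system_level(x1, x2)                  # system-level step
print(z, x1, x2, objective(z, x1, x2))  # approaches z=1, x1=1, x2=-1, F=0
```

    In BLISS proper, the subsystem optimizations are full local problems and the system-level step uses optimum sensitivity derivatives rather than a closed-form solve, but the alternating structure of each cycle is the same.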

  9. Optimal Area Profiles for Ideal Single Nozzle Air-Breathing Pulse Detonation Engines

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.

    2003-01-01

    The effects of cross-sectional area variation on idealized Pulse Detonation Engine performance are examined numerically. A quasi-one-dimensional, reacting, numerical code is used as the kernel of an algorithm that iteratively determines the correct sequencing of inlet air, inlet fuel, detonation initiation, and cycle time to achieve a limit cycle with specified fuel fraction, and volumetric purge fraction. The algorithm is exercised on a tube with a cross sectional area profile containing two degrees of freedom: overall exit-to-inlet area ratio, and the distance along the tube at which continuous transition from inlet to exit area begins. These two parameters are varied over three flight conditions (defined by inlet total temperature, inlet total pressure and ambient static pressure) and the performance is compared to a straight tube. It is shown that compared to straight tubes, increases of 20 to 35 percent in specific impulse and specific thrust are obtained with tubes of relatively modest area change. The iterative algorithm is described, and its limitations are noted and discussed. Optimized results are presented showing performance measurements, wave diagrams, and area profiles. Suggestions for future investigation are also discussed.

  10. A user-centered model for designing consumer mobile health (mHealth) applications (apps).

    PubMed

    Schnall, Rebecca; Rojas, Marlene; Bakken, Suzanne; Brown, William; Carballo-Dieguez, Alex; Carry, Monique; Gelaude, Deborah; Mosley, Jocelyn Patterson; Travers, Jasmine

    2016-04-01

    Mobile technologies are a useful platform for the delivery of health behavior interventions. Yet little work has been done to create a rigorous and standardized process for the design of mobile health (mHealth) apps. This project sought to explore the use of the Information Systems Research (ISR) framework as a guide for the design of mHealth apps. Our work was guided by the ISR framework, which is comprised of 3 cycles: Relevance, Rigor and Design. In the Relevance cycle, we conducted 5 focus groups with 33 targeted end-users. In the Rigor cycle, we performed a review to identify technology-based interventions for meeting the health prevention needs of our target population. In the Design cycle, we employed usability evaluation methods to iteratively develop and refine mock-ups for an mHealth app. Through an iterative process, we identified barriers and facilitators to the use of mHealth technology for HIV prevention for high-risk MSM, developed 'use cases' and identified relevant functional content and features for inclusion in a design document to guide future app development. Findings from our work support the use of the ISR framework as a guide for designing future mHealth apps. Results from this work provide detailed descriptions of the user-centered design and system development and have heuristic value for those venturing into the area of technology-based intervention work. Use of the ISR framework is a potentially useful approach for the design of a mobile app that incorporates end-users' design preferences. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Finite element analysis of heat load of tungsten relevant to ITER conditions

    NASA Astrophysics Data System (ADS)

    Zinovev, A.; Terentyev, D.; Delannay, L.

    2017-12-01

    A computational procedure is proposed in order to predict the initiation of intergranular cracks in tungsten with ITER specification microstructure (i.e. characterised by elongated micrometre-sized grains). Damage is caused by a cyclic heat load, which emerges from plasma instabilities during operation of thermonuclear devices. First, a macroscopic thermo-mechanical simulation is performed in order to obtain temperature- and strain field in the material. The strain path is recorded at a selected point of interest of the macroscopic specimen, and is then applied at the microscopic level to a finite element mesh of a polycrystal. In the microscopic simulation, the stress state at the grain boundaries serves as the marker of cracking initiation. The simulated heat load cycle is a representative of edge-localized modes, which are anticipated during normal operations of ITER. Normal stresses at the grain boundary interfaces were shown to strongly depend on the direction of grain orientation with respect to the heat flux direction and to attain higher values if the flux is perpendicular to the elongated grains, where it apparently promotes crack initiation.

  12. Multilevel acceleration of scattering-source iterations with application to electron transport

    DOE PAGES

    Drumm, Clif; Fan, Wesley

    2017-08-18

    Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (S_N) or spherical-harmonics (P_N) solve to accelerate convergence of a high-order S_N source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. Observed accelerations obtained are highly problem dependent, but speedup factors around 10 have been observed in typical applications.
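    A minimal sketch of why such acceleration matters, using an assumed dense model problem rather than SCEPTRE's discretization: plain source iteration on phi = c*K*phi + q reduces the error by roughly the scattering ratio c per sweep, so it stalls as c approaches 1, which is exactly the regime where TSA preconditioning or a GMRES solve pays off:

```python
import numpy as np

# Model fixed-point problem phi = c*K@phi + q with scattering ratio c.
# K is a made-up row-stochastic "sweep" operator, so rho(c*K) = c and
# source iteration (SI) needs on the order of log(tol)/log(c) sweeps.
rng = np.random.default_rng(0)
n, c = 50, 0.99
K = rng.random((n, n))
K /= K.sum(axis=1, keepdims=True)       # rows sum to 1 -> spectral radius 1
q = rng.random(n)

phi = np.zeros(n)
for k in range(4000):                   # plain source iteration
    phi_new = c * K @ phi + q
    done = np.linalg.norm(phi_new - phi) < 1e-10
    phi = phi_new
    if done:
        break

# A Krylov or direct treatment solves (I - c*K) phi = q without the stall
phi_direct = np.linalg.solve(np.eye(n) - c * K, q)
print(k, np.linalg.norm(phi - phi_direct))   # thousands of SI sweeps needed
```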

  13. Pseudo-time methods for constrained optimization problems governed by PDE

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1995-01-01

    In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constrained PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at a cost of solving the analysis problems just a few times, independent of the number of design parameters. The method can be applied using single-grid iterations as well as multigrid solvers.
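    The contrast can be illustrated on a one-unknown toy problem (illustrative only; the paper's setting is PDE-constrained): minimize J(u, d) = 0.5*(u - 1)^2 + 0.5*alpha*d^2 subject to the "state equation" u = d, with costate lam. Rather than solving the state and costate equations exactly per design step, all three residuals are relaxed together in pseudo-time:

```python
# Pseudo-time relaxation of the full optimality system (toy sketch):
#   state residual     u - d          -> 0
#   costate residual   lam - (u - 1)  -> 0
#   design residual    alpha*d + lam  -> 0
alpha, tau = 0.1, 0.2
u = lam = d = 0.0
for _ in range(500):
    u   += tau * (d - u)             # relax state equation
    lam += tau * ((u - 1.0) - lam)   # relax costate equation
    d   -= tau * (alpha * d + lam)   # march the design equation
print(u, d)   # both approach the optimum 1/(1 + alpha)
```

At convergence u = d = 1/(1 + alpha) and lam = u - 1 satisfy all three equations simultaneously; no inner state solve per design update is needed.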

  14. Instrumentation for Examining Microbial Response to Changes In Environmental Pressures

    NASA Technical Reports Server (NTRS)

    Blaich, Justin; Storrs, Aaron; Wang, Jonathan; Ouandji, Cynthia; Arismendi, Dillon; Hernandez, Juliana; Sardesh, Nina; Ibanez, Cory; Owyang, Stephanie; Gentry, Diana

    2016-01-01

    The Automated Adaptive Directed Evolution Chamber (AADEC) is a device that allows operators to generate a micro-scale analog of real world systems that can be used to model the local-scale effects of climate change on microbial ecosystems. The AADEC uses an artificial environment to expose cultures of micro-organisms to environmental pressures, such as UV-C radiation, chemical toxins, and temperature. The AADEC autonomously exposes micro-organisms to selection pressures. This improves upon standard manual laboratory techniques: the process can take place over a longer period of time, involve more stressors, implement real-time adjustments based on the state of the population, and minimize the risk of contamination. We currently use UV-C radiation as the main selection pressure. UV-C is well studied both for its cell- and DNA-damaging effects as a type of selection pressure and for its related effectiveness as a mutagen; having these functions united makes it a good choice for a proof of concept. The AADEC roadmap includes expansion to different selection pressures, including heavy metal toxicity, temperature, and other forms of radiation. The AADEC uses closed-loop control to feed back the current state of the culture, in this case culture density and growth rate, to the AADEC controller, which modifies selection pressure intensity during experimentation. Culture density and growth rate are determined by measuring the optical density of the culture using 600 nm light: an array of 600 nm LEDs illuminates the culture, and photodiodes measure the shadow on the opposite side of the chamber. Previous experiments showed that we can produce a million-fold increase in tolerance to UV-C radiation over seven iterations. The most recent iteration implements a microfluidic system that can expose cultures to multiple different selection pressures, perform non-survival-based selection, and autonomously perform hundreds of exposure cycles. A scalable pump system gives the ability to pump various growth media into individual cultures and to introduce chemical toxins during experimentation; the AADEC can also perform freeze and thaw cycles. We improved our baseline characterization by building a custom UV-C exposure hood in which a shutter operates on a preset timer, allowing the user to set exposure intensity consistently across multiple iterations.
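    The closed-loop idea reads, in sketch form, as a feedback controller stepping the stressor intensity against the measured growth rate. The code below is hypothetical (function names, gains, and the dose-response model are illustrative, not the AADEC firmware):

```python
# Proportional feedback sketch: estimate growth rate from successive OD600
# readings and raise the UV-C dose while the culture grows faster than the
# target, keeping the population stressed but not sterilized.

def update_uv_dose(dose, od_prev, od_now, dt,
                   target_rate=0.3, gain=5.0, dose_min=0.0, dose_max=10.0):
    growth_rate = (od_now - od_prev) / (od_prev * dt)  # per-hour estimate
    error = growth_rate - target_rate                  # + means growing too fast
    return min(dose_max, max(dose_min, dose + gain * error))

# Toy culture model: growth slows linearly as the dose rises (assumption)
od, dose, dt = 0.05, 1.0, 0.5
for _ in range(50):
    rate = max(0.0, 0.6 - 0.05 * dose)   # assumed dose-response curve
    od_new = od * (1.0 + rate * dt)
    dose = update_uv_dose(dose, od, od_new, dt)
    od = od_new
print(round(dose, 2))   # settles at the dose giving the target growth rate
```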

  15. Effect of HIP temperature on microstructure and low cycle fatigue strength of CuCrZr alloy

    NASA Astrophysics Data System (ADS)

    Nishi, Hiroshi; Enoeda, Mikio

    2011-10-01

    In order to investigate the effect of HIP cycle temperatures on the metallurgical degradation and mechanical properties of CuCrZr alloy, assessments of the microstructure, tensile tests, Charpy impact tests and low cycle fatigue tests were performed on variously heat-treated CuCrZr alloys: solution-annealed, water-quenched and aged CuCrZr subjected to simulated HIP cycles at temperatures of 980 and 1045 °C. Grain growth occurred in the 1045 °C HIP CuCrZr, and only slightly in the 980 °C HIP CuCrZr. Metallurgical degradation such as voids was not found by optical and SEM observations. There were coarse precipitates in all the CuCrZr specimens, and the precipitates did not easily dissolve at 980 °C. The low cycle fatigue strength of the 1045 °C HIP CuCrZr was lower than that of the other CuCrZr because of the metallurgical degradation caused by the heat cycle, while that of the other CuCrZr corresponded to the best-fit curve of the ITER MPH.

  16. Status of Europe's contribution to the ITER EC system

    NASA Astrophysics Data System (ADS)

    Albajar, F.; Aiello, G.; Alberti, S.; Arnold, F.; Avramidis, K.; Bader, M.; Batista, R.; Bertizzolo, R.; Bonicelli, T.; Braunmueller, F.; Brescan, C.; Bruschi, A.; von Burg, B.; Camino, K.; Carannante, G.; Casarin, V.; Castillo, A.; Cauvard, F.; Cavalieri, C.; Cavinato, M.; Chavan, R.; Chelis, J.; Cismondi, F.; Combescure, D.; Darbos, C.; Farina, D.; Fasel, D.; Figini, L.; Gagliardi, M.; Gandini, F.; Gantenbein, G.; Gassmann, T.; Gessner, R.; Goodman, T. P.; Gracia, V.; Grossetti, G.; Heemskerk, C.; Henderson, M.; Hermann, V.; Hogge, J. P.; Illy, S.; Ioannidis, Z.; Jelonnek, J.; Jin, J.; Kasparek, W.; Koning, J.; Krause, A. S.; Landis, J. D.; Latsas, G.; Li, F.; Mazzocchi, F.; Meier, A.; Moro, A.; Nousiainen, R.; Purohit, D.; Nowak, S.; Omori, T.; van Oosterhout, J.; Pacheco, J.; Pagonakis, I.; Platania, P.; Poli, E.; Preis, A. K.; Ronden, D.; Rozier, Y.; Rzesnicki, T.; Saibene, G.; Sanchez, F.; Sartori, F.; Sauter, O.; Scherer, T.; Schlatter, C.; Schreck, S.; Serikov, A.; Siravo, U.; Sozzi, C.; Spaeh, P.; Spichiger, A.; Strauss, D.; Takahashi, K.; Thumm, M.; Tigelis, I.; Vaccaro, A.; Vomvoridis, J.; Tran, M. Q.; Weinhorst, B.

    2015-03-01

    The electron cyclotron (EC) system of ITER for the initial configuration is designed to provide 20 MW of RF power into the plasma for 3600 s with a duty cycle of up to 25%, for heating and (co and counter) non-inductive current drive; it is also used to control MHD plasma instabilities. The EC system is being procured by 5 domestic agencies plus the ITER Organization (IO). F4E has the largest fraction of the EC procurements, which includes 8 high voltage power supplies (HVPS), 6 gyrotrons, the ex-vessel waveguides (including isolation valves and diamond windows) for all launchers, 4 upper launchers and the main control system. F4E is working with IO to improve the overall design of the EC system by integrating consolidated technological advances, simplifying the interfaces, and performing global engineering analysis and assessments of EC heating and current drive physics and technology capabilities. Examples are the optimization of the HVPS and gyrotron requirements and performance relative to power modulation for MHD control, common qualification programs for diamond window procurements, assessment of the EC grounding system, and the optimization of the launcher steering angles for improved EC access. Here we provide an update on the status of Europe's contribution to the ITER EC system, and a summary of the global activities underway by F4E in collaboration with IO for the optimization of the subsystems.

  17. Georgia Tech Studies of Sub-Critical Advanced Burner Reactors with a D-T Fusion Tokamak Neutron Source for the Transmutation of Spent Nuclear Fuel

    NASA Astrophysics Data System (ADS)

    Stacey, W. M.

    2009-09-01

    The possibility that a tokamak D-T fusion neutron source, based on ITER physics and technology, could be used to drive sub-critical, fast-spectrum nuclear reactors fueled with the transuranics (TRU) in spent nuclear fuel discharged from conventional nuclear reactors has been investigated at Georgia Tech in a series of studies which are summarized in this paper. It is found that sub-critical operation of such fast transmutation reactors is advantageous in allowing longer fuel residence time, hence greater TRU burnup between fuel reprocessing stages, and in allowing higher TRU loading without compromising safety, relative to what could be achieved in a similar critical transmutation reactor. The required plasma and fusion technology operating parameter range of the fusion neutron source is generally within the anticipated operational range of ITER. The implication of these results for fusion development policy, if they hold up under more extensive and detailed analysis, is that a D-T fusion tokamak neutron source for a sub-critical transmutation reactor, built on the basis of the ITER operating experience, could possibly be a logical next step after ITER on the path to fusion electrical power reactors. At the same time, such an application would allow fusion to contribute to meeting the nation's energy needs at an earlier stage by helping to close the fission reactor nuclear fuel cycle.

  18. Theory of wing rock

    NASA Technical Reports Server (NTRS)

    Hsu, C. H.; Lan, C. E.

    1984-01-01

    A theory is developed for predicting wing rock characteristics. From available data, it can be concluded that wing rock is triggered by flow asymmetries, developed by negative or weakly positive roll damping, and sustained by nonlinear aerodynamic roll damping. A new nonlinear aerodynamic model that includes all essential aerodynamic nonlinearities is developed. The Beecham-Titchener method is applied to obtain approximate analytic solutions for the amplitude and frequency of the limit cycle based on the three degree-of-freedom equations of motion. An iterative scheme is developed to calculate the average aerodynamic derivatives and dynamic characteristics at limit cycle conditions. Good agreement between theoretical and experimental results is obtained.

  19. The rock-paper-scissors game

    NASA Astrophysics Data System (ADS)

    Zhou, Hai-Jun

    2016-04-01

    Rock-Paper-Scissors (RPS), a game of cyclic dominance, is not merely a popular children's game but also a basic model system for studying decision-making in non-cooperative strategic interactions. Aimed at students of physics with no background in game theory, this paper introduces the concepts of Nash equilibrium and evolutionarily stable strategy, and reviews some recent theoretical and empirical efforts on the non-equilibrium properties of the iterated RPS, including collective cycling, conditional response patterns and microscopic mechanisms that facilitate cooperation. We also introduce several dynamical processes to illustrate the applications of RPS as a simplified model of species competition in ecological systems and price cycling in economic markets.
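    The collective cycling mentioned above already appears in the standard replicator dynamics for zero-sum RPS, where the strategy mix orbits the mixed Nash equilibrium (1/3, 1/3, 1/3) instead of converging to it. A minimal simulation (forward-Euler integration, illustrative parameters):

```python
import numpy as np

# Zero-sum RPS payoff matrix: win +1, lose -1, tie 0 (rows: R, P, S)
A = np.array([[0, -1,  1],
              [1,  0, -1],
              [-1, 1,  0]])

x = np.array([0.5, 0.3, 0.2])      # initial population mix
dt, traj = 0.01, [x.copy()]
for _ in range(5000):
    f = A @ x                      # fitness of each pure strategy
    x = x + dt * x * (f - x @ f)   # replicator update (x @ f ~ 0, zero-sum)
    x = x / x.sum()                # guard against numerical drift
    traj.append(x.copy())

rock = [p[0] for p in traj]
print(round(min(rock), 3), round(max(rock), 3))  # rock share keeps cycling
```

The rock share repeatedly dips below and climbs back above 1/3, the signature of the cyclic dominance discussed in the paper.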

  20. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay

    1991-01-01

    Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run-time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce: inspector procedures that perform execution time preprocessing, and executors or transformed versions of source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indexes can have a significant impact on performance.
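    A minimal inspector/executor sketch of the idea (a hypothetical loop, not the paper's Encore Multimax implementation). For the loop `for i: a[i] = a[idx[i]] + 1`, whose dependences are unknown until `idx` exists at run time, the inspector assigns each iteration to a wavefront; iterations sharing a wavefront are mutually independent and could execute in parallel:

```python
def inspector(idx):
    """Execution-time preprocessing: compute a wavefront number per
    iteration of `a[i] = a[idx[i]] + 1` from the runtime index array."""
    n = len(idx)
    wave = [0] * n
    for i in range(n):
        j = idx[i]
        if j < i:    # flow dependence: i reads the value written by j
            wave[i] = max(wave[i], wave[j] + 1)
        elif j > i:  # anti dependence: i must read a[j] before j rewrites it
            wave[j] = max(wave[j], wave[i] + 1)
    return wave

def executor(a, idx, wave):
    """Transformed loop: run wavefront by wavefront; iterations inside
    one wavefront are independent and could run concurrently."""
    for w in range(max(wave) + 1):
        for i in (k for k in range(len(idx)) if wave[k] == w):
            a[i] = a[idx[i]] + 1
    return a

idx = [2, 0, 1, 3, 2]
wave = inspector(idx)
a = executor([0] * 5, idx, wave)
print(wave, a)   # wavefronts [0, 1, 2, 0, 3]; same result as the serial loop
```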

  1. Analytic approximation for random muffin-tin alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, R.; Gray, L.J.; Kaplan, T.

    1983-03-15

    The methods introduced in a previous paper under the name of "traveling-cluster approximation" (TCA) are applied, in a multiple-scattering approach, to the case of a random muffin-tin substitutional alloy. This permits the iterative part of a self-consistent calculation to be carried out entirely in terms of on-the-energy-shell scattering amplitudes. Off-shell components of the mean resolvent, needed for the calculation of spectral functions, are obtained by standard methods involving single-site scattering wave functions. The single-site TCA is just the usual coherent-potential approximation, expressed in a form particularly suited for iteration. A fixed-point theorem is proved for the general t-matrix TCA, ensuring convergence upon iteration to a unique self-consistent solution with the physically essential Herglotz properties.

  2. Improving the efficiency of molecular replacement by utilizing a new iterative transform phasing algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Hongxing; Fang, Hengrui; Miller, Mitchell D.

    2016-07-15

    An iterative transform algorithm is proposed to improve the conventional molecular-replacement method for solving the phase problem in X-ray crystallography. Several examples of successful trial calculations carried out with real diffraction data are presented. An iterative transform method proposed previously for direct phasing of high-solvent-content protein crystals is employed for enhancing the molecular-replacement (MR) algorithm in protein crystallography. Target structures that are resistant to conventional MR due to insufficient similarity between the template and target structures might be tractable with this modified phasing method. Trial calculations involving three different structures are described to test and illustrate the methodology. The relationship of the approach to PHENIX Phaser-MR and MR-Rosetta is discussed.

  3. BeeSign: Designing to Support Mediated Group Inquiry of Complex Science by Early Elementary Students

    ERIC Educational Resources Information Center

    Danish, Joshua A.; Peppler, Kylie; Phelps, David

    2010-01-01

    All too often, designers assume that complex science and cycles of inquiry are beyond the capabilities of young children (5-8 years old). However, with carefully designed mediators, we argue that such concepts are well within their grasp. In this paper we describe two design iterations of the BeeSign simulation software that was designed to help…

  4. What Are Our International Students Telling Us? Further Explorations of a Formative Feedback Intervention, to Support Academic Literacy

    ERIC Educational Resources Information Center

    Burns, Caroline; Foo, Martin

    2014-01-01

    This study reports on a further iteration of an action research cycle, discussed in Burns and Foo (2012, 2013). It explores how formative feedback on academic literacy was used and acted upon, and if a Formative Feedback Intervention (FFI) increased the students' confidence in future assignments. It also considers whether the assignment of a grade…

  5. The child's perspective as a guiding principle: Young children as co-designers in the design of an interactive application meant to facilitate participation in healthcare situations.

    PubMed

    Stålberg, Anna; Sandberg, Anette; Söderbäck, Maja; Larsson, Thomas

    2016-06-01

    During the last decade, interactive technology has entered mainstream society. Its many users also include children, even the youngest ones, who use the technology in different situations for both fun and learning. When designing technology for children, it is crucial to involve children in the process in order to arrive at an age-appropriate end product. In this study we describe the specific iterative process by which an interactive application was developed. This application is intended to facilitate young children's (three to five years old) participation in healthcare situations. We also describe the specific contributions of the children, who tested the prototypes in a preschool, a primary health care clinic and an outpatient unit at a hospital, during the development process. The iterative phases enabled the children to be involved at different stages of the process and to evaluate modifications and improvements made after each prior iteration. The children contributed their own perspectives (the child's perspective) on the usability, content and graphic design of the application, substantially improving the software and resulting in an age-appropriate product. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction

    PubMed Central

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-01-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction (STIR) platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D vs. the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were observed for gPatlak vs. sPatlak. Finally, validation on clinical WB dynamic data demonstrated the clinical feasibility and superior Ki CNR performance for the proposed 4D framework compared to indirect Patlak and SUV imaging. PMID:27383991

  7. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction (STIR) platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were observed for gPatlak versus sPatlak. Finally, validation on clinical WB dynamic data demonstrated the clinical feasibility and superior Ki CNR performance for the proposed 4D framework compared to indirect Patlak and SUV imaging.
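    The sPatlak analysis that the 4D framework nests can be sketched in a few lines: after an equilibration time t*, the transformed data fall on a straight line whose slope is the influx rate Ki. The example below uses an assumed exponential plasma input and noise-free synthetic tissue curves, not real 18F-FDG data:

```python
import numpy as np

# Patlak plot: C_T(t)/C_p(t) = Ki * (integral of C_p)/C_p(t) + V for t > t*
t = np.linspace(0.1, 60.0, 120)            # minutes
Cp = 5.0 * np.exp(-0.1 * t) + 1.0          # toy plasma input function
icp = np.concatenate(([0.0], np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * np.diff(t))))

Ki_true, V_true = 0.05, 0.4                # assumed tissue parameters
Ct = Ki_true * icp + V_true * Cp           # irreversible-uptake tissue curve

x = icp / Cp                               # "Patlak time"
y = Ct / Cp
mask = t > 20.0                            # fit only the late, linear portion
Ki, V = np.polyfit(x[mask], y[mask], 1)
print(round(Ki, 4), round(V, 3))           # recovers the assumed Ki and V
```

The gPatlak variant adds an uptake-reversibility term, which makes the model nonlinear in its parameters and, as noted above, can amplify noise, motivating the nested EM estimation.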

  8. Vectorized and multitasked solution of the few-group neutron diffusion equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zee, S.K.; Turinsky, P.J.; Shayer, Z.

    1989-03-01

    A numerical algorithm with parallelism was used to solve the two-group, multidimensional neutron diffusion equations on computers characterized by shared memory, vector pipeline, and multi-CPU architecture features. Specifically, solutions were obtained on the Cray X/MP-48, the IBM-3090 with vector facilities, and the FPS-164. The material-centered mesh finite difference method approximation and outer-inner iteration method were employed. Parallelism was introduced in the inner iterations using the cyclic line successive overrelaxation iterative method and solving in parallel across lines. The outer iterations were completed using the Chebyshev semi-iterative method, which allows parallelism to be introduced in both space and energy groups. For the three-dimensional model, power, soluble boron, and transient fission product feedbacks were included. Concentrating on the pressurized water reactor (PWR), the thermal-hydraulic calculation of moderator density assumed single-phase flow and a closed flow channel, allowing parallelism to be introduced in the solution across the radial plane. Using a pinwise detail, quarter-core model of a typical PWR in cycle 1, for the two-dimensional model without feedback the measured million floating point operations per second (MFLOPS)/vector speedups were 83/11.7, 18/2.2, and 2.4/5.6 on the Cray, IBM, and FPS without multitasking, respectively. Lower performance was observed with a coarser mesh, i.e., shorter vector length, due to vector pipeline start-up. For an 18 x 18 x 30 (x-y-z) three-dimensional model with feedback of the same core, MFLOPS/vector speedups of ~61/6.7 and an execution time of 0.8 CPU seconds on the Cray without multitasking were measured. Finally, using two CPUs and the vector pipelines of the Cray, a multitasking efficiency of 81% was noted for the three-dimensional model.
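    The flavor of the inner-iteration parallelism can be shown with a pointwise red-black SOR sweep on a small Laplace problem (a toy stand-in; the paper uses cyclic line SOR and solves in parallel across lines). Points of one color depend only on points of the other color, so every same-color update could proceed concurrently:

```python
import numpy as np

n, omega = 16, 1.7        # grid size and overrelaxation factor (assumed)
u = np.zeros((n, n))
u[0, :] = 1.0             # Dirichlet boundary: one hot side, others zero

for sweep in range(300):
    for color in (0, 1):                  # red then black half-sweep
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                if (i + j) % 2 == color:  # same-color points: independent
                    gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
                    u[i, j] += omega * (gs - u[i, j])

print(round(u[n // 2, n // 2], 4))        # converged interior value
```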

  9. Numerical analysis of modified Central Solenoid insert design

    DOE PAGES

    Khodak, Andrei; Martovetsky, Nicolai; Smirnov, Aleksandre; ...

    2015-06-21

    The United States ITER Project Office (USIPO) is responsible for fabrication of the Central Solenoid (CS) for ITER project. The ITER machine is currently under construction by seven parties in Cadarache, France. The CS Insert (CSI) project should provide a verification of the conductor performance in relevant conditions of temperature, field, currents and mechanical strain. The US IPO designed the CSI that will be tested at the Central Solenoid Model Coil (CSMC) Test Facility at JAEA, Naka. To validate the modified design we performed three-dimensional numerical simulations using coupled solver for simultaneous structural, thermal and electromagnetic analysis. Thermal and electromagnetic simulations supported structural calculations providing necessary loads and strains. According to current analysis design of the modified coil satisfies ITER magnet structural design criteria for the following conditions: (1) room temperature, no current, (2) temperature 4K, no current, (3) temperature 4K, current 60 kA direct charge, and (4) temperature 4K, current 60 kA reverse charge. Fatigue life assessment analysis is performed for the alternating conditions of: temperature 4K, no current, and temperature 4K, current 45 kA direct charge. Results of fatigue analysis show that parts of the coil assembly can be qualified for up to 1 million cycles. Distributions of the Current Sharing Temperature (TCS) in the superconductor were obtained from numerical results using parameterization of the critical surface in the form similar to that proposed for ITER. Lastly, special APDL scripts were developed for ANSYS allowing one-dimensional representation of TCS along the cable, as well as three-dimensional fields of TCS in superconductor material. Published by Elsevier B.V.

  10. Scientific and technical challenges on the road towards fusion electricity

    NASA Astrophysics Data System (ADS)

    Donné, A. J. H.; Federici, G.; Litaudon, X.; McDonald, D. C.

    2017-10-01

    The goal of the European Fusion Roadmap is to deliver fusion electricity to the grid early in the second half of this century. It breaks the quest for fusion energy into eight missions, and for each of them it describes a research and development programme to address all the open technical gaps in physics and technology and estimates the required resources. It points out the need to intensify industrial involvement and to seek all opportunities for collaboration outside Europe. The roadmap covers three periods: the short term, which runs parallel to the European Research Framework Programme Horizon 2020, the medium term and the long term. ITER is the key facility of the roadmap as it is expected to achieve most of the important milestones on the path to fusion power. Thus, the vast majority of present resources are dedicated to ITER and its accompanying experiments. The medium term is focussed on taking ITER into operation and bringing it to full power, as well as on preparing the construction of a demonstration power plant DEMO, which will for the first time demonstrate fusion electricity to the grid around the middle of this century. Building and operating DEMO is the subject of the last roadmap phase: the long term. Clearly, the Fusion Roadmap is tightly connected to the ITER schedule. Three key milestones are the first operation of ITER, the start of DT operation in ITER and reaching full performance, at which the thermal fusion power is 10 times the power put into the plasma. The Engineering Design Activity of DEMO needs to start a few years after the first ITER plasma, while the start of the construction phase will be a few years after ITER reaches full performance. In this way ITER can give viable input to the design and development of DEMO. Because the neutron fluence in DEMO will be much higher than in ITER, it is important to develop and validate materials that can handle these very high neutron loads. For the testing of the materials, a dedicated 14 MeV neutron source is needed. This DEMO Oriented Neutron Source (DONES) is therefore an important facility to support the fusion roadmap.

  11. Calculation of cracking under pulsed heat loads in tungsten manufactured according to ITER specifications

    NASA Astrophysics Data System (ADS)

    Arakcheev, A. S.; Skovorodin, D. I.; Burdakov, A. V.; Shoshin, A. A.; Polosatkin, S. V.; Vasilyev, A. A.; Postupaev, V. V.; Vyacheslavov, L. N.; Kasatov, A. A.; Huber, A.; Mertens, Ph; Wirtz, M.; Linsmeier, Ch; Kreter, A.; Löwenhoff, Th; Begrambekov, L.; Grunin, A.; Sadovskiy, Ya

    2015-12-01

    A mathematical model of surface cracking under pulsed heat load was developed. The model correctly describes a smooth brittle-ductile transition. The elastic deformation is described in a thin-heated-layer approximation. The plastic deformation is described with the Hollomon equation. The time dependence of the deformation and stresses is described for one heating-cooling cycle for a material without initial plastic deformation. The model can be applied to tungsten manufactured according to ITER specifications. The model shows that the stability of stress-relieved tungsten deteriorates as the base temperature increases; this proved to be a result of the closeness of the ultimate tensile and yield strengths. For a heat load of arbitrary magnitude, a stability criterion was obtained in the form of a condition on the ratio of the ultimate tensile and yield strengths.

  12. An iterative method for analysis of hadron ratios and Spectra in relativistic heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Choi, Suk; Lee, Kang Seog

    2016-04-01

    A new iteration method is proposed for analyzing both the multiplicities and the transverse momentum spectra measured within a small rapidity interval with a low momentum cut-off, without assuming invariance of the rapidity distribution under Lorentz boosts, and is applied to the hadron data measured by the ALICE collaboration for Pb+Pb collisions at √s_NN = 2.76 TeV. In order to correctly consider the resonance contribution only to the small rapidity interval measured, we consider only ratios involving hadrons whose transverse momentum spectrum is available. In spite of the small number of ratios considered, the quality of the fits to both the ratios and the transverse momentum spectra is excellent. Also, the calculated ratios involving strange baryons, obtained with the fitted parameters, agree with the data surprisingly well.

  13. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    PubMed

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

    In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. The resulting SG-based successive relaxation (SG-SR) iterative method realizes an additional improvement in convergence speed over the standard Savitzky-Golay procedure. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the computation time for processing an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be completed within tens of milliseconds, which enables a real-time procedure in practical situations.
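
The generic iterative-SG idea, repeatedly smoothing the spectrum and clipping the estimate from below so that narrow Raman peaks are peeled away while the broad fluorescence baseline survives, can be sketched as follows. The `relax` parameter is a hypothetical stand-in for the paper's SG-SR relaxation factor, not the authors' exact update:

```python
import numpy as np
from scipy.signal import savgol_filter

def sg_baseline(y, window=101, poly=3, n_iter=50, relax=1.0):
    """Iterative Savitzky-Golay fluorescence-baseline estimation (sketch).

    Each pass SG-smooths the running baseline estimate and clips it from
    below, eroding sharp peaks while keeping the broad background. `relax`
    scales the per-iteration correction; it stands in for the SG-SR
    acceleration and is NOT the authors' exact scheme."""
    base = np.asarray(y, dtype=float).copy()
    for _ in range(n_iter):
        smooth = savgol_filter(base, window, poly)
        step = np.minimum(base, smooth) - base   # non-positive correction
        base = base + relax * step               # relaxed update
    return base
```

Subtracting the returned baseline from the raw spectrum leaves the fluorescence-corrected Raman peaks.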

  14. Progress and achievements of R&D activities for the ITER vacuum vessel

    NASA Astrophysics Data System (ADS)

    Nakahira, M.; Takahashi, H.; Koizumi, K.; Onozuka, M.; Ioki, K.

    2001-04-01

    The Full Scale Sector Model Project, which was initiated in 1995 as one of the Seven Large Projects for ITER R&D, has been continued with the joint effort of the ITER Joint Central Team and the Japanese, Russian Federation and United States Home Teams. The fabrication of a full scale 18° toroidal sector, which is composed of two 9° sectors spliced at the port centre, was successfully completed in September 1997 with a dimensional accuracy of +/-3 mm for the total height and total width. Both sectors were shipped to the test site at the Japan Atomic Energy Research Institute and the integration test of the sectors was begun in October 1997. The integration test involves the adjustment of field joints, automatic narrow gap tungsten inert gas welding of field joints with splice plates and inspection of the joints by ultrasonic testing, as required for the initial assembly of the ITER vacuum vessel. This first demonstration of field joint welding and the performance test of the mechanical characteristics were completed in May 1998, and all the results obtained have satisfied the ITER design. In addition to these tests, integration with the midplane port extension fabricated by the Russian Home Team by using a fully remotized welding and cutting system developed by the US Home Team was completed in March 2000. The article describes the progress, achievements and latest status of the R&D activities for the ITER vacuum vessel.

  15. Automatic Parameter Tuning for the Morpheus Vehicle Using Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Birge, B.

    2013-01-01

    A high fidelity simulation using a PC-based Trick framework has been developed for Johnson Space Center's Morpheus test bed flight vehicle. There is an iterative development loop of refining and testing the hardware, refining the software, comparing the software simulation to hardware performance, and adjusting either or both the hardware and the simulation to extract the best performance from the hardware as well as the most realistic representation of the hardware from the software. A Particle Swarm Optimization (PSO) based technique has been developed that increases the speed and accuracy of this iterative development cycle. Parameters in software can be automatically tuned to make the simulation match real-world subsystem data from test flights. Special considerations for scale, linearity, and discontinuities can be all but ignored with this technique, allowing fast turnaround both for simulation tune-up to match hardware changes and during the test and validation phase to help identify hardware issues. Software models with insufficient control authority to match hardware test data can be immediately identified, and using this technique requires little to no specialized knowledge of optimization, freeing model developers to concentrate on spacecraft engineering. Integration of the PSO into the Morpheus development cycle will be discussed, as well as a case study highlighting the tool's effectiveness.
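
A bare-bones PSO of the kind described, here tuning hypothetical model parameters by minimizing a simulation-vs-flight-data mismatch, might look like this (a generic sketch under assumed hyperparameters; none of it is Morpheus code):

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, the swarm tracks a global best, and velocities blend inertia,
    cognitive, and social pulls."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))    # positions
    v = np.zeros_like(x)                           # velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()             # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                 # respect parameter bounds
        fvals = np.array([objective(p) for p in x])
        better = fvals < pbest_f
        pbest[better], pbest_f[better] = x[better], fvals[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())
```

Note that the objective needs no gradients and no scaling or linearity assumptions, which is precisely why the abstract says such concerns "can be all but ignored".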

  16. VR-simulation cataract surgery in non-experienced trainees: evolution of surgical skill

    NASA Astrophysics Data System (ADS)

    Söderberg, Per; Erngrund, Markus; Skarman, Eva; Nordh, Leif; Laurell, Carl-Gustaf

    2011-03-01

    Conclusion: The current data imply that the performance index as defined herein is a valid measure of the performance of a trainee using the virtual reality phacoemulsification simulator. Further, the performance index increases linearly with measurement cycles for fewer than five measurement cycles. To fully use the learning potential of the simulator, more than four measurement cycles are required. Materials and methods: Altogether, 10 trainees were introduced to the simulator by an instructor and then performed a training program including four measurement cycles, with three iterated measurements of the simulation at the end of each cycle. The simulation characteristics were standardized and defined in 14 parameters. The simulation was measured separately for the sculpting phase in 21 variables and for the evacuation phase in 22 variables. A performance index based on all measured variables was estimated for the sculpting phase and the evacuation phase, respectively, for each measurement, and the three measurements for each cycle were averaged. Finally, the performance as a function of measurement cycle was estimated for each trainee with regression, assuming a straight line. The estimated intercept and inclination coefficients were then averaged over all trainees. Results: The performance increased linearly with the number of measurement cycles both for the sculpting and for the evacuation phase.
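
The per-trainee straight-line fit and the averaging of intercepts and inclinations can be sketched as below (illustrative only; the construction of the multi-variable performance index itself is omitted):

```python
import numpy as np

def learning_curve_stats(scores):
    """Fit performance index vs. measurement cycle with a straight line
    for each trainee, then average intercepts and slopes across trainees.
    `scores` is a list with one sequence of cycle-averaged index values
    per trainee."""
    coeffs = []
    for trainee in scores:
        cycles = np.arange(1, len(trainee) + 1)
        slope, intercept = np.polyfit(cycles, trainee, 1)  # degree-1 fit
        coeffs.append((intercept, slope))
    coeffs = np.array(coeffs)
    return coeffs[:, 0].mean(), coeffs[:, 1].mean()
```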

  17. Optimized Deconvolution for Maximum Axial Resolution in Three-Dimensional Aberration-Corrected Scanning Transmission Electron Microscopy

    PubMed Central

    Ramachandra, Ranjan; de Jonge, Niels

    2012-01-01

    Three-dimensional (3D) data sets were recorded of gold nanoparticles placed on both sides of silicon nitride membranes using focal series aberration-corrected scanning transmission electron microscopy (STEM). The deconvolution of the 3D datasets was optimized to obtain the highest possible axial resolution. The deconvolution involved two different point spread functions (PSFs), each calculated iteratively via blind deconvolution. Supporting membranes of different thicknesses were tested to study the effect of beam broadening on the deconvolution. It was found that several iterations of deconvolution were efficient in reducing the imaging noise. With an increasing number of iterations, the axial resolution increased, and most of the structural information was preserved. Additional iterations improved the axial resolution by at most a factor of 4 to 6, depending on the particular dataset, to a best value of 8 nm, but at the cost of a reduction of the lateral size of the nanoparticles in the image. Thus, the deconvolution procedure optimized for highest axial resolution is best suited for applications where one is interested only in the 3D locations of nanoparticles. PMID:22152090
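
Blind deconvolution alternates updates of the object estimate and the PSF estimate; the workhorse for each half is typically a Richardson-Lucy-style multiplicative iteration. A 1-D sketch of that core iteration (generic, not the authors' pipeline; increasing `n_iter` sharpens the result, mirroring the resolution-vs-iterations trade-off described above):

```python
import numpy as np

def richardson_lucy(img, psf, n_iter=50):
    """Richardson-Lucy iterations in 1-D: multiplicative updates that
    redistribute intensity so the blurred model matches the data."""
    est = np.full(len(img), float(img.mean()))    # flat initial estimate
    psf_m = psf[::-1]                             # mirrored (adjoint) PSF
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode='same') # blur current estimate
        ratio = img / np.maximum(conv, 1e-12)     # data / model
        est *= np.convolve(ratio, psf_m, mode='same')
    return est
```

A blind variant would run this same update alternately on the object (PSF fixed) and on the PSF (object fixed), renormalizing the PSF after each round.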

  18. Doppler Lidar System Design via Interdisciplinary Design Concept at NASA Langley Research Center - Part III

    NASA Technical Reports Server (NTRS)

    Barnes, Bruce W.; Sessions, Alaric M.; Beyon, Jeffrey; Petway, Larry B.

    2014-01-01

    Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration was able to reduce the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components with excessive performance with smaller components custom-designed for the power system. The existing power system was analyzed to rank components in terms of inefficiency, power dissipation, footprint and mass. Design considerations and priorities are compared along with the results of each design iteration. Overall power system improvements are summarized for design implementations.

  19. Assessment and selection of materials for ITER in-vessel components

    NASA Astrophysics Data System (ADS)

    Kalinin, G.; Barabash, V.; Cardella, A.; Dietz, J.; Ioki, K.; Matera, R.; Santoro, R. T.; Tivey, R.; ITER Home Teams

    2000-12-01

    During the international thermonuclear experimental reactor (ITER) engineering design activities (EDA) significant progress has been made in the selection of materials for the in-vessel components of the reactor. This progress is a result of the worldwide collaboration of material scientists and industries which focused their effort on the optimisation of material and component manufacturing and on the investigation of the most critical material properties. Austenitic stainless steels 316L(N)-IG and 316L, nickel-based alloys Inconel 718 and Inconel 625, Ti-6Al-4V alloy and two copper alloys, CuCrZr-IG and CuAl25-IG, have been proposed as reference structural materials, and ferritic steel 430, and austenitic steel 304B7 with the addition of boron have been selected for some specific parts of the ITER in-vessel components. Beryllium, tungsten and carbon fibre composites are considered as plasma facing armour materials. The data base on the properties of all these materials is critically assessed and briefly reviewed in this paper together with the justification of the material selection (e.g., effect of neutron irradiation on the mechanical properties of materials, effect of manufacturing cycle, etc.).

  20. Closed-loop control of artificial pancreatic beta-cell in type 1 diabetes mellitus using model predictive iterative learning control.

    PubMed

    Wang, Youqing; Dassau, Eyal; Doyle, Francis J

    2010-02-01

    A novel combination of iterative learning control (ILC) and model predictive control (MPC), referred to here as model predictive iterative learning control (MPILC), is proposed for glycemic control in type 1 diabetes mellitus. MPILC exploits two key factors: frequent glucose readings made possible by continuous glucose monitoring technology; and the repetitive nature of glucose-meal-insulin dynamics with a 24-h cycle. The proposed algorithm can learn from an individual's lifestyle, allowing the control performance to be improved from day to day. After less than 10 days, the blood glucose concentrations can be kept within a range of 90-170 mg/dL. Generally, control performance under MPILC is better than that under MPC. The proposed methodology is robust to random variations in meal timings within +/-60 min or meal amounts within +/-75% of the nominal value, which validates MPILC's superior robustness compared to run-to-run control. Moreover, to further improve the algorithm's robustness, an automatic scheme for setpoint update that ensures safe convergence is proposed. Furthermore, the proposed method does not require user intervention; hence, the algorithm should be of particular interest for glycemic control in children and adolescents.
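
The day-to-day learning at the heart of ILC is the run-to-run update u_{k+1} = u_k + L·e_k applied across repeated 24-hour cycles. A toy sketch on a hypothetical one-step-delay plant with a daily-periodic "meal" disturbance (assumed dynamics for illustration only, not the paper's glucose model or its MPILC controller):

```python
import numpy as np

def ilc_days(n_days=15, horizon=48, gain=0.5):
    """Iterative learning control across repeated daily cycles on the toy
    plant y[t+1] = u[t] + d[t], where d repeats identically every day.
    Returns the peak tracking error per day, which should shrink as the
    controller learns the disturbance."""
    t = np.arange(horizon)
    d = np.sin(2 * np.pi * t / horizon)   # periodic 'meal' disturbance
    u = np.zeros(horizon)
    peak_err = []
    for _ in range(n_days):
        y = np.zeros(horizon)
        y[1:] = u[:-1] + d[:-1]           # simulate one day
        e = -y                            # error against a zero setpoint
        peak_err.append(float(np.abs(e).max()))
        u[:-1] += gain * e[1:]            # learn: u[t] affects y[t+1]
    return peak_err
```

With `gain=0.5` the error contracts by a factor (1 - gain) each day, the same geometric day-to-day improvement the abstract reports over roughly ten days.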

  1. Scalable splitting algorithms for big-data interferometric imaging in the SKA era

    NASA Astrophysics Data System (ADS)

    Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves

    2016-11-01

    In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big data, they employ parallel and distributed computations to achieve scalability, in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.
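
In its simplest serial form, the forward-backward iteration underlying such methods is the proximal-gradient (ISTA) scheme for ℓ1-regularized least squares; the paper's contribution lies in splitting and distributing these steps over data blocks. A minimal serial sketch (generic, not the authors' solver):

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Forward-backward (proximal gradient / ISTA) iterations for
    min_x 0.5*||Ax - y||^2 + lam*||x||_1: a gradient step on the data
    term followed by the soft-thresholding prox of the l1 prior."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # forward (gradient) step
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox
    return x
```

In the distributed setting the gradient of the data term decomposes as a sum over data blocks, so each block's contribution can be computed on a different node before the shared prox step.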

  2. A More Efficient Contextuality Distillation Protocol

    NASA Astrophysics Data System (ADS)

    Meng, Hui-xian; Cao, Huai-xin; Wang, Wen-hua; Fan, Ya-jing; Chen, Liang

    2018-03-01

    Based on the fact that both nonlocality and contextuality are resource theories, it is natural to ask how to amplify them more efficiently. In this paper, we present a contextuality distillation protocol which produces an n-cycle box B ∗ B ' from two given n-cycle boxes B and B '. It works efficiently for a class of contextual n-cycle (n ≥ 4) boxes which we term "generalized correlated contextual n-cycle boxes". For any two generalized correlated contextual n-cycle boxes B and B ', B ∗ B ' is more contextual than both B and B '. Moreover, they can be distilled toward the maximally contextual box CH_n as the number of iterations goes to infinity. Among the known protocols, our protocol has the strongest approximating ability and is optimal in terms of its distillation rate. Notably, our protocol can witness a larger set of nonlocal boxes that make communication complexity trivial than the protocol in Brunner and Skrzypczyk (Phys. Rev. Lett. 102, 160403 2009); this might be helpful for exploring why quantum nonlocality is limited.

  3. A More Efficient Contextuality Distillation Protocol

    NASA Astrophysics Data System (ADS)

    Meng, Hui-xian; Cao, Huai-xin; Wang, Wen-hua; Fan, Ya-jing; Chen, Liang

    2017-12-01

    Based on the fact that both nonlocality and contextuality are resource theories, it is natural to ask how to amplify them more efficiently. In this paper, we present a contextuality distillation protocol which produces an n-cycle box B ∗ B ' from two given n-cycle boxes B and B '. It works efficiently for a class of contextual n-cycle (n ≥ 4) boxes which we term "generalized correlated contextual n-cycle boxes". For any two generalized correlated contextual n-cycle boxes B and B ', B ∗ B ' is more contextual than both B and B '. Moreover, they can be distilled toward the maximally contextual box CH_n as the number of iterations goes to infinity. Among the known protocols, our protocol has the strongest approximating ability and is optimal in terms of its distillation rate. Notably, our protocol can witness a larger set of nonlocal boxes that make communication complexity trivial than the protocol in Brunner and Skrzypczyk (Phys. Rev. Lett. 102, 160403 2009); this might be helpful for exploring why quantum nonlocality is limited.

  4. Dynamic NF-κB and E2F interactions control the priority and timing of inflammatory signalling and cell proliferation

    PubMed Central

    Ankers, John M; Awais, Raheela; Jones, Nicholas A; Boyd, James; Ryan, Sheila; Adamson, Antony D; Harper, Claire V; Bridge, Lloyd; Spiller, David G; Jackson, Dean A; Paszek, Pawel; Sée, Violaine; White, Michael RH

    2016-01-01

    Dynamic cellular systems reprogram gene expression to ensure appropriate cellular fate responses to specific extracellular cues. Here we demonstrate that the dynamics of Nuclear Factor kappa B (NF-κB) signalling and the cell cycle are prioritised differently depending on the timing of an inflammatory signal. Using iterative experimental and computational analyses, we show physical and functional interactions between NF-κB and the E2 Factor 1 (E2F-1) and E2 Factor 4 (E2F-4) cell cycle regulators. These interactions modulate the NF-κB response. In S-phase, the NF-κB response was delayed or repressed, while cell cycle progression was unimpeded. By contrast, activation of NF-κB at the G1/S boundary resulted in a longer cell cycle and more synchronous initial NF-κB responses between cells. These data identify new mechanisms by which the cellular response to stress is differentially controlled at different stages of the cell cycle. DOI: http://dx.doi.org/10.7554/eLife.10473.001 PMID:27185527

  5. SACFIR: SDN-Based Application-Aware Centralized Adaptive Flow Iterative Reconfiguring Routing Protocol for WSNs.

    PubMed

    Aslam, Muhammad; Hu, Xiaopeng; Wang, Fan

    2017-12-13

    Smart reconfiguration of a dynamic networking environment is offered by the central control of Software-Defined Networking (SDN). Centralized SDN-based management architectures are capable of retrieving global topology intelligence and decoupling the forwarding plane from the control plane. Routing protocols developed for conventional Wireless Sensor Networks (WSNs) utilize limited iterative reconfiguration methods to optimize environmental reporting. However, the challenging networking scenarios of WSNs involve a performance overhead due to constant periodic iterative reconfigurations. In this paper, we propose the SDN-based Application-aware Centralized adaptive Flow Iterative Reconfiguring (SACFIR) routing protocol with the centralized SDN iterative solver controller to maintain the load-balancing between flow reconfigurations and flow allocation cost. The proposed SACFIR's routing protocol offers a unique iterative path-selection algorithm, which initially computes suitable clustering based on residual resources at the control layer and then implements application-aware threshold-based multi-hop report transmissions on the forwarding plane. The operation of the SACFIR algorithm is centrally supervised by the SDN controller residing at the Base Station (BS). This paper extends SACFIR to SDN-based Application-aware Main-value Centralized adaptive Flow Iterative Reconfiguring (SAMCFIR) to establish both proactive and reactive reporting. The SAMCFIR transmission phase enables sensor nodes to trigger direct transmissions for main-value reports, while in the case of SACFIR, all reports follow computed routes. Our SDN-enabled proposed models adjust the reconfiguration period according to the traffic burden on sensor nodes, which results in heterogeneity awareness, load-balancing and application-specific reconfigurations of WSNs. Extensive experimental simulation-based results show that SACFIR and SAMCFIR yield the maximum scalability, network lifetime and stability period when compared to existing routing protocols.

  6. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark method with a fixed number of iterations is applied for hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
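
The iterative scheme discussed, implicit Newmark integration with a fixed number of equilibrium iterations per step, can be sketched for a single degree of freedom as follows (an illustrative sketch; in hybrid testing the restoring force `k_func` would come from the physical substructure rather than a function):

```python
import numpy as np

def newmark_fixed_iter(m, c, k_func, f, dt, n_steps, n_iter=3,
                       beta=0.25, gamma=0.5):
    """Implicit Newmark (average acceleration) with a FIXED number of
    Newton-type equilibrium iterations per step. k_func(u) returns
    (restoring force, tangent stiffness) for an SDOF system."""
    u = v = a = 0.0
    hist = []
    for n in range(n_steps):
        u_p = u + dt * v + dt * dt * (0.5 - beta) * a   # displacement predictor
        v_p = v + dt * (1.0 - gamma) * a                # velocity predictor
        u_new = u_p
        for _ in range(n_iter):                         # fixed iteration count
            a_new = (u_new - u_p) / (beta * dt * dt)
            v_new = v_p + gamma * dt * a_new
            r, k_t = k_func(u_new)
            res = f[n] - m * a_new - c * v_new - r      # equilibrium residual
            k_eff = m / (beta * dt * dt) + gamma * c / (beta * dt) + k_t
            u_new += res / k_eff                        # Newton correction
        a = (u_new - u_p) / (beta * dt * dt)
        v = v_p + gamma * dt * a
        u = u_new
        hist.append(u)
    return np.array(hist)
```

Capping the iteration count bounds the number of command-feedback exchanges with the physical specimen per step; the leftover residual is exactly the "unbalanced force" index the abstract uses to track equilibrium error.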

  7. SACFIR: SDN-Based Application-Aware Centralized Adaptive Flow Iterative Reconfiguring Routing Protocol for WSNs

    PubMed Central

    Hu, Xiaopeng; Wang, Fan

    2017-01-01

    Smart reconfiguration of a dynamic networking environment is offered by the central control of Software-Defined Networking (SDN). Centralized SDN-based management architectures are capable of retrieving global topology intelligence and decoupling the forwarding plane from the control plane. Routing protocols developed for conventional Wireless Sensor Networks (WSNs) utilize limited iterative reconfiguration methods to optimize environmental reporting. However, the challenging networking scenarios of WSNs involve a performance overhead due to constant periodic iterative reconfigurations. In this paper, we propose the SDN-based Application-aware Centralized adaptive Flow Iterative Reconfiguring (SACFIR) routing protocol with the centralized SDN iterative solver controller to maintain the load-balancing between flow reconfigurations and flow allocation cost. The proposed SACFIR’s routing protocol offers a unique iterative path-selection algorithm, which initially computes suitable clustering based on residual resources at the control layer and then implements application-aware threshold-based multi-hop report transmissions on the forwarding plane. The operation of the SACFIR algorithm is centrally supervised by the SDN controller residing at the Base Station (BS). This paper extends SACFIR to SDN-based Application-aware Main-value Centralized adaptive Flow Iterative Reconfiguring (SAMCFIR) to establish both proactive and reactive reporting. The SAMCFIR transmission phase enables sensor nodes to trigger direct transmissions for main-value reports, while in the case of SACFIR, all reports follow computed routes. Our SDN-enabled proposed models adjust the reconfiguration period according to the traffic burden on sensor nodes, which results in heterogeneity awareness, load-balancing and application-specific reconfigurations of WSNs. Extensive experimental simulation-based results show that SACFIR and SAMCFIR yield the maximum scalability, network lifetime and stability period when compared to existing routing protocols. PMID:29236031

  8. Research on WNN Modeling for Gold Price Forecasting Based on Improved Artificial Bee Colony Algorithm

    PubMed Central

    2014-01-01

    Gold price forecasting has been a hot issue in economics recently. In this work, a wavelet neural network (WNN) combined with a novel artificial bee colony (ABC) algorithm is proposed for this gold price forecasting issue. In this improved algorithm, the conventional roulette selection strategy is discarded. Besides, the convergence statuses in a previous cycle of iteration are fully utilized as feedback messages to manipulate the searching intensity in a subsequent cycle. Experimental results confirm that this new algorithm converges faster than the conventional ABC when tested on some classical benchmark functions and is effective in improving the modeling capacity of WNN for the gold price forecasting scheme. PMID:24744773
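
A minimal ABC optimizer shows where the roulette selection being discarded would normally sit. Here onlooker bees use rank-based selection as one plausible stand-in; this is a generic sketch of the ABC skeleton, not the authors' exact modification or their feedback-driven intensity control:

```python
import numpy as np

def abc_minimize(f, bounds, n_food=20, n_iter=200, limit=20, seed=0):
    """Minimal artificial bee colony: employed bees explore neighbours of
    each food source, onlookers revisit good sources (rank-based here,
    where standard ABC uses roulette), and a scout abandons stagnant
    sources."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_food, dim))
    fit = np.array([f(p) for p in x])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbor(i):
        j = int(rng.integers(n_food - 1))
        j += j >= i                        # random partner != i
        k = int(rng.integers(dim))
        cand = x[i].copy()
        cand[k] += rng.uniform(-1, 1) * (x[i][k] - x[j][k])
        cand = np.clip(cand, lo, hi)
        fc = f(cand)
        if fc < fit[i]:                    # greedy replacement
            x[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        for i in range(n_food):            # employed-bee phase
            try_neighbor(i)
        for i in fit.argsort()[: n_food // 2]:  # onlookers: better half
            try_neighbor(i)
        worst = int(trials.argmax())       # scout phase
        if trials[worst] > limit:
            x[worst] = rng.uniform(lo, hi)
            fit[worst] = f(x[worst])
            trials[worst] = 0
    best = int(fit.argmin())
    return x[best], float(fit[best])
```

The perturbation step scales with the distance between food sources, so search intensity naturally shrinks as the swarm converges; the paper instead steers that intensity explicitly using convergence feedback from the previous cycle.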

  9. Status of RF beryllium characterization for ITER First Wall

    NASA Astrophysics Data System (ADS)

    Kupriyanov, I. B.; Nikolaev, G. N.; Roedig, M.; Gervash, A. А.; Linke, I. J.; Kurbatova, L. A.; Perevalov, S. I.; Giniyatulin, R. N.

    2011-10-01

    The status of RF R&D activities in the production and characterization of the TGP-56FW beryllium grade is presented. The results of metallographic studies of microstructure and crack morphology are reported for full-scale Be tiles (56 × 56 × 10 mm) subjected to VDE simulation tests in the TSEFEY-M testing facility (VDE: 10 MJ/m² during 0.1 s, 1 shot) and subsequent low-cycle thermal fatigue tests (500 thermal cycles at 1.5 MW/m²). First results of plasma disruption tests (E = 1.2-5 MJ/m², 5 ms), obtained during the realization of the Thermal Shock/VDE Qualification program of RF beryllium in the JUDITH-1 facility, are also discussed.

  10. Key technologies for tritium storage bed development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, S.H.; Chang, M.H.; Kang, H.G.

    2015-03-15

    ITER Storage and Delivery System (SDS) is a complex system involving tens of storage beds. The most important SDS getter bed will be used for the absorption and desorption of hydrogen isotopes in accordance with the fusion fuel cycle scenario. In this paper the current status of research and development activities toward the optimal approach to the final SDS design is introduced. A thermal analysis is performed and discussed with respect to heat losses, considering whether the reflector and/or the feed-through is present or not. A thermal hydraulic simulation shows that the presence of 3 or 4 reflectors minimizes the heat loss. Another important point is the introduction of real-time gas analysis in the ³He collection system. In this study two independent methods, one based on gas chromatography combined with a quadrupole mass spectrometer and the other on a modified self-assaying quadrupole mass spectrometer, are applied to separate the hydrogen isotopes in helium gas. Another issue is the possibility of using depleted uranium getter material for the storage of hydrogen isotopes, especially of tritium.

  11. Receiver function stacks: initial steps for seismic imaging of Cotopaxi volcano, Ecuador

    NASA Astrophysics Data System (ADS)

    Bishop, J. W.; Lees, J. M.; Ruiz, M. C.

    2017-12-01

    Cotopaxi volcano is a large, andesitic stratovolcano located within 50 km of the Ecuadorean capital of Quito. Cotopaxi most recently erupted, for the first time in 73 years, during August 2015. This eruptive cycle (VEI = 1) featured phreatic explosions and ejection of an ash column 9 km above the volcano edifice. Following this event, ash covered approximately 500 km² of the surrounding area. Analysis of Multi-GAS data suggests that this eruption was fed from a shallow source. However, stratigraphic evidence spanning the last 800 years of Cotopaxi's activity suggests that there may be a deep magmatic source. To establish a geophysical framework for Cotopaxi's activity, receiver functions were calculated from well-recorded earthquakes detected from April 2015 to December 2015 at 9 permanent broadband seismic stations around the volcano. These events were located, and phase arrivals were manually picked. Radial teleseismic receiver functions were then calculated using an iterative deconvolution technique with a Gaussian width of 2.5. A maximum of 200 iterations was allowed in each deconvolution. Iterations were stopped when either the maximum iteration number was reached or the percent change fell beneath a pre-determined tolerance. Receiver functions were then visually inspected, and those with anomalous pulses before the initial P arrival or later peaks larger than the initial P-wave correlated pulse were discarded. Using these data, initial crustal thickness and slab depth estimates beneath the volcano were obtained. Estimates of crustal Vp/Vs ratio for the region were also calculated.
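The deconvolution loop the abstract outlines (iterate up to a cap, stop when the percent change in misfit falls below a tolerance) can be sketched in pure Python. The synthetic wavelet, spike train, and tolerances below are illustrative assumptions, not values from the study.

```python
# Illustrative sketch of iterative time-domain deconvolution for receiver
# functions: repeatedly add a spike at the lag that best explains the
# residual, stopping at a maximum iteration count or when the percent
# change in misfit drops below a tolerance.

def convolve(a, b):
    """Full linear convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def iterative_deconvolve(radial, vertical, max_iter=200, tol=1e-3):
    """Build a spike train rf such that convolve(rf, vertical) ~ radial."""
    n = len(radial)
    rf = [0.0] * n
    residual = list(radial)
    denom = sum(v * v for v in vertical)
    prev_misfit = sum(r * r for r in residual)
    for _ in range(max_iter):
        # lag of maximum cross-correlation between residual and wavelet
        best_lag, best_c = 0, 0.0
        for lag in range(n):
            c = sum(residual[lag + k] * vertical[k]
                    for k in range(len(vertical)) if lag + k < n)
            if abs(c) > abs(best_c):
                best_lag, best_c = lag, c
        if abs(best_c) < 1e-12:
            break                         # residual fully explained
        amp = best_c / denom
        rf[best_lag] += amp
        for k, v in enumerate(vertical):  # subtract the explained part
            if best_lag + k < n:
                residual[best_lag + k] -= amp * v
        misfit = sum(r * r for r in residual)
        if prev_misfit > 0 and (prev_misfit - misfit) / prev_misfit < tol:
            break                         # percent change below tolerance
        prev_misfit = misfit
    return rf
```

With a two-sample synthetic wavelet, the routine recovers the spike amplitudes and lags of a known spike train exactly, which is the sanity check usually run before applying such a scheme to data.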

  12. Convergent Polishing: A Simple, Rapid, Full Aperture Polishing Process of High Quality Optical Flats & Spheres

    PubMed Central

    Suratwala, Tayyab; Steele, Rusty; Feit, Michael; Dylla-Spears, Rebecca; Desjardin, Richard; Mason, Dan; Wong, Lana; Geraghty, Paul; Miller, Phil; Shen, Nan

    2014-01-01

    Convergent Polishing is a novel polishing system and method for finishing flat and spherical glass optics in which a workpiece, independent of its initial shape (i.e., surface figure), will converge to the final surface figure with excellent surface quality under a fixed, unchanging set of polishing parameters in a single polishing iteration. In contrast, conventional full aperture polishing methods require multiple, often long, iterative cycles involving polishing, metrology and process changes to achieve the desired surface figure. The Convergent Polishing process is based on the concept of a workpiece-lap height mismatch that produces a pressure differential which decreases with material removal, so that the workpiece converges to the shape of the lap. The successful implementation of the Convergent Polishing process is a result of the combination of a number of technologies to remove all sources of non-uniform spatial material removal (except for workpiece-lap mismatch) for surface figure convergence and to reduce the number of rogue particles in the system for low scratch densities and low roughness. The Convergent Polishing process has been demonstrated for the fabrication of both flats and spheres of various shapes, sizes, and aspect ratios on various glass materials. The practical impact is that high quality optical components can be fabricated more rapidly, more repeatably, with less metrology, and with less labor, resulting in lower unit costs. In this study, the Convergent Polishing protocol is specifically described for fabricating 26.5 cm square fused silica flats from a fine ground surface to a polished ~λ/2 surface figure after polishing 4 hr per surface on an 81 cm diameter polisher. PMID:25489745

  13. Exact exchange potential evaluated from occupied Kohn-Sham and Hartree-Fock solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinal, M.; Holas, A.

    2011-06-15

    The reported algorithm determines the exact exchange potential vx in an iterative way using energy shifts (ESs) and orbital shifts (OSs) obtained with finite-difference formulas from the solutions (occupied orbitals and their energies) of the Hartree-Fock-like equation and the Kohn-Sham-like equation, the former used for the initial approximation to vx and the latter for increments of ES and OS due to subsequent changes of vx. Thus, the need for solution of the differential equations for OSs, used by Kuemmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)], is bypassed. The iterated exchange potential, expressed in terms of ESs and OSs, is improved by modifying ESs at odd iteration steps and OSs at even steps. The modification formulas are related to the optimized-effective-potential equation (satisfied at convergence) written as the condition of vanishing density shift (DS). They are obtained, respectively, by enforcing its satisfaction through corrections to approximate OSs and by determining the optimal ESs that minimize the DS norm. The proposed method, successfully tested for several closed-(sub)shell atoms, from Be to Kr, within the density functional theory exchange-only approximation, proves highly efficient. The calculations using the pseudospectral method for representing orbitals give iterative sequences of approximate exchange potentials (starting with the Krieger-Li-Iafrate approximation) that rapidly approach the exact vx so that, for Ne, Ar, and Zn, the corresponding DS norm becomes less than 10⁻⁶ after 13, 13, and 9 iteration steps for a given electron density. In self-consistent density calculations, orbital energies of 10⁻⁴ hartree accuracy are obtained for these atoms after, respectively, 9, 12, and 12 density iteration steps, each involving just two steps of vx iteration, while the accuracy limit of 10⁻⁶ to 10⁻⁷ hartree is reached after 20 density iterations.

  14. Exact exchange potential evaluated from occupied Kohn-Sham and Hartree-Fock solutions

    NASA Astrophysics Data System (ADS)

    Cinal, M.; Holas, A.

    2011-06-01

    The reported algorithm determines the exact exchange potential vx in an iterative way using energy shifts (ESs) and orbital shifts (OSs) obtained with finite-difference formulas from the solutions (occupied orbitals and their energies) of the Hartree-Fock-like equation and the Kohn-Sham-like equation, the former used for the initial approximation to vx and the latter for increments of ES and OS due to subsequent changes of vx. Thus, the need for solution of the differential equations for OSs, used by Kümmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)], is bypassed. The iterated exchange potential, expressed in terms of ESs and OSs, is improved by modifying ESs at odd iteration steps and OSs at even steps. The modification formulas are related to the optimized-effective-potential equation (satisfied at convergence) written as the condition of vanishing density shift (DS). They are obtained, respectively, by enforcing its satisfaction through corrections to approximate OSs and by determining the optimal ESs that minimize the DS norm. The proposed method, successfully tested for several closed-(sub)shell atoms, from Be to Kr, within the density functional theory exchange-only approximation, proves highly efficient. The calculations using the pseudospectral method for representing orbitals give iterative sequences of approximate exchange potentials (starting with the Krieger-Li-Iafrate approximation) that rapidly approach the exact vx so that, for Ne, Ar, and Zn, the corresponding DS norm becomes less than 10⁻⁶ after 13, 13, and 9 iteration steps for a given electron density. In self-consistent density calculations, orbital energies of 10⁻⁴ hartree accuracy are obtained for these atoms after, respectively, 9, 12, and 12 density iteration steps, each involving just two steps of vx iteration, while the accuracy limit of 10⁻⁶ to 10⁻⁷ hartree is reached after 20 density iterations.

  15. Progress on the application of ELM control schemes to ITER scenarios from the non-active phase to DT operation

    NASA Astrophysics Data System (ADS)

    Loarte, A.; Huijsmans, G.; Futatani, S.; Baylor, L. R.; Evans, T. E.; Orlov, D. M.; Schmitz, O.; Becoulet, M.; Cahyna, P.; Gribov, Y.; Kavin, A.; Sashala Naik, A.; Campbell, D. J.; Casper, T.; Daly, E.; Frerichs, H.; Kischner, A.; Laengner, R.; Lisgo, S.; Pitts, R. A.; Saibene, G.; Wingen, A.

    2014-03-01

    Progress in the definition of the requirements for edge localized mode (ELM) control and the application of ELM control methods both for high fusion performance DT operation and non-active low-current operation in ITER is described. Evaluation of the power fluxes for low plasma current H-modes in ITER shows that uncontrolled ELMs will not lead to damage to the tungsten (W) divertor target, unlike for high-current H-modes in which divertor damage by uncontrolled ELMs is expected. Despite the lack of divertor damage at lower currents, ELM control is found to be required in ITER under these conditions to prevent an excessive contamination of the plasma by W, which could eventually lead to an increased disruptivity. Modelling with the non-linear MHD code JOREK of the physics processes determining the flow of energy from the confined plasma onto the plasma-facing components during ELMs at the ITER scale shows that the relative contribution of conductive and convective losses is intrinsically linked to the magnitude of the ELM energy loss. Modelling of the triggering of ELMs by pellet injection for DIII-D and ITER has identified the minimum pellet size required to trigger ELMs and, from this, the required fuel throughput for the application of this technique to ITER is evaluated and shown to be compatible with the installed fuelling and tritium re-processing capabilities in ITER. The evaluation of the capabilities of the ELM control coil system in ITER for ELM suppression is carried out (in the vacuum approximation) and found to have a factor of ˜2 margin in terms of coil current to achieve its design criterion, although such a margin could be substantially reduced when plasma shielding effects are taken into account. 
The consequences for the spatial distribution of the power fluxes at the divertor of ELM control by three-dimensional (3D) fields are evaluated and found to lead to substantial toroidal asymmetries in zones of the divertor target away from the separatrix. Therefore, specifications for the rotation of the 3D perturbation applied for ELM control in order to avoid excessive localized erosion of the ITER divertor target are derived. It is shown that a rotation frequency in excess of 1 Hz for the whole toroidally asymmetric divertor power flux pattern is required (corresponding to n Hz frequency in the variation of currents in the coils, where n is the toroidal symmetry of the perturbation applied) in order to avoid unacceptable thermal cycling of the divertor target for the highest power fluxes and worst toroidal power flux asymmetries expected. The possible use of the in-vessel vertical stability coils for ELM control as a back-up to the main ELM control systems in ITER is described and the feasibility of its application to control ELMs in low plasma current H-modes, foreseen for initial ITER operation, is evaluated and found to be viable for plasma currents up to 5-10 MA depending on modelling assumptions.

  16. An efficient strongly coupled immersed boundary method for deforming bodies

    NASA Astrophysics Data System (ADS)

    Goza, Andres; Colonius, Tim

    2016-11-01

    Immersed boundary methods treat the fluid and the immersed solid on separate domains. As a result, a nonlinear interface constraint must be satisfied when these methods are applied to flow-structure interaction problems. This typically results in a large nonlinear system of equations that is difficult to solve efficiently. Often, this system is solved with a block Gauss-Seidel procedure, which is easy to implement but can require many iterations to converge for small solid-to-fluid mass ratios. Alternatively, a Newton-Raphson procedure can be used to solve the nonlinear system. This typically leads to convergence in a small number of iterations for arbitrary mass ratios, but involves the use of large Jacobian matrices. We present an immersed boundary formulation that, like the Newton-Raphson approach, uses a linearization of the system to perform iterations. It therefore inherits the same favorable convergence behavior. However, we avoid large Jacobian matrices by using a block LU factorization of the linearized system. We derive our method for general deforming surfaces and perform verification on 2D test problems of flow past beams. These test problems involve large-amplitude flapping and a wide range of mass ratios. This work was partially supported by the Jet Propulsion Laboratory and the Air Force Office of Scientific Research.
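The block Gauss-Seidel coupling the abstract contrasts with Newton-Raphson can be illustrated on a toy problem, with single scalars standing in for the fluid and structure unknowns; the fixed-point equations and constants here are invented for illustration, not the paper's equations.

```python
import math

def block_gauss_seidel(tol=1e-12, max_iter=100):
    """Partitioned coupling sweep: solve each 'block' in turn with the
    other block's latest value frozen, iterating until the interface
    update stalls. Toy scalar equations stand in for the fluid and
    solid solves."""
    u, s = 0.0, 0.0                          # 'fluid' and 'solid' unknowns
    for k in range(1, max_iter + 1):
        u_new = 0.5 * math.cos(s)            # fluid solve with s frozen
        s_new = 0.5 * math.sin(u_new) + 0.2  # solid solve with updated u
        if abs(u_new - u) + abs(s_new - s) < tol:
            return u_new, s_new, k
        u, s = u_new, s_new
    return u, s, max_iter
```

Convergence here is quick because the toy coupling is weakly contractive; the abstract's point is that strong coupling (small solid-to-fluid mass ratios) makes exactly this kind of sweep slow, which motivates iterating on a linearization instead.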

  17. Method for protein structure alignment

    DOEpatents

    Blankenbecler, Richard; Ohlsson, Mattias; Peterson, Carsten; Ringner, Markus

    2005-02-22

    This invention provides a method for protein structure alignment. More particularly, the present invention provides a method for identification, classification and prediction of protein structures. The present invention involves two key ingredients. First, an energy or cost function formulation of the problem simultaneously in terms of binary (Potts) assignment variables and real-valued atomic coordinates. Second, a minimization of the energy or cost function by an iterative method, where in each iteration (1) a mean field method is employed for the assignment variables and (2) exact rotation and/or translation of atomic coordinates is performed, weighted with the corresponding assignment variables.

  18. Fourier-Accelerated Nodal Solvers (FANS) for homogenization problems

    NASA Astrophysics Data System (ADS)

    Leuschner, Matthias; Fritzen, Felix

    2017-11-01

    Fourier-based homogenization schemes are useful to analyze heterogeneous microstructures represented by 2D or 3D image data. These iterative schemes involve discrete periodic convolutions with global ansatz functions (mostly fundamental solutions). The convolutions are efficiently computed using the fast Fourier transform. FANS operates on nodal variables on regular grids and converges to finite element solutions. Compared to established Fourier-based methods, the number of convolutions is reduced by FANS. Additionally, fast iterations are possible by assembling the stiffness matrix. Due to the related memory requirement, the method is best suited for medium-sized problems. A comparative study involving established Fourier-based homogenization schemes is conducted for a thermal benchmark problem with a closed-form solution. Detailed technical and algorithmic descriptions are given for all methods considered in the comparison. Furthermore, many numerical examples focusing on convergence properties for both thermal and mechanical problems, including also plasticity, are presented.
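For reference, the discrete periodic convolution these schemes rely on has a simple direct form; the FFT route evaluates the same cyclic sum in O(n log n). This toy pure-Python version is for illustration only.

```python
def periodic_convolve(a, b):
    """Cyclic (periodic) convolution: out[i] = sum_k a[k] * b[(i - k) mod n].
    Fourier-based homogenization schemes evaluate exactly this sum,
    but via forward/inverse FFTs instead of the direct double loop."""
    n = len(a)
    return [sum(a[k] * b[(i - k) % n] for k in range(n)) for i in range(n)]
```

Convolving with a shifted unit impulse, for example, cyclically rotates the input sequence, a handy check that the periodic wrap-around is implemented correctly.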

  19. Modeling the dynamics of evaluation: a multilevel neural network implementation of the iterative reprocessing model.

    PubMed

    Ehret, Phillip J; Monroe, Brian M; Read, Stephen J

    2015-05-01

    We present a neural network implementation of central components of the iterative reprocessing (IR) model. The IR model argues that the evaluation of social stimuli (attitudes, stereotypes) is the result of the IR of stimuli in a hierarchy of neural systems: The evaluation of social stimuli develops and changes over processing. The network has a multilevel, bidirectional feedback evaluation system that integrates initial perceptual processing and later developing semantic processing. The network processes stimuli (e.g., an individual's appearance) over repeated iterations, with increasingly higher levels of semantic processing over time. As a result, the network's evaluations of stimuli evolve. We discuss the implications of the network for a number of different issues involved in attitudes and social evaluation. The success of the network supports the IR model framework and provides new insights into attitude theory. © 2014 by the Society for Personality and Social Psychology, Inc.

  20. On iterative algorithms for quantitative photoacoustic tomography in the radiative transport regime

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Zhou, Tie

    2017-11-01

    In this paper, we present a numerical reconstruction method for quantitative photoacoustic tomography (QPAT), based on the radiative transfer equation (RTE), which models light propagation more accurately than the diffusion approximation (DA). We investigate the reconstruction of the absorption coefficient and scattering coefficient of biological tissues. An improved fixed-point iterative method to retrieve the absorption coefficient, given the scattering coefficient, is proposed because of its low computational cost; the convergence of this method is also proved. The Barzilai-Borwein (BB) method is applied to retrieve the two coefficients simultaneously. Since the reconstruction of optical coefficients involves the solutions of the original and adjoint RTEs in the framework of optimization, an efficient solver with high accuracy is developed from Gao and Zhao (2009 Transp. Theory Stat. Phys. 38 149-92). Simulation experiments illustrate that the improved fixed-point iterative method and the BB method are competitive methods for QPAT in the relevant cases.
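The Barzilai-Borwein update mentioned above chooses each step length from the most recent changes in iterate and gradient. A minimal sketch on an invented two-variable quadratic (not the QPAT objective):

```python
def bb_descent(grad, x0, n_iter=60, alpha0=1e-3):
    """Gradient descent with the Barzilai-Borwein (long) step
    alpha_k = (s.s)/(s.y), where s = x_k - x_{k-1} and
    y = grad_k - grad_{k-1}."""
    x = list(x0)
    g = grad(x)
    alpha = alpha0                      # small initial trial step
    for _ in range(n_iter):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        sy = sum(a * b for a, b in zip(s, y))
        if abs(sy) > 1e-30:             # guard against a stalled update
            alpha = sum(a * a for a in s) / sy
        x, g = x_new, g_new
    return x

# Hypothetical objective f(x) = 1.5*(x0 - 1)^2 + 0.5*(x1 - 2)^2,
# whose gradient is [3*(x0 - 1), x1 - 2] with minimizer (1, 2).
x_min = bb_descent(lambda x: [3.0 * (x[0] - 1.0), x[1] - 2.0], [0.0, 0.0])
```

The appeal of the BB step in large inverse problems like QPAT is that it needs only gradients, no Hessian or line search, while converging much faster than a fixed-step gradient method.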

  1. A novel dynamical community detection algorithm based on weighting scheme

    NASA Astrophysics Data System (ADS)

    Li, Ju; Yu, Kai; Hu, Ke

    2015-12-01

    Network dynamics plays an important role in analyzing the correlation between function properties and topological structure. In this paper, we propose a novel dynamical iteration (DI) algorithm, which incorporates the iterative process of the membership vector with a weighting scheme, i.e., a weighting W and a tightness T. These new elements can be used to adjust the link strength and the node compactness, improving the speed and accuracy of community structure detection. To estimate the optimal stopping time of the iteration, we utilize a new stability measure defined as the Markov random walk auto-covariance. We do not need to specify the number of communities in advance. The algorithm naturally supports overlapping communities by associating each node with a membership vector describing the node's involvement in each community. Theoretical analysis and experiments show that the algorithm can uncover communities effectively and efficiently.

  2. Auxiliary principle technique and iterative algorithm for a perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems.

    PubMed

    Rahaman, Mijanur; Pang, Chin-Tzong; Ishtyak, Mohd; Ahmad, Rais

    2017-01-01

    In this article, we introduce a perturbed system of generalized mixed quasi-equilibrium-like problems involving multi-valued mappings in Hilbert spaces. To calculate the approximate solutions of the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems, firstly we develop a perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems, and then by using the celebrated Fan-KKM technique, we establish the existence and uniqueness of solutions of the perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems. By deploying an auxiliary principle technique and an existence result, we formulate an iterative algorithm for solving the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems. Lastly, we study the strong convergence analysis of the proposed iterative sequences under monotonicity and some mild conditions. These results are new and generalize some known results in this field.

  3. Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis

    NASA Technical Reports Server (NTRS)

    Padovan, J.

    1981-01-01

    A multiphase self-adaptive predictor-corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic and material effects as well as pre/post buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upperbound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality-of-convergence checks which enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.

  4. Fast polar decomposition of an arbitrary matrix

    NASA Technical Reports Server (NTRS)

    Higham, Nicholas J.; Schreiber, Robert S.

    1988-01-01

    The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm which adaptively switches from the matrix inversion based iteration to a matrix multiplication based iteration due to Kovarik, and to Bjorck and Bowie is formulated. The decision when to switch is made using a condition estimator. This matrix multiplication rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
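The matrix-inversion-based Newton iteration the abstract describes, X_{k+1} = (X_k + X_k^{-T})/2, converges quadratically to the orthogonal polar factor U of A = UH. A minimal 2 × 2 real sketch (illustrative only; the paper's algorithm also handles rectangular and rank-deficient A via a preliminary complete orthogonal decomposition, and switches to a multiplication-rich iteration):

```python
def inv2(m):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def t2(m):
    """Transpose of a 2x2 matrix."""
    return [[m[0][0], m[1][0]], [m[0][1], m[1][1]]]

def mul2(x, y):
    """Product of two 2x2 matrices."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def polar_newton(a, n_iter=30):
    """Newton iteration X <- (X + X^{-T}) / 2. X converges to the
    orthogonal factor U of A = U H; then H = U^T A is the symmetric
    positive semi-definite factor."""
    x = [row[:] for row in a]
    for _ in range(n_iter):
        xit = t2(inv2(x))
        x = [[0.5 * (x[i][j] + xit[i][j]) for j in range(2)]
             for i in range(2)]
    u = x
    h = mul2(t2(u), a)
    return u, h
```

For a full-rank A the iterates stay nonsingular and the quadratic convergence means a handful of steps reaches machine precision, which is why the Newton phase is attractive before any hybrid switch.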

  5. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay

    1990-01-01

    Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution-time preprocessing of the loop. At compile time, these methods set up the framework for performing a loop dependency analysis. At run time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce inspector procedures, which perform the execution-time preprocessing, and executors, transformed versions of the source code loop structures that carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indices can have a significant impact on performance. Furthermore, the overheads associated with this type of reordering are amortized when the loop is executed several times with the same dependency structure.
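The inspector stage described above can be sketched as a pure-Python routine that groups loop iterations into wavefronts; the dependency lists are hypothetical stand-ins for what a real inspector would extract from index arrays at run time.

```python
def inspector(deps):
    """Group loop iterations into wavefronts. Every iteration in a
    wavefront depends only on iterations in earlier wavefronts, so the
    iterations within one wavefront can run concurrently. deps[i] lists
    the earlier iterations that iteration i reads from."""
    wave = [0] * len(deps)
    for i in range(len(deps)):          # walk in original program order
        for d in deps[i]:
            wave[i] = max(wave[i], wave[d] + 1)
    fronts = {}
    for i, w in enumerate(wave):
        fronts.setdefault(w, []).append(i)
    return [fronts[w] for w in sorted(fronts)]

# e.g. a loop computing x[i] = x[i-2] + 1 for i = 2..5: iteration i
# depends on iteration i-2, so even and odd chains interleave.
fronts = inspector([[], [], [0], [1], [2], [3]])
```

An executor would then run each wavefront's iterations in parallel, in wavefront order; amortizing the inspector's cost over repeated executions with the same dependency structure is exactly the trade-off the abstract reports.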

  6. Diagonalization of complex symmetric matrices: Generalized Householder reflections, iterative deflation and implicit shifts

    NASA Astrophysics Data System (ADS)

    Noble, J. H.; Lubasch, M.; Stevens, J.; Jentschura, U. D.

    2017-12-01

    We describe a matrix diagonalization algorithm for complex symmetric (not Hermitian) matrices, A = Aᵀ, which is based on a two-step algorithm involving generalized Householder reflections based on the indefinite inner product ⟨u, v⟩* = Σᵢ uᵢvᵢ. This inner product is linear in both arguments and avoids complex conjugation. The complex symmetric input matrix is transformed to tridiagonal form using generalized Householder transformations (first step). An iterative, generalized QL decomposition of the tridiagonal matrix employing an implicit shift converges toward diagonal form (second step). The QL algorithm employs iterative deflation techniques when a machine-precision zero is encountered "prematurely" on the super-/sub-diagonal. The algorithm allows for a reliable and computationally efficient computation of resonance and antiresonance energies which emerge from complex-scaled Hamiltonians, and for the numerical determination of the real energy eigenvalues of pseudo-Hermitian and PT-symmetric Hamilton matrices. Numerical reference values are provided.

  7. Stakeholder engagement and public policy evaluation: factors contributing to the development and implementation of a regional network for geriatric care.

    PubMed

    Glover, Catherine; Hillier, Loretta M; Gutmanis, Iris

    2007-01-01

    The development and implementation of a regional network that provides universally accessible and consistent services to the frail elderly living in Southwestern Ontario is described. Through continuous stakeholder engagement, clear network goals were identified and operationalized. Stakeholder commitment to the integration of expertise and specialized services, to evidence-based public policy and to iterative evaluation cycles were key to network success.

  8. SOFTWARE DESIGN FOR REAL-TIME SYSTEMS.

    DTIC Science & Technology

    Real-time computer systems and real-time computations are defined for the purposes of this report. The design of software for real-time systems is discussed, employing the concept that all real-time systems belong to one of two types, classified according to the type of control program used: Pre-assigned Iterative Cycle and Real-time Queueing. The two types of real-time systems are described in general, with supplemental

  9. Fast simulation techniques for switching converters

    NASA Technical Reports Server (NTRS)

    King, Roger J.

    1987-01-01

    Techniques for simulating a switching converter are examined. The state equations for the equivalent circuits, which represent the switching converter, are presented and explained. The uses of the Newton-Raphson iteration, low ripple approximation, half-cycle symmetry, and discrete time equations to compute the interval durations are described. An example is presented in which these methods are illustrated by applying them to a parallel-loaded resonant inverter with three equivalent circuits for its continuous mode of operation.
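The Newton-Raphson step used to compute the interval durations can be illustrated with a scalar root solve for a hypothetical switching instant; the waveform, clamp level, and resonant frequency below are invented for illustration, not taken from the paper.

```python
import math

def newton(f, dfdt, t0, tol=1e-12, max_iter=50):
    """Scalar Newton-Raphson: refine t until the update is below tol."""
    t = t0
    for _ in range(max_iter):
        step = f(t) / dfdt(t)
        t -= step
        if abs(step) < tol:
            break
    return t

# Hypothetical interval duration: the time at which a resonant voltage
# v(t) = cos(w*t) reaches an assumed clamp level of 0.25.
w = 2.0 * math.pi * 50e3      # assumed 50 kHz resonant frequency
t_end = newton(lambda t: math.cos(w * t) - 0.25,
               lambda t: -w * math.sin(w * t),
               t0=1e-6)
```

In a switching-converter simulation this kind of solve is repeated once per topology interval, handing the end state of one equivalent circuit to the next as its initial condition.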

  10. Department of Defense Costing References Web. Phase 1. Establishing the Foundation.

    DTIC Science & Technology

    1997-03-01

    a functional economic analysis under one set of constraints and having to repeat the entire process for the MAISRC. Recommendations for automated...the MAISRC's acquisition oversight process. The cost and cycle time for each iteration can be on the order of $300,000 and 6 months, respectively...Institute resources were expected to become available at the conclusion of another BPR project. The contents list for the first Business Process

  11. Reliability issues for a bolometer detector for ITER at high operating temperatures.

    PubMed

    Meister, H; Kannamüller, M; Koll, J; Pathak, A; Penzel, F; Trautmann, T; Detemple, P; Schmitt, S; Langer, H

    2012-10-01

    The first detector prototypes for the ITER bolometer diagnostic featuring a 12.5 μm thick Pt-absorber have been realized and characterized in laboratory tests. The results show linear dependencies of the calibration parameters and are in line with measurements of prototypes with thinner absorbers. However, thermal cycling tests up to 450 °C of the prototypes with thick absorbers demonstrated that their reliability at these elevated operating temperatures is not yet sufficient. Profilometer measurements showed a deflection of the membrane hinting at stresses due to the deposition processes of the absorber. Finite element analysis (FEA) managed to reproduce the deflection and identified the highest stresses in the membrane in the region around the corners of the absorber. FEA was further used to identify changes in the geometry of the absorber with a positive impact on the intrinsic stresses of the membrane. However, further improvements are still necessary.

  12. Analog Design for Digital Deployment of a Serious Leadership Game

    NASA Technical Reports Server (NTRS)

    Maxwell, Nicholas; Lang, Tristan; Herman, Jeffrey L.; Phares, Richard

    2012-01-01

    This paper presents the design, development, and user testing of a leadership development simulation. The authors share lessons learned from using a design process for a board game to allow for quick and inexpensive revision cycles during the development of a serious leadership development game. The goal of this leadership simulation is to accelerate the development of leadership capacity in high-potential mid-level managers (GS-15 level) in a federal government agency. Simulation design included a mixed-method needs analysis, using both quantitative and qualitative approaches to determine organizational leadership needs. Eight design iterations were conducted, including three user testing phases. Three re-design iterations followed initial development, enabling game testing as part of comprehensive instructional events. Subsequent design, development and testing processes targeted digital application to a computer- and tablet-based environment. Recommendations include pros and cons of development and learner testing of an initial analog simulation prior to full digital simulation development.

  13. A Robust Locally Preconditioned Semi-Coarsening Multigrid Algorithm for the 2-D Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Cain, Michael D.

    1999-01-01

    The goal of this thesis is to develop an efficient and robust locally preconditioned semi-coarsening multigrid algorithm for the two-dimensional Navier-Stokes equations. This thesis examines the performance of the multigrid algorithm with local preconditioning for an upwind discretization of the Navier-Stokes equations. A block Jacobi iterative scheme is used because of its ability to damp high-frequency error modes. At low Mach numbers, the performance of a flux preconditioner is investigated. The flux preconditioner utilizes a new limiting technique based on local information that was developed by Siu. Full-coarsening and semi-coarsening are examined, as well as the multigrid V-cycle and full multigrid. The numerical tests were performed on a NACA 0012 airfoil at a range of Mach numbers. The tests show that semi-coarsening with flux preconditioning is the most efficient and robust combination of coarsening strategy and iterative scheme, especially at low Mach numbers.

  14. Mechanical strength of an ITER coil insulation system under static and dynamic load after reactor irradiation

    NASA Astrophysics Data System (ADS)

    Bittner-Rohrhofer, K.; Humer, K.; Weber, H. W.; Hamada, K.; Sugimoto, M.; Okuno, K.

    2002-12-01

    The insulation system proposed by the Japanese Home Team for the ITER Toroidal Field coil (TF coil) is a T-glass-fiber/Kapton reinforced epoxy prepreg system. In order to assess the material performance under the actual operating conditions of the coils, the insulation system was irradiated in the TRIGA reactor (Vienna) to a fast neutron fluence of 2×10²² m⁻² (E > 0.1 MeV). After measurements of swelling, all mechanical tests were carried out at 77 K. Tensile and short-beam-shear (SBS) tests were performed under static loading conditions. In addition, tension-tension fatigue experiments up to about 10⁶ cycles were made. The laminate swells in the through-thickness direction by 0.86% at the highest dose level. The fatigue tests as well as the static tests do not show significant influences of the irradiation on the mechanical behavior of this composite.

  15. Using the tritium plasma experiment to evaluate ITER PFC safety

    NASA Astrophysics Data System (ADS)

    Longhurst, Glen R.; Anderl, Robert A.; Bartlit, John R.; Causey, Rion A.; Haines, John R.

    1993-06-01

    The Tritium Plasma Experiment was assembled at Sandia National Laboratories, Livermore and is being moved to the Tritium Systems Test Assembly facility at Los Alamos National Laboratory to investigate interactions between dense plasmas at low energies and plasma-facing component materials. This apparatus has the unique capability of replicating plasma conditions in a tokamak divertor with particle flux densities of 2 × 10²³ ions/(m²·s) and a plasma temperature of about 15 eV using a plasma that includes tritium. An experimental program has been initiated using the Tritium Plasma Experiment to examine safety issues related to tritium in plasma-facing components, particularly the ITER divertor. Those issues include tritium retention and release characteristics, tritium permeation rates and transient times to coolant streams, surface modification and erosion by the plasma, the effects of thermal loads and cycling, and particulate production. An industrial consortium led by McDonnell Douglas will design and fabricate the test fixtures.

  16. Robust Multivariable Optimization and Performance Simulation for ASIC Design

    NASA Technical Reports Server (NTRS)

    DuMonthier, Jeffrey; Suarez, George

    2013-01-01

    Application-specific integrated circuit (ASIC) design for space applications involves multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem, which must be solved early in the development cycle of a system because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and analyze the results in a way that facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort, as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation.
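
    The corner-sweep idea described above, with the worst-case corner driving a penalized cost function, can be sketched as follows. The `simulate` response model, corner values, and gain specification are all invented for illustration; a real flow would invoke a circuit simulator here:

```python
import itertools

# Hypothetical PVT corner set: process (slow/typical/fast), supply, temperature.
corners = list(itertools.product(["ss", "tt", "ff"], [1.62, 1.80, 1.98], [-55, 25, 125]))

def simulate(bias_ua, proc, vdd, temp):
    # Stand-in for a circuit simulation: gain and power of one design point
    # at one corner (the response model is made up for this sketch).
    proc_factor = {"ss": 0.9, "tt": 1.0, "ff": 1.1}[proc]
    gain = bias_ua * proc_factor * (vdd / 1.8) * (1 - 0.001 * (temp - 25))
    power = bias_ua * vdd * 1e-6
    return gain, power

def cost(bias_ua, min_gain=40.0):
    # Worst case over all corners; corners violating the gain spec
    # dominate the cost through a large penalty term.
    worst = 0.0
    for proc, vdd, temp in corners:
        gain, power = simulate(bias_ua, proc, vdd, temp)
        worst = max(worst, power + 1e3 * max(0.0, min_gain - gain))
    return worst

best_bias = min(range(20, 101), key=cost)   # sweep bias current in 1 uA steps
```

The sweep lands on the smallest bias that meets the gain spec at the worst corner, which is exactly the tradeoff-identification task the framework automates.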

  17. Clinician user involvement in the real world: Designing an electronic tool to improve interprofessional communication and collaboration in a hospital setting.

    PubMed

    Tang, Terence; Lim, Morgan E; Mansfield, Elizabeth; McLachlan, Alexander; Quan, Sherman D

    2018-02-01

    User involvement is vital to the success of health information technology implementation. However, involving clinician users effectively and meaningfully in complex healthcare organizations remains challenging. The objective of this paper is to share our real-world experience of applying a variety of user involvement methods in the design and implementation of a clinical communication and collaboration platform aimed at facilitating care of complex hospitalized patients by an interprofessional team of clinicians. We designed and implemented an electronic clinical communication and collaboration platform in a large community teaching hospital. The design team consisted of both technical and healthcare professionals. Agile software development methodology was used to facilitate rapid iterative design and user input. We involved clinician users at all stages of the development lifecycle using a variety of user-centered, user co-design, and participatory design methods. Thirty-six software releases were delivered over 24 months. User involvement resulted in improvements in user interface design, identification of software defects, creation of new modules that facilitated workflow, and identification of necessary changes to the scope of the project early on. A variety of user involvement methods were complementary and benefited the design and implementation of a complex health IT solution. Combining these methods with agile software development methodology can turn designs into a functioning clinical system that supports iterative improvement. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  18. Alternatives for Developing User Documentation for Applications Software

    DTIC Science & Technology

    1991-09-01

    style that is designed to match adult reading behaviors, using reader-based writing techniques, developing effective graphics, creating reference aids...involves research, analysis, design, and testing. The writer must have a solid understanding of the technical aspects of the document being prepared, good...ABSTRACT The preparation of software documentation is an iterative process that involves research, analysis, design, and testing. The writer must have

  19. An improved 3D MoF method based on analytical partial derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2016-12-01

    The MoF (Moment of Fluid) method is one of the most accurate approaches among various surface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate surface. Therefore, the partial derivatives of the objective function have to be involved during the iteration for efficiency and accuracy. However, to the best of our knowledge, the derivatives are currently estimated numerically by finite difference approximation, because it is very difficult to obtain the analytical derivatives of the objective function for an implicit optimization problem. Employing numerical derivatives in an iteration not only increases the computational cost, but also deteriorates the convergence rate and robustness of the iteration due to their numerical error. In this paper, the analytical first-order partial derivatives of the objective function are deduced for 3D problems. The analytical derivatives can be calculated accurately, so they are incorporated into the MoF method to improve its accuracy, efficiency and robustness. Numerical studies show that by using the analytical derivatives the iterations converge in all mixed cells, with an efficiency improvement of 3 to 4 times.
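
    The trade-off described here, that finite-difference derivatives cost extra function evaluations and inject truncation error into the iteration while analytical derivatives are exact, can be seen even on a hypothetical 1-D stand-in for the MoF objective (the real objective measures the moment mismatch of the reconstructed 3-D interface; this scalar version is ours):

```python
import math

def objective(theta):
    # Hypothetical 1-D stand-in for the MoF objective: squared mismatch of a
    # reconstructed-interface quantity as a function of the interface angle.
    return (math.sin(theta) - 0.6) ** 2

def grad_analytic(theta):
    # Exact derivative: no extra objective evaluations, no truncation error.
    return 2.0 * (math.sin(theta) - 0.6) * math.cos(theta)

def grad_numeric(theta, h=1e-6):
    # Central finite difference: two extra evaluations and O(h^2) error.
    return (objective(theta + h) - objective(theta - h)) / (2.0 * h)

def minimize(grad, theta=0.0, lr=0.5, tol=1e-10, max_iter=200):
    # Plain gradient descent; the derivative routine is pluggable.
    for it in range(max_iter):
        g = grad(theta)
        if abs(g) < tol:
            return theta, it
        theta -= lr * g
    return theta, max_iter
```

Both variants converge here, but the numeric one pays two objective evaluations per gradient and its accuracy is capped by the step size h.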

  20. Iterative load-balancing method with multigrid level relaxation for particle simulation with short-range interactions

    NASA Astrophysics Data System (ADS)

    Furuichi, Mikito; Nishiura, Daisuke

    2017-10-01

    We developed dynamic load-balancing algorithms for Particle Simulation Methods (PSM) involving short-range interactions, such as Smoothed Particle Hydrodynamics (SPH), the Moving Particle Semi-implicit method (MPS), and the Discrete Element Method (DEM). These are needed to handle the billions of particles modeled in large distributed-memory computer systems. Our method utilizes flexible orthogonal domain decomposition, allowing the sub-domain boundaries in a column to differ for each row. The imbalances in execution time between parallel logical processes are treated as a nonlinear residual. Load balancing is achieved by minimizing the residual within the framework of an iterative nonlinear solver, combined with a multigrid technique in the local smoother. Our iterative method is suitable for adjusting the sub-domains frequently by monitoring the performance of each computational process, because it is computationally cheaper in terms of communication and memory costs than non-iterative methods. Numerical tests demonstrated the ability of our approach to handle workload imbalances arising from a non-uniform particle distribution, differences in particle types, or heterogeneous computer architectures, which was difficult with previously proposed methods. We analyzed the parallel efficiency and scalability of our method using the Earth Simulator and K computer supercomputer systems.
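
    A drastically simplified 1-D analogue of the boundary-adjustment idea: treat per-domain particle counts as the load, and relax each interior sub-domain boundary toward the point that splits its two neighbours' particles evenly. In cumulative-mass coordinates this sweep is a Jacobi relaxation of a discrete Laplace equation, so the imbalance decays geometrically (the scheme below is our illustration, without the paper's multigrid level relaxation):

```python
import numpy as np

rng = np.random.default_rng(0)
particles = rng.beta(2, 5, size=100_000)   # strongly non-uniform positions in [0, 1)

def counts(bounds):
    # Work load of each sub-domain = number of particles it contains.
    return np.histogram(particles, bins=bounds)[0]

def balance(bounds, iters=50):
    # Jacobi-style sweep: every interior boundary moves to the point that
    # splits the particles of its two neighbouring sub-domains in half.
    bounds = bounds.copy()
    for _ in range(iters):
        new = bounds.copy()
        for i in range(1, len(bounds) - 1):
            local = particles[(particles >= bounds[i - 1]) & (particles < bounds[i + 1])]
            if len(local):
                new[i] = np.median(local)
        bounds = new
    return bounds
```

Only neighbouring domains exchange information per sweep, which is the property that keeps communication cheap when boundaries are re-adjusted frequently.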

  1. Minimizing Cache Misses Using Minimum-Surface Bodies

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; VanderWijngaart, Rob; Biegel, Bryan (Technical Monitor)

    2002-01-01

    A number of known techniques for improving cache performance in scientific computations involve the reordering of the iteration space. Some of these reorderings can be considered as coverings of the iteration space with sets having a good surface-to-volume ratio. Use of such sets reduces the number of cache misses in computations of local operators having the iteration space as a domain. First, we derive lower bounds which any algorithm must suffer while computing a local operator on a grid. Then we explore coverings of iteration spaces represented by structured and unstructured grids which allow us to approach these lower bounds. For structured grids we introduce a covering by successive-minima tiles of the interference lattice of the grid. We show that the covering has a low surface-to-volume ratio and present a computer experiment showing the actual reduction in cache misses achieved by using these tiles. For planar unstructured grids we show the existence of a covering which reduces the number of cache misses to the level of structured grids. On the other hand, we present a triangulation of a 3-dimensional cube such that any local operator on the corresponding grid has a significantly larger number of cache misses than a similar operator on a structured grid.
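
    For structured grids, covering the iteration space with cache-sized sets amounts in practice to loop tiling. The sketch below uses plain square tiles rather than the paper's successive-minima tiles of the interference lattice, and in pure Python only the equivalence of the reordering is observable; the cache-miss reduction appears in compiled code:

```python
import random

def smooth_naive(grid):
    # 5-point averaging operator over the interior, plain row-major traversal.
    n = len(grid)
    out = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            out[i][j] = (grid[i][j] + grid[i - 1][j] + grid[i + 1][j]
                         + grid[i][j - 1] + grid[i][j + 1]) / 5.0
    return out

def smooth_tiled(grid, tile=16):
    # Same operator, but the iteration space is covered with square tiles so
    # each tile's working set would stay cache-resident in a compiled setting.
    n = len(grid)
    out = [row[:] for row in grid]
    for ii in range(1, n - 1, tile):
        for jj in range(1, n - 1, tile):
            for i in range(ii, min(ii + tile, n - 1)):
                for j in range(jj, min(jj + tile, n - 1)):
                    out[i][j] = (grid[i][j] + grid[i - 1][j] + grid[i + 1][j]
                                 + grid[i][j - 1] + grid[i][j + 1]) / 5.0
    return out
```

Because each point is visited exactly once in both orders with the same arithmetic, the two traversals produce bitwise-identical results; only the memory access pattern differs.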

  2. The Military Spouse Education and Career Opportunities Program: Recommendations for an Internal Monitoring System

    DTIC Science & Technology

    2016-01-01

    Family Policy’s SECO program, which reviewed existing SECO metrics and data sources, as well as analytic methods of previous research, to determine...process that requires an iterative cycle of assessment of collected data (typically, but not solely, quantitative data) to determine whether SECO...RAND suggests five steps to develop and implement the SECO internal monitoring system: Step 1. Describe the logic or theory of how activities are

  3. Instrumentation for Examining Microbial Response to Changes in Environmental Pressures

    NASA Astrophysics Data System (ADS)

    Blaich, J.; Storrs, A.; Wang, J.; Ouandji, C.; Arismendi, D.; Hernandez, J.; Sardesh, N.; Ibanez, C. R.; Owyang, S.; Gentry, D.

    2016-12-01

    The Automated Adaptive Directed Evolution Chamber (AADEC) is a device that allows operators to generate a micro-scale analog of real-world systems that can be used to model the local-scale effects of climate change on microbial ecosystems. The AADEC uses an artificial environment to expose cultures of micro-organisms to environmental pressures, such as UV-C radiation, chemical toxins, and temperature. The AADEC autonomously exposes micro-organisms to selection pressures. This improves upon standard manual laboratory techniques: the process can take place over a longer period of time, involve more stressors, implement real-time adjustments based on the state of the population, and minimize the risk of contamination. We currently use UV-C radiation as the main selection pressure. UV-C is well studied both for its cell- and DNA-damaging effects as a type of selection pressure and for its related effectiveness as a mutagen; having these functions united makes it a good choice for a proof of concept. The AADEC roadmap includes expansion to different selection pressures, including heavy metal toxicity, temperature, and other forms of radiation. The AADEC uses closed-loop control to feed back the current state of the culture, in this case culture density and growth rate, to the AADEC controller, which modifies selection pressure intensity during experimentation. Culture density and growth rate are determined by measuring the optical density of the culture using 600 nm light. An array of 600 nm LEDs illuminates the culture, and photodiodes are used to measure the shadow on the opposite side of the chamber. Previous experiments showed that we can produce a million-fold increase in resistance to UV-C radiation over seven iterations. The most recent iteration implements a microfluidic system that can expose cultures to multiple different selection pressures, perform non-survival-based selection, and autonomously perform hundreds of exposure cycles.
A scalable pump system gives the ability to pump various growth media into individual cultures and to introduce chemical toxins during experimentation; the AADEC can also perform freeze and thaw cycles. We improved our baseline characterization by building a custom UV-C exposure hood; a shutter operating on a preset timer allows the user to set exposure intensity consistently across multiple iterations.
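
    The closed-loop principle, measure the culture, compare with a target, adjust the stressor, can be sketched as a proportional controller; the growth-response model, gain, and set point below are invented for illustration and are not AADEC's actual control law:

```python
def adjust_uv(growth_rate, uv, target_growth=0.3, gain=0.5, uv_max=10.0):
    # Proportional rule: growing faster than the target -> raise the UV-C
    # intensity; suppressed below the target -> back off. Clamp to hardware range.
    uv += gain * (growth_rate - target_growth)
    return min(max(uv, 0.0), uv_max)

def run_experiment(cycles=200):
    # Toy culture model: growth slows linearly as UV-C intensity rises
    # (hypothetical response; the real input would be optical-density readings).
    uv = 0.0
    for _ in range(cycles):
        growth_rate = 0.6 * (1.0 - uv / 10.0)
        uv = adjust_uv(growth_rate, uv)
    return uv
```

Under this model the loop settles at the intensity where the stressed growth rate equals the target, which is the behavior the optical-density feedback is meant to produce.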

  4. Automated detection using natural language processing of radiologists recommendations for additional imaging of incidental findings.

    PubMed

    Dutta, Sayon; Long, William J; Brown, David F M; Reisner, Andrew T

    2013-08-01

    As use of radiology studies increases, there is a concurrent increase in incidental findings (eg, lung nodules) for which the radiologist issues recommendations for additional imaging for follow-up. Busy emergency physicians may be challenged to carefully communicate recommendations for additional imaging not relevant to the patient's primary evaluation. The emergence of electronic health records and natural language processing algorithms may help address this quality gap. We seek to describe recommendations for additional imaging from our institution and develop and validate an automated natural language processing algorithm to reliably identify recommendations for additional imaging. We developed a natural language processing algorithm to detect recommendations for additional imaging, using 3 iterative cycles of training and validation. The third cycle used 3,235 radiology reports (1,600 for algorithm training and 1,635 for validation) of discharged emergency department (ED) patients from which we determined the incidence of discharge-relevant recommendations for additional imaging and the frequency of appropriate discharge documentation. The test characteristics of the 3 natural language processing algorithm iterations were compared, using blinded chart review as the criterion standard. Discharge-relevant recommendations for additional imaging were found in 4.5% (95% confidence interval [CI] 3.5% to 5.5%) of ED radiology reports, but 51% (95% CI 43% to 59%) of discharge instructions failed to note those findings. The final natural language processing algorithm had 89% (95% CI 82% to 94%) sensitivity and 98% (95% CI 97% to 98%) specificity for detecting recommendations for additional imaging. For discharge-relevant recommendations for additional imaging, sensitivity improved to 97% (95% CI 89% to 100%). 
Recommendations for additional imaging are common, and failure to document relevant recommendations for additional imaging in ED discharge instructions occurs frequently. The natural language processing algorithm's performance improved with each iteration and offers a promising error-prevention tool. Copyright © 2013 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
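
    The detection task itself can be illustrated with a few hand-written patterns; the rule set and the labeled reports below are invented, and the study's validated algorithm is more sophisticated than this sketch:

```python
import re

# Hypothetical rule set: phrasings radiologists commonly use to recommend
# follow-up imaging (an illustration, not the algorithm validated in the study).
PATTERNS = [
    r"recommend(?:ed)?\s+(?:a\s+)?(?:follow[- ]?up|repeat|dedicated)\s+(?:\w+\s+)?(?:ct|mri|imaging|ultrasound)",
    r"(?:follow[- ]?up|repeat)\s+(?:ct|mri|imaging|ultrasound)\s+(?:is|was)\s+recommended",
]

def flags_additional_imaging(report):
    # True if any pattern fires on the lower-cased report text.
    text = report.lower()
    return any(re.search(p, text) for p in PATTERNS)

# Tiny labeled sample (invented) showing how sensitivity/specificity are scored
# against a chart-review criterion standard.
reports = [
    ("Incidental 6 mm lung nodule. Follow-up CT is recommended in 6 months.", True),
    ("Recommend dedicated thyroid ultrasound for incidental nodule.", True),
    ("No acute intrathoracic abnormality.", False),
    ("Fracture of the distal radius. No further imaging needed.", False),
]
sensitivity = sum(flags_additional_imaging(r) for r, y in reports if y) / 2
specificity = sum(not flags_additional_imaging(r) for r, y in reports if not y) / 2
```

Iterating on the pattern set against a growing labeled corpus is exactly the train-validate cycle the study repeated three times.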

  5. Applying Evidence-Based Medicine in Telehealth: An Interactive Pattern Recognition Approximation

    PubMed Central

    Fernández-Llatas, Carlos; Meneu, Teresa; Traver, Vicente; Benedi, José-Miguel

    2013-01-01

    Born in the early nineteen nineties, evidence-based medicine (EBM) is a paradigm intended to promote the integration of biomedical evidence into physicians' daily practice. This paradigm requires the continuous study of diseases to provide the best scientific knowledge for closely supporting physicians in their diagnoses and treatments. Within this paradigm, health experts usually create and publish clinical guidelines, which provide holistic guidance for the care of a certain disease. The creation of these clinical guidelines requires laborious iterative processes, in which each iteration represents scientific progress in the knowledge of the disease. To perform this guidance through telehealth, the use of formal clinical guidelines will allow the building of care processes that can be interpreted and executed directly by computers. In addition, the formalization of clinical guidelines makes it possible to build automatic methods, using pattern recognition techniques, to estimate the proper models, as well as the mathematical models for optimizing the iterative cycle for the continuous improvement of the guidelines. However, to ensure the efficiency of the system, it is necessary to build a probabilistic model of the problem. In this paper, an interactive pattern recognition approach to support professionals in evidence-based medicine is formalized. PMID:24185841

  6. An efficient numerical algorithm for transverse impact problems

    NASA Technical Reports Server (NTRS)

    Sankar, B. V.; Sun, C. T.

    1985-01-01

    Transverse impact problems in which elastic and plastic indentation effects are considered involve a nonlinear integral equation for the contact force, which, in practice, is usually solved by an iterative scheme with small increments in time. In this paper, a numerical method is proposed wherein the iterations of the nonlinear problem are separated from the structural response computations. This makes the numerical procedures much simpler and also more efficient. The proposed method is applied to some impact problems for which solutions are available, and the results are found to be in good agreement. The effect of the magnitude of the time increment on the results is also discussed.
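
    Within one time increment, separating the contact-force iteration from the structural response reduces to a scalar fixed-point problem of the following shape; the Hertzian constant and the constant compliance standing in for the structural response are hypothetical:

```python
def contact_force(d0, k=1e9, c=1e-8, tol=1e-8, max_iter=500):
    # Hertzian contact law F = k * delta^(3/2); the indentation delta is
    # coupled to the force through a hypothetical constant structural
    # compliance c: delta = d0 - c * F. Fixed-point iteration with
    # under-relaxation solves the nonlinear equation at one time increment.
    F = 0.0
    for _ in range(max_iter):
        delta = max(d0 - c * F, 0.0)
        F_new = k * delta ** 1.5
        if abs(F_new - F) < tol * max(F_new, 1.0):
            return F_new
        F = 0.5 * F + 0.5 * F_new   # relaxation keeps the iteration stable
    return F
```

The structural response enters only through the compliance term, so the (expensive) structure computation stays outside the inner iteration, which is the point of the proposed separation.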

  7. Flow balancing orifice for ITER toroidal field coil

    NASA Astrophysics Data System (ADS)

    Litvinovich, A. V.; Y Rodin, I.; Kovalchuk, O. A.; Safonov, A. V.; Stepanov, D. B.; Guryeva, T. M.

    2017-12-01

    Flow balancing orifices (FBOs) are used in the International Thermonuclear Experimental Reactor (ITER) Toroidal Field coil to make the flow rate of cooling gas uniform across the side double pancakes, which have different conductor lengths: 99 m and 305 m, respectively. FBOs consist of straight parts and elbows produced from 316L stainless steel tube 21.34 × 2.11 mm, and orifices made from 316L stainless steel rod. The right and left FBOs each contain 6 orifices; straight FBOs contain 4 and 6 orifices. Before manufacturing the qualification samples, the D.V. Efremov Institute of Electrophysical Apparatus (JSC NIIEFA) proposed to ITER a new approach to provide a seamless connection between a tube and a plate, so the most critical weld, between the 1 mm thick orifice and the tube, was removed from the final FBO design. The proposed orifice diameter is three times smaller than the minimum requirement of ISO 5167, so the task was to determine the accuracy of the calculated flow characteristics at room temperature and compare them with experimental data. In 2015 the qualification samples of flow balancing orifices were produced and tested. The experimental data showed that the deviation from the calculated data is less than 7%. Based on this result and other tests, ITER approved the design of the FBOs, which made it possible to start serial production. In 2016 JSC NIIEFA delivered 50 FBOs to ITER: 24 left-side, 24 right-side, and 2 straight FBOs. To verify the quality of the FBOs, a test facility was prepared at JSC NIIEFA. Helium tightness tests at 10⁻⁹ m³·Pa/s at pressures up to 3 MPa, flow rate measurements at various pressure drops, and non-destructive tests of the orifices and weld seams (ISO 5817, class B) were conducted. Other tests, such as dimensional checks and thermal cycling from 300 K to 80 K and back to 300 K, were also carried out for each FBO.

  8. Adaptive Management for Urban Watersheds: The Slavic Village Pilot Project

    EPA Science Inventory

    Adaptive management is an environmental management strategy that uses an iterative process of decision-making to reduce the uncertainty in environmental management via system monitoring. A central tenet of adaptive management is that management involves a learning process that ca...

  9. A novel decoding algorithm based on the hierarchical reliable strategy for SCG-LDPC codes in optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong

    2013-11-01

    An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with a reliability judgment, and can greatly reduce the number of variable nodes involved in the subsequent iteration process and accelerate the convergence rate. Simulation results for the SCG-LDPC(3969,3720) code show that the novel HRBP decoding algorithm can greatly reduce the computational load while maintaining performance, compared with the traditional belief propagation (BP) algorithm. The bit error rate (BER) of the HRBP algorithm is comparable at the threshold value of 15, but in the subsequent iteration process the number of variable nodes for the HRBP algorithm can be reduced by about 70% at high signal-to-noise ratio (SNR) compared with the BP algorithm. When the threshold value is further increased, the HRBP algorithm gradually degenerates into the layered-BP algorithm, but at a BER of 10⁻⁷ and a maximal iteration number of 30, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB more than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. Therefore, the novel HRBP decoding algorithm is well suited to optical communication systems.

  10. The neural correlates of theory of mind and their role during empathy and the game of chess: A functional magnetic resonance imaging study.

    PubMed

    Powell, Joanne L; Grossi, Davide; Corcoran, Rhiannon; Gobet, Fernand; García-Fiñana, Marta

    2017-07-04

    Chess involves the capacity to reason iteratively about potential intentional choices of an opponent and therefore involves high levels of explicit theory of mind [ToM] (i.e. ability to infer mental states of others) alongside clear, strategic rule-based decision-making. Functional magnetic resonance imaging was used on 12 healthy male novice chess players to identify cortical regions associated with chess, ToM and empathizing. The blood-oxygenation-level-dependent (BOLD) response for chess and empathizing tasks was extracted from each ToM region. Results showed neural overlap between ToM, chess and empathizing tasks in right-hemisphere temporo-parietal junction (TPJ) [BA40], left-hemisphere superior temporal gyrus [BA22] and posterior cingulate gyrus [BA23/31]. TPJ is suggested to underlie the capacity to reason iteratively about another's internal state in a range of tasks. Areas activated by ToM and empathy included right-hemisphere orbitofrontal cortex and bilateral middle temporal gyrus: areas that become active when there is need to inhibit one's own experience when considering the internal state of another and for visual evaluation of action rationality. Results support previous findings, that ToM recruits a neural network with each region sub-serving a supporting role depending on the nature of the task itself. In contrast, a network of cortical regions primarily located within right- and left-hemisphere medial-frontal and parietal cortex, outside the internal representational network, was selectively recruited during the chess task. We hypothesize that in our cohort of novice chess players the strategy was to employ an iterative thinking pattern which in part involved mentalizing processes and recruited core ToM-related regions. Copyright © 2017. Published by Elsevier Ltd.

  11. iCycle: Integrated, multicriterial beam angle, and profile optimization for generation of coplanar and noncoplanar IMRT plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Breedveld, Sebastiaan; Storchi, Pascal R. M.; Voet, Peter W. J.

    2012-02-15

    Purpose: To introduce iCycle, a novel algorithm for integrated, multicriterial optimization of beam angles and intensity-modulated radiotherapy (IMRT) profiles. Methods: A multicriterial plan optimization with iCycle is based on a prescription called a wish-list, containing hard constraints and objectives with ascribed priorities. Priorities are ordinal parameters used for relative importance ranking of the objectives. The higher an objective's priority, the higher the probability that the corresponding objective will be met. Beam directions are selected from an input set of candidate directions. Input sets can be restricted, e.g., to allow only generation of coplanar plans, or to avoid collisions between patient/couch and the gantry in a noncoplanar setup. Obtaining clinically feasible calculation times was an important design criterion for the development of iCycle. This was realized by sequentially adding beams to the treatment plan in an iterative procedure. Each iteration loop starts with selection of the optimal direction to be added. Then, a Pareto-optimal IMRT plan is generated for the (fixed) beam setup that includes all directions selected so far, using a previously published algorithm for multicriterial optimization of fluence profiles for a fixed beam arrangement [Breedveld et al., Phys. Med. Biol. 54, 7199-7209 (2009)]. To select the next direction, each not-yet-selected candidate direction is temporarily added to the plan, and an optimization problem, derived from the Lagrangian obtained from the just-performed optimization for establishing the Pareto-optimal plan, is solved. For each patient, a single one-beam, two-beam, three-beam, etc., Pareto-optimal plan is generated until the addition of beams no longer results in significant plan quality improvement. Plan generation with iCycle is fully automated.
Results: The performance and characteristics of iCycle are demonstrated by generating plans for a maxillary sinus case, a cervical cancer patient, and a liver patient treated with SBRT. Plans generated with beam angle optimization met the clinical goals better than equiangular or manually selected configurations. For the maxillary sinus and liver cases, significant improvements were seen for noncoplanar setups. The cervix case showed that beam angle optimization with iCycle may also improve plan quality in coplanar IMRT setups. Computation times were around 1-2 h for coplanar plans and 4-7 h for noncoplanar plans, depending on the number of beams and the complexity of the site. Conclusions: Integrated beam angle and profile optimization with iCycle may result in significant improvements in treatment plan quality. Due to automation, the plan generation workload is minimal. Clinical application has started.
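
    Stripped of the radiotherapy physics, the beam-selection loop is greedy sequential subset selection: tentatively add each remaining candidate, score the re-optimized plan, keep the best, and stop when the gain becomes insignificant. A toy sketch with random "dose footprints" and plain least squares standing in for the wish-list-driven multicriterial optimization:

```python
import numpy as np

rng = np.random.default_rng(1)
target = rng.random(20)                  # desired dose pattern (toy stand-in)
candidates = rng.random((36, 20))        # dose footprints of 36 candidate angles

def plan_error(selected):
    # Least-squares fit of the target using the selected beams' footprints;
    # the weights play the role of the optimized fluence profiles.
    A = candidates[selected].T
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    return float(np.linalg.norm(A @ w - target))

def greedy_select(max_beams=5, min_gain=0.001):
    # Add one beam per outer iteration; each candidate is scored by fully
    # re-optimizing the plan that would include it.
    selected = []
    for _ in range(max_beams):
        best = min((i for i in range(len(candidates)) if i not in selected),
                   key=lambda i: plan_error(selected + [i]))
        if selected and plan_error(selected + [best]) > (1 - min_gain) * plan_error(selected):
            break                        # no significant improvement: stop adding beams
        selected.append(best)
    return selected
```

The stopping rule mirrors iCycle's "until addition of beams no longer results in significant plan quality improvement"; the real algorithm scores candidates via the Lagrangian of the preceding Pareto-optimal solve rather than by brute-force refitting.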

  12. An Iterative Learning Control Approach to Improving Fidelity in Internet-Distributed Hardware-in-the-Loop Simulation

    DTIC Science & Technology

    2012-06-15

    pp. 535-543. [17] Compere, M., Goodell, J., Simon, M., Smith, W., and Brudnak, M., 2006, "Robust Control Techniques Enabling Duty Cycle...Technical Paper, 2006-01-3077. [18] Goodell, J., Compere, M., Simon, M., Smith, W., Wright, R., and Brudnak, M., 2006, "Robust Control Techniques for...Smith, W., Compere, M., Goodell, J., Holtz, D., Mortsfield, T., and Shvartsman, A., 2007, "Soldier/Hardware-in-the-Loop Simulation-Based Combat Vehicle

  13. NEW Manning System Field Evaluation

    DTIC Science & Technology

    1986-12-15

    ...three-year life cycle geared to the first-term soldier's enlistment. In the majority of cases, these units were deployed OCONUS for a part of the unit's...soldiers of selected COHORT and nonCOHORT battalions and companies/batteries both in CONUS and USAREUR (five iterations over three years). The primary

  14. Migration without migraines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lines, L.; Burton, A.; Lu, H.X.

    Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves the use of methods such as normal moveout (NMO) analysis, seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantages and disadvantages. Conventional NMO methods are relatively inexpensive but basically require simplifying assumptions about the geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but is computationally expensive. In some cases, there is the opportunity to estimate vertical velocities by use of well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model which minimizes the depth differences between seismic images and formation depths at the well by using a least-squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."
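
    In its simplest form, tying the migration velocity to the well reduces to least squares on the depth-conversion velocity: choose v so that the imaged depths v·t/2 best match the logged formation depths (the traveltime picks and depths below are invented for illustration):

```python
# Hypothetical well tie: two-way traveltimes t (s) picked on the migrated
# image at the well, and formation depths z (m) from the well log.
t = [0.40, 0.82, 1.30]
z = [480.0, 990.0, 1570.0]

# Depth conversion z ~ v * t / 2; the least-squares velocity minimizes
# sum_i (v * t_i / 2 - z_i)^2, which has the closed-form solution below.
num = sum(zi * ti / 2.0 for ti, zi in zip(t, z))
den = sum((ti / 2.0) ** 2 for ti in t)
v = num / den                               # constant velocity, m/s
misfit = [v * ti / 2.0 - zi for ti, zi in zip(t, z)]
```

The full procedure inverts for a spatially varying model rather than a single scalar, but the depth-difference-minimizing principle is the same.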

  15. Eigensolutions of nonviscously damped systems based on the fixed-point iteration

    NASA Astrophysics Data System (ADS)

    Lázaro, Mario

    2018-03-01

    In this paper, nonviscous, nonproportional, symmetric vibrating structures are considered. Nonviscously damped systems present dissipative forces that depend on the time history of the response via kernel hereditary functions. Solution of the free motion equation leads to a nonlinear eigenvalue problem involving the mass, stiffness and damping matrices, the latter dependent on frequency. Viscous damping can be considered as a particular case, involving damping forces that are a function of the instantaneous velocity of the degrees of freedom. In this work, a new numerical procedure to compute eigensolutions is proposed. The method is based on the construction of certain recursive functions which, under an iterative scheme, allow eigenvalues and eigenvectors to be obtained simultaneously while avoiding the computation of eigensensitivities. Eigenvalues can then be read as fixed points of those functions. A deep analysis of the convergence is carried out, focusing especially on relating the convergence conditions and error-decay rate to the damping model features, such as the nonproportionality and the viscoelasticity. The method is validated using two 6-degree-of-freedom numerical examples involving both nonviscous and viscous damping, and a continuous system with a local nonviscous damper. The convergence and the behavior of the sequences are in agreement with the results foreseen by the theory.
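
    For a single degree of freedom with a one-term exponential (Biot) kernel, the flavor of such a fixed-point scheme can be sketched by freezing the hereditary damping at the current eigenvalue estimate, solving the resulting viscous problem, and repeating; this scalar illustration is ours, not the paper's recursive matrix-valued functions:

```python
import cmath

m, k = 1.0, 100.0          # mass and stiffness of a toy SDOF oscillator
c0, mu = 2.0, 50.0         # damping level and kernel relaxation rate

def damping(s):
    # Laplace-domain damping of a single-exponential (Biot) hereditary kernel;
    # the viscous case is recovered as mu -> infinity.
    return c0 * mu / (mu + s)

def solve_viscous(c):
    # Oscillatory eigenvalue of the frozen viscous problem m*s^2 + c*s + k = 0.
    disc = cmath.sqrt(c * c - 4.0 * m * k)
    s = (-c + disc) / (2.0 * m)
    return s if s.imag > 0 else (-c - disc) / (2.0 * m)

def fixed_point_eigenvalue(tol=1e-12, max_iter=100):
    # The nonviscous eigenvalue is a fixed point of
    # s -> viscous eigenvalue with damping evaluated at s.
    s = solve_viscous(damping(0.0))
    for _ in range(max_iter):
        s_new = solve_viscous(damping(s))
        if abs(s_new - s) < tol:
            return s_new
        s = s_new
    return s
```

The iterate converges because the damping varies slowly with s here; the paper's convergence analysis makes this dependence on the damping model precise.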

  16. Linkage between Researchers and Practitioners: A Qualitative Study.

    ERIC Educational Resources Information Center

    Huberman, Michael

    1990-01-01

    A multiple-case, "tracer" study was undertaken involving 11 research projects of the "Education et Vie Active" (Education and the Active Life)--a national vocational education program in Switzerland--to assess the importance of contacts between researchers and practitioners. Iterative data from interviews, observations, and…

  17. Designing Needs Statements in a Systematic Iterative Way

    ERIC Educational Resources Information Center

    Verstegen, D. M. L.; Barnard, Y. F.; Pilot, A.

    2009-01-01

    Designing specifications for technically advanced instructional products, such as e-learning, simulations or simulators requires different kinds of expertise. The SLIM method proposes to involve all stakeholders from the beginning in a series of workshops under the guidance of experienced instructional designers. These instructional designers…

  18. Designing a composite correlation filter based on iterative optimization of training images for distortion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Elbouz, M.; Alfalou, A.; Brosseau, C.

    2017-06-01

We present a novel method to optimize the discrimination ability and noise robustness of composite filters. The method is based on iterative preprocessing of the training images, which extracts boundary and detailed feature information from authentic training faces, thereby improving the peak-to-correlation energy (PCE) ratio of authentic faces and conferring immunity to intra-class variance and noise interference. By adding the training images directly, one can obtain a composite template with high discrimination ability and robustness for face recognition tasks. The proposed composite correlation filter does not involve the complicated mathematical analysis and computation often required in the design of correlation algorithms. Simulation tests have been conducted to check the effectiveness and feasibility of our proposal. Moreover, to assess the robustness of composite filters using receiver operating characteristic (ROC) curves, we devise a new method for counting true positive and false positive rates based on the difference between the PCE and a threshold.
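The PCE criterion itself is easy to state. The sketch below computes it for a toy composite filter built by summing conjugate training spectra over synthetic data; an equal-weight composite is assumed here for illustration, not the paper's iterative preprocessing.

```python
import numpy as np

rng = np.random.default_rng(0)

def pce(corr):
    """Peak-to-correlation-energy: squared peak over total plane energy."""
    peak = np.abs(corr).max()
    return peak**2 / np.sum(np.abs(corr)**2)

# Toy "faces": one reference pattern plus small intra-class perturbations.
base = rng.normal(size=(32, 32))
train = [base + 0.1 * rng.normal(size=base.shape) for _ in range(5)]

# Composite filter: conjugate of the summed training spectra
# (an equal-weight composite, assumed for this sketch).
H = np.conj(sum(np.fft.fft2(t) for t in train))

def correlate(img):
    return np.fft.ifft2(np.fft.fft2(img) * H)

authentic = base + 0.1 * rng.normal(size=base.shape)   # same class
impostor = rng.normal(size=base.shape)                 # unrelated pattern
# decision rule: classify as authentic when PCE exceeds a threshold
```

An authentic probe produces a sharp correlation peak and hence a high PCE, while an impostor yields a diffuse plane; the threshold on PCE is what the ROC counting method in the abstract sweeps.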

  19. Tigres Workflow Library: Supporting Scientific Pipelines on HPC Systems

    DOE PAGES

    Hendrix, Valerie; Fox, James; Ghoshal, Devarshi; ...

    2016-07-21

The growth in scientific data volumes has resulted in the need for new tools that enable users to operate on and analyze data on large-scale resources. In the last decade, a number of scientific workflow tools have emerged. These tools often target distributed environments, and often need expert help to compose and execute the workflows. Data-intensive workflows are often ad hoc; they involve an iterative development process in which users compose and test their workflows on desktops before scaling up to larger systems. In this paper, we present the design and implementation of Tigres, a workflow library that supports the iterative workflow development cycle of data-intensive workflows. Tigres provides an application programming interface to a set of programming templates (i.e., sequence, parallel, split, merge) that can be used to compose and execute computational and data pipelines. We discuss the results of our evaluation of scientific and synthetic workflows, showing that Tigres performs with minimal template overheads (a mean of 13 seconds over all experiments). We also discuss various factors (e.g., I/O performance, execution mechanisms) that affect the performance of scientific workflows on HPC systems.
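The four templates can be imagined as ordinary higher-order functions. The names below mirror the paper's vocabulary but are hypothetical stand-ins, not the actual Tigres API:

```python
from concurrent.futures import ThreadPoolExecutor

def sequence(data, tasks):
    """Run tasks one after another, threading the data through."""
    for task in tasks:
        data = task(data)
    return data

def parallel(items, task):
    """Apply one task to many inputs concurrently."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(task, items))

def split(data, fanout):
    """Hand the same input to `fanout` independent branches."""
    return [data] * fanout

def merge(results, combine):
    """Reduce the branch results back to a single value."""
    return combine(results)

# A tiny pipeline: clean, fan out, analyze in parallel, reduce.
raw = list(range(10))
cleaned = sequence(raw, [sorted, tuple])
branches = split(cleaned, 3)
partials = parallel(branches, sum)
total = merge(partials, max)
```

The point of such templates is exactly what the abstract claims: the same composition runs unchanged on a laptop or, with a different executor behind `parallel`, on an HPC resource.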

  20. Design principles for simulation games for learning clinical reasoning: A design-based research approach.

    PubMed

    Koivisto, J-M; Haavisto, E; Niemi, H; Haho, P; Nylund, S; Multisilta, J

    2018-01-01

    Nurses sometimes lack the competence needed for recognising deterioration in patient conditions and this is often due to poor clinical reasoning. There is a need to develop new possibilities for learning this crucial competence area. In addition, educators need to be future oriented; they need to be able to design and adopt new pedagogical innovations. The purpose of the study is to describe the development process and to generate principles for the design of nursing simulation games. A design-based research methodology is applied in this study. Iterative cycles of analysis, design, development, testing and refinement were conducted via collaboration among researchers, educators, students, and game designers. The study facilitated the generation of reusable design principles for simulation games to guide future designers when designing and developing simulation games for learning clinical reasoning. This study makes a major contribution to research on simulation game development in the field of nursing education. The results of this study provide important insights into the significance of involving nurse educators in the design and development process of educational simulation games for the purpose of nursing education. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Tigres Workflow Library: Supporting Scientific Pipelines on HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrix, Valerie; Fox, James; Ghoshal, Devarshi

The growth in scientific data volumes has resulted in the need for new tools that enable users to operate on and analyze data on large-scale resources. In the last decade, a number of scientific workflow tools have emerged. These tools often target distributed environments, and often need expert help to compose and execute the workflows. Data-intensive workflows are often ad hoc; they involve an iterative development process in which users compose and test their workflows on desktops before scaling up to larger systems. In this paper, we present the design and implementation of Tigres, a workflow library that supports the iterative workflow development cycle of data-intensive workflows. Tigres provides an application programming interface to a set of programming templates (i.e., sequence, parallel, split, merge) that can be used to compose and execute computational and data pipelines. We discuss the results of our evaluation of scientific and synthetic workflows, showing that Tigres performs with minimal template overheads (a mean of 13 seconds over all experiments). We also discuss various factors (e.g., I/O performance, execution mechanisms) that affect the performance of scientific workflows on HPC systems.

  2. Developing and implementing a heart failure data mart for research and quality improvement.

    PubMed

    Abu-Rish Blakeney, Erin; Wolpin, Seth; Lavallee, Danielle C; Dardas, Todd; Cheng, Richard; Zierler, Brenda

    2018-04-19

The purpose of this project was to build and formatively evaluate a near-real-time heart failure (HF) data mart. HF is a leading cause of hospital readmissions. Increased efforts to use data meaningfully may enable healthcare organizations to better evaluate effectiveness of care pathways and quality improvements, and to prospectively identify risk among HF patients. We followed a modified version of the Systems Development Life Cycle: 1) Conceptualization, 2) Requirements Analysis, 3) Iterative Development, and 4) Application Release. This foundational work reflects the first of a two-phase project. Phase two (in process) involves the implementation and evaluation of predictive analytics for clinical decision support. We engaged stakeholders to build working definitions and established automated processes for creating an HF data mart containing actionable information for diverse audiences. As of December 2017, the data mart contains information from over 175,000 distinct patients and >100 variables from each of their nearly 300,000 visits. The HF data mart will be used to enhance care, assist in clinical decision-making, and improve overall quality of care. This model holds the potential to be scaled and generalized beyond the initial focus and setting.

  3. Periodic Application of Stochastic Cost Optimization Methodology to Achieve Remediation Objectives with Minimized Life Cycle Cost

    NASA Astrophysics Data System (ADS)

    Kim, U.; Parker, J.

    2016-12-01

Many dense non-aqueous phase liquid (DNAPL) contaminated sites in the U.S. are reported as "remediation in progress" (RIP). However, the cost to complete (CTC) remediation at these sites is highly uncertain, and in many cases the current remediation plan may need to be modified or replaced to achieve remediation objectives. This study evaluates the effectiveness of iterative stochastic cost optimization that incorporates new field data for periodic parameter recalibration to incrementally reduce prediction uncertainty and implement remediation design modifications as needed to minimize the life cycle cost (i.e., CTC). This systematic approach, using the Stochastic Cost Optimization Toolkit (SCOToolkit), enables early identification and correction of problems to stay on track for completion while minimizing the expected (i.e., probability-weighted average) CTC. This study considers a hypothetical site involving multiple DNAPL sources in an unconfined aquifer, using thermal treatment for source reduction and electron donor injection for dissolved plume control. The initial design is based on stochastic optimization using model parameters and their joint uncertainty from calibration to site characterization data. The model is periodically recalibrated using new monitoring data and performance data for the operating remediation systems. Projected future performance under the current remediation plan is assessed and, depending on the results, operational variables of the current system are reoptimized or alternative designs are considered. We compare remediation duration and cost for the stepwise reoptimization approach with single-stage optimization as well as with a non-optimized design based on typical engineering practice.
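The probability-weighted (expected) CTC idea can be sketched with a one-parameter design and Monte Carlo samples of a single uncertain site parameter. The cost model and numbers below are invented purely for illustration and are unrelated to SCOToolkit:

```python
import numpy as np

rng = np.random.default_rng(3)

# Uncertain site parameter (e.g. normalized source mass), log-normal after
# an assumed calibration step.
samples = rng.lognormal(mean=0.0, sigma=0.5, size=2000)

def cost(duration_factor, mass):
    """Hypothetical cost model: capital cost grows with treatment intensity,
    and a large penalty accrues for any mass the design fails to treat."""
    capital = 100.0 * duration_factor
    shortfall = np.maximum(mass - duration_factor, 0.0)
    return capital + 500.0 * shortfall

# Stochastic optimization: pick the design minimizing the Monte Carlo
# estimate of the expected (probability-weighted) cost.
designs = np.linspace(0.5, 3.0, 26)
expected = [cost(d, samples).mean() for d in designs]
best = designs[int(np.argmin(expected))]
```

The optimum sits strictly inside the design range: under-designing incurs the failure penalty too often, over-designing wastes capital. In the iterative scheme of the abstract, `samples` would be regenerated from the recalibrated parameter distribution at each review period and the minimization repeated.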

  4. Upper limb stroke rehabilitation: the effectiveness of Stimulation Assistance through Iterative Learning (SAIL).

    PubMed

    Meadmore, Katie L; Cai, Zhonglun; Tong, Daisy; Hughes, Ann-Marie; Freeman, Chris T; Rogers, Eric; Burridge, Jane H

    2011-01-01

A novel system has been developed which combines robotic therapy with electrical stimulation (ES) for upper limb stroke rehabilitation. This technology, termed SAIL: Stimulation Assistance through Iterative Learning, employs advanced model-based iterative learning control (ILC) algorithms to precisely assist participants' completion of 3D tracking tasks with their impaired arm. Data is reported from a preliminary study with unimpaired participants, and also from a single hemiparetic stroke participant with reduced upper limb function who used the system in a clinical trial. All participants completed tasks which involved moving their (impaired) arm to follow an image of a slowly moving sphere along a trajectory. The participants' arms were supported by a robot and ES was applied to the triceps brachii and anterior deltoid muscles. During each task, the same tracking trajectory was repeated 6 times and ILC was used to compute the stimulation signals to be applied on the next iteration. Unimpaired participants took part in a single, one-hour training session and the stroke participant undertook 18 one-hour treatment sessions composed of tracking tasks varying in length, orientation and speed. The results reported describe changes in tracking ability and demonstrate the feasibility of the SAIL system for upper limb rehabilitation. © 2011 IEEE
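The core ILC update is compact. A minimal sketch on an assumed first-order plant (not the SAIL arm/stimulation model) shows the trial-to-trial error decay when the same trajectory is repeated:

```python
import numpy as np

# Toy first-order plant y[t+1] = a*y[t] + b*u[t]; the values are assumed
# for illustration only.
a, b, N = 0.2, 0.5, 50
ref = np.sin(np.linspace(0, np.pi, N))     # repeated tracking trajectory

def simulate(u):
    y = np.zeros(N)
    for t in range(N - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

u = np.zeros(N)                            # input profile, refined per trial
L = 0.8                                    # learning gain, chosen so |1 - b*L| < 1
errs = []
for trial in range(40):                    # one ILC iteration per repetition
    e = ref - simulate(u)
    errs.append(np.linalg.norm(e))
    u[:-1] += L * e[1:]                    # P-type ILC: feed back shifted error
```

Because the input at time t affects the output at t+1, the update uses the error shifted one step ahead; with |1 - b*L| < 1 the tracking error contracts from one repetition to the next, which is the mechanism SAIL exploits across the six repeats of each task.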

  5. Learning Efficient Sparse and Low Rank Models.

    PubMed

    Sprechmann, P; Bronstein, A M; Sapiro, G

    2015-09-01

Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing, with several orders of magnitude speed-up compared to the exact optimization algorithms.
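The iterative baseline this work accelerates is proximal descent. For the sparse case, a standard ISTA pursuit on synthetic data looks like the following; the paper's contribution, replacing many such data-dependent iterations with a learned fixed-depth network, is not shown here:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k, lam = 20, 50, 3, 0.05
A = rng.normal(size=(m, n)) / np.sqrt(m)       # dictionary
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = A @ x_true                                  # noiseless observation

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/Lipschitz constant of grad

def ista(iters):
    """Proximal-gradient (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    x = np.zeros(n)
    for _ in range(iters):
        x = soft(x - step * (A.T @ (A @ x - y)), lam * step)
    return x

objective = lambda x: 0.5 * np.sum((A @ x - y) ** 2) + lam * np.abs(x).sum()
x_hat = ista(200)
```

Each iteration is a linear map followed by a soft-threshold; unrolling a fixed, small number of such layers and training their matrices discriminatively yields the fixed-complexity pursuit processes the abstract describes.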

  6. Engineering Design of ITER Prototype Fast Plant System Controller

    NASA Astrophysics Data System (ADS)

    Goncalves, B.; Sousa, J.; Carvalho, B.; Rodrigues, A. P.; Correia, M.; Batista, A.; Vega, J.; Ruiz, M.; Lopez, J. M.; Rojo, R. Castro; Wallander, A.; Utzel, N.; Neto, A.; Alves, D.; Valcarcel, D.

    2011-08-01

The ITER control, data access and communication (CODAC) design team identified the need for two types of plant systems. A slow control plant system is based on industrial automation technology with maximum sampling rates below 100 Hz, and a fast control plant system is based on embedded technology with higher sampling rates and more stringent real-time requirements than that required for slow controllers. The latter is applicable to diagnostics and plant systems in closed-control loops whose cycle times are below 1 ms. Fast controllers will be dedicated industrial controllers with the ability to supervise other fast and/or slow controllers, interface to actuators and sensors and, if necessary, high performance networks. Two prototypes of a fast plant system controller specialized for data acquisition and constrained by ITER technological choices are being built using two different form factors. This prototyping activity contributes to the Plant Control Design Handbook effort of standardization, specifically regarding fast controller characteristics. Envisaging a general purpose fast controller design, diagnostic use cases with specific requirements were analyzed and will be presented along with the interface with CODAC and sensors. The requirements and constraints that real-time plasma control imposes on the design were also taken into consideration. Functional specifications and technology neutral architecture, together with its implications on the engineering design, were considered. The detailed engineering design compliant with ITER standards was performed and will be discussed in detail. Emphasis will be given to the integration of the controller in the standard CODAC environment. Requirements for the EPICS IOC providing the interface to the outside world, the prototype decisions on form factor, real-time operating system, and high-performance networks will also be discussed, as well as the requirements for data streaming to CODAC for visualization and archiving.

  7. Analysis of the ITER central solenoid insert (CSI) coil stability tests

    NASA Astrophysics Data System (ADS)

    Savoldi, L.; Bonifetto, R.; Breschi, M.; Isono, T.; Martovetsky, N.; Ozeki, H.; Zanino, R.

    2017-07-01

    At the end of the test campaign of the ITER Central Solenoid Insert (CSI) coil in 2015, after 16,000 electromagnetic (EM) cycles, some tests were devoted to the study of the conductor stability, through the measurement of the Minimum Quench Energy (MQE). The tests were performed by means of an inductive heater (IH), located in the high-field region of the CSI and wrapped around the conductor. The calorimetric calibration of the IH is presented here, aimed at assessing the energy deposited in the conductor for different values of the IH electrical operating conditions. The MQE of the conductor of the ITER CS module 3L can be estimated as ∼200 J ± 20%, deposited on the whole conductor on a length of ∼10 cm (the IH length) in ∼40 ms, at current and magnetic field conditions relevant for the ITER CS operation. The repartition of the energy deposited in the conductor under the IH is computed to be ∼10% in the cable and 90% in the jacket by means of a 3D Finite Elements EM model. It is shown how this repartition implies that the bundle (cable + helium) heat capacity is fully available for stability on the time scale of the tested disturbances. This repartition is used in input to the thermal-hydraulic analysis performed with the 4C code, to assess the capability of the model to accurately reproduce the stability threshold of the conductor. The MQE computed by the code for this disturbance is in good agreement with the measured value, with an underestimation within 15% of the experimental value.

  8. A fast, time-accurate unsteady full potential scheme

    NASA Technical Reports Server (NTRS)

    Shankar, V.; Ide, H.; Gorski, J.; Osher, S.

    1985-01-01

The unsteady form of the full potential equation is solved in conservation form by an implicit method based on approximate factorization. At each time level, internal Newton iterations are performed to achieve time accuracy and computational efficiency. A local time linearization procedure is introduced to provide a good initial guess for the Newton iteration. A novel flux-biasing technique is applied to generate proper forms of the artificial viscosity to treat hyperbolic regions with shocks and sonic lines present. The wake is properly modeled by accounting not only for jumps in phi, but also for jumps in higher derivatives of phi, obtained by requiring the density to be continuous across the wake. The far field is modeled using the Riemann invariants to simulate nonreflecting boundary conditions. The resulting unsteady method performs well, requiring fewer than 100 time steps per cycle at transonic Mach numbers even at low reduced frequencies of 0.1 or less. The code is fully vectorized for the CRAY-XMP and the VPS-32 computers.
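The structure of inner Newton iterations within each implicit time level can be illustrated on a scalar model equation; backward Euler on y' = -y^3 is an assumed stand-in here, not the full potential equation:

```python
import numpy as np

# Model problem y' = f(y) advanced by backward Euler; each time level
# solves the implicit relation y_new = y_old + dt*f(y_new) by Newton.
f = lambda y: -y ** 3
df = lambda y: -3.0 * y ** 2

def implicit_step(y_old, dt, newton_iters=5):
    y = y_old                        # local time linearization: warm start
    for _ in range(newton_iters):    # Newton on r(y) = y - y_old - dt*f(y)
        r = y - y_old - dt * f(y)
        y -= r / (1.0 - dt * df(y))  # r'(y) = 1 - dt*f'(y)
    return y

y, dt = 1.0, 0.1
for _ in range(100):                 # march to t = 10
    y = implicit_step(y, dt)
# exact solution for comparison: y(t) = 1/sqrt(1 + 2t)
```

Using the previous time level as the initial guess (the "local time linearization") makes a handful of Newton iterations per step sufficient, which is what keeps the implicit scheme both time-accurate and cheap.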

  9. Results of high heat flux qualification tests of W monoblock components for WEST

    NASA Astrophysics Data System (ADS)

    Greuner, H.; Böswirth, B.; Lipa, M.; Missirlian, M.; Richou, M.

    2017-12-01

    One goal of the WEST project (W Environment in Steady-state Tokamak) is the manufacturing, quality assessment and operation of ITER-like actively water-cooled divertor plasma facing components made of tungsten. Six W monoblock plasma facing units (PFUs) from different suppliers have been successfully evaluated in the high heat flux test facility GLADIS at IPP. Each PFU is equipped with 35 W monoblocks of an ITER-like geometry. However, the W blocks are made of different tungsten grades and the suppliers applied different bonding techniques between tungsten and the inserted Cu-alloy cooling tubes. The intention of the HHF test campaign was to assess the manufacturing quality of the PFUs on the basis of a statistical analysis of the surface temperature evolution of the individual W monoblocks during thermal loading with 100 cycles at 10 MW m-2. These tests confirm the non-destructive examinations performed by the manufacturer and CEA prior to the installation of the WEST platform, and no defects of the components were detected.

  10. A novel surrogate-based approach for optimal design of electromagnetic-based circuits

    NASA Astrophysics Data System (ADS)

    Hassan, Abdel-Karim S. O.; Mohamed, Ahmed S. A.; Rabie, Azza A.; Etman, Ahmed S.

    2016-02-01

    A new geometric design centring approach for optimal design of central processing unit-intensive electromagnetic (EM)-based circuits is introduced. The approach uses norms related to the probability distribution of the circuit parameters to find distances from a point to the feasible region boundaries by solving nonlinear optimization problems. Based on these normed distances, the design centring problem is formulated as a max-min optimization problem. A convergent iterative boundary search technique is exploited to find the normed distances. To alleviate the computation cost associated with the EM-based circuits design cycle, space-mapping (SM) surrogates are used to create a sequence of iteratively updated feasible region approximations. In each SM feasible region approximation, the centring process using normed distances is implemented, leading to a better centre point. The process is repeated until a final design centre is attained. Practical examples are given to show the effectiveness of the new design centring method for EM-based circuits.
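The max-min centring formulation is easy to reproduce on a toy feasible region with affine constraints, where the normed boundary distances have closed form. The constraints below are invented, and a brute-force grid search stands in for the paper's surrogate-assisted iterative boundary search:

```python
import numpy as np

# Toy feasible region for two "circuit parameters": the unit box cut by a
# diagonal spec line x0 + x1 <= 1.5 (purely illustrative constraints).
def normed_distances(x):
    """Distance from x to each constraint boundary; closed form here
    because every constraint g_i(x) <= 0 is affine:
    dist_i = -g_i(x) / ||grad g_i||."""
    x0, x1 = x
    return np.array([x0, 1.0 - x0, x1, 1.0 - x1,
                     (1.5 - x0 - x1) / np.sqrt(2.0)])

# Design centring as max-min: maximize the smallest boundary distance.
grid = np.linspace(0.0, 1.0, 201)
best_x, best_r = None, -np.inf
for x0 in grid:
    for x1 in grid:
        r = normed_distances((x0, x1)).min()
        if r > best_r:
            best_r, best_x = r, (x0, x1)
```

The centre lands on the symmetry line between the diagonal spec and the lower box edges, with the minimum normed distance (a yield-like margin) maximized; in the paper, each such centring pass runs on a space-mapping approximation of the expensive EM feasible region.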

  11. Performance analysis of the toroidal field ITER production conductors

    NASA Astrophysics Data System (ADS)

    Breschi, M.; Macioce, D.; Devred, A.

    2017-05-01

    The production of the superconducting cables for the toroidal field (TF) magnets of the ITER machine has recently been completed at the manufacturing companies selected during the previous qualification phase. The quality assurance/quality control programs that have been implemented to ensure production uniformity across numerous suppliers include performance tests of several conductor samples from selected unit lengths. The short full-size samples (4 m long) were subjected to DC and AC tests in the SULTAN facility at CRPP in Villigen, Switzerland. In a previous work the results of the tests of the conductor performance qualification samples were reported. This work reports the analyses of the results of the tests of the production conductor samples. The results reported here concern the values of current sharing temperature, critical current, effective strain and n-value from the DC tests and the energy dissipated per cycle from the AC loss tests. A detailed comparison is also presented between the performance of the conductors and that of their constituting strands.

  12. Development of MCAERO wing design panel method with interactive graphics module

    NASA Technical Reports Server (NTRS)

    Hawk, J. D.; Bristow, D. R.

    1984-01-01

    A reliable and efficient iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical pressure distribution. The design process is initialized by using MCAERO (MCAIR 3-D Subsonic Potential Flow Analysis Code) to analyze a baseline configuration. A second program DMCAERO is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter by applying a first-order expansion to the baseline equations in MCAERO. This matrix is calculated only once but is used in each iteration cycle to calculate the geometry perturbation and to analyze the perturbed geometry. The potential on the new geometry is calculated by linear extrapolation from the baseline solution. This extrapolated potential is converted to velocity by numerical differentiation, and velocity is converted to pressure by using Bernoulli's equation. There is an interactive graphics option which allows the user to graphically display the results of the design process and to interactively change either the geometry or the prescribed pressure distribution.
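The "compute the derivative matrix once, reuse it every cycle" idea is a frozen-Jacobian iteration. The sketch below uses an invented two-parameter analysis function in place of MCAERO/DMCAERO, with finite differences standing in for the analytic first-order expansion:

```python
import numpy as np

# Hypothetical nonlinear "analysis": geometry parameters -> surface pressures.
def analyze(g):
    return np.array([g[0] ** 2 + 0.5 * g[1],
                     np.sin(g[1]) + 0.1 * g[0]])

g0 = np.array([1.0, 0.5])            # baseline geometry
p_target = np.array([1.3, 0.7])      # prescribed pressure distribution

# Jacobian at the baseline, computed ONCE and reused in every cycle.
eps = 1e-6
J = np.column_stack([(analyze(g0 + eps * e) - analyze(g0)) / eps
                     for e in np.eye(2)])

g = g0.copy()
for _ in range(50):                  # design iteration cycles
    g = g + np.linalg.solve(J, p_target - analyze(g))
```

Freezing the Jacobian at the baseline sacrifices quadratic convergence for linear convergence, but each cycle then costs only one analysis and one linear solve, which is the economy the abstract describes.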

  13. Toward a first-principles integrated simulation of tokamak edge plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, C S; Klasky, Scott A; Cummings, Julian

    2008-01-01

Performance of ITER is anticipated to be highly sensitive to the edge plasma condition. The edge pedestal in ITER needs to be predicted from an integrated simulation of the necessary first-principles, multi-scale physics codes. The mission of the SciDAC Fusion Simulation Project (FSP) Prototype Center for Plasma Edge Simulation (CPES) is to deliver such a code integration framework by (1) building new kinetic codes XGC0 and XGC1, which can simulate the edge pedestal buildup; (2) using and improving the existing MHD codes ELITE, M3D-OMP, M3D-MPP and NIMROD, for study of large-scale edge instabilities called Edge Localized Modes (ELMs); and (3) integrating the codes into a framework using cutting-edge computer science technology. Collaborative effort among physics, computer science, and applied mathematics within CPES has created the first working version of the End-to-end Framework for Fusion Integrated Simulation (EFFIS), which can be used to study the pedestal-ELM cycles.

  14. Healing journey: a qualitative analysis of the healing experiences of Americans suffering from trauma and illness

    PubMed Central

    Scott, John Glenn; Warber, Sara L; Dieppe, Paul; Jones, David; Stange, Kurt C

    2017-01-01

    Objectives To elucidate pathways to healing for people having suffered injury to the integrity of their function as a human being. Methods A team of physician-analysts conducted thematic analyses of in-depth interviews of 23 patients who experienced healing, as identified by six primary care physicians purposefully selected as exemplary healers. Results People in the sample experienced healing journeys that spanned a spectrum from overcoming unspeakable trauma and then becoming healers themselves to everyday heroes functioning well despite ongoing serious health challenges. The degree and quality of suffering experienced by each individual is framed by contextual factors that include personal characteristics, timing of their initial or ongoing wounding in the developmental life cycle and prior and current relationships. In the healing journey, bridges from suffering are developed to healing resources/skills and connections to helpers outside themselves. These bridges often evolve in fits and starts and involve persistence and developing a sense of safety and trust. From the iteration between suffering and developing resources and connections, a new state emerges that involves hope, self-acceptance and helping others. Over time, this leads to healing that includes a sense of integrity and flourishing in the pursuit of meaningful goals and purpose. Conclusion Moving from being wounded, through suffering to healing, is possible. It is facilitated by developing safe, trusting relationships and by positive reframing that moves through the weight of responsibility to the ability to respond. PMID:28903969

  15. Determining solid-fluid interface temperature distribution during phase change of cryogenic propellants using transient thermal modeling

    NASA Astrophysics Data System (ADS)

    Bellur, K.; Médici, E. F.; Hermanson, J. C.; Choi, C. K.; Allen, J. S.

    2018-04-01

Control of boil-off of cryogenic propellants is a continuing technical challenge for long duration space missions. Predicting phase change rates of cryogenic liquids requires an accurate estimation of solid-fluid interface temperature distributions in regions where a contact line or a thin liquid film exists. This paper describes a methodology to predict inner wall temperature gradients, with and without evaporation, using discrete temperature measurements on the outer wall of a container. Phase change experiments with liquid hydrogen and methane in cylindrical test cells of various materials and sizes were conducted at the Neutron Imaging Facility at the National Institute of Standards and Technology. Two types of tests were conducted: the first involved thermal cycling of an evacuated cell (dry), and the second involved controlled phase change with cryogenic liquids (wet). During both types of tests, temperatures were measured using Si-diode sensors mounted on the exterior surface of the test cells. Heat is transferred to the test cell by conduction through a helium exchange gas and through the cryostat sample holder. Thermal conduction through the sample holder is shown to be the dominant mode, with the rate of heat transfer limited by six independent contact resistances. An iterative methodology is employed to determine the contact resistances between the various components of the cryostat stick insert, test cell and lid using the dry test data. After the contact resistances are established, inner wall temperature distributions during wet tests are calculated.
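In the simplest case, determining one contact resistance from exterior temperature data reduces to inverting a monotone forward model. The series-network numbers below are invented for illustration and are far simpler than the six-resistance cryostat model of the paper:

```python
# Invert a simple series thermal network for one unknown contact resistance:
# measured outer-wall temperature = sink temperature + Q * (R_known + R_contact).
T_cold, Q = 20.0, 2.0          # sink temperature [K], heat flow [W]
R_known = 1.5                  # K/W, sum of already-characterized resistances
T_outer_measured = 27.0        # measured outer-wall temperature [K]

def t_outer(r_contact):
    """Forward model: predicted outer-wall temperature."""
    return T_cold + Q * (R_known + r_contact)

# Bisection on the monotone forward model (an iterative fit in miniature).
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if t_outer(mid) < T_outer_measured:
        lo = mid
    else:
        hi = mid
r_contact = 0.5 * (lo + hi)
```

With several coupled unknowns, as in the cryostat, the same fit-predict-correct loop is applied iteratively across all dry-test measurements rather than one bracketed scalar at a time.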

  16. Optoelectronic Inner-Product Neural Associative Memory

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1993-01-01

Optoelectronic apparatus acts as an artificial neural network performing associative recall of binary images. The recall process is an iterative one involving optical computation of inner products between a binary input vector and one or more reference binary vectors in memory. The inner-product method requires far less memory space than the matrix-vector method.
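Inner-product recall can be demonstrated with bipolar vectors in a few lines; the following is a hypothetical numerical analogue of the optical computation, with invented reference patterns:

```python
import numpy as np

# Bipolar reference patterns held in "memory" (one per row).
refs = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                 [ 1,  1,  1,  1, -1, -1, -1, -1],
                 [ 1,  1, -1, -1,  1,  1, -1, -1]])

def recall(x, iters=5):
    for _ in range(iters):
        w = refs @ x              # inner products with each stored vector
        x = np.sign(refs.T @ w)   # inner-product-weighted sum, thresholded
    return x

noisy = refs[1].copy()
noisy[0] *= -1                    # corrupt one bit of the second pattern
```

Because each iteration stores only the reference vectors and their inner products with the input, the memory footprint scales with the number of stored patterns rather than with a full outer-product weight matrix, which is the saving the abstract notes.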

  17. WESTERN RESEARCH INSTITUTE CONTAINED RECOVERY OF OILY WASTES (CROW) PROCESS - ITER

    EPA Science Inventory

    This report summarizes the findings of an evaluation of the Contained Recovery of Oily Wastes (CROW) technology developed by the Western Research Institute. The process involves the injection of heated water into the subsurface to mobilize oily wastes, which are removed from the ...

  18. Learner Centred Design for a Hybrid Interaction Application

    ERIC Educational Resources Information Center

    Wood, Simon; Romero, Pablo

    2010-01-01

    Learner centred design methods highlight the importance of involving the stakeholders of the learning process (learners, teachers, educational researchers) at all stages of the design of educational applications and of refining the design through an iterative prototyping process. These methods have been used successfully when designing systems…

  19. Design Evolution of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) and Lessons Learned

    NASA Technical Reports Server (NTRS)

    Peabody, Hume; Peters, Carlton; Rodriguez, Juan; McDonald, Carson; Content, David A.; Jackson, Cliff

    2015-01-01

    The design of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) continues to evolve as each design cycle is analyzed. In 2012, two Hubble sized (2.4 m diameter) telescopes were donated to NASA from elsewhere in the Federal Government. NASA began investigating potential uses for these telescopes and identified WFIRST as a mission to benefit from these assets. With an updated, deeper, and sharper field of view than previous design iterations with a smaller telescope, the optical designs of the WFIRST instruments were updated and the mechanical and thermal designs evolved around the new optical layout. Beginning with Design Cycle 3, significant analysis efforts yielded a design and model that could be evaluated for Structural-Thermal-Optical-Performance (STOP) purposes for the Wide Field Imager (WFI) and provided the basis for evaluating the high level observatory requirements. Development of the Cycle 3 thermal model provided some valuable analysis lessons learned and established best practices for future design cycles. However, the Cycle 3 design did include some major liens and evolving requirements which were addressed in the Cycle 4 Design. Some of the design changes are driven by requirements changes, while others are optimizations or solutions to liens from previous cycles. Again in Cycle 4, STOP analysis was performed and further insights into the overall design were gained leading to the Cycle 5 design effort currently underway. This paper seeks to capture the thermal design evolution, with focus on major design drivers, key decisions and their rationale, and lessons learned as the design evolved.

  1. A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems

    NASA Astrophysics Data System (ADS)

    Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong

    2017-09-01

    In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three-dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation for the numerical solutions on two levels of grids (the current and previous grids), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. The resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to conveniently obtain the numerical solution with the desired accuracy. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples, including two smooth problems with both constant and variable coefficients, an H3-regular problem, and an anisotropic problem, are reported to show that the proposed method is much more efficient than the classical V-cycle and W-cycle multigrid methods. Finally, we present the reason why our method is highly efficient for solving these elliptic problems.
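
    The stopping rule described in this abstract can be sketched as follows. This is a generic Jacobi-preconditioned conjugate gradient with a relative-residual test (not the authors' implementation), applied to a tiny 1D Poisson system; all helper names are ours.

```python
def matvec(A, v):
    return [sum(aij * vj for aij, vj in zip(row, v)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def jcg(A, b, x0, rtol=1e-10, max_iter=200):
    """Jacobi-preconditioned CG with a relative-residual stopping test,
    in place of a fixed iteration count."""
    minv = [1.0 / A[i][i] for i in range(len(b))]   # Jacobi preconditioner
    x = list(x0)
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    z = [mi * ri for mi, ri in zip(minv, r)]
    p = list(z)
    rz = dot(r, z)
    r0 = dot(r, r) ** 0.5 or 1.0                    # initial residual norm
    for _ in range(max_iter):
        if dot(r, r) ** 0.5 / r0 <= rtol:           # relative residual test
            break
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        z = [mi * ri for mi, ri in zip(minv, r)]
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# 1D Poisson stencil [-1, 2, -1] on 4 interior points; exact solution is all ones
A = [[2.0, -1.0, 0.0, 0.0],
     [-1.0, 2.0, -1.0, 0.0],
     [0.0, -1.0, 2.0, -1.0],
     [0.0, 0.0, -1.0, 2.0]]
b = [1.0, 0.0, 0.0, 1.0]
x = jcg(A, b, [0.0] * 4)   # converges to [1, 1, 1, 1]
```

    In the paper's method the zero initial guess above would instead be the extrapolated coarse-grid solution, which is what makes so few JCG iterations suffice.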

  2. Implementing partnership-driven clinical federated electronic health record data sharing networks.

    PubMed

    Stephens, Kari A; Anderson, Nicholas; Lin, Ching-Ping; Estiri, Hossein

    2016-09-01

    Building federated data sharing architectures requires supporting a range of data owners, effective and validated semantic alignment between data resources, and a consistent focus on end-users. Establishing these resources requires development methodologies that support internal validation of data extraction and translation processes, sustain meaningful partnerships, and deliver clear and measurable system utility. We describe findings from two federated data sharing case examples that detail critical factors, shared outcomes, and production environment results. Two federated data sharing pilot architectures developed to support network-based research associated with the University of Washington's Institute of Translational Health Sciences provided the basis for the findings. A spiral model for implementation and evaluation was used to structure iterations of development and to support knowledge sharing between the two network development teams, which collaborated to support and manage common stages. We found that using a spiral model of software development and multiple cycles of iteration was effective in achieving early network design goals. Both networks required time- and resource-intensive efforts to establish a trusted environment in which to create the data sharing architectures. Both networks were challenged by the need for adaptive use cases to define and test utility. An iterative cyclical model of development provided a process for developing trust with data partners and refining the design, and supported measurable success in the development of new federated data sharing architectures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. Pigeons ("Columba Livia") Approach Nash Equilibrium in Experimental Matching Pennies Competitions

    ERIC Educational Resources Information Center

    Sanabria, Federico; Thrailkill, Eric

    2009-01-01

    The game of Matching Pennies (MP), a simplified version of the more popular Rock, Paper, Scissors, schematically represents competitions between organisms with incentives to predict each other's behavior. Optimal performance in iterated MP competitions involves the production of random choice patterns and the detection of nonrandomness in the…
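
    The equilibrium logic can be illustrated with a short simulation (ours, not the study's procedure): a "matcher" best-responds to the opponent's observed bias, so only the 50/50 Nash mixture avoids exploitation.

```python
import random

random.seed(1)

def play_matching_pennies(p_heads, rounds=50000):
    """Return the matcher's win rate against a player who picks heads
    with probability p_heads; the matcher wins when the choices match."""
    wins = 0
    heads_seen = 0
    for t in range(1, rounds + 1):
        # matcher best-responds to the empirical heads frequency so far
        guess = 'H' if heads_seen * 2 >= t - 1 else 'T'
        choice = 'H' if random.random() < p_heads else 'T'
        heads_seen += choice == 'H'
        wins += guess == choice
    return wins / rounds

fair = play_matching_pennies(0.5)    # ~0.5: random play is unexploitable
biased = play_matching_pennies(0.9)  # well above 0.5: the bias is exploited
```

    Detecting nonrandomness in the opponent, as the abstract describes, is exactly what pushes the biased player's loss rate up and drives both players toward the mixed-strategy Nash equilibrium.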

  4. Citizen science in natural resources: Lessons learned from stakeholder engagement in participatory research using collaborative adaptive management

    USDA-ARS?s Scientific Manuscript database

    Under the traditional “loading-dock” model of research, stakeholders are involved in determining priorities prior to research activities and then receive one-way communication about findings after research is completed. This approach lacks iterative engagement of stakeholders during the research pro...

  5. First Steps in Computational Systems Biology: A Practical Session in Metabolic Modeling and Simulation

    ERIC Educational Resources Information Center

    Reyes-Palomares, Armando; Sanchez-Jimenez, Francisca; Medina, Miguel Angel

    2009-01-01

    A comprehensive understanding of biological functions requires new systemic perspectives, such as those provided by systems biology. Systems biology approaches are hypothesis-driven and involve iterative rounds of model building, prediction, experimentation, model refinement, and development. Developments in computer science are allowing for ever…

  6. Designing Instructor-Led Schools with Rapid Prototyping.

    ERIC Educational Resources Information Center

    Lange, Steven R.; And Others

    1996-01-01

    Rapid prototyping involves abandoning many of the linear steps of traditional prototyping; it is instead a series of design iterations representing each major stage. This article describes the development of an instructor-led course for midlevel auditors using the principles and procedures of rapid prototyping, focusing on the savings in time and…

  7. Development and Validation of the Homeostasis Concept Inventory

    ERIC Educational Resources Information Center

    McFarland, Jenny L.; Price, Rebecca M.; Wenderoth, Mary Pat; Martinková, Patrícia; Cliff, William; Michael, Joel; Modell, Harold; Wright, Ann

    2017-01-01

    We present the Homeostasis Concept Inventory (HCI), a 20-item multiple-choice instrument that assesses how well undergraduates understand this critical physiological concept. We used an iterative process to develop a set of questions based on elements in the Homeostasis Concept Framework. This process involved faculty experts and undergraduate…

  8. Music Regions and Mental Maps: Teaching Cultural Geography

    ERIC Educational Resources Information Center

    Shobe, Hunter; Banis, David

    2010-01-01

    Music informs understandings of place and is an excellent vehicle for teaching cultural geography. A study was developed of geography students' perception of where music genres predominate in the United States. Its approach, involving mental map exercises, reveals the usefulness and importance of maps as an iterative process in teaching cultural…

  9. Entanglement-assisted quantum quasicyclic low-density parity-check codes

    NASA Astrophysics Data System (ADS)

    Hsieh, Min-Hsiu; Brun, Todd A.; Devetak, Igor

    2009-03-01

    We investigate the construction of quantum low-density parity-check (LDPC) codes from classical quasicyclic (QC) LDPC codes with girth greater than or equal to 6. We have shown that the classical codes in the generalized Calderbank-Shor-Steane construction do not need to satisfy the dual-containing property as long as preshared entanglement is available to both sender and receiver. We can use this to avoid the many 4-cycles which typically arise in dual-containing LDPC codes. The advantage of such quantum codes comes from the use of efficient decoding algorithms such as the sum-product algorithm (SPA). It is well known that in the SPA, cycles of length 4 make successive decoding iterations highly correlated and hence limit the decoding performance. We show the principle of constructing quantum QC-LDPC codes which require only small amounts of initial shared entanglement.
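
    The girth condition is easy to check: a length-4 cycle in the Tanner graph exists exactly when two rows of the parity-check matrix share ones in at least two columns. A minimal sketch of that test (our illustration, not the paper's QC construction):

```python
def has_four_cycle(H):
    """True iff the Tanner graph of parity-check matrix H contains a 4-cycle,
    i.e. two rows of H overlap in two or more columns."""
    rows = [set(j for j, v in enumerate(row) if v) for row in H]
    return any(len(rows[a] & rows[b]) >= 2
               for a in range(len(rows)) for b in range(a + 1, len(rows)))

H_bad = [[1, 1, 0],
         [1, 1, 1]]        # rows share columns 0 and 1 -> 4-cycle
H_good = [[1, 1, 0, 0],
          [1, 0, 1, 0],
          [0, 1, 0, 1]]    # any two rows overlap in at most one column
```

    Girth ≥ 6 means exactly that no pair of rows overlaps twice, which is what keeps successive SPA iterations from becoming highly correlated.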

  10. Additive Manufacturing and High-Performance Computing: a Disruptive Latent Technology

    NASA Astrophysics Data System (ADS)

    Goodwin, Bruce

    2015-03-01

    This presentation will discuss the relationship between recent advances in Additive Manufacturing (AM) technology, High-Performance Computing (HPC) simulation and design capabilities, and related advances in Uncertainty Quantification (UQ), and then examines their impacts upon national and international security. The presentation surveys how AM accelerates the fabrication process, while HPC combined with UQ provides a fast track for the engineering design cycle. The combination of AM and HPC/UQ almost eliminates the engineering design and prototype iterative cycle, thereby dramatically reducing cost of production and time-to-market. These methods thereby present significant benefits for US national interests, both civilian and military, in an age of austerity. Finally, considering cyber security issues and the advent of the ``cloud,'' these disruptive, currently latent technologies may well enable proliferation and so challenge both nuclear and non-nuclear aspects of international security.

  11. Hirshfeld atom refinement.

    PubMed

    Capelli, Silvia C; Bürgi, Hans-Beat; Dittrich, Birger; Grabowsky, Simon; Jayatilaka, Dylan

    2014-09-01

    Hirshfeld atom refinement (HAR) is a method which determines structural parameters from single-crystal X-ray diffraction data by using an aspherical atom partitioning of tailor-made ab initio quantum mechanical molecular electron densities without any further approximation. Here the original HAR method is extended by implementing an iterative procedure of successive cycles of electron density calculations, Hirshfeld atom scattering factor calculations and structural least-squares refinements, repeated until convergence. The importance of this iterative procedure is illustrated via the example of crystalline ammonia. The new HAR method is then applied to X-ray diffraction data of the dipeptide Gly-l-Ala measured at 12, 50, 100, 150, 220 and 295 K, using Hartree-Fock and BLYP density functional theory electron densities and three different basis sets. All positions and anisotropic displacement parameters (ADPs) are freely refined without constraints or restraints - even those for hydrogen atoms. The results are systematically compared with those from neutron diffraction experiments at the temperatures 12, 50, 150 and 295 K. Although non-hydrogen-atom ADPs differ by up to three combined standard uncertainties (csu's), all other structural parameters agree within less than 2 csu's. Using our best calculations (BLYP/cc-pVTZ, recommended for organic molecules), the accuracy of determining bond lengths involving hydrogen atoms from HAR is better than 0.009 Å for temperatures of 150 K or below; for hydrogen-atom ADPs it is better than 0.006 Å(2) as judged from the mean absolute X-ray minus neutron differences. These results are among the best ever obtained. Remarkably, the precision of determining bond lengths and ADPs for the hydrogen atoms from the HAR procedure is comparable with that from the neutron measurements - an outcome which is obtained with a routinely achievable resolution of the X-ray data of 0.65 Å.

  12. Hirshfeld atom refinement

    PubMed Central

    Capelli, Silvia C.; Bürgi, Hans-Beat; Dittrich, Birger; Grabowsky, Simon; Jayatilaka, Dylan

    2014-01-01

    Hirshfeld atom refinement (HAR) is a method which determines structural parameters from single-crystal X-ray diffraction data by using an aspherical atom partitioning of tailor-made ab initio quantum mechanical molecular electron densities without any further approximation. Here the original HAR method is extended by implementing an iterative procedure of successive cycles of electron density calculations, Hirshfeld atom scattering factor calculations and structural least-squares refinements, repeated until convergence. The importance of this iterative procedure is illustrated via the example of crystalline ammonia. The new HAR method is then applied to X-ray diffraction data of the dipeptide Gly–l-Ala measured at 12, 50, 100, 150, 220 and 295 K, using Hartree–Fock and BLYP density functional theory electron densities and three different basis sets. All positions and anisotropic displacement parameters (ADPs) are freely refined without constraints or restraints – even those for hydrogen atoms. The results are systematically compared with those from neutron diffraction experiments at the temperatures 12, 50, 150 and 295 K. Although non-hydrogen-atom ADPs differ by up to three combined standard uncertainties (csu’s), all other structural parameters agree within less than 2 csu’s. Using our best calculations (BLYP/cc-pVTZ, recommended for organic molecules), the accuracy of determining bond lengths involving hydrogen atoms from HAR is better than 0.009 Å for temperatures of 150 K or below; for hydrogen-atom ADPs it is better than 0.006 Å2 as judged from the mean absolute X-ray minus neutron differences. These results are among the best ever obtained. Remarkably, the precision of determining bond lengths and ADPs for the hydrogen atoms from the HAR procedure is comparable with that from the neutron measurements – an outcome which is obtained with a routinely achievable resolution of the X-ray data of 0.65 Å. PMID:25295177

  13. Introducing soft systems methodology plus (SSM+): why we need it and what it can contribute.

    PubMed

    Braithwaite, Jeffrey; Hindle, Don; Iedema, Rick; Westbrook, Johanna I

    2002-01-01

    There are many complicated and seemingly intractable problems in the health care sector. Past ways to address them have involved political responses, economic restructuring, biomedical and scientific studies, and managerialist or business-oriented tools. Few methods have enabled us to develop a systematic response to problems. Our version of soft systems methodology, SSM+, seems to improve problem solving processes by providing an iterative, staged framework that emphasises collaborative learning and systems redesign involving both technical and cultural fixes.

  14. Introduction

    USGS Publications Warehouse

    Friend, Milton; Franson, J. Christian; Friend, Milton; Gibbs, Samantha E.J.; Wild, Margaret A.

    2015-01-01

    This is the third iteration of the National Wildlife Health Center's (NWHC) field guide developed primarily to assist field managers and biologists address diseases they encounter. By itself, the first iteration, “Field Guide of Wildlife Diseases: General Field Procedures and Diseases of Migratory Birds,” was simply another addition to an increasing array of North American field guides and other publications focusing on disease in free-ranging wildlife populations. Collectively, those publications were reflecting the ongoing transition in the convergence of wildlife management and wildlife disease as foundational components within the structure of wildlife conservation as a social enterprise serving the stewardship of our wildlife resources. For context, it is useful to consider those publications relative to a timeline of milestones involving the evolution of wildlife conservation in North America.

  15. Modelling of radiation impact on ITER Beryllium wall

    NASA Astrophysics Data System (ADS)

    Landman, I. S.; Janeschitz, G.

    2009-04-01

    In the ITER H-Mode confinement regime, edge localized instabilities (ELMs) will perturb the discharge. Plasma lost after each ELM moves along magnetic field lines and impacts the divertor armour, causing plasma contamination by back-propagating eroded carbon or tungsten. These impurities produce an enhanced radiation flux distributed mainly over the beryllium main chamber wall. The simulation of the complicated processes involved is the subject of the integrated tokamak code TOKES, which is currently under development. This work describes the new TOKES model for radiation transport through confined plasma. Equations for level populations of the multi-fluid plasma species and the propagation of different kinds of radiation (resonance, recombination and bremsstrahlung photons) are implemented. First simulation results without accounting for resonance lines are presented.

  16. Combining living anionic polymerization with branching reactions in an iterative fashion to design branched polymers.

    PubMed

    Higashihara, Tomoya; Sugiyama, Kenji; Yoo, Hee-Soo; Hayashi, Mayumi; Hirao, Akira

    2010-06-16

    This paper reviews the precise synthesis of many-armed and multi-compositional star-branched polymers, exact graft (co)polymers, and structurally well-defined dendrimer-like star-branched polymers, which are synthetically difficult, by a common iterative methodology combining living anionic polymerization with branching reactions. The methodology basically involves only two synthetic steps: (a) preparation of a polymeric building block corresponding to each branched polymer and (b) connection of the resulting building unit to another unit. The synthetic steps were repeated in a stepwise fashion several times to successively synthesize a series of well-defined target branched polymers. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Non-iterative characterization of few-cycle laser pulses using flat-top gates.

    PubMed

    Selm, Romedi; Krauss, Günther; Leitenstorfer, Alfred; Zumbusch, Andreas

    2012-03-12

    We demonstrate a method for broadband laser pulse characterization based on a spectrally resolved cross-correlation with a narrowband flat-top gate pulse. Excellent phase-matching by collinear excitation in a microscope focus is exploited by degenerate four-wave mixing in a microscope slide. Direct group delay extraction of an octave spanning spectrum which is generated in a highly nonlinear fiber allows for spectral phase retrieval. The validity of the technique is supported by the comparison with an independent second-harmonic fringe-resolved autocorrelation measurement for an 11 fs laser pulse.

  18. Pricing and reimbursement experiences and insights in the European Union and the United States: Lessons learned to approach adaptive payer pathways.

    PubMed

    Faulkner, S D; Lee, M; Qin, D; Morrell, L; Xoxi, E; Sammarco, A; Cammarata, S; Russo, P; Pani, L; Barker, R

    2016-12-01

    Earlier patient access to beneficial therapeutics that addresses unmet need is one of the main requirements of innovation in global healthcare systems already burdened by unsustainable budgets. "Adaptive pathways" encompass earlier cross-stakeholder engagement, regulatory tools, and iterative evidence generation through the life cycle of the medicinal product. A key enabler of earlier patient access is through more flexible and adaptive payer approaches to pricing and reimbursement that reflect the emerging evidence generated. © 2016 American Society for Clinical Pharmacology and Therapeutics.

  19. CDC-reported assisted reproductive technology live-birth rates may mislead the public.

    PubMed

    Kushnir, Vitaly A; Choi, Jennifer; Darmon, Sarah K; Albertini, David F; Barad, David H; Gleicher, Norbert

    2017-08-01

    The Centers for Disease Control and Prevention (CDC) publicly reports assisted reproductive technology live-birth rates (LBR) for each US fertility clinic under legal mandate. The 2014 CDC report excluded 35,406 of 184,527 (19.2%) autologous assisted reproductive technology cycles that involved embryo or oocyte banking from LBR calculations. This study calculated 2014 total clinic LBR for all patients utilizing autologous oocytes in two ways: including all initiated assisted reproductive technology cycles, or excluding banking cycles, as done by the CDC. The main limitation of this analysis is that the CDC report did not differentiate cycles involving long-term banking of embryos or oocytes for fertility preservation from cycles involving short-term embryo banking. Twenty-seven of 458 (6%) clinics reported that over 40% of autologous cycles involved banking, collectively performing 12% of all US assisted reproductive technology cycles. LBR in these outlier clinics, calculated by the CDC method, was higher than in the other 94% of clinics (33.1% versus 31.1%). However, recalculated LBR including banking cycles in the outlier clinics was lower than in the other 94% of clinics (15.5% versus 26.6%). LBR calculated by the two methods increasingly diverged with the proportion of banking cycles performed by each clinic, reaching a 4.5-fold difference, thereby potentially misleading the public. Copyright © 2017 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
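
    The divergence between the two reporting methods is pure denominator arithmetic; a sketch with hypothetical clinic numbers (ours, not CDC data):

```python
def lbr_excluding_banking(live_births, total_cycles, banking_cycles):
    """LBR as reported: banking cycles are dropped from the denominator."""
    return live_births / (total_cycles - banking_cycles)

def lbr_including_banking(live_births, total_cycles):
    """LBR over all initiated cycles, banking included."""
    return live_births / total_cycles

# hypothetical clinic: 1000 initiated cycles, 500 of them banking, 150 live births
ex = lbr_excluding_banking(150, 1000, 500)   # 0.30 under the reported method
inc = lbr_including_banking(150, 1000)       # 0.15 over all initiated cycles
```

    With half the cycles banked, the two figures differ two-fold for the same clinic, which is how the outlier clinics in the study could look better than average under one method and worse under the other.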

  20. Using narrative-based design scaffolds within a mobile learning environment to support learning outdoors with young children

    NASA Astrophysics Data System (ADS)

    Seely, Brian J.

    This study aims to advance learning outdoors with mobile devices. As part of the ongoing Tree Investigators design-based research study, this research investigated a mobile application to support observation, identification, and explanation of the tree life cycle within an authentic, outdoor setting. Recognizing the scientific and conceptual complexity of this topic for young children, the design incorporated technological and design scaffolds within a narrative-based learning environment. In an effort to support learning, 14 participants (aged 5-9) were guided through the mobile app on tree life cycles by a comic-strip pedagogical agent, "Nutty the Squirrel", as they looked to explore and understand through guided observational practices and artifact creation tasks. In comparison to previous iterations of this DBR study, the overall patterns of talk found in this study were similar, with perceptual and conceptual talk being the first and second most frequently coded categories, respectively. However, this study coded considerably more instances of affective talk. This finding of the higher frequency of affective talk could possibly be explained by the relatively younger age of this iteration's participants, in conjunction with the introduced pedagogical agent, who elicited playfulness and delight from the children. The results also indicated a significant improvement when comparing the pretest results (mean score of .86) with the posttest results (mean score of 4.07, out of 5). Learners were not only able to recall the phases of a tree life cycle, but list them in the correct order. The comparison reports a significant increase, showing evidence of increased knowledge and appropriation of scientific vocabulary. The finding suggests the narrative was effective in structuring the complex material into a story for sense making. 
Future research with narratives should consider a design to promote learner agency through more interactions with the pedagogical agent and a conditional branching scenario framework to further evoke interest and engagement.

  1. Mixed plasma species effects on Tungsten

    NASA Astrophysics Data System (ADS)

    Baldwin, Matt; Doerner, Russ; Nishijima, Daisuke; Ueda, Yoshio

    2007-11-01

    The diverted reactor exhaust in confinement machines like ITER and DEMO will be an intense mixed plasma of fusion species (D, T, He) and wall species (Be, C, W in ITER and W in DEMO), characterized by tremendous heat and particle fluxes. In both devices, the divertor walls are to be exposed to such plasma and must operate at high temperature for long durations. Tungsten, with its high melting point and low sputtering yield, is currently viewed as the leading choice of divertor-wall material in this next generation class of fusion devices, supported by an enormous amount of work examining its performance in hydrogen isotope plasmas. However, studies of the more realistic scenario, involving mixed species interactions, are considerably fewer. Current experiments on the PISCES-B device are focused on these issues. The formation of Be-W alloys, He-induced nanoscopic morphology, and blistering, as well as the mitigating influence of Be and C layer formation on these effects, have all been observed. These results and the corresponding implications for ITER and DEMO will be presented.

  2. A novel recursive Fourier transform for nonuniform sampled signals: application to heart rate variability spectrum estimation.

    PubMed

    Holland, Alexander; Aboy, Mateo

    2009-07-01

    We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT-based spectrum estimation with Lomb-Scargle Transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to estimation performance comparable to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
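
    The core idea, as we read the abstract, can be sketched as an O(N) per-sample update of N frequency bins with no resampling to a uniform grid. This is a plain nonuniform DFT accumulator, not the authors' exact recursion:

```python
import cmath

def rft_update(bins, freqs, t, x):
    """Fold one sample (time t, value x) into every frequency bin: O(N)
    work per new sample, regardless of the spacing of earlier samples."""
    for k, f in enumerate(freqs):
        bins[k] += x * cmath.exp(-2j * cmath.pi * f * t)
    return bins

freqs = [0.0, 1.0, 2.0]                  # analysis frequencies (Hz)
bins = [0j] * len(freqs)
for t in [0.0, 0.31, 0.47, 0.9, 1.22]:   # nonuniform sample times (s)
    rft_update(bins, freqs, t, 1.0)      # constant signal x(t) = 1
# the DC bin accumulates all five samples; the nonzero-frequency bins do not
```

    A heart-beat series is exactly this kind of input: each new beat arrives at an irregular time and updates the running spectrum in place.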

  3. 3D algebraic iterative reconstruction for cone-beam x-ray differential phase-contrast computed tomography.

    PubMed

    Fu, Jian; Hu, Xinhua; Velroyen, Astrid; Bech, Martin; Jiang, Ming; Pfeiffer, Franz

    2015-01-01

    Due to the potential for compact imaging systems with magnified spatial resolution and contrast, cone-beam x-ray differential phase-contrast computed tomography (DPC-CT) has attracted significant interest. The currently proposed FDK reconstruction algorithm with the Hilbert imaginary filter induces severe cone-beam artifacts when the cone-beam angle becomes large. In this paper, we propose an algebraic iterative reconstruction (AIR) method for cone-beam DPC-CT and report its experimental results. This approach treats the reconstruction process as the optimization of a discrete representation of the object function to satisfy a system of equations that describes the cone-beam DPC-CT imaging modality. Unlike the conventional iterative algorithms for absorption-based CT, it applies a derivative operation to the forward projections of the reconstructed intermediate image to take into account the differential nature of the DPC projections. This method is based on the algebraic reconstruction technique, reconstructs the image ray by ray, and is expected to provide better derivative estimates in iterations. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a mini-focus x-ray tube source. It is shown that the proposed method can reduce the cone-beam artifacts and performs better than FDK under large cone-beam angles. This algorithm is of interest for future cone-beam DPC-CT applications.
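
    The ray-by-ray update at the heart of the algebraic reconstruction technique can be sketched generically (a plain Kaczmarz sweep, without the derivative operator the paper adds for DPC projections; the tiny system below is hypothetical):

```python
def kaczmarz(A, p, x, sweeps=50, relax=1.0):
    """Classic ART: project the current estimate onto each ray equation
    a_i . x = p_i in turn, repeating the sweep until convergence."""
    for _ in range(sweeps):
        for a_i, p_i in zip(A, p):
            norm2 = sum(a * a for a in a_i)
            if norm2 == 0:
                continue                       # ray misses every pixel
            resid = p_i - sum(a * xj for a, xj in zip(a_i, x))
            x = [xj + relax * resid * a / norm2 for xj, a in zip(x, a_i)]
    return x

# a tiny consistent 2-unknown system standing in for two projection rays
A = [[1.0, 1.0], [1.0, -1.0]]
p = [3.0, 1.0]
x = kaczmarz(A, p, [0.0, 0.0])   # converges to [2.0, 1.0]
```

    In the paper's setting the right-hand sides are differential phase projections, so the forward model additionally differentiates the projections of the intermediate image before computing the residual.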

  4. A Gauge Invariant Description for the General Conic Constrained Particle from the FJBW Iteration Algorithm

    NASA Astrophysics Data System (ADS)

    Barbosa, Gabriel D.; Thibes, Ronaldo

    2018-06-01

    We consider a second-degree algebraic curve describing a general conic constraint imposed on the motion of a massive spinless particle. The problem is trivial at the classical level but becomes involved and interesting in its quantum counterpart, with subtleties in its symplectic structure and symmetries. We start with a second-class version of the general conic constrained particle, which encompasses previous versions of circular and elliptical paths discussed in the literature. By applying the symplectic FJBW iteration program, we proceed to show how a gauge invariant version of the model can be achieved from the originally second-class system. We pursue the complete constraint analysis in phase space and perform the Faddeev-Jackiw symplectic quantization following the Barcelos-Wotzasek iteration program to unravel the essential aspects of the constraint structure. While in the standard Dirac-Bergmann approach there are four second-class constraints, in the FJBW they reduce to two. By using the symplectic potential obtained in the last step of the FJBW iteration process, we construct a gauge invariant model exhibiting explicitly its BRST symmetry. We obtain the quantum BRST charge and write the Green functions generator for the gauge invariant version. Our results reproduce and neatly generalize the known BRST symmetry of the rigid rotor, clearly showing that this last one constitutes a particular case of a broader class of theories.

  5. Cognitive representation of "musical fractals": Processing hierarchy and recursion in the auditory domain.

    PubMed

    Martins, Mauricio Dias; Gingras, Bruno; Puig-Waldmueller, Estela; Fitch, W Tecumseh

    2017-04-01

    The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general 'musical' aptitudes played a role in both recursion and iteration, although the influence of musical training was somehow independent from melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  6. Development of an evidence-based review with recommendations using an online iterative process.

    PubMed

    Rudmik, Luke; Smith, Timothy L

    2011-01-01

    The practice of modern medicine is governed by evidence-based principles. Due to the plethora of medical literature, clinicians often rely on systematic reviews and clinical guidelines to summarize the evidence and provide best practices. Implementation of an evidence-based clinical approach can minimize variation in health care delivery and optimize the quality of patient care. This article reports a method for developing an "Evidence-based Review with Recommendations" using an online iterative process. The manuscript describes the following steps involved in this process: Clinical topic selection, Evidence-based review assignment, Literature review and initial manuscript preparation, Iterative review process with author selection, and Manuscript finalization. The goal of this article is to improve efficiency and increase the production of evidence-based reviews while maintaining the high quality and transparency associated with the rigorous methodology utilized for clinical guideline development. With the rise of evidence-based medicine, most medical and surgical specialties have an abundance of clinical topics which would benefit from a formal evidence-based review. Although clinical guideline development is an important methodology, the associated challenges limit development to only the absolute highest priority clinical topics. As outlined in this article, the online iterative approach to the development of an Evidence-based Review with Recommendations may improve productivity without compromising the quality associated with formal guideline development methodology. Copyright © 2011 American Rhinologic Society-American Academy of Otolaryngic Allergy, LLC.

  7. Methodology for Sensitivity Analysis, Approximate Analysis, and Design Optimization in CFD for Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1996-01-01

    An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions, this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
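    The incremental ("delta" or "correction") form described above can be sketched for a generic linear system A x = b: each pass solves an approximate operator M for a correction Δx driven by the current residual, so the ill-conditioning of A itself never enters the update. The sketch below uses a hypothetical 2×2 system and a diagonal approximation for M; it illustrates the form only, not the paper's CFD operators.

    ```python
    # Minimal sketch of the incremental ("delta"/correction) iterative form
    # for A x = b:  solve  M * dx = b - A x,  then  x <- x + dx.
    # A and M_inv are small hypothetical matrices, not the CFD operators
    # from the paper; M approximates A (here, its diagonal).

    def matvec(A, x):
        return [sum(a * v for a, v in zip(row, x)) for row in A]

    def incremental_solve(A, b, M_inv, x0, iters=50):
        """Repeatedly compute the residual r = b - A x, solve the
        approximate operator for the correction dx, and update x."""
        x = list(x0)
        for _ in range(iters):
            r = [bi - axi for bi, axi in zip(b, matvec(A, x))]  # residual
            dx = matvec(M_inv, r)                               # correction
            x = [xi + di for xi, di in zip(x, dx)]
        return x

    # Hypothetical 2x2 example; M_inv is the inverse of diag(A),
    # a crude preconditioner for which this iteration converges.
    A = [[4.0, 1.0], [1.0, 3.0]]
    b = [1.0, 2.0]
    M_inv = [[0.25, 0.0], [0.0, 1.0 / 3.0]]
    x = incremental_solve(A, b, M_inv, [0.0, 0.0])  # -> (1/11, 7/11)
    ```

    With M = diag(A) this reduces to the Jacobi iteration; the point of the incremental form is that only the residual, not the full standard-form system, drives each update.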

  8. Kiondo Bag Boutique: A Serial Case for Introductory Financial Accounting

    ERIC Educational Resources Information Center

    Siriwardane, Harshini P.

    2014-01-01

    Kiondo Bag Boutique is a hypothetical serial case involving a start-up retail business. The case evolves from an ambitious business idea to a successful business. Through the evolving business, the importance of accounting information is highlighted. Different iterations are used to illustrate the role of accounting in serving and managing…

  9. iArchi[tech]ture: Developing a Mobile Social Media Framework for Pedagogical Transformation

    ERIC Educational Resources Information Center

    Cochrane, Thomas; Rhodes, David

    2013-01-01

    This paper critiques the journey of pedagogical change over three mobile learning (mlearning) project iterations (2009 to 2011) within the context of a Bachelor of Architecture degree. The three projects were supported by an intentional community of practice model involving a partnership of an educational researcher/technologist, course lecturers,…

  10. Is This a Meaningful Learning Experience? Interactive Critical Self-Inquiry as Investigation

    ERIC Educational Resources Information Center

    Allard, Andrea C.; Gallant, Andrea

    2012-01-01

    What conditions enable educators to engage in meaningful learning experiences with peers and beginning practitioners? This article documents a self-study on our actions-in-practice in a peer mentoring project. The investigation involved an iterative process to improve our knowledge as teacher educators, reflective practitioners, and researchers.…

  11. Investigating Convergence Patterns for Numerical Methods Using Data Analysis

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2013-01-01

    The article investigates the patterns that arise in the convergence of numerical methods, particularly those in the errors involved in successive iterations, using data analysis and curve fitting methods. In particular, the results obtained are used to convey a deeper level of understanding of the concepts of linear, quadratic, and cubic…
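    The convergence patterns described above can be probed numerically: for a method of order p, successive errors satisfy e_{k+1} ≈ C·e_k^p, so p can be estimated from ratios of logarithms of consecutive errors. A minimal sketch, using Newton's method on the hypothetical example f(x) = x² − 2 (quadratic convergence, p = 2):

    ```python
    import math

    # Estimate the order of convergence p from successive errors, using
    # e_{k+1} ~ C * e_k^p, hence p ~ log(e_{k+2}/e_{k+1}) / log(e_{k+1}/e_k).
    # Newton's method on f(x) = x^2 - 2 (root sqrt(2)) should give p ~ 2.

    def newton_errors(x, steps):
        errs = []
        for _ in range(steps):
            x = x - (x * x - 2.0) / (2.0 * x)  # Newton update for x^2 - 2
            errs.append(abs(x - math.sqrt(2.0)))
        return errs

    errs = newton_errors(1.0, 4)
    # order estimate from three consecutive errors (before roundoff dominates)
    p = math.log(errs[2] / errs[1]) / math.log(errs[1] / errs[0])
    ```

    Running a linearly convergent method (e.g., bisection) through the same estimator would give p ≈ 1, which is exactly the kind of pattern the data-analysis approach in the article asks students to discover.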

  12. The Past, Present, and Future of Demand-Driven Acquisitions in Academic Libraries

    ERIC Educational Resources Information Center

    Goedeken, Edward A.; Lawson, Karen

    2015-01-01

    Demand-driven acquisitions (DDA) programs have become a well-established approach toward integrating user involvement in the process of building academic library collections. However, these programs are in a constant state of evolution. A recent iteration in this evolution of ebook availability is the advent of large ebook collections whose…

  13. How children perceive fractals: Hierarchical self-similarity and cognitive development

    PubMed Central

    Martins, Maurício Dias; Laaha, Sabine; Freiberger, Eva Maria; Choi, Soonja; Fitch, W. Tecumseh

    2014-01-01

    The ability to understand and generate hierarchical structures is a crucial component of human cognition, available in language, music, mathematics and problem solving. Recursion is a particularly useful mechanism for generating complex hierarchies by means of self-embedding rules. In the visual domain, fractals are recursive structures in which simple transformation rules generate hierarchies of infinite depth. Research on how children acquire these rules can provide valuable insight into the cognitive requirements and learning constraints of recursion. Here, we used fractals to investigate the acquisition of recursion in the visual domain, and probed for correlations with grammar comprehension and general intelligence. We compared second (n = 26) and fourth graders (n = 26) in their ability to represent two types of rules for generating hierarchical structures: Recursive rules, on the one hand, which generate new hierarchical levels; and iterative rules, on the other hand, which merely insert items within hierarchies without generating new levels. We found that the majority of fourth graders, but not second graders, were able to represent both recursive and iterative rules. This difference was partially accounted by second graders’ impairment in detecting hierarchical mistakes, and correlated with between-grade differences in grammar comprehension tasks. Empirically, recursion and iteration also differed in at least one crucial aspect: While the ability to learn recursive rules seemed to depend on the previous acquisition of simple iterative representations, the opposite was not true, i.e., children were able to acquire iterative rules before they acquired recursive representations. These results suggest that the acquisition of recursion in vision follows learning constraints similar to the acquisition of recursion in language, and that both domains share cognitive resources involved in hierarchical processing. PMID:24955884
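    The contrast between recursive rules (which generate new hierarchical levels) and iterative rules (which insert items within existing levels) can be sketched on nested lists. This is purely illustrative, assuming a toy tree representation rather than the visual fractal stimuli used in the study:

    ```python
    # Sketch of the recursive/iterative distinction on nested lists:
    # a recursive rule self-embeds a new sub-level under every leaf
    # (depth grows), while an iterative rule appends items within
    # existing levels (depth is unchanged). Hypothetical toy structures.

    def depth(t):
        return 1 + max(map(depth, t)) if isinstance(t, list) else 0

    def recursive_step(t, branching=2):
        """Embed a new level under every leaf (self-embedding rule)."""
        if isinstance(t, list):
            return [recursive_step(c, branching) for c in t]
        return [t] * branching  # leaf -> new sub-level of copies

    def iterative_step(t):
        """Insert one more item at each existing level; no new levels."""
        if isinstance(t, list):
            return [iterative_step(c) for c in t] + [0]
        return t

    tree = [0, 0]
    rec = recursive_step(tree)  # [[0, 0], [0, 0]] : depth 1 -> 2
    it = iterative_step(tree)   # [0, 0, 0]        : depth stays 1
    ```

    Applying `recursive_step` repeatedly generates hierarchies of unbounded depth from one rule, mirroring the fractal self-similarity the study exploits; `iterative_step` only ever widens the structure.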

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Guoyong; Budny, Robert; Gorelenkov, Nikolai

    We report here the work done for the FY14 OFES Theory Performance Target as given below: "Understanding alpha particle confinement in ITER, the world's first burning plasma experiment, is a key priority for the fusion program. In FY 2014, determine linear instability trends and thresholds of energetic particle-driven shear Alfven eigenmodes in ITER for a range of parameters and profiles using a set of complementary simulation models (gyrokinetic, hybrid, and gyrofluid). Carry out initial nonlinear simulations to assess the effects of the unstable modes on energetic particle transport". In the past year (FY14), a systematic study of the alpha-driven Alfven modes in ITER has been carried out jointly by researchers from six institutions involving seven codes including the transport simulation code TRANSP (R. Budny and F. Poli, PPPL), three gyrokinetic codes: GEM (Y. Chen, Univ. of Colorado), GTC (J. McClenaghan, Z. Lin, UCI), and GYRO (E. Bass, R. Waltz, UCSD/GA), the hybrid code M3D-K (G.Y. Fu, PPPL), the gyro-fluid code TAEFL (D. Spong, ORNL), and the linear kinetic stability code NOVA-K (N. Gorelenkov, PPPL). A range of ITER parameters and profiles are specified by TRANSP simulation of a hybrid scenario case and a steady-state scenario case. Based on the specified ITER equilibria, linear stability calculations are done to determine the stability boundary of alpha-driven high-n TAEs using the five initial value codes (GEM, GTC, GYRO, M3D-K, and TAEFL) and the kinetic stability code (NOVA-K). Both the effects of alpha particles and beam ions have been considered. Finally, the effects of the unstable modes on energetic particle transport have been explored using GEM and M3D-K.


  15. Fast in-memory elastic full-waveform inversion using consumer-grade GPUs

    NASA Astrophysics Data System (ADS)

    Sivertsen Bergslid, Tore; Birger Raknes, Espen; Arntsen, Børge

    2017-04-01

    Full-waveform inversion (FWI) is a technique to estimate subsurface properties by using the recorded waveform produced by a seismic source and applying inverse theory. This is done through an iterative optimization procedure, where each iteration requires solving the wave equation many times, then trying to minimize the difference between the modeled and the measured seismic data. Having to model many of these seismic sources per iteration means that this is a highly computationally demanding procedure, which usually involves writing a lot of data to disk. We have written code that does forward modeling and inversion entirely in memory. A typical HPC cluster has many more CPUs than GPUs. Since FWI involves modeling many seismic sources per iteration, the obvious approach is to parallelize the code on a source-by-source basis, where each core of the CPU performs one modeling, and do all modelings simultaneously. With this approach, the GPU is already at a major disadvantage in pure numbers. Fortunately, GPUs can more than make up for this hardware disadvantage by performing each modeling much faster than a CPU. Another benefit of parallelizing each individual modeling is that it lets each modeling use a lot more RAM. If one node has 128 GB of RAM and 20 CPU cores, each modeling can use only 6.4 GB RAM if one is running the node at full capacity with source-by-source parallelization on the CPU. A parallelized per-source code using GPUs can use 64 GB RAM per modeling. Whenever a modeling uses more RAM than is available and has to start using regular disk space the runtime increases dramatically, due to slow file I/O. The extremely high computational speed of the GPUs combined with the large amount of RAM available for each modeling lets us do high frequency FWI for fairly large models very quickly. For a single modeling, our GPU code outperforms the single-threaded CPU-code by a factor of about 75. 
Successful inversions have been run on data with frequencies up to 40 Hz for a model of 2001 by 600 grid points with 5 m grid spacing and 5000 time steps, in less than 2.5 minutes per source. In practice, using 15 nodes (30 GPUs) to model 101 sources, each iteration took approximately 9 minutes. For reference, the same inversion run with our CPU code uses two hours per iteration. This was done using only a very simple wavefield interpolation technique, saving every second timestep. Using a more sophisticated checkpointing or wavefield reconstruction method would allow us to increase this model size significantly. Our results show that ordinary gaming GPUs are a viable alternative to the expensive professional GPUs often used today, when performing large scale modeling and inversion in geophysics.
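    The memory arithmetic above is simply total node RAM divided by the number of concurrent modelings per node; a tiny sketch using the node figures stated in the text (128 GB, 20 CPU cores vs. 2 GPUs):

    ```python
    # RAM available per concurrent modeling = node RAM / modelings per node.
    # Figures from the text: a 128 GB node running one modeling per CPU core
    # (20 cores) vs. the same node driving one modeling per GPU (2 GPUs).

    def ram_per_modeling(node_ram_gb, concurrent_modelings):
        return node_ram_gb / concurrent_modelings

    cpu_share = ram_per_modeling(128, 20)  # source-by-source on 20 CPU cores
    gpu_share = ram_per_modeling(128, 2)   # per-source parallelized on 2 GPUs
    ```

    The tenfold larger per-modeling RAM budget is what lets the GPU code keep the entire wavefield in memory and avoid the slow file I/O the authors identify as the bottleneck.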

  16. Incorporating redox processes improves prediction of carbon and nutrient cycling and greenhouse gas emission

    NASA Astrophysics Data System (ADS)

    Tang, Guoping; Zheng, Jianqiu; Yang, Ziming; Graham, David; Gu, Baohua; Mayes, Melanie; Painter, Scott; Thornton, Peter

    2016-04-01

    Among the coupled thermal, hydrological, geochemical, and biological processes, redox processes play major roles in carbon and nutrient cycling and greenhouse gas (GHG) emission. Increasingly, mechanistic representation of redox processes is acknowledged as necessary for accurate prediction of GHG emission in the assessment of land-atmosphere interactions. Simple organic substrates, Fe reduction, microbial reactions, and the Windermere Humic Aqueous Model (WHAM) were added to a reaction network used in the land component of an Earth system model. In conjunction with this amended reaction network, various temperature response functions used in ecosystem models were assessed for their ability to describe experimental observations from incubation tests with arctic soils. Incorporation of Fe reduction reactions improves the prediction of the lag time between CO2 and CH4 accumulation. The inclusion of the WHAM model enables us to approximately simulate the initial pH drop due to organic acid accumulation and then a pH increase due to Fe reduction without parameter adjustment. The CLM4.0, CENTURY, and Ratkowsky temperature response functions better described the observations than the Q10 method, Arrhenius equation, and ROTH-C. As electron acceptors between O2 and CO2 (e.g., Fe(III), SO4^2-) are often involved, our results support inclusion of these redox reactions for accurate prediction of CH4 production and consumption. Ongoing work includes improving the parameterization of organic matter decomposition to produce simple organic substrates, examining the influence of redox potential on methanogenesis under thermodynamically favorable conditions, and refining temperature response representation near the freezing point by additional model-experiment iterations. We will use the model to describe observed GHG emission at arctic and tropical sites.
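    Two of the temperature response functions compared above have simple generic forms: the Q10 rule scales the rate by a fixed factor per 10 °C of warming, while the Arrhenius equation ties the rate to an activation energy. A minimal sketch of both, with hypothetical parameter values (not fitted to the arctic soil incubations):

    ```python
    import math

    # Generic textbook forms of two temperature response functions
    # mentioned in the text (parameter values are hypothetical):
    #   Q10:       f(T) = Q10 ** ((T - T_ref) / 10)        T in degC
    #   Arrhenius: k(T) = A * exp(-Ea / (R * T))           T in kelvin

    R = 8.314  # J mol^-1 K^-1, universal gas constant

    def q10_factor(T_c, T_ref_c=25.0, q10=2.0):
        return q10 ** ((T_c - T_ref_c) / 10.0)

    def arrhenius_rate(T_k, A=1.0e9, Ea=60e3):
        return A * math.exp(-Ea / (R * T_k))

    # A Q10 of 2 doubles the rate for every 10 degC of warming:
    f = q10_factor(35.0)  # 10 degC above reference -> factor 2
    ```

    The paper's point is that such smooth forms fit poorly near the freezing point, which is why functions like Ratkowsky's and additional model-experiment iterations are needed there.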

  17. Parallel computing for automated model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.

    2002-07-29

    Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example magnitude and timing of stream flow peak). An automated calibration process that allows real time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, only need a small amount of input data and only output a small amount of statistical information for each calibration run. A typical auto calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two Auto Calibration prototypes and are currently designing a more feature rich tool. Our prototypes have focused on running the calibration in a distributed computing cross platform environment. They allow incorporation of 'smart' calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.

  18. Further Development of the Assessment of Military Multitasking Performance: Iterative Reliability Testing

    PubMed Central

    McCulloch, Karen L.; Radomski, Mary V.; Finkelstein, Marsha; Cecchini, Amy S.; Davidson, Leslie F.; Heaton, Kristin J.; Smith, Laurel B.; Scherer, Matthew R.

    2017-01-01

    The Assessment of Military Multitasking Performance (AMMP) is a battery of functional dual-tasks and multitasks based on military activities that target known sensorimotor, cognitive, and exertional vulnerabilities after concussion/mild traumatic brain injury (mTBI). The AMMP was developed to help address known limitations in post-concussive return-to-duty assessment and decision making. Once validated, the AMMP is intended for use in combination with other metrics to inform duty-readiness decisions in Active Duty Service Members following concussion. This study used an iterative process of repeated interrater reliability testing and feasibility feedback to drive modifications to the 9 tasks of the original AMMP, which resulted in a final version of 6 tasks with metrics that demonstrated clinically acceptable ICCs of > 0.92 (range of 0.92–1.0) for the 3 dual tasks and > 0.87 (range 0.87–1.0) for the metrics of the 3 multitasks. Three metrics involved in recording subject errors across 2 tasks did not achieve ICCs above 0.85 set a priori for multitasks (0.64) and above 0.90 set for dual-tasks (0.77 and 0.86) and were not used for further analysis. This iterative process involved 3 phases of testing with between 13 and 26 subjects, ages 18–42 years, tested in each phase from a combined cohort of healthy controls and Service Members with mTBI. Study findings support continued validation of this assessment tool to provide rehabilitation clinicians further return-to-duty assessment methods robust to ceiling effects with strong face validity to injured Warriors and their leaders. PMID:28056045

  19. Adjoint tomography of Europe

    NASA Astrophysics Data System (ADS)

    Zhu, H.; Bozdag, E.; Peter, D. B.; Tromp, J.

    2010-12-01

    We use spectral-element and adjoint methods to image crustal and upper mantle heterogeneity in Europe. The study area involves the convergent boundaries of the Eurasian, African and Arabian plates and the divergent boundary between the Eurasian and North American plates, making the tectonic structure of this region complex. Our goal is to iteratively fit observed seismograms and improve crustal and upper mantle images by taking advantage of 3D forward and inverse modeling techniques. We use data from 200 earthquakes with magnitudes between 5 and 6 recorded by 262 stations provided by ORFEUS. Crustal model Crust2.0 combined with mantle model S362ANI comprise the initial 3D model. Before the iterative adjoint inversion, we determine earthquake source parameters in the initial 3D model by using 3D Green functions and their Fréchet derivatives with respect to the source parameters (i.e., centroid moment tensor and location). The updated catalog is used in the subsequent structural inversion. Since we concentrate on upper mantle structures which involve anisotropy, transversely isotropic (frequency-dependent) traveltime sensitivity kernels are used in the iterative inversion. Taking advantage of the adjoint method, we use as many measurements as we can obtain based on comparisons between observed and synthetic seismograms. FLEXWIN (Maggi et al., 2009) is used to automatically select measurement windows, which are analyzed based on a multitaper technique. The bandpass ranges from 15 seconds to 150 seconds. Long-period surface waves and short-period body waves are combined in source relocations and structural inversions. A statistical assessment of traveltime anomalies and logarithmic waveform differences is used to characterize the inverted sources and structure.

  20. Numerical studies on sizing/ rating of plate fin heat exchangers for a modified Claude cycle based helium liquefier/ refrigerator

    NASA Astrophysics Data System (ADS)

    Goyal, M.; Chakravarty, A.; Atrey, M. D.

    2017-02-01

    Performance of modern helium refrigeration/ liquefaction systems depends significantly on the effectiveness of heat exchangers. Generally, compact plate fin heat exchangers (PFHE) having very high effectiveness (>0.95) are used in such systems. Apart from basic fluid film resistances, various secondary parameters influence the sizing/ rating of these heat exchangers. In the present paper, sizing calculations are performed, using in-house developed numerical models/ codes, for a set of high effectiveness PFHE for a modified Claude cycle based helium liquefier/ refrigerator operating in the refrigeration mode without liquid nitrogen (LN2) pre-cooling. The combined effects of secondary parameters like axial heat conduction through the heat exchanger metal matrix, parasitic heat in-leak from surroundings and variation in the fluid/ metal properties are taken care of in the sizing calculation. Numerical studies are carried out to predict the off-design performance of the PFHEs in the refrigeration mode with LN2 pre-cooling. Iterative process cycle calculations are also carried out to obtain the inlet/ exit state points of the heat exchangers.

  1. The selectivity of the Na+/K+-pump is controlled by binding site protonation and self-correcting occlusion

    PubMed Central

    Rui, Huan; Artigas, Pablo; Roux, Benoît

    2016-01-01

    The Na+/K+-pump maintains the physiological K+ and Na+ electrochemical gradients across the cell membrane. It operates via an 'alternating-access' mechanism, making iterative transitions between inward-facing (E1) and outward-facing (E2) conformations. Although the general features of the transport cycle are known, the detailed physicochemical factors governing the binding site selectivity remain mysterious. Free energy molecular dynamics simulations show that the ion binding sites switch their binding specificity in E1 and E2. This is accompanied by small structural arrangements and changes in protonation states of the coordinating residues. Additional computations on structural models of the intermediate states along the conformational transition pathway reveal that the free energy barrier toward the occlusion step is considerably increased when the wrong type of ion is loaded into the binding pocket, prohibiting the pump cycle from proceeding forward. This self-correcting mechanism strengthens the overall transport selectivity and protects the stoichiometry of the pump cycle. DOI: http://dx.doi.org/10.7554/eLife.16616.001 PMID:27490484

  2. Feasibility of a far infrared laser based polarimeter diagnostic system for the JT-60SA fusion experiment

    NASA Astrophysics Data System (ADS)

    Boboc, A.; Gil, C.; Terranova, D.; Orsitto, F. P.; Soare, S.; Lotte, P.; Sozzi, C.; Imazawa, R.; Kubo, H.

    2018-07-01

    JT-60SA is the large Tokamak device that is being built in Japan under the Broader Approach Satellite Tokamak Programme and the Japanese National Programme and will operate as a satellite machine for ITER. The main goal of the JT-60SA Programme is to provide valuable information for the ITER steady-state scenario and for the design of DEMO, where the real-time control of the safety factor profile is very important, in connection with both MHD stability and plasma confinement. It has been demonstrated in this work that to this end polarimetry measurements are necessary, in particular in order to reconstruct the safety factor profile in reversed shear scenarios. In this paper we present the main steps of a conceptual feasibility study of a multi-channel polarimeter diagnostic and the resulting optimised geometry. In this study, magnetic scenario modelling, a realistic CAD-driven design and long-term operation requirements, rarely even considered at this stage, have been considered. It is shown that a far infrared polarimeter system, with a laser operating at a wavelength of 194.7 μm and up to twelve channels can be envisaged for JT-60SA. The top requirements can be attained, i.e., that the polarimeter, together with other diagnostic measurements, should provide q-profile reconstruction with an accuracy of 10% for the entire plasma cycle and suitable time resolution for real-time applications, in particular in high density and ITER-relevant plasma scenarios.

  3. Using the Tritium Plasma Experiment to evaluate ITER PFC safety

    NASA Astrophysics Data System (ADS)

    Longhurst, Glen R.; Anderl, Robert A.; Bartlit, John R.; Causey, Rion A.; Haines, John R.

    The Tritium Plasma Experiment was assembled at Sandia National Laboratories, Livermore to investigate interactions between dense plasmas at low energies and plasma-facing component materials. This apparatus has the unique capability of replicating plasma conditions in a tokamak divertor with particle flux densities of 2 × 10^19 ions/(cm^2 s) and a plasma temperature of about 15 eV using a plasma that includes tritium. With the closure of the Tritium Research Laboratory at Livermore, the experiment was moved to the Tritium Systems Test Assembly facility at Los Alamos National Laboratory. An experimental program has been initiated there using the Tritium Plasma Experiment to examine safety issues related to tritium in plasma-facing components, particularly the ITER divertor. Those issues include tritium retention and release characteristics, tritium permeation rates and transient times to coolant streams, surface modification and erosion by the plasma, the effects of thermal loads and cycling, and particulate production. A considerable lack of data exists in these areas for many of the materials, especially beryllium, being considered for use in ITER. Not only will basic material behavior with respect to safety issues in the divertor environment be examined, but innovative techniques for optimizing performance with respect to tritium safety by material modification and process control will be investigated. Supplementary experiments will be carried out at the Idaho National Engineering Laboratory and Sandia National Laboratory to expand and clarify results obtained on the Tritium Plasma Experiment.

  4. Performance evolution of 60 kA HTS cable prototypes in the EDIPO test facility

    NASA Astrophysics Data System (ADS)

    Bykovsky, N.; Uglietti, D.; Sedlak, K.; Stepanov, B.; Wesche, R.; Bruzzone, P.

    2016-08-01

    During the first test campaign of the 60 kA HTS cable prototypes in the EDIPO test facility, the feasibility of a novel HTS fusion cable concept proposed at the EPFL Swiss Plasma Center (SPC) was successfully demonstrated. While the measured DC performance of the prototypes at magnetic fields from 8 T to 12 T and for currents from 30 kA to 70 kA was close to the expected one, an initial electromagnetic cycling test (1000 cycles) revealed progressive degradation of the performance in both the SuperPower and SuperOx conductors. Aiming to understand the reasons for the degradation, additional cycling (1000 cycles) and warm up-cool down tests were performed during the second test campaign. I_c performance degradation of the SuperOx conductor reached ∼20% after about 2000 cycles, which was the reason to continue with a visual inspection of the conductor and further tests at 77 K. AC tests were carried out at 0 and 2 T background fields without transport current and at 10 T/50 kA operating conditions. Results obtained in DC and AC tests of the second test campaign are presented and compared with appropriate data published recently. Concluding the first iteration of the HTS cable development program at SPC, a summary and recommendations for the next activity within the HTS fusion cable project are also reported.

  5. Optimal Design of a Resonance-Based Voltage Boosting Rectifier for Wireless Power Transmission.

    PubMed

    Lim, Jaemyung; Lee, Byunghun; Ghovanloo, Maysam

    2018-02-01

    This paper presents the design procedure for a new multi-cycle resonance-based voltage boosting rectifier (MCRR) capable of delivering a desired amount of power to the load (PDL) at a designated high voltage (HV) through a loosely-coupled inductive link. This is achieved by shorting the receiver (Rx) LC-tank for several cycles to harvest and accumulate the wireless energy in the Rx inductor before boosting the voltage by breaking the loop and transferring the energy to the load in a quarter cycle. By optimizing the geometries of the transmitter (Tx) and Rx coils and the number of cycles, N, for energy harvesting, through an iterative design procedure, the MCRR can achieve the highest PDL under a given set of design constraints. Governing equations in the MCRR operation are derived to identify key specifications and the design guidelines. Using an exemplary set of specs, the optimized MCRR was able to generate 20.9 V DC across a 100 kΩ load from a 1.8 Vp, 6.78 MHz sinusoid input in the ISM-band at a Tx/Rx coil separation of 1.3 cm, power transfer efficiency (PTE) of 2.2%, and N = 9 cycles. At the same coil distance and loading, coils optimized for a conventional half-wave rectifier (CHWR) were able to reach only 13.6 V DC from the same source.
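    The boosting mechanism described above can be sketched with an idealized, lossless energy balance: current builds in the shorted Rx inductor over N cycles, and the stored energy E = ½LI² is then transferred to a capacitor, giving V = I·√(L/C). The component values below are hypothetical, chosen only for illustration, not taken from the paper's design:

    ```python
    import math

    # Idealized energy view of the multi-cycle resonance rectifier (MCRR):
    # current accumulates in the shorted Rx LC-tank over N cycles, then the
    # stored energy 0.5 * L * I^2 is dumped into a capacitor in a quarter
    # cycle. If the transfer were lossless, 0.5*L*I^2 = 0.5*C*V^2, so
    # V = I * sqrt(L / C). Component values below are hypothetical.

    def boosted_voltage(L, C, i_peak):
        return i_peak * math.sqrt(L / C)

    v = boosted_voltage(L=2e-6, C=100e-12, i_peak=0.15)  # ~21 V here
    ```

    The sketch shows why a larger accumulated peak current (more harvesting cycles N) or a higher L/C ratio raises the output voltage, which is the trade-off the paper's iterative design procedure optimizes.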

  6. Solutions of large-scale electromagnetics problems involving dielectric objects with the parallel multilevel fast multipole algorithm.

    PubMed

    Ergül, Özgür

    2011-11-01

    Fast and accurate solutions of large-scale electromagnetics problems involving homogeneous dielectric objects are considered. Problems are formulated with the electric and magnetic current combined-field integral equation and discretized with the Rao-Wilton-Glisson functions. Solutions are performed iteratively by using the multilevel fast multipole algorithm (MLFMA). For the solution of large-scale problems discretized with millions of unknowns, MLFMA is parallelized on distributed-memory architectures using a rigorous technique, namely, the hierarchical partitioning strategy. Efficiency and accuracy of the developed implementation are demonstrated on very large problems involving as many as 100 million unknowns.

  7. A targeted noise reduction observational study for reducing noise in a neonatal intensive unit.

    PubMed

    Chawla, S; Barach, P; Dwaihy, M; Kamat, D; Shankaran, S; Panaitescu, B; Wang, B; Natarajan, G

    2017-09-01

    Excessive noise in neonatal intensive care units (NICUs) can interfere with infants' growth, development and healing. Local problem: sound levels in our NICUs exceeded the levels recommended by the World Health Organization. We implemented a noise reduction strategy in an urban, tertiary academic medical center NICU that included baseline noise measurements. We conducted a survey involving staff and visitors regarding their opinions and perceptions of noise levels in the NICU. Ongoing feedback to staff after each measurement cycle was provided to improve awareness, engagement and adherence with noise reduction strategies. After widespread discussion with active clinician involvement, consensus building and iterative testing, changes were implemented, including lowering of equipment alarm sounds, designated 'quiet times' and a customized education program for staff. The goal of this multiphase noise reduction quality improvement (QI) intervention was to reduce ambient sound levels in a patient care room in our NICUs by 3 dB (20%) over 18 months. The noise in the NICU was reduced by 3 dB from baseline. Mean (s.d.) baseline, phase 2, 3 and 4 noise levels in the two NICUs were: LAeq: 57.0 (0.84), 56.8 (1.6), 55.3 (1.9) and 54.5 (2.6) dB, respectively (P<0.01). Adherence with the planned process measure of 'quiet times' was >90%. Implementing a multipronged QI initiative resulted in significant noise level reduction in two multipod NICUs. It is feasible to reduce noise levels if QI interventions are coupled with active engagement of the clinical staff and continuous improvement methods, measurements and protocols.

  8. Scale-up of ecological experiments: Density variation in the mobile bivalve Macomona liliana

    USGS Publications Warehouse

    Schneider, David C.; Walters, R.; Thrush, S.; Dayton, P.

    1997-01-01

    At present the problem of scaling up from controlled experiments (necessarily at a small spatial scale) to questions of regional or global importance is perhaps the most pressing issue in ecology. Most of the proposed techniques recommend iterative cycling between theory and experiment. We present a graphical technique that facilitates this cycling by allowing the scope of experiments, surveys, and natural history observations to be compared to the scope of models and theory. We apply the scope analysis to the problem of understanding the population dynamics of a bivalve exposed to environmental stress at the scale of a harbour. Previous lab and field experiments were found not to be 1:1 scale models of harbour-wide processes. Scope analysis allowed small scale experiments to be linked to larger scale surveys and to a spatially explicit model of population dynamics.

  9. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. With p the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.
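One cycle of such a scheme can be sketched for a quadratic objective; the SR1-style rank-one correction below is an assumption chosen for illustration, since the abstract does not spell out the exact update:

```python
import numpy as np

def pvm_cycle(f_grad, x0, directions, H=None):
    """One cycle of a parallel variable-metric scheme (illustrative sketch):
    gradients at p displaced points (evaluable in parallel), p rank-one
    SR1-style corrections to the inverse metric H, then one Newton-like step."""
    g0 = f_grad(x0)
    if H is None:
        H = np.eye(x0.size)
    # In a real implementation these p gradient evaluations run in parallel.
    for s in directions:
        y = f_grad(x0 + s) - g0          # curvature information along s
        v = s - H @ y
        denom = v @ y
        if abs(denom) > 1e-12:           # skip near-degenerate corrections
            H = H + np.outer(v, v) / denom   # rank-one metric correction
    return x0 - H @ g0, H                     # Newton-like step

# Quadratic test problem: f(x) = 0.5 x^T A x - b^T x, minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
x1, H = pvm_cycle(grad, np.zeros(2), [np.eye(2)[0], np.eye(2)[1]])
```

On a quadratic with p equal to the space dimension, the cycle lands on the minimizer, consistent with the one-cycle convergence result stated in the abstract.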

  10. Dual regression physiological modeling of resting-state EPI power spectra: Effects of healthy aging.

    PubMed

    Viessmann, Olivia; Möller, Harald E; Jezzard, Peter

    2018-02-02

    Aging and disease-related changes in the arteriovasculature have been linked to elevated levels of cardiac cycle-induced pulsatility in the cerebral microcirculation. Functional magnetic resonance imaging (fMRI), acquired fast enough to unalias the cardiac frequency contributions, can be used to study these physiological signals in the brain. Here, we propose an iterative dual regression analysis in the frequency domain to model single voxel power spectra of echo planar imaging (EPI) data using external recordings of the cardiac and respiratory cycles as input. We further show that a data-driven variant, without external physiological traces, produces comparable results. We use this framework to map and quantify cardiac and respiratory contributions in healthy aging. We found a significant increase in the spatial extent of cardiac modulated white matter voxels with age, whereas the overall strength of cardiac-related EPI power did not show an age effect. Copyright © 2018. Published by Elsevier Inc.
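A toy version of the regression idea can be built with synthetic on-grid sinusoids standing in for the EPI voxel series and the external cardiac and respiratory recordings (all signals and rates below are synthetic assumptions):

```python
import numpy as np

# Model a voxel's power spectrum as a weighted sum of the power spectra of
# externally recorded physiological traces (toy stand-in for the method).
fs, n = 10.0, 1000                      # sampling rate (Hz) and samples, assumed
t = np.arange(n) / fs
cardiac = np.sin(2 * np.pi * 1.2 * t)   # ~72 bpm reference trace
resp = np.sin(2 * np.pi * 0.3 * t)      # ~18 breaths/min reference trace
voxel = 2.0 * cardiac + 0.5 * resp      # synthetic voxel time series

power = lambda x: np.abs(np.fft.rfft(x)) ** 2
X = np.column_stack([power(cardiac), power(resp)])   # regressor spectra
w, *_ = np.linalg.lstsq(X, power(voxel), rcond=None)
# Amplitudes 2.0 and 0.5 appear as power-domain weights 4.0 and 0.25.
```

The recovered weights quantify how strongly each physiological cycle modulates the voxel, which is the kind of per-voxel contribution map the study uses to compare age groups.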

  11. Two-Step Cycle for Producing Multiple Anodic Aluminum Oxide (AAO) Films with Increasing Long-Range Order

    PubMed Central

    2017-01-01

    Nanoporous anodic aluminum oxide (AAO) membranes are being used for an increasing number of applications. However, the original two-step anodization method in which the first anodization is sacrificial to pre-pattern the second is still widely used to produce them. This method provides relatively low throughput and material utilization as half of the films are discarded. An alternative scheme that relies on alternating anodization and cathodic delamination is demonstrated that allows for the fabrication of several AAO films with only one sacrificial layer thus greatly improving total aluminum to alumina yield. The thickness for which the cathodic delamination performs best to yield full, unbroken AAO sheets is around 85 μm. Additionally, an image analysis method is used to quantify the degree of long-range ordering of the unit cells in the AAO films which was found to increase with each successive iteration of the fabrication cycle. PMID:28630684

  12. Two-Step Cycle for Producing Multiple Anodic Aluminum Oxide (AAO) Films with Increasing Long-Range Order.

    PubMed

    Choudhary, Eric; Szalai, Veronika

    2016-01-01

    Nanoporous anodic aluminum oxide (AAO) membranes are being used for an increasing number of applications. However, the original two-step anodization method in which the first anodization is sacrificial to pre-pattern the second is still widely used to produce them. This method provides relatively low throughput and material utilization as half of the films are discarded. An alternative scheme that relies on alternating anodization and cathodic delamination is demonstrated that allows for the fabrication of several AAO films with only one sacrificial layer thus greatly improving total aluminum to alumina yield. The thickness for which the cathodic delamination performs best to yield full, unbroken AAO sheets is around 85 μm. Additionally, an image analysis method is used to quantify the degree of long-range ordering of the unit cells in the AAO films which was found to increase with each successive iteration of the fabrication cycle.

  13. Understanding Biological Regulation Through Synthetic Biology.

    PubMed

    Bashor, Caleb J; Collins, James J

    2018-05-20

    Engineering synthetic gene regulatory circuits proceeds through iterative cycles of design, building, and testing. Initial circuit designs must rely on often-incomplete models of regulation established by fields of reductive inquiry: biochemistry and molecular and systems biology. As differences in designed and experimentally observed circuit behavior are inevitably encountered, investigated, and resolved, each turn of the engineering cycle can force a resynthesis in understanding of natural network function. Here, we outline research that uses the process of gene circuit engineering to advance biological discovery. Synthetic gene circuit engineering research has not only refined our understanding of cellular regulation but furnished biologists with a toolkit that can be directed at natural systems to exact precision manipulation of network structure. As we discuss, using circuit engineering to predictively reorganize, rewire, and reconstruct cellular regulation serves as the ultimate means of testing and understanding how cellular phenotype emerges from systems-level network function.

  14. Multigrid methods for isogeometric discretization

    PubMed Central

    Gahalaut, K.P.S.; Kraus, J.K.; Tomar, S.K.

    2013-01-01

    We present (geometric) multigrid methods for isogeometric discretization of scalar second order elliptic problems. The smoothing property of the relaxation method and the approximation property of the intergrid transfer operators are analyzed. These properties, when used in the framework of classical multigrid theory, imply uniform convergence of two-grid and multigrid methods. Supporting numerical results are provided for the smoothing property, the approximation property, the convergence factor and iteration counts for V-, W- and F-cycles, and the linear dependence of V-cycle convergence on the smoothing steps. For two dimensions, numerical results include the problems with variable coefficients, simple multi-patch geometry, a quarter annulus, and the dependence of convergence behavior on refinement levels ℓ, whereas for three dimensions, only the constant coefficient problem in a unit cube is considered. The numerical results are complete up to polynomial order p=4, and for C0 and Cp-1 smoothness. PMID:24511168
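A minimal geometric V-cycle for the 1D Poisson problem (weighted-Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation) illustrates the V-cycle machinery the results refer to; it is a generic sketch, not the isogeometric setting of the paper:

```python
import numpy as np

def apply_A(u, h):
    # Matrix-free 1D Poisson operator (-u'') with zero Dirichlet boundaries.
    up = np.concatenate(([0.0], u, [0.0]))
    return (2 * up[1:-1] - up[2:] - up[:-2]) / h**2

def smooth(u, f, h, nu, omega=2.0 / 3.0):
    # nu sweeps of weighted Jacobi; the diagonal of A is 2/h^2.
    for _ in range(nu):
        u = u + omega * (h**2 / 2.0) * (f - apply_A(u, h))
    return u

def restrict(r):
    # Full weighting: fine grid (2m+1 points) -> coarse grid (m points).
    return 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolong(e):
    # Linear interpolation: coarse grid (m points) -> fine grid (2m+1 points).
    m = e.size
    fine = np.zeros(2 * m + 1)
    fine[1::2] = e
    fine[2:-1:2] = 0.5 * (e[:-1] + e[1:])
    fine[0] = 0.5 * e[0]
    fine[-1] = 0.5 * e[-1]
    return fine

def v_cycle(u, f, h, nu=2):
    if u.size <= 3:
        # Coarsest grid: solve the small tridiagonal system directly.
        n = u.size
        A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2
        return np.linalg.solve(A, f)
    u = smooth(u, f, h, nu)                      # pre-smoothing
    rc = restrict(f - apply_A(u, h))             # restrict the residual
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h, nu)   # coarse-grid correction
    u = u + prolong(ec)
    return smooth(u, f, h, nu)                   # post-smoothing

# Demo: -u'' = pi^2 sin(pi x) on (0,1), exact solution u = sin(pi x).
n = 63
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(5):
    u = v_cycle(u, f, h)
```

Each V-cycle reduces the residual by a roughly constant factor independent of the grid size, which is the uniform-convergence behavior the paper establishes for the isogeometric case.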

  15. Development and Implementation of Science and Technology Ethics Education Program for Prospective Science Teachers

    NASA Astrophysics Data System (ADS)

    Rhee, Hyang-yon; Choi, Kyunghee

    2014-05-01

    The purposes of this study were (1) to develop a science and technology (ST) ethics education program for prospective science teachers, (2) to examine the effect of the program on the perceptions of the participants, in terms of their ethics and education concerns, and (3) to evaluate the impact of the program design. The program utilized problem-based learning (PBL) which was performed as an iterative process during two cycles. A total of 23 and 29 prospective teachers in each cycle performed team activities. A PBL-based ST ethics education program for the science classroom setting was effective in enhancing participants' perceptions of ethics and education in ST. These perceptions motivated prospective science teachers to develop and implement ST ethics education in their future classrooms. The change in the prospective teachers' perceptions of ethical issues and the need for ethics education was greater when the topic was controversial.

  16. Combined electrochemical, heat generation, and thermal model for large prismatic lithium-ion batteries in real-time applications

    NASA Astrophysics Data System (ADS)

    Farag, Mohammed; Sweity, Haitham; Fleckenstein, Matthias; Habibi, Saeid

    2017-08-01

    Real-time prediction of the battery's core temperature and terminal voltage is very crucial for an accurate battery management system. In this paper, a combined electrochemical, heat generation, and thermal model is developed for large prismatic cells. The proposed model consists of three sub-models, an electrochemical model, heat generation model, and thermal model which are coupled together in an iterative fashion through physicochemical temperature dependent parameters. The proposed parameterization cycles identify the sub-models' parameters separately by exciting the battery under isothermal and non-isothermal operating conditions. The proposed combined model structure shows accurate terminal voltage and core temperature prediction at various operating conditions while maintaining a simple mathematical structure, making it ideal for real-time BMS applications. Finally, the model is validated against both isothermal and non-isothermal drive cycles, covering a broad range of C-rates, and temperature ranges [-25 °C to 45 °C].
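The iterative coupling of sub-models through temperature-dependent parameters can be sketched as a scalar fixed-point loop; every number below is an assumed stand-in, not a parameter from the paper:

```python
# Minimal sketch (not the paper's model) of coupling electrical and thermal
# sub-models through a temperature-dependent parameter, iterated to a fixed point.
R0, alpha = 0.01, 0.004   # resistance at 25 C (ohm) and its temp. coefficient (1/K), assumed
R_th, T_amb = 0.2, 25.0   # cell-to-ambient thermal resistance (K/W), ambient temp (C), assumed
I = 100.0                 # constant discharge current (A), assumed

T = T_amb
for _ in range(50):
    R = R0 * (1.0 + alpha * (T - 25.0))  # electrical sub-model: R depends on T
    Q = I**2 * R                         # heat-generation sub-model: Joule heating
    T = T_amb + R_th * Q                 # steady-state thermal sub-model
# The loop converges because the feedback gain R_th * I^2 * R0 * alpha < 1.
```

The same pattern, iterating sub-models until their shared physicochemical parameters agree, is what couples the electrochemical, heat-generation, and thermal blocks in the combined model.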

  17. Spectral iterative method and convergence analysis for solving nonlinear fractional differential equation

    NASA Astrophysics Data System (ADS)

    Yarmohammadi, M.; Javadi, S.; Babolian, E.

    2018-04-01

    In this study a new spectral iterative method (SIM) based on fractional interpolation is presented for solving nonlinear fractional differential equations (FDEs) involving Caputo derivative. This method is equipped with a pre-algorithm to find the singularity index of solution of the problem. This pre-algorithm gives us a real parameter as the index of the fractional interpolation basis, for which the SIM achieves the highest order of convergence. In comparison with some recent results about the error estimates for fractional approximations, a more accurate convergence rate has been attained. We have also proposed the order of convergence for fractional interpolation error under the L2-norm. Finally, general error analysis of SIM has been considered. The numerical results clearly demonstrate the capability of the proposed method.

  18. Perl Modules for Constructing Iterators

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2009-01-01

    The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
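The nested-iteration behavior described for Iterator::Hash can be mimicked in a few lines of Python (a sketch of the design pattern only, not the Perl modules' actual API):

```python
import itertools

def hash_iterator(spec):
    """Yield every combination of the per-key value series in `spec`,
    mimicking the nested iteration described for Iterator::Hash."""
    keys = list(spec)
    for values in itertools.product(*(spec[k] for k in keys)):
        yield dict(zip(keys, values))

# Two keys with two values each yield all four permutations, in nested order.
combos = list(hash_iterator({"host": ["a", "b"], "port": [80, 443]}))
```

The caller sees an opaque stream of complete hash values, which is the same contract the Perl modules expose: construct once from a description, then query values one at a time.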

  19. On feasibility of a closed nuclear power fuel cycle with minimum radioactivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrianova, E. A.; Davidenko, V. D.; Tsibulskiy, V. F., E-mail: Tsibulskiy-VF@nrcki.ru

    2015-12-15

    Practical implementation of a closed nuclear fuel cycle implies solution of two main tasks. The first task is creation of environmentally acceptable operating conditions of the nuclear fuel cycle considering, first of all, high radioactivity of the involved materials. The second task is creation of effective and economically appropriate conditions of involving fertile isotopes in the fuel cycle. Creation of technologies for management of the high-level radioactivity of spent fuel reliable in terms of radiological protection seems to be the hardest problem.

  20. Designing and Developing a Blended Course: Toward Best Practices for Japanese Learners

    ERIC Educational Resources Information Center

    Mehran, Parisa; Alizadeh, Mehrasa; Koguchi, Ichiro; Takemura, Haruo

    2017-01-01

    This paper outlines the iterative stages involved in designing and developing a blended course of English for General Academic Purposes (EGAP) at Osaka University. First, the basic Successive Approximation Model (SAM 1) is introduced as the guiding instructional design model upon which the course was created. Afterward, the stages of design and…

  1. Design and Facilitation of Problem-Based Learning in Graduate Teacher Education: An MA TESOL Case

    ERIC Educational Resources Information Center

    Caswell, Cynthia Ann

    2016-01-01

    This exploratory, evaluative case study introduces a new context for problem-based learning (PBL) involving an iterative, modular approach to curriculum-wide delivery of PBL in an MA TESOL program. The introduction to the curriculum context provides an overview of the design and delivery features particular to the situation. The delivery approach…

  2. No More "Magic Aprons": Longitudinal Assessment and Continuous Improvement of Customer Service at the University of North Dakota Libraries

    ERIC Educational Resources Information Center

    Clark, Karlene T.; Walker, Stephanie R.

    2017-01-01

    The University of North Dakota (UND) Libraries have developed a multi-award winning Customer Service Program (CSP) involving longitudinal assessment and continuous improvement. The CSP consists of iterative training modules; constant reinforcement of Customer Service Principles with multiple communication strategies and tools, and incentives that…

  3. A New Iterative Method to Calculate [pi]

    ERIC Educational Resources Information Center

    Dion, Peter; Ho, Anthony

    2012-01-01

    For at least 2000 years people have been trying to calculate the value of [pi], the ratio of the circumference to the diameter of a circle. People know that [pi] is an irrational number; its decimal representation goes on forever. Early methods were geometric, involving the use of inscribed and circumscribed polygons of a circle. However, real…
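The geometric approach mentioned above is Archimedes' polygon-doubling recurrence, itself a simple iterative method for pi: perimeters of inscribed and circumscribed regular polygons bracket the circumference of a unit-diameter circle, and doubling the number of sides tightens the bracket each step.

```python
import math

# Archimedes' recurrence for a circle of diameter 1, starting from hexagons.
p = 3.0                   # perimeter of the inscribed hexagon
P = 2.0 * math.sqrt(3.0)  # perimeter of the circumscribed hexagon
for _ in range(15):       # each pass doubles the number of sides
    P = 2.0 * P * p / (P + p)   # harmonic mean -> circumscribed 2n-gon
    p = math.sqrt(p * P)        # geometric mean -> inscribed 2n-gon
```

Fifteen doublings (a 196,608-gon) pin pi to roughly nine decimal places, since the bracket width shrinks by about a factor of four per iteration.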

  4. Reflections on the Use of Iterative, Agile and Collaborative Approaches for Blended Flipped Learning Development

    ERIC Educational Resources Information Center

    Owen, Hazel; Dunham, Nicola

    2015-01-01

    E-learning experiences are widely becoming common practice in many schools, tertiary institutions and other organisations. However despite this increased use of technology to enhance learning and the associated investment involved the result does not always equate to more engaged, knowledgeable and skilled learners. We have observed two key…

  5. What's Getting in the Way of Play? An Analysis of the Contextual Factors that Hinder Recess in Elementary Schools

    ERIC Educational Resources Information Center

    McNamara, Lauren

    2013-01-01

    This article describes the first two years of an ongoing, collaborative action research project focused on the troubled recess environment in 4 elementary schools in southern Ontario. The project involves an iterative, dynamic process of inquiry, planning, action, and reflection among students, teachers, university researchers, university student…

  6. Expanding the Role of School Psychologists to Support Early Career Teachers: A Mixed-Method Study

    ERIC Educational Resources Information Center

    Shernoff, Elisa S.; Frazier, Stacy L.; Maríñez-Lora, Ané M.; Lakind, Davielle; Atkins, Marc S.; Jakobsons, Lara; Hamre, Bridget K.; Bhaumik, Dulal K.; Parker-Katz, Michelle; Neal, Jennifer Watling; Smylie, Mark A.; Patel, Darshan A.

    2016-01-01

    School psychologists have training and expertise in consultation and evidence-based interventions that position them well to support early career teachers (ECTs). The current study involved iterative development and pilot testing of an intervention to help ECTs become more effective in classroom management and engaging learners, as well as more…

  7. Decision aid prototype development for parents considering adenotonsillectomy for their children with sleep disordered breathing.

    PubMed

    Maguire, Erin; Hong, Paul; Ritchie, Krista; Meier, Jeremy; Archibald, Karen; Chorney, Jill

    2016-11-04

    To describe the process involved in developing a decision aid prototype for parents considering adenotonsillectomy for their children with sleep disordered breathing. A paper-based decision aid prototype was developed using the framework proposed by the International Patient Decision Aids Standards Collaborative. The decision aid focused on two main treatment options: watchful waiting and adenotonsillectomy. Usability was assessed with parents of pediatric patients and providers with qualitative content analysis of semi-structured interviews, which included open-ended user feedback. A steering committee composed of key stakeholders was assembled. A needs assessment was then performed, which confirmed the need for a decision support tool. A decision aid prototype was developed and modified based on semi-structured qualitative interviews and a scoping literature review. The prototype provided information on the condition, risk and benefits of treatments, and values clarification. The prototype underwent three cycles of accessibility, feasibility, and comprehensibility testing, incorporating feedback from all stakeholders to develop the final decision aid prototype. A standardized, iterative methodology was used to develop a decision aid prototype for parents considering adenotonsillectomy for their children with sleep disordered breathing. The decision aid prototype appeared feasible, acceptable and comprehensible, and may serve as an effective means of improving shared decision-making.

  8. The RNA-mediated, asymmetric ring regulatory mechanism of the transcription termination Rho helicase decrypted by time-resolved nucleotide analog interference probing (trNAIP).

    PubMed

    Soares, Emilie; Schwartz, Annie; Nollmann, Marcello; Margeat, Emmanuel; Boudvillain, Marc

    2014-08-01

    Rho is a ring-shaped, ATP-dependent RNA helicase/translocase that dissociates transcriptional complexes in bacteria. How RNA recognition is coupled to ATP hydrolysis and translocation in Rho is unclear. Here, we develop and use a new combinatorial approach, called time-resolved Nucleotide Analog Interference Probing (trNAIP), to unmask RNA molecular determinants of catalytic Rho function. We identify a regulatory step in the translocation cycle involving recruitment of the 2'-hydroxyl group of the incoming 3'-RNA nucleotide by a Rho subunit. We propose that this step arises from the intrinsic weakness of one of the subunit interfaces caused by asymmetric, split-ring arrangement of primary RNA tethers around the Rho hexamer. Translocation is at highest stake every seventh nucleotide when the weak interface engages the incoming 3'-RNA nucleotide or breaks, depending on RNA threading constraints in the Rho pore. This substrate-governed, 'test to run' iterative mechanism offers a new perspective on how a ring-translocase may function or be regulated. It also illustrates the interest and versatility of the new trNAIP methodology to unveil the molecular mechanisms of complex RNA-based systems. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. International Fusion Materials Irradiation Facility injector acceptance tests at CEA/Saclay: 140 mA/100 keV deuteron beam characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gobin, R., E-mail: rjgobin@cea.fr; Bogard, D.; Chauvin, N.

    In the framework of the ITER broader approach, the International Fusion Materials Irradiation Facility (IFMIF) deuteron accelerator (2 × 125 mA at 40 MeV) is an irradiation tool dedicated to high neutron flux production for future nuclear plant material studies. During the validation phase, the Linear IFMIF Prototype Accelerator (LIPAc) machine will be tested on the Rokkasho site in Japan. This demonstrator aims to produce a 125 mA/9 MeV deuteron beam. Involved in the LIPAc project for several years, specialists from CEA/Saclay designed the injector, based on a SILHI-type ECR source operating at 2.45 GHz and a two-solenoid low energy beam line, to produce such a high intensity beam. The whole injector, equipped with its dedicated diagnostics, has then been installed and tested on the Saclay site. Before shipment from Europe to Japan, acceptance tests were performed in November 2012 with a 100 keV deuteron beam and intensity as high as 140 mA in continuous and pulsed mode. In this paper, the emittance measurements done for different duty cycles and different beam intensities will be presented as well as beam species fraction analysis. Then the reinstallation in Japan and the commissioning plan on site will be reported.

  10. Porting marine ecosystem model spin-up using transport matrices to GPUs

    NASA Astrophysics Data System (ADS)

    Siewertsen, E.; Piwonski, J.; Slawig, T.

    2013-01-01

    We have ported an implementation of the spin-up for marine ecosystem models based on transport matrices to graphics processing units (GPUs). The original implementation was designed for distributed-memory architectures and uses the Portable, Extensible Toolkit for Scientific Computation (PETSc) library that is based on the Message Passing Interface (MPI) standard. The spin-up computes a steady seasonal cycle of ecosystem tracers with climatological ocean circulation data as forcing. Since the transport is linear with respect to the tracers, the resulting operator is represented by matrices. Each iteration of the spin-up involves two matrix-vector multiplications and the evaluation of the used biogeochemical model. The original code was written in C and Fortran. On the GPU, we use the Compute Unified Device Architecture (CUDA) standard, a customized version of PETSc and a commercial CUDA Fortran compiler. We describe the extensions to PETSc and the modifications of the original C and Fortran codes that had to be done. Here we make use of freely available libraries for the GPU. We analyze the computational effort of the main parts of the spin-up for two exemplar ecosystem models and compare the overall computational time to those necessary on different CPUs. The results show that a consumer GPU can compete with a significant number of cluster CPUs without further code optimization.
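The structure of one spin-up iteration (a sparse transport matvec plus a local biogeochemical source) can be sketched with a synthetic transport matrix and a relaxation-type source term; every matrix and rate below is a stand-in, not the models of the paper:

```python
import numpy as np
import scipy.sparse as sp

# Synthetic sparse "transport matrix": nonnegative mixing, rows normalized.
n = 500
S = sp.random(n, n, density=0.02, random_state=1) + sp.eye(n)
S = sp.diags(1.0 / np.asarray(S.sum(axis=1)).ravel()) @ S   # row-stochastic
A = (0.5 * sp.eye(n) + 0.5 * S).tocsr()                     # toy transport operator

dt_lambda = 0.1               # time step times relaxation rate (assumed)
u_star = np.ones(n)           # restoring target tracer concentration (assumed)

def step(u):
    # One spin-up iteration: transport matvec plus relaxation "biogeochemistry".
    return A @ u + dt_lambda * (u_star - u)

u = np.zeros(n)
for _ in range(300):          # iterate toward the steady state
    u = step(u)
# Because A is row-stochastic, the steady state here is exactly u_star.
```

In the real code the sparse matvec (and the biogeochemical model evaluation) dominate the cost of each iteration, which is precisely why offloading them to the GPU pays off.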

  11. Usability and utility evaluation of the web-based "Should I Start Insulin?" patient decision aid for patients with type 2 diabetes among older people.

    PubMed

    Lee, Yew Kong; Lee, Ping Yein; Ng, Chirk Jenn; Teo, Chin Hai; Abu Bakar, Ahmad Ihsan; Abdullah, Khatijah Lim; Khoo, Ee Ming; Hanafi, Nik Sherina; Low, Wah Yun; Chiew, Thiam Kian

    2018-01-01

    This study aimed to evaluate the usability (ease of use) and utility (impact on the user's decision-making process) of a web-based patient decision aid (PDA) among older-age users. A pragmatic, qualitative research design was used. We recruited patients with type 2 diabetes who were at the point of making a decision about starting insulin from a tertiary teaching hospital in Malaysia in 2014. Computer screen recording software was used to record the website browsing session and in-depth interviews were conducted while playing back the website recording. The interviews were analyzed using the framework approach to identify usability and utility issues. Three cycles of iteration were conducted until no more major issues emerged. Thirteen patients participated: median age 65 years; 10 were men; nine had secondary education or a diploma and four had graduate or postgraduate degrees. Four usability issues were identified (navigation between pages and sections, a layout with open display, simple language, and equipment preferences). For utility, participants commented that the website influenced their decision about insulin in three ways: it provided information about insulin, it helped them deliberate choices using the option-attribute matrix, and it allowed them to involve others in their decision making by sharing the PDA summary printout.

  12. Combining High-Speed Cameras and Stop-Motion Animation Software to Support Students' Modeling of Human Body Movement

    NASA Astrophysics Data System (ADS)

    Lee, Victor R.

    2015-04-01

    Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the use of technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video, can be deployed in such a way as to support students' participation in practices of scientific modeling. As participants in a classroom design experiment, fifteen fifth-grade students worked with high-speed cameras and stop-motion animation software (SAM Animation) over several days to produce dynamic models of motion and body movement. The designed series of learning activities involved iterative cycles of animation creation and critique and use of various depictive materials. Subsequent analysis of flipbooks of human jumping movements created by the students at the beginning and end of the unit revealed a significant improvement in the epistemic fidelity of students' representations. Excerpts from classroom observations highlight the role that the teacher plays in supporting students' thoughtful reflection on and attention to slow-motion video. In total, this design and research intervention demonstrates that the combination of technologies, activities, and teacher support can lead to improvements in some of the foundations associated with students' modeling.

  13. Cell Cycle Regulation of Stem Cells by MicroRNAs.

    PubMed

    Mens, Michelle M J; Ghanbari, Mohsen

    2018-06-01

    MicroRNAs (miRNAs) are a class of small non-coding RNA molecules involved in the regulation of gene expression. They are involved in the fine-tuning of fundamental biological processes such as proliferation, differentiation, survival and apoptosis in many cell types. Emerging evidence suggests that miRNAs regulate critical pathways involved in stem cell function. Several miRNAs have been suggested to target transcripts that directly or indirectly coordinate the cell cycle progression of stem cells. Moreover, previous studies have shown that altered expression levels of miRNAs can contribute to pathological conditions, such as cancer, due to the loss of cell cycle regulation. However, the precise mechanism underlying miRNA-mediated regulation of cell cycle in stem cells is still incompletely understood. In this review, we discuss current knowledge of miRNAs regulatory role in cell cycle progression of stem cells. We describe how specific miRNAs may control cell cycle associated molecules and checkpoints in embryonic, somatic and cancer stem cells. We further outline how these miRNAs could be regulated to influence cell cycle progression in stem cells as a potential clinical application.

  14. Cell Cycle Regulates Nuclear Stability of AID and Determines the Cellular Response to AID

    PubMed Central

    Le, Quy; Maizels, Nancy

    2015-01-01

    AID (Activation Induced Deaminase) deaminates cytosines in DNA to initiate immunoglobulin gene diversification and to reprogram CpG methylation in early development. AID is potentially highly mutagenic, and it causes genomic instability evident as translocations in B cell malignancies. Here we show that AID is cell cycle regulated. By high content screening microscopy, we demonstrate that AID undergoes nuclear degradation more slowly in G1 phase than in S or G2-M phase, and that mutations that affect regulatory phosphorylation or catalytic activity can alter AID stability and abundance. We directly test the role of cell cycle regulation by fusing AID to tags that destabilize nuclear protein outside of G1 or S-G2/M phases. We show that enforced nuclear localization of AID in G1 phase accelerates somatic hypermutation and class switch recombination, and is well-tolerated; while nuclear AID compromises viability in S-G2/M phase cells. We identify AID derivatives that accelerate somatic hypermutation with minimal impact on viability, which will be useful tools for engineering genes and proteins by iterative mutagenesis and selection. Our results further suggest that use of cell cycle tags to regulate nuclear stability may be generally applicable to studying DNA repair and to engineering the genome. PMID:26355458

  15. Tests of a two-color interferometer and polarimeter for ITER density measurements

    NASA Astrophysics Data System (ADS)

    Van Zeeland, M. A.; Carlstrom, T. N.; Finkenthal, D. K.; Boivin, R. L.; Colio, A.; Du, D.; Gattuso, A.; Glass, F.; Muscatello, C. M.; O'Neill, R.; Smiley, M.; Vasquez, J.; Watkins, M.; Brower, D. L.; Chen, J.; Ding, W. X.; Johnson, D.; Mauzey, P.; Perry, M.; Watts, C.; Wood, R.

    2017-12-01

    A full-scale 120 m path length ITER toroidal interferometer and polarimeter (TIP) prototype, including an active feedback alignment system, has been constructed and has undergone initial testing at General Atomics. In the TIP prototype, two-color interferometry is carried out at 10.59 μm and 5.22 μm using a CO2 laser and a quantum cascade laser (QCL), respectively, while a separate polarimetry measurement of the plasma-induced Faraday effect is made at 10.59 μm. The polarimeter system uses co-linear right- and left-hand circularly polarized beams, upshifted by 40 and 44 MHz acousto-optic cells respectively, to generate the necessary beat signal for heterodyne phase detection, while interferometry measurements are carried out at both 40 MHz and 44 MHz for the CO2 laser and 40 MHz for the QCL. The high-resolution phase information is obtained using an all-digital FPGA-based phase demodulation scheme and a precision clock source. The TIP prototype is equipped with a piezo tip/tilt stage active feedback alignment system responsible for minimizing noise in the measurement and keeping the TIP diagnostic aligned indefinitely on its 120 m beam path, including as the ITER vessel is brought from ambient to operating temperatures. The prototype beam path incorporates translation stages to simulate ITER motion through a bake cycle as well as other sources of motion or misalignment. Even in the presence of significant motion, the TIP prototype is able to meet ITER's density measurement requirements over 1000 s shot durations, with demonstrated phase resolution of 0.06° and 1.5° for the polarimeter and vibration-compensated interferometer, respectively. TIP vibration-compensated interferometer measurements of a plasma have also been made in a pulsed radio frequency device and show a line-integrated density resolution of δ(nL) = 3.5 × 10^17 m^-2.

  16. Intelligent model-based OPC

    NASA Astrophysics Data System (ADS)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.

    2006-03-01

    Optical proximity correction is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. For model-based optical proximity correction, a lithographic model to predict the edge position (contour) of patterns on the wafer after lithographic processing is needed. Generally, segmentation of edges is performed prior to the correction. Pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initial drawn position, assisted by the lithographic model, to finally settle on the proper positions. When the correction converges, the intensity predicted by the model at every target point hits the model-specific threshold value. Several iterations are required to achieve convergence, and the computation time increases with the number of required iterations. An artificial neural network is an information-processing paradigm inspired by biological nervous systems, such as the way the brain processes information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool that is able to capture and represent complex input/output relationships. The network can accurately predict the behavior of a system via the learning procedure. A radial basis function network, a variant of the artificial neural network, is an efficient function approximator. In this paper, a radial basis function network was used to build a mapping from the segment characteristics to the edge shift from the drawn position. This network can provide a good initial guess for each segment on which OPC is carried out. The good initial guess reduces the number of required iterations; consequently, cycle time can be shortened effectively. The optimization of the radial basis function network for this system was performed by a genetic algorithm, an artificially intelligent optimization method with a high probability of finding the global optimum. In preliminary results, the required iterations were reduced from 5 to 2 for a simple dumbbell-shaped layout.
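
    As an illustration of the mapping described above, a minimal Gaussian radial basis function regression can be sketched as follows; the single synthetic input feature, the basis centers and width, and the sine-shaped stand-in for the edge-shift target are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# Gaussian RBF design matrix: one basis function per center.
def rbf_design(x, centers, width):
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * width ** 2))

x_train = np.linspace(0.0, 1.0, 20)        # synthetic "segment characteristic"
y_train = np.sin(2 * np.pi * x_train)      # synthetic "edge shift" target
centers = np.linspace(0.0, 1.0, 10)        # assumed basis centers
width = 0.15                               # assumed basis width

Phi = rbf_design(x_train, centers, width)
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)  # fit weights by least squares

y_hat = rbf_design(x_train, centers, width) @ w    # predicted edge shifts
```

In an OPC flow, `y_hat` would seed each segment's starting position so the correction loop needs fewer iterations to converge.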

  17. Triple effect absorption cycles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, D.C.; Potnis, S.V.; Tang, J.

    1996-12-31

    Triple effect absorption chillers can achieve 50% COP improvement over double-effect systems. However, to translate this potential into cost-effective hardware, the most promising embodiments must be identified. In this study, 12 generic triple effect cycles and 76 possible hermetic loop arrangements of those 12 generic cycles were identified. The generic triple effect cycles were screened based on their pressure and solubility field requirements, generic COPs, risk involved in the component design, and number of components in a highly corrosive environment. This screening identified four promising arrangements: the Alkitrate Topping cycle, the Pressure Staged Envelope cycle, the High Pressure Overlap cycle, and the Dual Loop cycle. All of these arrangements have a very high COP (approximately 1.8); however, the development risk and cost involved differ for each arrangement. Therefore, the selection of a particular arrangement will depend upon the specific situation under consideration.

  18. An Iterative Solver in the Presence and Absence of Multiplicity for Nonlinear Equations

    PubMed Central

    Özkum, Gülcan

    2013-01-01

    We develop a high-order fixed point type method to approximate a multiple root. By using three functional evaluations per full cycle, a new class of fourth-order methods for this purpose is suggested and established. The methods from the class require knowledge of the multiplicity. We also present a method in the absence of multiplicity for nonlinear equations. In order to attest to the efficiency of the obtained methods, we employ numerical comparisons alongside obtaining basins of attraction to compare them in the complex plane according to their convergence speed and chaotic behavior. PMID:24453914
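
    The fourth-order class itself is not reproduced in the abstract; as a simpler sketch of multiplicity-aware fixed point iteration, the classical modified Newton method x_{k+1} = x_k - m f(x_k)/f'(x_k), which restores fast convergence at a root of known multiplicity m, can be written as follows (the test function is ours).

```python
# Classical modified Newton iteration for a root of known multiplicity m:
#   x_{k+1} = x_k - m * f(x_k) / f'(x_k)
# (a second-order baseline, not the fourth-order class of the paper).
def modified_newton(f, df, x0, m, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= m * fx / df(x)
    return x

# f(x) = (x - 2)^3 has a root of multiplicity 3 at x = 2.
root = modified_newton(lambda x: (x - 2.0) ** 3,
                       lambda x: 3.0 * (x - 2.0) ** 2,
                       x0=3.0, m=3)
```

Without the factor m, plain Newton degrades to linear convergence at a multiple root, which is the motivation for multiplicity-aware schemes.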

  19. A free interactive matching program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J.-F. Ostiguy

    1999-04-16

    For physicists and engineers involved in the design and analysis of beamlines (transfer lines or insertions), the lattice function matching problem is central and can be time-consuming because it involves constrained nonlinear optimization. For such problems, convergence can be difficult to obtain in general without expert human intervention. Over the years, powerful codes have been developed to assist beamline designers. The canonical example is MAD (Methodical Accelerator Design), developed at CERN by Christophe Iselin. MAD, through a specialized command language, allows one to solve a wide variety of problems, including matching problems. Although in principle the MAD command interpreter can be run interactively, in practice the solution of a matching problem involves a sequence of independent trial runs. Unfortunately, but perhaps not surprisingly, there still exist relatively few tools exploiting the resources offered by modern environments to assist lattice designers with this routine and repetitive task. In this paper, we describe a fully interactive lattice matching program, written in C++ and assembled using freely available software components. An important feature of the code is that the evolution of the lattice functions during the nonlinear iterative process can be graphically monitored in real time; the user can dynamically interrupt the iterations at will to introduce new variables, freeze existing ones into their current state and/or modify constraints. The program runs under both UNIX and Windows NT.

  20. RF Negative Ion Source Development at IPP Garching

    NASA Astrophysics Data System (ADS)

    Kraus, W.; McNeely, P.; Berger, M.; Christ-Koch, S.; Falter, H. D.; Fantz, U.; Franzen, P.; Fröschle, M.; Heinemann, B.; Leyer, S.; Riedl, R.; Speth, E.; Wünderlich, D.

    2007-08-01

    IPP Garching is heavily involved in the development of an ion source for Neutral Beam Heating of the ITER Tokamak. RF driven ion sources have been successfully developed and are in operation on the ASDEX-Upgrade Tokamak for positive ion based NBH by the NB Heating group at IPP Garching. Building on this experience, an RF driven H- ion source has been under development at IPP Garching as an alternative to the ITER reference design ion source. The number of test beds devoted to source development for ITER has increased from one (BATMAN) to three with the addition of two further test beds (MANITU and RADI). This paper contains descriptions of the three test beds. Results on diagnostic development using laser photodetachment and cavity ringdown spectroscopy are given for BATMAN. The latest results on long pulse development on MANITU are presented, including the longest pulse to date (600 s), along with details of the source modifications necessitated by pulses in excess of 100 s. The newest test bed, RADI, is still being commissioned, and only technical details of that test bed are included in this paper. The final topic of the paper is an investigation into the effects of biasing the plasma grid.

  1. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1993-01-01

    In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form), together with the well-known spatially split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
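
    A minimal sketch of the incremental (delta) form: rather than solving A x = b directly, one repeatedly solves M Δx = b - A x with an easily inverted approximate operator M and updates x ← x + Δx. Here M is simply the diagonal of A, a stand-in for the spatially split approximate factorization, and the system is synthetic.

```python
import numpy as np

# Incremental ("delta") iteration: solve M dx = b - A x, update x <- x + dx.
# M here is just diag(A); a diagonally dominant synthetic system is used.
rng = np.random.default_rng(1)
n = 50
A = 4.0 * np.eye(n) + rng.normal(0.0, 0.02, (n, n))
b = rng.normal(size=n)
M_inv = 1.0 / np.diag(A)        # trivially inverted approximate operator

x = np.zeros(n)
for _ in range(100):
    r = b - A @ x               # residual of the standard form
    x += M_inv * r              # incremental correction dx
```

Because each update is driven by the residual of the exact system, the iteration converges to the solution of A x = b even though M only approximates A.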

  2. Principal components and iterative regression analysis of geophysical series: Application to Sunspot number (1750-2004)

    NASA Astrophysics Data System (ADS)

    Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.

    2008-11-01

    We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method appears to be a useful improvement for the quantitative analysis of periodicities in non-stationary time series. The principal components determination, followed by the least squares iterative regression method, was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is the set of sine functions embedded in the analyzed series, in decreasing order of significance: from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Taking into account the need for deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
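
    One step of such an iterative sine regression can be sketched as follows: for a trial period, the amplitude and phase are obtained by linear least squares on a sin/cos basis, and the fitted component is subtracted before the next iteration. The period refinement and the principal component extraction of the paper are omitted; the 11-year test signal is illustrative.

```python
import numpy as np

def fit_sine(t, y, period):
    # Linear least squares for the amplitude/phase at a fixed trial period.
    A = np.column_stack([np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    amplitude = float(np.hypot(coef[0], coef[1]))
    residual = y - A @ coef     # passed to the next iteration of the search
    return amplitude, residual

t = np.arange(255.0)                       # e.g. 255 yearly values
y = 3.0 * np.sin(2 * np.pi * t / 11.0)     # synthetic 11-year cycle
amp, resid = fit_sine(t, y, 11.0)
```

Repeating the fit on `resid` with new trial periods extracts components in decreasing order of significance, as the abstract describes.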

  3. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementational challenges exist in extending them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
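
    The deterministic kernel of such a solver, a damped two-subdomain overlapping additive Schwarz iteration for a 1-D Poisson problem, can be sketched as follows; the stochastic/polynomial chaos dimension and the multi-level structure are omitted, and the sizes, overlap and damping factor are illustrative.

```python
import numpy as np

# 1-D Poisson system A x = b (tridiagonal Laplacian) split into two
# overlapping subdomains; each iteration sums damped local corrections.
n = 40
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

left, right = slice(0, 24), slice(16, 40)  # 8-point overlap
x = np.zeros(n)
for _ in range(200):
    r = b - A @ x
    dx = np.zeros(n)
    for s in (left, right):                # local solves on each subdomain
        dx[s] += np.linalg.solve(A[s, s], r[s])
    x += 0.5 * dx                          # damped additive Schwarz update
```

The two local solves are independent, which is what makes the method attractive on parallel machines.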

  4. Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms

    NASA Astrophysics Data System (ADS)

    Mohan, K. Aditya

    2017-10-01

    4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquire data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal to noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters such as the view sampling strategy while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived using the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that allows for improving the temporal resolution by progressively reducing the number of views or the detector exposure time. Theoretical analysis of the effect of the data acquisition parameters on the detector signal to noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented in this paper.
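
    In miniature, regularized iterative reconstruction from few noisy views can be sketched as gradient descent on a Tikhonov-penalized least squares objective; the random system matrix, penalty weight and step size below are illustrative stand-ins for a physical forward model and the edge-preserving priors of model-based reconstruction.

```python
import numpy as np

# Gradient descent on ||A x - y||^2 + lam * ||x||^2: an iterative
# reconstruction that tolerates a reduced number of noisy views.
rng = np.random.default_rng(2)
n_views, n_pix = 30, 20                    # a deliberately small problem
A = rng.normal(size=(n_views, n_pix))      # synthetic projection operator
x_true = rng.normal(size=n_pix)
y = A @ x_true + 0.01 * rng.normal(size=n_views)

lam, step = 0.01, 0.001
x = np.zeros(n_pix)
for _ in range(10000):
    grad = 2.0 * A.T @ (A @ x - y) + 2.0 * lam * x
    x -= step * grad
```

Reducing `n_views` or increasing the noise makes the regularization term progressively more important, which is the trade-off the abstract's experiment-design analysis addresses.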

  5. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementational challenges exist in extending them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  6. An Iterative Local Updating Ensemble Smoother for Estimation and Uncertainty Assessment of Hydrologic Model Parameters With Multimodal Distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Lin, Guang; Li, Weixuan; Wu, Laosheng; Zeng, Lingzao

    2018-03-01

    Ensemble smoother (ES) has been widely used in inverse modeling of hydrologic systems. However, for problems where the distribution of model parameters is multimodal, using ES directly would be problematic. One popular solution is to use a clustering algorithm to identify each mode and update the clusters with ES separately. However, this strategy may not be very efficient when the dimension of the parameter space is high or the number of modes is large. Alternatively, we propose in this paper a very simple and efficient algorithm, i.e., the iterative local updating ensemble smoother (ILUES), to explore multimodal distributions of model parameters in nonlinear hydrologic systems. The ILUES algorithm works by updating local ensembles of each sample with ES to explore possible multimodal distributions. To achieve satisfactory data matches in nonlinear problems, we adopt an iterative form of ES to assimilate the measurements multiple times. Numerical cases involving nonlinearity and multimodality are tested to illustrate the performance of the proposed method. It is shown that, overall, the ILUES algorithm can well quantify the parametric uncertainties of complex hydrologic models, whether or not multimodal distributions exist.
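
    The building block that ILUES applies to each local ensemble is a plain ES update; a sketch with a one-parameter identity forward model is given below (the localization and iteration of ILUES are omitted, and all sizes are illustrative).

```python
import numpy as np

# Plain ensemble-smoother (ES) update, the building block ILUES applies to
# local ensembles.  m: parameter ensemble (n_ens x n_par); d: simulated
# data (n_ens x n_obs); obs: observed data; sd_obs: observation-error std.
def es_update(m, d, obs, sd_obs, rng):
    n_ens = m.shape[0]
    dm = m - m.mean(axis=0)
    dd = d - d.mean(axis=0)
    C_md = dm.T @ dd / (n_ens - 1)         # parameter-data cross-covariance
    C_dd = dd.T @ dd / (n_ens - 1)         # data covariance
    R = np.eye(len(obs)) * sd_obs ** 2     # observation-error covariance
    K = C_md @ np.linalg.inv(C_dd + R)     # Kalman-type gain
    perturbed = obs + rng.normal(0.0, sd_obs, size=d.shape)
    return m + (perturbed - d) @ K.T

rng = np.random.default_rng(0)
m = rng.normal(0.0, 1.0, size=(200, 1))    # prior ensemble of one parameter
d = m.copy()                               # identity forward model
m_post = es_update(m, d, obs=np.array([2.0]), sd_obs=0.1, rng=rng)
```

The update pulls the ensemble toward the observation while shrinking its spread; ILUES performs this update on local neighborhoods of each sample so that separate modes are preserved.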

  7. Multilevel Iterative Methods in Nonlinear Computational Plasma Physics

    NASA Astrophysics Data System (ADS)

    Knoll, D. A.; Finn, J. M.

    1997-11-01

    Many applications in computational plasma physics involve the implicit numerical solution of coupled systems of nonlinear partial differential equations or integro-differential equations. Such problems arise in MHD, systems of Vlasov-Fokker-Planck equations, and edge plasma fluid equations. We have been developing matrix-free Newton-Krylov algorithms for such problems and have applied these algorithms to the edge plasma fluid equations [1,2] and to the Vlasov-Fokker-Planck equation [3]. Recently we have found that, with increasing grid refinement, the number of Krylov iterations required per Newton iteration has become unmanageable [4]. This has led us to the study of multigrid methods as a means of preconditioning matrix-free Newton-Krylov methods. In this poster we will give details of the general multigrid preconditioned Newton-Krylov algorithm, as well as algorithm performance details on problems of interest in the areas of magnetohydrodynamics and edge plasma physics. Work supported by US DoE. 1. Knoll and McHugh, J. Comput. Phys., 116, pg. 281 (1995) 2. Knoll and McHugh, Comput. Phys. Comm., 88, pg. 141 (1995) 3. Mousseau and Knoll, J. Comput. Phys. (1997) (to appear) 4. Knoll and McHugh, SIAM J. Sci. Comput. 19, (1998) (to appear)
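
    A minimal matrix-free Newton-Krylov sketch for a stand-in problem, F(u) = -u'' + u^3 - 1 = 0 on a 1-D grid with homogeneous Dirichlet boundaries (not the edge plasma or Vlasov-Fokker-Planck systems of the abstract, and without the multigrid preconditioning under study): the Jacobian is never formed, and the inner Krylov solver (here plain conjugate gradients) only needs the action v → J(u)v.

```python
import numpy as np

def lap(u, h):                             # discrete -u'' with Dirichlet ends
    up = np.concatenate(([0.0], u, [0.0]))
    return (2.0 * up[1:-1] - up[:-2] - up[2:]) / h ** 2

def F(u, h):                               # nonlinear residual
    return lap(u, h) + u ** 3 - 1.0

def Jv(u, v, h):                           # Jacobian action, no matrix formed
    return lap(v, h) + 3.0 * u ** 2 * v

def cg(apply_A, b, tol=1e-10, max_iter=300):
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n, h = 63, 1.0 / 64
u = np.zeros(n)
for _ in range(10):                        # outer Newton iterations
    delta = cg(lambda v: Jv(u, v, h), -F(u, h))   # inner Krylov solve
    u += delta
    if np.linalg.norm(F(u, h)) < 1e-8:
        break
```

As the grid is refined, the unpreconditioned inner solve needs ever more Krylov iterations, which is exactly the behavior that motivates the multigrid preconditioning discussed in the abstract.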

  8. Can we estimate plasma density in ICP driver through electrical parameters in RF circuit?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bandyopadhyay, M., E-mail: mainak@iter-india.org; Sudhir, Dass, E-mail: dass.sudhir@iter-india.org; Chakraborty, A., E-mail: arunkc@iter-india.org

    2015-04-08

    To avoid regular maintenance, invasive plasma diagnostics with probes are not included in the inductively coupled plasma (ICP) based ITER Neutral Beam (NB) source design. Even non-invasive probes like optical emission spectroscopic diagnostics are not included in the present ITER NB design, due to overall system design and interface issues. As a result, the negative ion beam current through the extraction system in the ITER NB negative ion source is the only measurement which indicates the plasma condition inside the ion source. However, beam current depends not only on the plasma condition near the extraction region but also on the perveance condition of the ion extractor system and negative ion stripping. Nevertheless, the inductively coupled plasma production region (RF driver region) is placed at a distance (~30 cm) from the extraction region. Because of that, some uncertainties are expected to be involved if one tries to link beam current with plasma properties inside the RF driver. Plasma characterization in the source RF driver region is essential to maintain the optimum condition for source operation. In this paper, a method of plasma density estimation is described, based on a density dependent plasma load calculation.

  9. Coordination of Myeloid Differentiation with Reduced Cell Cycle Progression by PU.1 Induction of MicroRNAs Targeting Cell Cycle Regulators and Lipid Anabolism.

    PubMed

    Solomon, Lauren A; Podder, Shreya; He, Jessica; Jackson-Chornenki, Nicholas L; Gibson, Kristen; Ziliotto, Rachel G; Rhee, Jess; DeKoter, Rodney P

    2017-05-15

    During macrophage development, myeloid progenitor cells undergo terminal differentiation coordinated with reduced cell cycle progression. Differentiation of macrophages from myeloid progenitors is accompanied by increased expression of the E26 transformation-specific transcription factor PU.1. Reduced PU.1 expression leads to increased proliferation and impaired differentiation of myeloid progenitor cells. It is not understood how PU.1 coordinates macrophage differentiation with reduced cell cycle progression. In this study, we utilized cultured PU.1-inducible myeloid cells to perform genome-wide chromatin immunoprecipitation sequencing (ChIP-seq) analysis coupled with gene expression analysis to determine targets of PU.1 that may be involved in regulating cell cycle progression. We found that genes encoding cell cycle regulators and enzymes involved in lipid anabolism were directly and inducibly bound by PU.1, although their steady-state mRNA transcript levels were reduced. Inhibition of lipid anabolism was sufficient to reduce cell cycle progression in these cells. Induction of PU.1 reduced expression of E2f1, an important activator of genes involved in the cell cycle and lipid anabolism, indirectly through microRNA 223. Next-generation sequencing identified microRNAs validated as targeting the cell cycle and lipid anabolism for downregulation. These results suggest that PU.1 coordinates cell cycle progression with differentiation through induction of microRNAs targeting cell cycle regulators and lipid anabolism. Copyright © 2017 American Society for Microbiology.

  10. The cell cycle of early mammalian embryos: lessons from genetic mouse models.

    PubMed

    Artus, Jérôme; Babinet, Charles; Cohen-Tannoudji, Michel

    2006-03-01

    Genes coding for cell cycle components predicted to be essential for its regulation have been shown to be dispensable in mice, at the whole organism level. Such studies have highlighted the extraordinary plasticity of the embryonic cell cycle and suggest that many aspects of in vivo cell cycle regulation remain to be discovered. Here, we discuss the particularities of the mouse early embryonic cell cycle and review the mutations that result in cell cycle defects during mouse early embryogenesis, including deficiencies for genes of the cyclin family (cyclin A2 and B1), genes involved in cell cycle checkpoints (Mad2, Bub3, Chk1, Atr), genes involved in ubiquitin and ubiquitin-like pathways (Uba3, Ubc9, Cul1, Cul3, Apc2, Apc10, Csn2) as well as genes the function of which had not been previously ascribed to cell cycle regulation (Cdc2P1, E4F and Omcg1).

  11. A graphic approach to include dissipative-like effects in reversible thermal cycles

    NASA Astrophysics Data System (ADS)

    Gonzalez-Ayala, Julian; Arias-Hernandez, Luis Antonio; Angulo-Brown, Fernando

    2017-05-01

    Since the 1980s, a connection between a family of maximum-work reversible thermal cycles and maximum-power finite-time endoreversible cycles has been established. The endoreversible cycles produce entropy at their couplings with the external heat baths. Thus, this kind of cycle can be optimized under criteria of merit that involve entropy production terms. While the relation between the concepts of work and power is quite direct, the finite-time objective functions involving entropy production apparently have no reversible counterparts. In the present paper we show that it is also possible to establish a connection between irreversible cycle models and reversible ones by means of the concept of "geometric dissipation", which has to do with the equivalent role of a deficit of areas between some reversible cycles and the Carnot cycle and actual dissipative terms in a Curzon-Ahlborn engine.

  12. New Manning System Field Evaluation

    DTIC Science & Technology

    1985-11-01

    absence of welcoming attention has deleterious effects on family member stability and adaptability. Decreased self-esteem and negative attitudes toward unit...through TCATA and their BDM on-station data collection agents, is conducting self-administered attitudinal surveys among members (80% or more) of...soldier-unit performance. Data collection will involve three iterations of a self-administered mailed survey over an 18-month period. (3) Battalion

  13. Bootstrap evaluation of a young Douglas-fir height growth model for the Pacific Northwest

    Treesearch

    Nicholas R. Vaughn; Eric C. Turnblom; Martin W. Ritchie

    2010-01-01

    We evaluated the stability of a complex regression model developed to predict the annual height growth of young Douglas-fir. This model is highly nonlinear and is fit in an iterative manner for annual growth coefficients from data with multiple periodic remeasurement intervals. The traditional methods for such a sensitivity analysis either involve laborious math or...

  14. Least Squares Computations in Science and Engineering

    DTIC Science & Technology

    1994-02-01

    iterative least squares deblurring procedure. Because of the ill-posed characteristics of the deconvolution problem, in the presence of noise, direct...optimization methods. Generally, the problems are accompanied by constraints, such as bound constraints, and the observations are corrupted by noise. The...engineering. This effort has involved interaction with researchers in closed-loop active noise (vibration) control at Phillips Air Force Laboratory

  15. Iterated Hamiltonian type systems and applications

    NASA Astrophysics Data System (ADS)

    Tiba, Dan

    2018-04-01

    We discuss, in arbitrary dimension, certain Hamiltonian type systems and prove existence, uniqueness and regularity properties, under the independence condition. We also investigate the critical case, define a class of generalized solutions and prove existence and basic properties. Relevant examples and counterexamples are also indicated. The applications concern representations of implicitly defined manifolds and their perturbations, motivated by differential systems involving unknown geometries.

  16. For the Love of Statistics: Appreciating and Learning to Apply Experimental Analysis and Statistics through Computer Programming Activities

    ERIC Educational Resources Information Center

    Mascaró, Maite; Sacristán, Ana Isabel; Rufino, Marta M.

    2016-01-01

    For the past 4 years, we have been involved in a project that aims to enhance the teaching and learning of experimental analysis and statistics, of environmental and biological sciences students, through computational programming activities (using R code). In this project, through an iterative design, we have developed sequences of R-code-based…

  17. Development of a multimedia educational programme for first-time hearing aid users: a participatory design.

    PubMed

    Ferguson, Melanie; Leighton, Paul; Brandreth, Marian; Wharrad, Heather

    2018-05-02

    To develop content for a series of interactive video tutorials (or reusable learning objects, RLOs) for first-time adult hearing aid users, to enhance knowledge of hearing aids and communication. RLO content was based on an electronically delivered Delphi review, workshops, and iterative peer review and feedback, using a mixed-methods participatory approach. Participants comprised an expert panel of 33 hearing healthcare professionals and workshops involving 32 hearing aid users and 11 audiologists. This ensured that the social, emotional and practical experiences of the end-user were captured alongside clinical validity. Content for evidence-based, self-contained RLOs based on pedagogical principles was developed for delivery via DVD for television, PC or internet. Content was developed based on Delphi review statements about essential information that reached consensus (≥90%), visual representations of relevant concepts relating to hearing aids and communication, and iterative peer review and feedback of content. This participatory approach recognises and involves key stakeholders in the design process to create content for a user-friendly multimedia educational intervention, to supplement the clinical management of first-time hearing aid users. We propose that participatory methodologies be used in the development of content for e-learning interventions in hearing-related research and clinical practice.

  18. Deletion within the metallothionein locus of cadmium-tolerant Synechococcus PCC 6301 involving a highly iterated palindrome (HIP1).

    PubMed

    Gupta, A; Morby, A P; Turner, J S; Whitton, B A; Robinson, N J

    1993-01-01

    Genomic rearrangements involving amplification of metallothionein (MT) genes have been reported in metal-tolerant eukaryotes. Similarly, we have recently observed amplification and rearrangement of a prokaryotic MT locus, smt, in cells of Synechococcus PCC 6301 selected for Cd tolerance. Following the characterization of this locus, the altered smt region has now been isolated from a Cd-tolerant cell line, C3.2, and its nucleotide sequence determined. This has identified a deletion within smtB, which encodes a trans-acting repressor of smt transcription. Two identical palindromic octanucleotides (5'-GCGATCGC-3') traverse both borders of the excised element. This palindromic sequence is highly represented in the smt locus (7 occurrences in 1326 nucleotides), and analysis of the GenBank/EMBL/DDBJ DNA Nucleotide Sequence Data Libraries reveals that it is a highly iterated palindrome (HIP1) in other known sequences from Synechococcus strains (estimated to occur at an average frequency of once every c. 664 bp). HIP1 is also abundant in the genomes of other cyanobacteria. The functional significance of the smtB deletion and the possible role of HIP1 in genome plasticity and adaptation in cyanobacteria are discussed.

  19. JWST Wavefront Control Toolbox

    NASA Technical Reports Server (NTRS)

    Shin, Shahram Ron; Aronstein, David L.

    2011-01-01

    A Matlab-based toolbox has been developed for the wavefront control and optimization of segmented optical surfaces to correct for possible misalignments of the James Webb Space Telescope (JWST) using influence functions. The toolbox employs both iterative and non-iterative methods to converge to an optimal solution by minimizing the cost function. It can be used for either constrained or unconstrained optimization. The control process involves 1 to 7 degrees-of-freedom perturbations per segment of the primary mirror, in addition to the 5 degrees of freedom of the secondary mirror. The toolbox consists of a series of Matlab/Simulink functions and modules, developed using a "wrapper" approach, that handle the interface and data flow between existing commercial optical modeling software packages such as Zemax and Code V. The limitations of the algorithm are dictated by the constraints of the moving parts in the mirrors.

  20. Contact stresses in pin-loaded orthotropic plates

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Klang, E. C.

    1984-01-01

    The effects of pin elasticity, friction, and clearance on the stresses near the hole in a pin-loaded orthotropic plate are described. The problem is modeled as a contact elasticity problem using complex variable theory, the pin and the plate being two elastic bodies interacting through contact. This modeling is in contrast to previous works which assumed that the pin is rigid or that it exerts a known cosinusoidal radial traction on the hole boundary. Neither of these approaches explicitly involves a pin. A collocation procedure and iteration were used to obtain numerical results for a variety of plate and pin elastic properties and various levels of friction and clearance. Collocation was used to enforce the boundary conditions, and iteration was used to find the contact and no-slip regions on the boundary. Details of the numerical scheme are discussed.

  1. A method for the dynamic and thermal stress analysis of space shuttle surface insulation

    NASA Technical Reports Server (NTRS)

    Ojalvo, I. U.; Levy, A.; Austin, F.

    1975-01-01

    The thermal protection system of the space shuttle consists of thousands of separate insulation tiles bonded to the orbiter's surface through a soft strain-isolation layer. The individual tiles are relatively thick and possess nonuniform properties. Therefore, each is idealized by finite-element assemblages containing up to 2500 degrees of freedom. Since the tiles affixed to a given structural panel will, in general, interact with one another, application of the standard direct-stiffness method would require equation systems involving excessive numbers of unknowns. This paper presents a method which overcomes this problem through an efficient iterative procedure which requires treatment of only a single tile at any given time. Results of associated static, dynamic, and thermal stress analyses and sufficient conditions for convergence of the iterative solution method are given.
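
    The single-tile-at-a-time procedure described above has the structure of a block Gauss-Seidel iteration: each tile's equations are solved while the neighbouring tiles' latest values are held fixed. A minimal numpy sketch of that structure (the block and coupling matrices below are illustrative stand-ins, not the paper's finite-element matrices, and the Gauss-Seidel framing is our reading of the abstract):

```python
import numpy as np

def block_gauss_seidel(blocks, couplings, rhs, n_sweeps=50):
    # Solve a coupled block system one block ("tile") at a time.
    # blocks[i] is the diagonal block A_ii; couplings[i][j] is the
    # off-diagonal block A_ij coupling tile i to tile j. Each sweep
    # solves every tile using the latest values of its neighbours.
    x = [np.zeros(b.shape[0]) for b in blocks]
    for _ in range(n_sweeps):
        for i, Aii in enumerate(blocks):
            r = rhs[i].copy()
            for j, Aij in couplings.get(i, {}).items():
                r -= Aij @ x[j]              # move neighbour terms to the RHS
            x[i] = np.linalg.solve(Aii, r)   # single-tile solve
    return x

# Two 2x2 "tiles" weakly coupled through 0.5-blocks (diagonally dominant).
A11 = np.array([[4.0, 1.0], [1.0, 4.0]])
A22 = np.array([[4.0, 1.0], [1.0, 4.0]])
A12 = 0.5 * np.ones((2, 2))
blocks = [A11, A22]
couplings = {0: {1: A12}, 1: {0: A12.T}}
rhs = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
x = block_gauss_seidel(blocks, couplings, rhs)
```

    Convergence of such sweeps is guaranteed here because the assembled system is diagonally dominant; the paper gives its own sufficient conditions for convergence of the tile iteration.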

  2. On the electromagnetic scattering from infinite rectangular grids with finite conductivity

    NASA Technical Reports Server (NTRS)

    Christodoulou, C. G.; Kauffman, J. F.

    1986-01-01

    A variety of methods can be used in constructing solutions to the problem of mesh scattering. However, each of these methods has certain drawbacks. The present paper is concerned with a new technique which is valid for all spacings. The new method, called the fast Fourier transform-conjugate gradient method (FFT-CGM), is an iterative technique which employs the conjugate gradient method to improve upon each iterate, utilizing the fast Fourier transform. The FFT-CGM method provides a new accurate model which can be extended and applied to the more difficult problems of woven mesh surfaces. The formulation of the FFT-conjugate gradient method for aperture fields and current densities for a planar periodic structure is considered along with singular operators, the formulation of the FFT-CG method for thin wires with finite conductivity, and reflection coefficients.
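
    The key computational idea of FFT-CGM is that the operator is never assembled: each conjugate-gradient step needs only the operator's action on a vector, and for a planar periodic structure that action is a convolution computable in O(n log n) with the FFT. A toy sketch with a symmetric positive-definite circulant operator standing in for the electromagnetic operator (the stencil is hypothetical, for illustration only):

```python
import numpy as np

def circulant_matvec(c_hat, x):
    # Apply a circulant matrix via the FFT: A x = ifft(fft(c) * fft(x)),
    # where c is the first column of A and c_hat = fft(c).
    return np.fft.ifft(c_hat * np.fft.fft(x)).real

def cg_fft(c, b, tol=1e-10, maxiter=200):
    # Conjugate gradient in which every matrix-vector product is an
    # O(n log n) FFT-based convolution rather than a dense product.
    c_hat = np.fft.fft(c)
    x = np.zeros_like(b)
    r = b - circulant_matvec(c_hat, x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = circulant_matvec(c_hat, p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 64
c = np.zeros(n)
c[0], c[1], c[-1] = 4.0, 1.0, 1.0      # symmetric, positive-definite stencil
b = np.random.default_rng(0).standard_normal(n)
x = cg_fft(c, b)
```

    Because the circulant matrix is diagonalized by the FFT, each iteration costs O(n log n) instead of the O(n^2) of a dense matrix-vector product.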

  3. A constrained modulus reconstruction technique for breast cancer assessment.

    PubMed

    Samani, A; Bishop, J; Plewes, D B

    2001-09-01

    A reconstruction technique for breast tissue elasticity modulus is described. This technique assumes that the geometry of normal and suspicious tissues is available from a contrast-enhanced magnetic resonance image. Furthermore, it is assumed that the modulus is constant throughout each tissue volume. The technique, which uses quasi-static strain data, is iterative where each iteration involves modulus updating followed by stress calculation. Breast mechanical stimulation is assumed to be done by two compressional rigid plates. As a result, stress is calculated using the finite element method based on the well-controlled boundary conditions of the compression plates. Using the calculated stress and the measured strain, modulus updating is done element-by-element based on Hooke's law. Breast tissue modulus reconstruction using simulated data and phantom modulus reconstruction using experimental data indicate that the technique is robust.
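
    The iteration described above alternates a forward stress computation with an element-by-element Hooke's-law update, E ← σ/ε. A deliberately simplified 1D sketch (in the paper the stress comes from a finite element solve with compression-plate boundary conditions and depends on the current moduli, which is what makes iteration necessary; in the force-controlled bar below the stress is statically determinate, so the loop converges in one pass):

```python
import numpy as np

def forward_stress(E, total_force):
    # Stand-in for the finite element stress computation. For a 1D bar of
    # unit-area elements in series under a prescribed force, the axial
    # stress is uniform and independent of the moduli.
    return np.full_like(E, total_force)

def reconstruct_modulus(strain_meas, total_force, n_iter=10):
    E = np.ones_like(strain_meas)               # uniform initial guess
    for _ in range(n_iter):
        sigma = forward_stress(E, total_force)  # stress from current moduli
        E = sigma / strain_meas                 # Hooke's law update per element
    return E

E_true = np.array([1.0, 3.0, 5.0])  # hypothetical tissue moduli (arbitrary units)
F = 2.0                             # applied compressive force
strain = F / E_true                 # simulated "measured" quasi-static strains
E_rec = reconstruct_modulus(strain, F)
```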

  4. Metatranscriptomic analysis of a high-sulfide aquatic spring reveals insights into sulfur cycling and unexpected aerobic metabolism

    PubMed Central

    Elshahed, Mostafa S.; Najar, Fares Z.; Krumholz, Lee R.

    2015-01-01

    Zodletone spring is a sulfide-rich spring in southwestern Oklahoma characterized by shallow, microoxic, light-exposed spring water overlaying anoxic sediments. Previously, culture-independent 16S rRNA gene based diversity surveys have revealed that Zodletone spring source sediments harbor a highly diverse microbial community, with multiple lineages putatively involved in various sulfur-cycling processes. Here, we conducted a metatranscriptomic survey of microbial populations in Zodletone spring source sediments to characterize the relative prevalence and importance of putative phototrophic, chemolithotrophic, and heterotrophic microorganisms in the sulfur cycle, the identity of lineages actively involved in various sulfur cycling processes, and the interaction between sulfur cycling and other geochemical processes at the spring source. Sediment samples at the spring’s source were taken at three different times within a 24-h period for geochemical analyses and RNA sequencing. In depth mining of datasets for sulfur cycling transcripts revealed major sulfur cycling pathways and taxa involved, including an unexpected potential role of Actinobacteria in sulfide oxidation and thiosulfate transformation. Surprisingly, transcripts coding for the cyanobacterial Photosystem II D1 protein, methane monooxygenase, and terminal cytochrome oxidases were encountered, indicating that genes for oxygen production and aerobic modes of metabolism are actively being transcribed, despite below-detectable levels (<1 µM) of oxygen in source sediment. Results highlight transcripts involved in sulfur, methane, and oxygen cycles, propose that oxygenic photosynthesis could support aerobic methane and sulfide oxidation in anoxic sediments exposed to sunlight, and provide a viewpoint of microbial metabolic lifestyles under conditions similar to those seen during late Archaean and Proterozoic eons. PMID:26417542

  5. Metatranscriptomic analysis of a high-sulfide aquatic spring reveals insights into sulfur cycling and unexpected aerobic metabolism.

    PubMed

    Spain, Anne M; Elshahed, Mostafa S; Najar, Fares Z; Krumholz, Lee R

    2015-01-01

    Zodletone spring is a sulfide-rich spring in southwestern Oklahoma characterized by shallow, microoxic, light-exposed spring water overlaying anoxic sediments. Previously, culture-independent 16S rRNA gene based diversity surveys have revealed that Zodletone spring source sediments harbor a highly diverse microbial community, with multiple lineages putatively involved in various sulfur-cycling processes. Here, we conducted a metatranscriptomic survey of microbial populations in Zodletone spring source sediments to characterize the relative prevalence and importance of putative phototrophic, chemolithotrophic, and heterotrophic microorganisms in the sulfur cycle, the identity of lineages actively involved in various sulfur cycling processes, and the interaction between sulfur cycling and other geochemical processes at the spring source. Sediment samples at the spring's source were taken at three different times within a 24-h period for geochemical analyses and RNA sequencing. In depth mining of datasets for sulfur cycling transcripts revealed major sulfur cycling pathways and taxa involved, including an unexpected potential role of Actinobacteria in sulfide oxidation and thiosulfate transformation. Surprisingly, transcripts coding for the cyanobacterial Photosystem II D1 protein, methane monooxygenase, and terminal cytochrome oxidases were encountered, indicating that genes for oxygen production and aerobic modes of metabolism are actively being transcribed, despite below-detectable levels (<1 µM) of oxygen in source sediment. Results highlight transcripts involved in sulfur, methane, and oxygen cycles, propose that oxygenic photosynthesis could support aerobic methane and sulfide oxidation in anoxic sediments exposed to sunlight, and provide a viewpoint of microbial metabolic lifestyles under conditions similar to those seen during late Archaean and Proterozoic eons.

  6. Mobile phone use while cycling: incidence and effects on behaviour and safety.

    PubMed

    de Waard, Dick; Schepers, Paul; Ormel, Wieke; Brookhuis, Karel

    2010-01-01

    The effects of mobile phone use on cycling behaviour were studied. In study 1, the prevalence of mobile phone use while cycling was assessed. In Groningen 2.2% of cyclists were observed talking on their phone and 0.6% were text messaging or entering a phone number. In study 2, accident-involved cyclists responded to a questionnaire. Only 0.5% stated that they were using their phone at the time of the accident. In study 3, participants used a phone while cycling. The content of the conversation was manipulated and participants also had to enter a text message. Data were compared with just cycling and cycling while listening to music. Telephoning coincided with reduced speed, reduced peripheral vision performance and increased risk and mental effort ratings. Text messaging had the largest negative impact on cycling performance. Higher mental workload and lower speed may account for the relatively low number of people calling involved in accidents. STATEMENT OF RELEVANCE: Although perhaps mainly restricted to flat countries with a large proportion of cyclists, mobile phone use while cycling has increased and may be a threat to traffic safety, similar to phone use while driving a car. In this study, the extent of the problem was assessed by observing the proportion of cyclists using mobile phones, sending questionnaires to accident-involved cyclists and an experimental study was conducted on the effects of mobile phone use while cycling.

  7. Simultaneous multigrid techniques for nonlinear eigenvalue problems: Solutions of the nonlinear Schrödinger-Poisson eigenvalue problem in two and three dimensions

    NASA Astrophysics Data System (ADS)

    Costiner, Sorin; Ta'asan, Shlomo

    1995-07-01

    Algorithms for nonlinear eigenvalue problems (EP's) often require solving self-consistently a large number of EP's. Convergence difficulties may occur if the solution is not sought in an appropriate region, if global constraints have to be satisfied, or if close or equal eigenvalues are present. Multigrid (MG) algorithms for nonlinear problems and for EP's obtained from discretizations of partial differential EP have often been shown to be more efficient than single level algorithms. This paper presents MG techniques and a MG algorithm for nonlinear Schrödinger Poisson EP's. The algorithm overcomes the above mentioned difficulties combining the following techniques: a MG simultaneous treatment of the eigenvectors and nonlinearity, and with the global constrains; MG stable subspace continuation techniques for the treatment of nonlinearity; and a MG projection coupled with backrotations for separation of solutions. These techniques keep the solutions in an appropriate region, where the algorithm converges fast, and reduce the large number of self-consistent iterations to only a few or one MG simultaneous iteration. The MG projection makes it possible to efficiently overcome difficulties related to clusters of close and equal eigenvalues. Computational examples for the nonlinear Schrödinger-Poisson EP in two and three dimensions, presenting special computational difficulties that are due to the nonlinearity and to the equal and closely clustered eigenvalues are demonstrated. For these cases, the algorithm requires O(qN) operations for the calculation of q eigenvectors of size N and for the corresponding eigenvalues. One MG simultaneous cycle per fine level was performed. The total computational cost is equivalent to only a few Gauss-Seidel relaxations per eigenvector. An asymptotic convergence rate of 0.15 per MG cycle is attained.

  8. Environmental Biochemistry--A New Approach for Teaching the Cycles of the Elements.

    ERIC Educational Resources Information Center

    Ricci, Juan C. Diaz; And Others

    1988-01-01

    Presents three dimensional models of biological pathways for the following cycles: carbon, nitrogen, sulfur, and a combination of the three. Discusses steps involved in each cycle and breaks each cycle into trophic and environmental regions. (MVL)

  9. Reducing Variability in Orthogonal Reformatted Image Quality Associated With Axial Long-z-Axis CT Angiography.

    PubMed

    Stein, Erica B; Liu, Peter S; Kazerooni, Ella A; Barber, Karen; Davenport, Matthew S

    2016-12-01

    The objective of our study was to reduce variation in image quality of orthogonal reformatted images generated from long-z-axis CT angiography (CTA) studies of the upper and lower extremities. Upper and lower extremity CTA studies were targeted at a single health care system. A correctly performed CTA examination was defined as one that met the following three criteria: Sagittal and coronal reformats were obtained, a high-resolution matrix greater than 512 × 512 was used, and reformatted images were available in a distance-measurable format. Baseline data were collected from February 1, 2014, through September 30, 2014. Corrective actions were implemented during three consecutive plan-do-check-act (PDCA) cycles from October 1, 2014, through July 31, 2015, that addressed human, technical, and systematic variations. A 3-month maintenance period followed in which no intervention was performed. Longitudinal data were analyzed monthly using a statistical process control chart (p-chart). The total number of long-z-axis extremity CTA studies analyzed was as follows: 351 CTA studies were analyzed at baseline, 94 at the first PDCA cycle, 92 at the second PDCA cycle, 114 at the third PDCA cycle, and 138 during the maintenance period. The monthly rate of correctly performed studies ranged from 7% to 51% (mean, 38% ± 13% [SD]) during the baseline period, 32-59% (mean, 46% ± 14%) during the first PDCA cycle, 40-81% (mean, 61% ± 21%) during the second PDCA cycle, and 80-82% (mean, 81% ± 0.9%) during the third PDCA cycle. The monthly rate improved to 90-91% (mean, 91% ± 0.5%) during the maintenance period. The upper and lower control limits of the p-chart were upshifted after the second and third PDCA cycles. Correcting systematic and technical variations led to the greatest improvements in reformat accuracy. Obtaining consistently and correctly reformatted images from long-z-axis CTA studies is achievable using iterative PDCA cycles.
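
    The p-chart used to monitor the monthly rate of correctly performed studies plots each subgroup proportion against a center line p-bar and 3-sigma control limits p-bar ± 3·sqrt(p-bar(1−p-bar)/n). A small sketch of the limit computation (the counts below are hypothetical, not the study's data):

```python
import math

def p_chart_limits(successes, sample_sizes):
    # Pooled proportion across all subgroups sets the center line;
    # each subgroup's 3-sigma limits depend on its own sample size.
    p_bar = sum(successes) / sum(sample_sizes)
    limits = []
    for n in sample_sizes:
        half_width = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - half_width),
                       min(1.0, p_bar + half_width)))
    return p_bar, limits

# Hypothetical monthly counts of correctly performed CTA studies.
p_bar, limits = p_chart_limits([30, 40, 55], [100, 100, 100])
```

    Points falling outside their month's limits signal special-cause variation; a sustained shift of the limits themselves, as after the second and third PDCA cycles in the study, reflects a changed underlying rate.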

  10. Discrete-Time Local Value Iteration Adaptive Dynamic Programming: Admissibility and Termination Analysis.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Qiao

    In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
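
    The distinguishing feature of the algorithm, updating the value function and control law only on a subset of the state space per iteration, can be illustrated with tabular value iteration on a toy MDP (a generic sketch, not the authors' ADP formulation, which targets continuous-state nonlinear systems with function approximation):

```python
import numpy as np

def local_value_iteration(P, R, gamma, subsets, sweeps):
    # P[a] is the transition matrix under action a; R[s, a] is the reward.
    # Each sweep updates V (and implicitly the greedy control law) only on
    # one subset of the state space, not on all states at once.
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    for k in range(sweeps):
        for s in subsets[k % len(subsets)]:
            V[s] = max(R[s, a] + gamma * P[a][s] @ V for a in range(n_actions))
    policy = [int(np.argmax([R[s, a] + gamma * P[a][s] @ V
                             for a in range(n_actions)]))
              for s in range(n_states)]
    return V, policy

# Two states, two actions: action 0 stays put, action 1 swaps states;
# staying in state 1 pays reward 1. Optimal values at gamma=0.5: V = [1, 2].
P = [np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])]
R = np.array([[0.0, 0.0], [1.0, 0.0]])
V, policy = local_value_iteration(P, R, 0.5, subsets=[[0], [1]], sweeps=60)
```

    Because each state is still visited infinitely often as the subsets cycle, the asynchronous updates converge to the same fixed point as full-sweep value iteration.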

  11. Using iterative learning to improve understanding during the informed consent process in a South African psychiatric genomics study.

    PubMed

    Campbell, Megan M; Susser, Ezra; Mall, Sumaya; Mqulwana, Sibonile G; Mndini, Michael M; Ntola, Odwa A; Nagdee, Mohamed; Zingela, Zukiswa; Van Wyk, Stephanus; Stein, Dan J

    2017-01-01

    Obtaining informed consent is a great challenge in global health research. There is a need for tools that can screen for and improve potential research participants' understanding of the research study at the time of recruitment. Limited empirical research has been conducted in low and middle income countries, evaluating informed consent processes in genomics research. We sought to investigate the quality of informed consent obtained in a South African psychiatric genomics study. A Xhosa language version of the University of California, San Diego Brief Assessment of Capacity to Consent Questionnaire (UBACC) was used to screen for capacity to consent and improve understanding through iterative learning in a sample of 528 Xhosa people with schizophrenia and 528 controls. We address two questions: firstly, whether research participants' understanding of the research study improved through iterative learning; and secondly, what were predictors for better understanding of the research study at the initial screening? During screening 290 (55%) cases and 172 (33%) controls scored below the 14.5 cut-off for acceptable understanding of the research study elements, however after iterative learning only 38 (7%) cases and 13 (2.5%) controls continued to score below this cut-off. Significant variables associated with increased understanding of the consent included the psychiatric nurse recruiter conducting the consent screening, higher participant level of education, and being a control. The UBACC proved an effective tool to improve understanding of research study elements during consent, for both cases and controls. The tool holds utility for complex studies such as those involving genomics, where iterative learning can be used to make significant improvements in understanding of research study elements. The UBACC may be particularly important in groups with severe mental illness and lower education levels. 
Study recruiters play a significant role in managing the quality of the informed consent process.

  12. Verification and Validation of Digitally Upgraded Control Rooms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald; Lau, Nathan

    2015-09-01

    As nuclear power plants undertake main control room modernization, a challenge is the lack of a clearly defined human factors process to follow. Verification and validation (V&V) as applied in the nuclear power community has tended to involve efforts such as integrated system validation, which comes at the tail end of the design stage. To fill in guidance gaps and create a step-by-step process for control room modernization, we have developed the Guideline for Operational Nuclear Usability and Knowledge Elicitation (GONUKE). This approach builds on best practices in the software industry, which prescribe an iterative user-centered approach featuring multiple cycles of design and evaluation. Nuclear regulatory guidance for control room design emphasizes summative evaluation—which occurs after the design is complete. In the GONUKE approach, evaluation is also performed at the formative stage of design—early in the design cycle using mockups and prototypes for evaluation. The evaluation may involve expert review (e.g., software heuristic evaluation at the formative stage and design verification against human factors standards like NUREG-0700 at the summative stage). The evaluation may also involve user testing (e.g., usability testing at the formative stage and integrated system validation at the summative stage). An additional, often overlooked component of evaluation is knowledge elicitation, which captures operator insights into the system. In this report we outline these evaluation types across design phases that support the overall modernization process. The objective is to provide industry-suitable guidance for steps to be taken in support of the design and evaluation of a new human-machine interface (HMI) in the control room. We suggest the value of early-stage V&V and highlight how this early-stage V&V can help improve the design process for control room modernization. We argue that there is a need to overcome two shortcomings of V&V in current practice—the propensity for late-stage V&V and the use of increasingly complex psychological assessment measures for V&V.

  13. Disruption of predicted dengue virus type 3 major outbreak cycle coincided with switching of the dominant circulating virus genotype.

    PubMed

    Tan, Kim-Kee; Zulkifle, Nurul-Izzani; Abd-Jamil, Juraina; Sulaiman, Syuhaida; Yaacob, Che Norainon; Azizan, Noor Syahida; Che Mat Seri, Nurul Asma Anati; Samsudin, Nur Izyan; Mahfodz, Nur Hidayana; AbuBakar, Sazaly

    2017-10-01

    Dengue is hyperendemic in most of Southeast Asia. In this region, all four dengue virus serotypes are persistently present. Major dengue outbreak cycles occur in a cyclical pattern involving the different dengue virus serotypes. In Malaysia, since the 1980s, the major outbreak cycles have involved dengue virus type 3 (DENV3), dengue virus type 1 (DENV1) and dengue virus type 2 (DENV2), occurring in that order (DENV3/DENV1/DENV2). Only limited information on the DENV3 cycles, however, has been described. In the current study, we examined the major outbreak cycle involving DENV3 using data from 1985 to 2016. We examined the genetic diversity of DENV3 isolates obtained during the period when DENV3 was the dominant serotype and during the inter-dominant transmission period. Results obtained suggest that the typical DENV3/DENV1/DENV2 cyclical outbreak cycle in Malaysia has recently been disrupted. The last recorded major outbreak cycle involving DENV3 occurred in 2002, and the expected major outbreak cycle involving DENV3 in 2006-2012 did not materialize. DENV genome analyses revealed that DENV3 genotype II (DENV3/II) was the predominant DENV3 genotype (67%-100%) recovered between 1987 and 2002. DENV3 genotype I (DENV3/I) emerged in 2002 followed by the introduction of DENV3 genotype III (DENV3/III) in 2008. These newly emerged DENV3 genotypes replaced DENV3/II, but there was no major upsurge of DENV3 cases that accompanied the emergence of these viruses. DENV3 remained in the background of DENV1 and DENV2 until now. Virus genome sequence analysis suggested that intrinsic differences within the different dengue virus genotypes could have influenced the transmission efficiency of DENV3. Further studies and continuous monitoring of the virus are needed for better understanding of the DENV transmission dynamics in hyperendemic regions. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems

    PubMed Central

    Ying, Wenjun; Henriquez, Craig S.

    2013-01-01

    This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with standard finite difference method (FDM) or finite element method (FEM), where the right hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600
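
    The solver pattern at the heart of KFBI, GMRES driven by an operator whose action is computed on the fly (in the paper, by a structured-grid elliptic solve) rather than by evaluating a Green's-function kernel, can be sketched matrix-free in a few lines. The neighbour-averaging operator below is a well-conditioned stand-in, not a boundary integral operator:

```python
import numpy as np

def gmres_matfree(apply_A, b, m=30, tol=1e-10):
    # Minimal non-restarted GMRES: build an Arnoldi basis using only the
    # action v -> A v, then solve the small least-squares problem for the
    # iterate minimizing the residual over the Krylov subspace.
    n = b.size
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    x = np.zeros(n)
    for j in range(m):
        w = apply_A(Q[:, j])
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        x = Q[:, :j + 1] @ y
        if np.linalg.norm(apply_A(x) - b) < tol * beta:
            break
        if H[j + 1, j] < 1e-14:                 # happy breakdown: exact solve
            break
        Q[:, j + 1] = w / H[j + 1, j]
    return x

def apply_A(v):
    # (I + K) v with K a periodic neighbour average; in KFBI this action
    # would come from a fast structured-grid Poisson/Helmholtz solve.
    return v + 0.25 * (np.roll(v, 1) + np.roll(v, -1))

b = np.random.default_rng(1).standard_normal(50)
x = gmres_matfree(apply_A, b)
```

    As in the paper's observation about grid-independent iteration counts, what matters for GMRES here is the conditioning of the operator, not its size.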

  15. Inlet Spillage Drag Predictions Using the AIRPLANE Code

    NASA Technical Reports Server (NTRS)

    Thomas, Scott D.; Won, Mark A.; Cliff, Susan E.

    1999-01-01

    AIRPLANE (Jameson/Baker) is a steady inviscid unstructured Euler flow solver. It has been validated on many HSR geometries. It is implemented as MESHPLANE, an unstructured mesh generator, and FLOPLANE, an iterative flow solver. The surface description from an Intergraph CAD system goes into MESHPLANE as collections of polygonal curves to generate the 3D mesh. The flow solver uses a multistage time stepping scheme with residual averaging to approach steady state, but it is not time accurate. The flow solver was ported from Cray to IBM SP2 by Wu-Sun Cheng (IBM); it could only be run on 4 CPUs at a time because of memory limitations. Meshes for the four cases had about 655,000 points in the flow field, about 3.9 million tetrahedra, about 77,500 points on the surface. The flow solver took about 23 wall seconds per iteration when using 4 CPUs. It took about eight and a half wall hours to run 1,300 iterations at a time (the queue limit is 10 hours). A revised version of FLOPLANE (Thomas) was used on up to 64 CPUs to finish up some calculations at the end. We had to turn on more communication when using more processors to eliminate noise that was contaminating the flow field; this added about 50% to the elapsed wall time per iteration when using 64 CPUs. This study involved computing lift and drag for a wing/body/nacelle configuration at Mach 0.9 and 4 degrees pitch. Four cases were considered, corresponding to four nacelle mass flow conditions.

  16. Continuation of probability density functions using a generalized Lyapunov approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baars, S., E-mail: s.baars@rug.nl; Viebahn, J.P., E-mail: viebahn@cwi.nl; Mulder, T.E., E-mail: t.e.mulder@uu.nl

    Techniques from numerical bifurcation theory are very useful to study transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial differential equations near fixed points, under a small noise approximation. The key innovation is the efficient solution of a generalized Lyapunov equation using an iterative method involving low-rank approximations. We apply and illustrate the capabilities of the method using a problem in physical oceanography, i.e. the occurrence of multiple steady states of the Atlantic Ocean circulation.
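
    Near a stable fixed point and under the small-noise approximation, the stationary covariance C of the linearization dx = Ax dt + B dW satisfies the Lyapunov equation AC + CA^T + BB^T = 0. The paper's contribution is an iterative low-rank solver for the large generalized version; for a small dense system the equation can be solved directly by vectorization, which gives a useful reference implementation:

```python
import numpy as np

def solve_lyapunov_dense(A, Q):
    # Solve A C + C A^T + Q = 0 via the Kronecker (vectorized) form:
    # (A (x) I + I (x) A) vec(C) = -vec(Q). This dense approach scales as
    # O(n^6) and is only a small-system reference; large problems need
    # iterative low-rank methods such as the one the paper develops.
    n = A.shape[0]
    K = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
    return np.linalg.solve(K, -Q.reshape(-1)).reshape(n, n)

A = np.array([[-1.0, 0.5], [0.0, -2.0]])   # toy stable linearization
Q = np.eye(2)                              # B B^T with B = I (illustrative)
C = solve_lyapunov_dense(A, Q)
```

    Stability of A (all eigenvalues in the left half-plane) guarantees a unique symmetric positive-definite solution C.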

  17. Bringing values and deliberation to science communication.

    PubMed

    Dietz, Thomas

    2013-08-20

    Decisions always involve both facts and values, whereas most science communication focuses only on facts. If science communication is intended to inform decisions, it must be competent with regard to both facts and values. Public participation inevitably involves both facts and values. Research on public participation suggests that linking scientific analysis to public deliberation in an iterative process can help decision making deal effectively with both facts and values. Thus, linked analysis and deliberation can be an effective tool for science communication. However, challenges remain in conducting such processes at the national and global scales, in enhancing trust, and in reconciling diverse values.

  18. Exactly energy conserving semi-implicit particle in cell formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, Giovanni, E-mail: giovanni.lapenta@kuleuven.be

    We report a new particle in cell (PIC) method based on the semi-implicit approach. The novelty of the new method is that, unlike any of its semi-implicit predecessors, it simultaneously retains the explicit computational cycle and conserves energy exactly. Recent research has presented fully implicit methods where energy conservation is obtained as part of a non-linear iteration procedure. The new method (referred to as the Energy Conserving Semi-Implicit Method, ECSIM), instead, does not require any non-linear iteration, and its computational cycle is similar to that of explicit PIC. The properties of the new method are: i) it conserves energy exactly to round-off for any time step or grid spacing; ii) it is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency and allowing the user to select any desired time step; iii) it eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length; iv) the particle mover has a computational complexity identical to that of the explicit PIC; only the field solver has an increased computational cost. The new ECSIM is tested in a number of benchmarks where accuracy and computational performance are tested. - Highlights: • We present a new fully energy conserving semi-implicit particle in cell (PIC) method based on the implicit moment method (IMM). The new method is called the Energy Conserving Implicit Moment Method (ECIMM). • The novelty of the new method is that, unlike any of its predecessors, it simultaneously retains the explicit computational cycle and conserves energy exactly. • The new method is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency. • The new method eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length. • These features are achieved at a reduced cost compared with either previous IMM or fully implicit implementations of PIC.
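    The explicit computational cycle that ECSIM retains (deposit charge, solve the field, gather, push) can be sketched as below. This is a generic 1D electrostatic explicit PIC step with cloud-in-cell weighting and an FFT Poisson solve, not Lapenta's energy-conserving field solver; all function and parameter names are our own illustration.

```python
import numpy as np

def explicit_pic_step(x, v, qm, dt, L, Nx):
    """One explicit PIC cycle: deposit charge -> solve field -> gather -> push.

    Minimal 1D electrostatic sketch with periodic cloud-in-cell weighting
    and an FFT Poisson solve.  Illustrative only: this is the generic
    explicit cycle, not the ECSIM energy-conserving field solver.
    """
    dx = L / Nx
    # 1) charge deposition (CIC) with a uniform neutralizing background
    rho = np.zeros(Nx)
    gi = np.floor(x / dx).astype(int) % Nx
    w = x / dx - np.floor(x / dx)
    np.add.at(rho, gi, 1.0 - w)
    np.add.at(rho, (gi + 1) % Nx, w)
    rho = rho / rho.mean() - 1.0
    # 2) field solve: -phi'' = rho via FFT, then E = -phi'
    k = 2.0 * np.pi * np.fft.fftfreq(Nx, d=dx)
    k[0] = 1.0  # placeholder; the k = 0 (mean) mode is zeroed below
    phi_hat = np.fft.fft(rho) / k**2
    phi_hat[0] = 0.0
    E = np.real(np.fft.ifft(-1j * k * phi_hat))
    # 3) gather the field to particle positions with the same weights
    Ep = E[gi] * (1.0 - w) + E[(gi + 1) % Nx] * w
    # 4) leapfrog push and periodic wrap
    v = v + qm * Ep * dt
    x = (x + v * dt) % L
    return x, v
```

    In this explicit cycle every stage uses only known quantities from the previous step, which is exactly the structure the record says ECSIM preserves while moving the extra cost into the field solver.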

  19. A Systems Analysis Role Play Case: We Sell Stuff, Inc.

    ERIC Educational Resources Information Center

    Mitri, Michel; Cole, Carey

    2007-01-01

    Most systems development projects incorporate some sort of life cycle approach in their development. Whether the development methodology involves a traditional life cycle, prototyping, rapid application development, or some other approach, the first step usually involves a system investigation, which includes problem identification, feasibility…

  20. Active Engagement with Assessment and Feedback Can Improve Group-Work Outcomes and Boost Student Confidence

    ERIC Educational Resources Information Center

    Scott, G. W.

    2017-01-01

    This study involves evaluation of a novel iterative group-based learning task developed to enable students to actively engage with assessment and feedback in order to improve the quality of their written work. The students were all in the final semester of their final year of study and enrolled on either BSc Zoology or BSc Marine and Freshwater…

  1. Distributed Simulation as a modelling tool for the development of a simulation-based training programme for cardiovascular specialties.

    PubMed

    Kelay, Tanika; Chan, Kah Leong; Ako, Emmanuel; Yasin, Mohammad; Costopoulos, Charis; Gold, Matthew; Kneebone, Roger K; Malik, Iqbal S; Bello, Fernando

    2017-01-01

    Distributed Simulation is the concept of portable, high-fidelity immersive simulation. Here, it is used for the development of a simulation-based training programme for cardiovascular specialities. We present an evidence base for how accessible, portable and self-contained simulated environments can be effectively utilised for the modelling, development and testing of a complex training framework and assessment methodology. Iterative user feedback through mixed-methods evaluation techniques resulted in the implementation of the training programme. Four phases were involved in the development of our immersive simulation-based training programme: (1) initial conceptual stage for mapping structural criteria and parameters of the simulation training framework and scenario development (n = 16), (2) training facility design using Distributed Simulation, (3) test cases with clinicians (n = 8) and collaborative design, where evaluation and user feedback involved a mixed-methods approach featuring (a) quantitative surveys to evaluate the realism and perceived educational relevance of the simulation format and framework for training and (b) qualitative semi-structured interviews to capture detailed feedback including changes and scope for development. Refinements were made iteratively to the simulation framework based on user feedback, resulting in (4) transition towards implementation of the simulation training framework, involving consistent quantitative evaluation techniques for clinicians (n = 62). For comparative purposes, clinicians' initial quantitative mean evaluation scores for realism of the simulation training framework, realism of the training facility and relevance for training (n = 8) are presented longitudinally, alongside feedback throughout the development stages from concept to delivery, including the implementation stage (n = 62). Initially, mean evaluation scores fluctuated from low to average, rising incrementally.
This corresponded with the qualitative component, which augmented the quantitative findings; trainees' user feedback was used to perform iterative refinements to the simulation design and components (collaborative design), resulting in higher mean evaluation scores leading up to the implementation phase. Through application of innovative Distributed Simulation techniques, collaborative design, and consistent evaluation techniques from conceptual, development, and implementation stages, fully immersive simulation techniques for cardiovascular specialities are achievable and have the potential to be implemented more broadly.

  2. Structure and Regulatory Interactions of the Cytoplasmic Terminal Domains of Serotonin Transporter

    PubMed Central

    2014-01-01

    Uptake of neurotransmitters by sodium-coupled monoamine transporters of the NSS family is required for termination of synaptic transmission. Transport is tightly regulated by protein–protein interactions involving the small cytoplasmic segments at the amino- and carboxy-terminal ends of the transporter. Although structures of homologues provide information about the transmembrane regions of these transporters, the structural arrangement of the terminal domains remains largely unknown. Here, we combined molecular modeling, biochemical, and biophysical approaches in an iterative manner to investigate the structure of the 82-residue N-terminal and 30-residue C-terminal domains of human serotonin transporter (SERT). Several secondary structures were predicted in these domains, and structural models were built using the Rosetta fragment-based methodology. One-dimensional 1H nuclear magnetic resonance and circular dichroism spectroscopy supported the presence of helical elements in the isolated SERT N-terminal domain. Moreover, introducing helix-breaking residues within those elements altered the fluorescence resonance energy transfer signal between terminal cyan fluorescent protein and yellow fluorescent protein tags attached to full-length SERT, consistent with the notion that the fold of the terminal domains is relatively well-defined. Full-length models of SERT that are consistent with these and published experimental data were generated. The resultant models predict confined loci for the terminal domains and predict that they move apart during the transport-related conformational cycle, as predicted by structures of homologues and by the “rocking bundle” hypothesis, which is consistent with spectroscopic measurements. The models also suggest the nature of binding to regulatory interaction partners. This study provides a structural context for functional and regulatory mechanisms involving SERT terminal domains. PMID:25093911

  3. Signalling networks and dynamics of allosteric transitions in bacterial chaperonin GroEL: implications for iterative annealing of misfolded proteins.

    PubMed

    Thirumalai, D; Hyeon, Changbong

    2018-06-19

    Signal transmission at the molecular level in many biological complexes occurs through allosteric transitions. Allostery describes the responses of a complex to binding of ligands at sites that are spatially well separated from the binding region. We describe the structural perturbation method, based on phonon propagation in solids, which can be used to determine the signal-transmitting allostery wiring diagram (AWD) in large but finite-sized biological complexes. Application to the bacterial chaperonin GroEL-GroES complex shows that the AWD determined from structures also drives the allosteric transitions dynamically. From both a structural and a dynamical perspective these transitions are largely determined by formation and rupture of salt-bridges. The molecular description of allostery in GroEL provides insights into its function, which is quantitatively described by the iterative annealing mechanism. Remarkably, in this complex molecular machine, a deep connection is established between the structures, the reaction cycle during which GroEL undergoes a sequence of allosteric transitions, and function, in a self-consistent manner. This article is part of a discussion meeting issue 'Allostery and molecular machines'. © 2018 The Author(s).

  4. Three-dimensional forward modeling of DC resistivity using the aggregation-based algebraic multigrid method

    NASA Astrophysics Data System (ADS)

    Chen, Hui; Deng, Ju-Zhi; Yin, Min; Yin, Chang-Chun; Tang, Wen-Wu

    2017-03-01

    To speed up three-dimensional (3D) DC resistivity modeling, we present a new multigrid method, the aggregation-based algebraic multigrid method (AGMG). We first discretize the differential equation of the secondary potential field with mixed boundary conditions by using a seven-point finite-difference method to obtain a large sparse system of linear equations. Then, we introduce the theory behind the pairwise aggregation algorithms for AGMG and use the conjugate-gradient method with the V-cycle AGMG preconditioner (AGMG-CG) to solve the linear equations. We use typical geoelectrical models to test the proposed AGMG-CG method and compare the results with analytical solutions and the 3DDCXH algorithm for 3D DC modeling (3DDCXH). In addition, we apply the AGMG-CG method to different grid sizes and geoelectrical models and compare it to different iterative methods, such as ILU-BICGSTAB, ILU-GCR, and SSOR-CG. The AGMG-CG method yields nearly linearly decreasing errors, whereas the number of iterations increases slowly with increasing grid size. The AGMG-CG method is precise and converges fast, and thus can improve the computational efficiency in forward modeling of three-dimensional DC resistivity.
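    The AGMG-CG scheme pairs a Krylov solver with a multigrid preconditioner applied at every iteration. The sketch below shows the preconditioned conjugate-gradient loop with the preconditioner abstracted as a callable; a simple Jacobi (diagonal) stand-in replaces the paper's AGMG V-cycle, and the interface and names are our own.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradients.  M_inv is a callable applying
    the preconditioner; in the paper this is an AGMG V-cycle, here any
    stand-in (e.g. Jacobi) with the same interface works."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

def jacobi_preconditioner(A):
    """Diagonal stand-in for the AGMG V-cycle preconditioner."""
    d = np.diag(A).copy()
    return lambda r: r / d
```

    Swapping `jacobi_preconditioner` for an aggregation-based multigrid cycle changes only `M_inv`, which is what makes the comparison to ILU-BICGSTAB, ILU-GCR and SSOR-CG in the record a fair one.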

  5. Advanced Software for Analysis of High-Speed Rolling-Element Bearings

    NASA Technical Reports Server (NTRS)

    Poplawski, J. V.; Rumbarger, J. H.; Peters, S. M.; Galatis, H.; Flower, R.

    2003-01-01

    COBRA-AHS is a package of advanced software for analysis of rigid or flexible shaft systems supported by rolling-element bearings operating at high speeds under complex mechanical and thermal loads. These loads can include centrifugal and thermal loads generated by motions of bearing components. COBRA-AHS offers several improvements over prior commercial bearing-analysis programs: It includes innovative probabilistic fatigue-life-estimating software that provides for computation of three-dimensional stress fields and incorporates stress-based (in contradistinction to prior load-based) mathematical models of fatigue life. It interacts automatically with the ANSYS finite-element code to generate finite-element models for estimating distributions of temperature and temperature-induced changes in dimensions in iterative thermal/dimensional analyses; thus, for example, it can be used to predict changes in clearances and thermal lockup. COBRA-AHS provides an improved graphical user interface that facilitates the iterative cycle of analysis and design by providing analysis results quickly in graphical form, enabling the user to control interactive runs without leaving the program environment, and facilitating transfer of plots and printed results for inclusion in design reports. Additional features include roller-edge stress prediction and influence of shaft and housing distortion on bearing performance.

  6. Design Issues of the Pre-Compression Rings of Iter

    NASA Astrophysics Data System (ADS)

    Knaster, J.; Baker, W.; Bettinali, L.; Jong, C.; Mallick, K.; Nardi, C.; Rajainmaki, H.; Rossi, P.; Semeraro, L.

    2010-04-01

    The pre-compression system is the keystone of ITER. A centripetal force of ~30 MN will be applied at cryogenic conditions on top and bottom of each TF coil. It will prevent the 'breathing effect' caused by the bursting forces occurring during plasma operation that would affect the machine design life of 30000 cycles. Different alternatives have been studied throughout the years. There are two major design requirements limiting the engineering possibilities: 1) the limited available space and 2) the need to hamper eddy currents flowing in the structures. Six unidirectionally wound glass-fibre composite rings (~5 m diameter and ~300 mm cross section) are the final design choice. The rings will withstand the maximum hoop stresses <500 MPa at room temperature conditions. Although retightening or replacing the pre-compression rings in case of malfunctioning is possible, they have to sustain the load during the entire 20 years of machine operation. The present paper summarizes the pre-compression ring R&D carried out during several years. In particular, we will address the composite choice and mechanical characterization, assessment of creep or stress relaxation phenomena, sub-sized rings testing and the optimal ring fabrication processes that have led to the present final design.

  7. Dust remobilization tests in DIII-D divertor

    NASA Astrophysics Data System (ADS)

    Bykov, I.; Rudakov, D.; Moyer, R.; Ratynskaia, S.; Tolias, P.; Deangeli, M.; McLean, A.; Bystrov, K.

    2015-11-01

    Accumulation of dust on hot surfaces is a safety concern for ITER operation. We studied the life cycle of pre-deposited dust under ITER-relevant conditions by exposing W samples with W, C and Al (surrogate for Be) dust at the outer strike point (OSP) in a few ELMy H-mode discharges using DiMES. The maxima in the dust ejection rate correspond to ELM crashes under both attached and detached OSP conditions, as confirmed by a fast camera monitoring DiMES. SEM mapping of dust before and after exposures shows that >95% of C and <5% of metal dust gets remobilized in a few shots. In discharges with detached OSP, remaining Al particles melt and fuse together, forming larger spherical grains. At elevated heat flux with attached OSP, they melt, break apart and fuse with the W substrate, which is not thermally affected. In this mode W grains partly melt and adjacent particles can weld together, forming larger asymmetric agglomerates with increased adhesion to the surface. We show that these results are consistent with recent observations from Pilot-PSI. Work supported by the US DOE under DE-FC02-04ER54698, DE-FG02-07ER54917 and DE-AC52-07NA27344.

  8. Iterative motion compensation approach for ultrasonic thermal imaging

    NASA Astrophysics Data System (ADS)

    Fleming, Ioana; Hager, Gregory; Guo, Xiaoyu; Kang, Hyun Jae; Boctor, Emad

    2015-03-01

    As thermal imaging attempts to estimate very small tissue motion (on the order of tens of microns), it can be negatively influenced by signal decorrelation. A patient's breathing and cardiac cycle generate shifts in the RF signal patterns. Other sources of movement can be found outside the patient's body, such as transducer slippage or small vibrations due to environmental factors like electronic noise. Here, we build upon a robust displacement estimation method for ultrasound elastography and investigate an iterative motion compensation algorithm, which can detect and remove non-heat-induced tissue motion at every step of the ablation procedure. The validation experiments are performed on laboratory-induced ablation lesions in ex-vivo tissue. The ultrasound probe is either held by the operator's hand or supported by a robotic arm. We demonstrate the ability to detect and remove non-heat-induced tissue motion in both settings. We show that removing extraneous motion helps unmask the effects of heating. Our strain estimation curves closely mirror the temperature changes within the tissue. While previous results in the area of motion compensation were reported for experiments lasting less than 10 seconds, our algorithm was tested on experiments that lasted close to 20 minutes.
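    Displacement estimation from RF signal patterns is typically built on cross-correlation of signal windows. The toy estimator below recovers an integer-sample shift from the correlation peak; the authors' elastography-grade method additionally performs sub-sample interpolation and robust windowed tracking, which are omitted here, and the function name is ours.

```python
import numpy as np

def estimate_shift(ref, cur):
    """Integer-sample displacement of `cur` relative to `ref` via the peak
    of the normalized cross-correlation.  A toy stand-in for RF-signal
    displacement tracking; real elastography methods add sub-sample
    interpolation and robust windowed tracking."""
    ref = (ref - ref.mean()) / ref.std()
    cur = (cur - cur.mean()) / cur.std()
    xc = np.correlate(cur, ref, mode="full")
    # index of the peak minus the zero-lag offset gives the signed shift
    return int(np.argmax(xc)) - (len(ref) - 1)
```

    Motion compensation in this spirit subtracts the estimated extraneous shift before strain is computed, so that only heat-induced displacement remains.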

  9. Cycling injuries and alcohol.

    PubMed

    Airaksinen, Noora K; Nurmi-Lüthje, Ilona S; Kataja, J Matti; Kröger, Heikki P J; Lüthje, Peter M J

    2018-05-01

    Most of the cycling accidents that occur in Finland do not end up in the official traffic accident statistics. Thus, there is minimal information on these accidents and their consequences, particularly in cases in which alcohol was involved. The focus of the present study is on cycling accidents and injuries involving alcohol in particular. Data on patients visiting the emergency department at North Kymi Hospital because of a cycling accident was prospectively collected for two years, from June 1, 2004 to May 31, 2006. Blood alcohol concentration (BAC) was measured on admission with a breath analyser. The severity of the cycling injuries was classified according to the Abbreviated Injury Scale (AIS). A total of 217 cycling accidents occurred. One third of the injured cyclists were involved with alcohol at the time of visiting the hospital. Of these, 85% were males. A blood alcohol concentration of ≥ 1.2 g/L was measured in nearly 90% of all alcohol-related cases. A positive BAC result was more common among males than females (p < 0.001), and head injuries were more common among cyclists where alcohol was involved (AI) (60%) than among sober cyclists (29%) (p < 0.001). Two thirds (64%) of the cyclists with AI were not wearing a bicycle helmet. The figure for serious injuries (MAIS ≥ 3) was similar in both groups. Intoxication with an alcohol level of more than 1.5 g/L and the age of 15 to 24 years were found to be risk factors for head injuries. The mean cost of treatment was higher among sober cyclists than among cyclists with AI (€2143 vs. €1629), whereas in respect of the cost of work absence, the situation was the opposite (€1348 vs. €1770, respectively). Cyclists involved with alcohol were, in most cases, heavily intoxicated and were not wearing a bicycle helmet. Head injuries were more common among these cyclists than among sober cyclists. 
As cycling continues to increase, it is important to monitor cycling accidents, improve the accident statistics and heighten awareness of the risks of head injuries when cycling under the influence of alcohol. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Selection dynamic of Escherichia coli host in M13 combinatorial peptide phage display libraries.

    PubMed

    Zanconato, Stefano; Minervini, Giovanni; Poli, Irene; De Lucrezia, Davide

    2011-01-01

    Phage display relies on an iterative cycle of selection and amplification of random combinatorial libraries to enrich the initial population of those peptides that satisfy a priori chosen criteria. The effectiveness of any phage display protocol depends directly on library amino acid sequence diversity and the strength of the selection procedure. In this study we monitored the dynamics of the selective pressure exerted by the host organism on a random peptide library in the absence of any additional selection pressure. The results indicate that sequence censorship exerted by Escherichia coli dramatically reduces library diversity and can significantly impair phage display effectiveness.

  11. A composite step conjugate gradients squared algorithm for solving nonsymmetric linear systems

    NASA Astrophysics Data System (ADS)

    Chan, Tony; Szeto, Tedd

    1994-03-01

    We propose a new and more stable variant of the CGS method [27] for solving nonsymmetric linear systems. The method is based on squaring the Composite Step BCG method, introduced recently by Bank and Chan [1,2], which itself is a stabilized variant of BCG in that it skips over steps for which the BCG iterate is not defined, a situation that causes one kind of breakdown in BCG. By doing this, we obtain a method (Composite Step CGS, or CSCGS) which not only handles the breakdowns described above, but does so with the advantages of CGS, namely, no multiplications by the transpose matrix and a faster convergence rate than BCG. Our strategy for deciding whether to skip a step does not involve any machine-dependent parameters and is designed to skip near-breakdowns as well as produce smoother iterates. Numerical experiments show that the new method does produce improved performance over CGS on practical problems.

  12. Eigenproblem solution by a combined Sturm sequence and inverse iteration technique.

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1973-01-01

    Description of an efficient and numerically stable algorithm, along with a complete listing of the associated computer program, developed for the accurate computation of specified roots and associated vectors of the eigenvalue problem Aq = lambda Bq with band symmetric A and B, with B also positive-definite. The desired roots are first isolated by the Sturm sequence procedure; then a special variant of the inverse iteration technique is applied for the individual determination of each root along with its vector. The algorithm fully exploits the banded form of the relevant matrices, and the associated program, written in FORTRAN V for the JPL UNIVAC 1108 computer, proves significantly more economical than similar existing procedures. The program may be conveniently utilized for the efficient solution of practical engineering problems involving free vibration and buckling analysis of structures. Results of such analyses are presented for representative structures.
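    The inverse iteration stage can be sketched for the generalized problem Aq = λBq: with a shift σ placed near an isolated root (as the Sturm sequence count would provide), repeated solves of (A − σB)y = Bq converge to the closest eigenpair. This dense sketch ignores the banded storage the original program exploits, and the names are ours.

```python
import numpy as np

def inverse_iteration(A, B, sigma, iters=50):
    """Shifted inverse iteration for the generalized problem A q = lambda B q:
    repeatedly solve (A - sigma*B) y = B q and normalize.  Converges to the
    eigenpair whose eigenvalue is closest to the shift sigma (isolated,
    e.g., by a Sturm sequence count).  Dense illustrative sketch."""
    n = A.shape[0]
    q = np.ones(n)
    M = A - sigma * B
    for _ in range(iters):
        y = np.linalg.solve(M, B @ q)
        q = y / np.linalg.norm(y)
    lam = (q @ A @ q) / (q @ B @ q)  # generalized Rayleigh quotient
    return lam, q
```

    In the banded algorithm the solve against A − σB is a band factorization reused across iterations, which is where the reported economy comes from.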

  13. Phase retrieval by coherent modulation imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Fucai; Chen, Bo; Morrison, Graeme R.

    Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging (CDI) is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit-wave. This coherent modulation imaging (CMI) method removes inherent ambiguities of CDI and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence, and relaxes dynamic range requirements on the detector. CMI provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as an X-ray free electron laser.

  14. Phase retrieval by coherent modulation imaging

    DOE PAGES

    Zhang, Fucai; Chen, Bo; Morrison, Graeme R.; ...

    2016-11-18

    Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging (CDI) is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit-wave. This coherent modulation imaging (CMI) method removes inherent ambiguities of CDI and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence, and relaxes dynamic range requirements on the detector. CMI provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as an X-ray free electron laser.

  15. The Laboratory Course Assessment Survey: A Tool to Measure Three Dimensions of Research-Course Design

    PubMed Central

    Corwin, Lisa A.; Runyon, Christopher; Robinson, Aspen; Dolan, Erin L.

    2015-01-01

    Course-based undergraduate research experiences (CUREs) are increasingly being offered as scalable ways to involve undergraduates in research. Yet few if any design features that make CUREs effective have been identified. We developed a 17-item survey instrument, the Laboratory Course Assessment Survey (LCAS), that measures students’ perceptions of three design features of biology lab courses: 1) collaboration, 2) discovery and relevance, and 3) iteration. We assessed the psychometric properties of the LCAS using established methods for instrument design and validation. We also assessed the ability of the LCAS to differentiate between CUREs and traditional laboratory courses, and found that the discovery and relevance and iteration scales differentiated between these groups. Our results indicate that the LCAS is suited for characterizing and comparing undergraduate biology lab courses and should be useful for determining the relative importance of the three design features for achieving student outcomes. PMID:26466990

  16. Distance-weighted city growth.

    PubMed

    Rybski, Diego; García Cantú Ros, Anselmo; Kropp, Jürgen P

    2013-04-01

    Urban agglomerations exhibit complex emergent features, of which Zipf's law, i.e., a power-law size distribution, and fractality may be regarded as the most prominent. We propose a simplistic model for the generation of city-like structures which is solely based on the assumption that growth is more likely to take place close to inhabited space. The model involves one parameter, an exponent determining how strongly the attraction decays with distance. In addition, the model is run iteratively so that existing clusters can grow (together) and new ones can emerge. The model is capable of reproducing the size distribution and the fractality of the boundary of the largest cluster. Although the power-law distribution depends on both the imposed exponent and the iteration, the fractality seems to be independent of the former and only depends on the latter. Analyzing land-cover data, we estimate the parameter value γ ≈ 2.5 for Paris and its surroundings.
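    A toy version of the growth model can be written directly from the description: each iteration occupies one new cell with probability proportional to the summed d^(−γ) attraction of all occupied cells. The grid size, step count and names below are our own choices; γ = 2.5 echoes the value the authors estimate for Paris.

```python
import numpy as np

def grow_city(steps=300, gamma=2.5, size=61, seed=0):
    """Toy distance-weighted growth: each iteration occupies one new cell
    with probability proportional to sum_j d_j**(-gamma) over all already
    occupied cells j.  Illustrative parameters, not the authors' setup."""
    rng = np.random.default_rng(seed)
    occ = np.zeros((size, size), dtype=bool)
    attract = np.zeros((size, size))
    ys, xs = np.mgrid[0:size, 0:size]
    new = (size // 2, size // 2)  # seed the cluster at the centre
    for _ in range(steps):
        occ[new] = True
        # add the newly occupied cell's d^-gamma field to every cell
        d2 = (ys - new[0]) ** 2 + (xs - new[1]) ** 2
        with np.errstate(divide="ignore"):
            attract += d2.astype(float) ** (-gamma / 2.0)
        attract[occ] = 0.0  # occupied cells cannot be chosen again
        p = attract.ravel() / attract.sum()
        idx = rng.choice(size * size, p=p)
        new = (idx // size, idx % size)
    occ[new] = True
    return occ
```

    Because the attraction field is updated incrementally, each step costs one grid-sized array update rather than a sum over all occupied cells; larger γ concentrates growth near the existing cluster, smaller γ lets detached clusters nucleate, matching the role of the decay exponent in the record.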

  17. Present limits and improvements of structural materials for fusion reactors - a review

    NASA Astrophysics Data System (ADS)

    Tavassoli, A.-A. F.

    2002-04-01

    Since the transition from ITER or DEMO to a commercial power reactor would involve a significant change in system and materials options, a parallel R&D path has been put in place in Europe to address these issues. This paper assesses the structural materials part of this program along with the latest R&D results from the main programs. It is shown that stainless steels and ferritic/martensitic steels, retained for ITER and DEMO, will also remain the principal contenders for the future FPR, despite uncertainties over irradiation-induced embrittlement at low temperatures and the consequences of a high He/dpa ratio. None of the present advanced high-temperature materials yet offers the structural-integrity reliability needed for application in critical components. This situation is unlikely to change through materials R&D alone and has to be mitigated in close collaboration with blanket system design.

  18. Decentralized Feedback Controllers for Exponential Stabilization of Hybrid Periodic Orbits: Application to Robotic Walking.

    PubMed

    Hamed, Kaveh Akbari; Gregg, Robert D

    2016-07-01

    This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially stabilize periodic orbits for a class of hybrid dynamical systems arising from bipedal walking. The algorithm assumes a class of parameterized and nonlinear decentralized feedback controllers which coordinate lower-dimensional hybrid subsystems based on a common phasing variable. The exponential stabilization problem is translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities, which can be easily solved with available software packages. A set of sufficient conditions for the convergence of the iterative algorithm to a stabilizing decentralized feedback control solution is presented. The power of the algorithm is demonstrated by designing a set of local nonlinear controllers that cooperatively produce stable walking for a 3D autonomous biped with 9 degrees of freedom, 3 degrees of underactuation, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg.

  19. Decentralized Feedback Controllers for Robust Stabilization of Periodic Orbits of Hybrid Systems: Application to Bipedal Walking.

    PubMed

    Hamed, Kaveh Akbari; Gregg, Robert D

    2017-07-01

    This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially and robustly stabilize periodic orbits for hybrid dynamical systems against possible uncertainties in discrete-time phases. The algorithm assumes a family of parameterized and decentralized nonlinear controllers to coordinate interconnected hybrid subsystems based on a common phasing variable. The exponential and [Formula: see text] robust stabilization problems of periodic orbits are translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities. By investigating the properties of the Poincaré map, some sufficient conditions for the convergence of the iterative algorithm are presented. The power of the algorithm is finally demonstrated through designing a set of robust stabilizing local nonlinear controllers for walking of an underactuated 3D autonomous bipedal robot with 9 degrees of freedom, impact model uncertainties, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg.

  20. Experimental investigations of helium cryotrapping by argon frost

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mack, A.; Perinic, D.; Murdoch, D.

    1992-03-01

    At the Karlsruhe Nuclear Research Centre (KfK) cryopumping techniques are being investigated by which the gaseous exhausts from the NET/ITER reactor can be pumped out during the burn and dwell times. Cryosorption and cryotrapping are techniques which are suitable for this task. The aim of the investigations is to test the techniques under NET/ITER conditions and to determine optimum design data for a prototype. They involve measurement of the pumping speed as a function of the gas composition, gas flow and loading condition of the pump surfaces. The following parameters are subjected to variation: Ar/He ratio, specific helium volume flow rate, cryosurface temperature, process gas composition, impurities in argon trapping gas, three-stage operation and two-stage operation. This paper is a description of the experiments on argon trapping techniques started in 1990. Eleven tests as well as the results derived from them are described.

  1. Systems and methods for optimal power flow on a radial network

    DOEpatents

    Low, Steven H.; Peng, Qiuyu

    2018-04-24

    Node controllers and power distribution networks in accordance with embodiments of the invention enable distributed power control. One embodiment includes a node controller including a distributed power control application and a plurality of node operating parameters describing the operating parameters of a node and of a set of at least one node selected from the group consisting of an ancestor node and at least one child node; wherein the node controller is configured to: send node operating parameters to nodes in the set of at least one node; receive operating parameters from the nodes in the set of at least one node; calculate a plurality of updated node operating parameters using an iterative process, based on the node operating parameters that describe the operating parameters of the node and the set of at least one node, where the iterative process involves evaluation of a closed-form solution; and adjust the node operating parameters.

  2. Decentralized Feedback Controllers for Exponential Stabilization of Hybrid Periodic Orbits: Application to Robotic Walking*

    PubMed Central

    Hamed, Kaveh Akbari; Gregg, Robert D.

    2016-01-01

    This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially stabilize periodic orbits for a class of hybrid dynamical systems arising from bipedal walking. The algorithm assumes a class of parameterized and nonlinear decentralized feedback controllers which coordinate lower-dimensional hybrid subsystems based on a common phasing variable. The exponential stabilization problem is translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities, which can be easily solved with available software packages. A set of sufficient conditions for the convergence of the iterative algorithm to a stabilizing decentralized feedback control solution is presented. The power of the algorithm is demonstrated by designing a set of local nonlinear controllers that cooperatively produce stable walking for a 3D autonomous biped with 9 degrees of freedom, 3 degrees of underactuation, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg. PMID:27990059

  3. Decentralized Feedback Controllers for Robust Stabilization of Periodic Orbits of Hybrid Systems: Application to Bipedal Walking

    PubMed Central

    Hamed, Kaveh Akbari; Gregg, Robert D.

    2016-01-01

    This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially and robustly stabilize periodic orbits for hybrid dynamical systems against possible uncertainties in discrete-time phases. The algorithm assumes a family of parameterized and decentralized nonlinear controllers to coordinate interconnected hybrid subsystems based on a common phasing variable. The exponential and H2 robust stabilization problems of periodic orbits are translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities. By investigating the properties of the Poincaré map, some sufficient conditions for the convergence of the iterative algorithm are presented. The power of the algorithm is finally demonstrated through designing a set of robust stabilizing local nonlinear controllers for walking of an underactuated 3D autonomous bipedal robot with 9 degrees of freedom, impact model uncertainties, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg. PMID:28959117

  4. Image based method for aberration measurement of lithographic tools

    NASA Astrophysics Data System (ADS)

    Xu, Shuang; Tao, Bo; Guo, Yongxing; Li, Gongfa

    2018-01-01

    Information about the lens aberrations of lithographic tools is important, as they directly affect the intensity distribution in the image plane. Zernike polynomials are commonly used for a mathematical description of lens aberrations. Because of their lower cost and easier implementation, image-based measurement techniques have been widely used. Lithographic tools are typically partially coherent systems that can be described by a bilinear model, which entails time-consuming calculations and does not yield a simple and intuitive relationship between lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose a method for aberration measurement in lithographic tools which only requires measuring two images of intensity distribution. Two linear formulations are derived in matrix forms that directly relate the measured images to the unknown Zernike coefficients. Consequently, an efficient non-iterative solution is obtained.

  5. Parallelization of implicit finite difference schemes in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Decker, Naomi H.; Naik, Vijay K.; Nicoules, Michel

    1990-01-01

    Implicit finite difference schemes are often the preferred numerical schemes in computational fluid dynamics, requiring less stringent stability bounds than the explicit schemes. Each iteration in an implicit scheme involves global data dependencies in the form of second and higher order recurrences. Efficient parallel implementations of such iterative methods are considerably more difficult to achieve than those of explicit schemes, and are non-intuitive. The parallelization of the implicit schemes that are used for solving the Euler and the thin layer Navier-Stokes equations and that require inversions of large linear systems in the form of block tri-diagonal and/or block penta-diagonal matrices is discussed. Three-dimensional cases are emphasized and schemes that minimize the total execution time are presented. Partitioning and scheduling schemes for alleviating the effects of the global data dependencies are described. An analysis of the communication and the computation aspects of these methods is presented. The effect of the boundary conditions on the parallel schemes is also discussed.
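
    The recurrences that make these schemes hard to parallelize are visible even in the scalar Thomas algorithm for a single tridiagonal system, a minimal serial sketch (not the paper's block or parallel formulation):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a (a[0] unused),
    diagonal b, super-diagonal c (c[-1] unused), right-hand side d.
    Both sweeps are first-order recurrences: step i depends on i-1."""
    n = len(d)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward sweep (sequential)
        denom = b[i] - a[i] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # backward substitution (sequential)
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

    Step i of each sweep cannot start before step i-1 finishes, which is exactly the global data dependency that the partitioning and scheduling schemes in the paper must work around.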

  6. Handling Big Data in Medical Imaging: Iterative Reconstruction with Large-Scale Automated Parallel Computation

    PubMed Central

    Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho

    2014-01-01

    The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum-likelihood expectation maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge of computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting. PMID:27081299
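
    The multiplicative MLEM update itself is compact; the sketch below is a dense NumPy illustration of it (not the record's Spark/GraphX implementation), with an illustrative system matrix A and measured counts y.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM update for emission tomography:
    x <- x / (A^T 1) * A^T (y / (A x)).
    A: (n_detectors, n_voxels) system matrix; y: measured counts."""
    x = np.ones(A.shape[1])              # uniform, strictly positive start
    sens = A.sum(axis=0)                 # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                                     # forward projection
        ratio = y / np.maximum(proj, 1e-12)              # measured / estimated
        x = x / np.maximum(sens, 1e-12) * (A.T @ ratio)  # back-project, update
    return x
```

    The forward projection, ratio, and back projection are all matrix-vector style operations, which is why the computation maps naturally onto the graph and sparse linear algebra primitives that GraphX parallelizes.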

  7. Handling Big Data in Medical Imaging: Iterative Reconstruction with Large-Scale Automated Parallel Computation.

    PubMed

    Lee, Jae H; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T; Seo, Youngho

    2014-11-01

    The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum-likelihood expectation maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge of computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting.

  8. A one-dimensional nonlinear problem of thermoelasticity in extended thermodynamics

    NASA Astrophysics Data System (ADS)

    Rawy, E. K.

    2018-06-01

    We solve a nonlinear, one-dimensional initial boundary-value problem of thermoelasticity in generalized thermodynamics. A Cattaneo-type evolution equation for the heat flux is used, which differs from the one used extensively in the literature. The hyperbolic nature of the associated linear system is clarified through a study of the characteristic curves. Progressive wave solutions with two finite speeds are noted. A numerical treatment is presented for the nonlinear system using a three-step, quasi-linearization, iterative finite-difference scheme for which the linear system of equations is the initial step in the iteration. The obtained results are discussed in detail. They clearly show the hyperbolic nature of the system, and may be of interest in investigating thermoelastic materials, not only at low temperatures, but also during high temperature processes involving rapid changes in temperature as in laser treatment of surfaces.

  9. WIND: Computer program for calculation of three dimensional potential compressible flow about wind turbine rotor blades

    NASA Technical Reports Server (NTRS)

    Dulikravich, D. S.

    1980-01-01

    A computer program is presented which numerically solves an exact, full potential equation (FPE) for three dimensional, steady, inviscid flow through an isolated wind turbine rotor. The program automatically generates a three dimensional, boundary conforming grid and iteratively solves the FPE while fully accounting for both the rotating cascade and Coriolis effects. The numerical techniques incorporated involve rotated, type dependent finite differencing, a finite volume method, artificial viscosity in conservative form, and a successive line overrelaxation combined with the sequential grid refinement procedure to accelerate the iterative convergence rate. Consequently, the WIND program is capable of accurately analyzing incompressible and compressible flows, including those that are locally transonic and terminated by weak shocks. The program can also be used to analyze the flow around isolated aircraft propellers and helicopter rotors in hover as long as the total relative Mach number of the oncoming flow is subsonic.
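
    The relaxation idea behind the successive line over-relaxation used in WIND can be conveyed with a simpler point-SOR sketch for Laplace's equation (illustrative only; WIND relaxes whole grid lines of the full potential equation and combines this with grid sequencing):

```python
import numpy as np

def sor_laplace(u, omega=1.8, n_iter=500):
    """Point successive over-relaxation for Laplace's equation on a 2D
    grid with fixed (Dirichlet) boundary values.  Each interior update
    blends the Gauss-Seidel value with the current one, over-relaxed
    by omega (0 < omega < 2 for convergence)."""
    u = u.copy()
    for _ in range(n_iter):
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                gs = 0.25 * (u[i - 1, j] + u[i + 1, j]
                             + u[i, j - 1] + u[i, j + 1])
                u[i, j] += omega * (gs - u[i, j])
    return u
```

    Line over-relaxation replaces the pointwise update with a tridiagonal solve along an entire grid line per step, which, together with sequential grid refinement, is what accelerates the convergence rate in the WIND code.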

  10. Phase retrieval by coherent modulation imaging.

    PubMed

    Zhang, Fucai; Chen, Bo; Morrison, Graeme R; Vila-Comamala, Joan; Guizar-Sicairos, Manuel; Robinson, Ian K

    2016-11-18

    Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single-diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit wave. This coherent modulation imaging method removes inherent ambiguities of coherent diffraction imaging and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence and relaxes dynamic range requirements on the detector. Coherent modulation imaging provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free-electron lasers.
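
    Coherent modulation imaging generalizes the alternating-constraint iterations of classical phase retrieval to three planes; the two-plane Gerchberg-Saxton iteration below is a minimal sketch of that family (not the authors' three-plane algorithm):

```python
import numpy as np

def gerchberg_saxton(src_amp, far_amp, n_iter=200, seed=0):
    """Two-plane alternating-constraint phase retrieval: keep the phase,
    but enforce the known amplitude in the sample plane and the measured
    amplitude in the detector (Fourier) plane on every pass."""
    rng = np.random.default_rng(seed)
    field = src_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, src_amp.shape))
    for _ in range(n_iter):
        F = np.fft.fft2(field)
        F = far_amp * np.exp(1j * np.angle(F))           # detector-plane constraint
        field = np.fft.ifft2(F)
        field = src_amp * np.exp(1j * np.angle(field))   # sample-plane constraint
    return field
```

    The known modulator in coherent modulation imaging adds a third constraint plane, which is what removes the ambiguities and stagnation problems that plague the plain two-plane iteration.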

  11. An Iterative, Bimodular Nonribosomal Peptide Synthetase that Converts Anthranilate and Tryptophan into Tetracyclic Asperlicins

    PubMed Central

    Gao, Xue; Jiang, Wei; Jiménez-Osés, Gonzalo; Choi, Moon Seok; Houk, Kendall N.; Tang, Yi; Walsh, Christopher T.

    2013-01-01

    The bimodular 276 kDa nonribosomal peptide synthetase AspA from Aspergillus alliaceus, heterologously expressed in Saccharomyces cerevisiae, converts tryptophan and two molecules of the aromatic β-amino acid anthranilate (Ant) into a pair of tetracyclic peptidyl alkaloids asperlicin C and D in a ratio of 10:1. The first module of AspA activates and processes two molecules of Ant iteratively to generate a tethered Ant-Ant-Trp-S-enzyme intermediate on module two. Release is postulated to involve tandem cyclizations, in which the first step is the macrocyclization of the linear tripeptidyl-S-enzyme, by the terminal condensation (CT) domain to generate the regioisomeric tetracyclic asperlicin scaffolds. Computational analysis of the transannular cyclization of the 11-membered macrocyclic intermediate shows that asperlicin C is the kinetically favored product due to the high stability of a conformation resembling the transition state for cyclization, while asperlicin D is thermodynamically more stable. PMID:23890005

  12. The performance of monotonic and new non-monotonic gradient ascent reconstruction algorithms for high-resolution neuroreceptor PET imaging.

    PubMed

    Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2011-07-07

    Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [(11)C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for many applications, especially given that in practice convergence is often not desired for algorithms seeking ML estimates.

  13. Phase extraction based on iterative algorithm using five-frame crossed fringes in phase measuring deflectometry

    NASA Astrophysics Data System (ADS)

    Jin, Chengying; Li, Dahai; Kewei, E.; Li, Mengyang; Chen, Pengyu; Wang, Ruiyang; Xiong, Zhao

    2018-06-01

    In phase measuring deflectometry, two orthogonal sinusoidal fringe patterns are separately projected on the test surface and the distorted fringes reflected by the surface are recorded, each with a sequential phase shift. Then the two components of the local surface gradients are obtained by triangulation. It usually involves some complicated and time-consuming procedures (fringe projection in the orthogonal directions). In addition, the digital light devices (e.g. LCD screen and CCD camera) are not error free. There are quantization errors for each pixel of both LCD and CCD. Therefore, to avoid the complex process and improve the reliability of the phase distribution, a phase extraction algorithm with five-frame crossed fringes is presented in this paper. It is based on a least-squares iterative process. Using the proposed algorithm, phase distributions and phase shift amounts in two orthogonal directions can be simultaneously and successfully determined through an iterative procedure. Both a numerical simulation and a preliminary experiment are conducted to verify the validity and performance of this algorithm. Experimental results obtained by our method are shown, and comparisons between our experimental results and those obtained by the traditional 16-step phase-shifting algorithm and between our experimental results and those measured by the Fizeau interferometer are made.
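
    The least-squares core of such an iterative algorithm can be sketched as follows: with the phase shifts held fixed, each pixel's background, modulation, and phase follow from a small linear fit. This is a simplified sketch of the inner step only; the full algorithm alternates this solve with a re-estimate of the shifts, and the crossed-fringe variant handles both orthogonal directions at once.

```python
import numpy as np

def ls_phase(frames, deltas):
    """Per-pixel least-squares phase extraction from phase-shifted fringes
    I_k = a + b*cos(phi + delta_k), with the shifts delta_k assumed known.
    Unknowns per pixel: [a, b*cos(phi), b*sin(phi)]."""
    K = len(deltas)
    M = np.column_stack([np.ones(K), np.cos(deltas), -np.sin(deltas)])
    I = frames.reshape(K, -1)                  # (K, n_pixels)
    u, *_ = np.linalg.lstsq(M, I, rcond=None)  # one solve covers all pixels
    return np.arctan2(u[2], u[1]).reshape(frames.shape[1:])
```

    Because the model is linear in the three per-pixel unknowns, a single least-squares solve recovers the wrapped phase exactly for noiseless, consistent data.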

  14. Parallel Preconditioning for CFD Problems on the CM-5

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    To date, preconditioning methods on massively parallel systems have faced a major difficulty. The most successful preconditioning methods in terms of accelerating the convergence of the iterative solver, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive, but requires a large amount of unstructured communication, and (2) applying the preconditioning matrix in the iteration phase (i.e. the triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric, linear systems, which avoids both difficulties. We explicitly compute an approximate inverse to our original matrix. This new preconditioning matrix can be applied most efficiently for iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, with a possibly dense matrix. Furthermore, the actual computation of the preconditioning matrix has natural parallelism. For a problem of size n, the preconditioning matrix can be computed by solving n independent small least squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
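
    A dense, unconstrained sketch of the column-wise least-squares construction follows. In practice a sparsity pattern is imposed on each column of M, which keeps the per-column problems small; without that constraint, as here, each solve simply recovers the corresponding column of the exact inverse, so the sketch illustrates the structure rather than the practical sparse variant.

```python
import numpy as np

def approx_inverse(A):
    """Frobenius-norm approximate inverse: column j of M minimizes
    ||A m_j - e_j||_2, so the n least-squares problems are mutually
    independent and could be solved in parallel."""
    n = A.shape[0]
    M = np.empty((n, n))
    I = np.eye(n)
    for j in range(n):                        # independent per-column solves
        M[:, j], *_ = np.linalg.lstsq(A, I[:, j], rcond=None)
    return M
```

    Applying M inside the iterative solver is then a single matrix-vector product per iteration, with no recursive triangular solves.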

  15. Doppler Lidar System Design via Interdisciplinary Design Concept at NASA Langley Research Center - Part II

    NASA Technical Reports Server (NTRS)

    Crasner, Aaron I.; Scola, Salvatore; Beyon, Jeffrey Y.; Petway, Larry B.

    2014-01-01

    Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration was able to reduce the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components that exceeded performance requirements with smaller components custom-designed for the power system. Thermal modeling software was used to run steady state thermal analyses, which were used to both validate the designs and recommend further changes. Analyses were run on each redesign, as well as the original system. Thermal Desktop was used to run trade studies to account for uncertainty and assumptions about fan performance and boundary conditions. The studies suggested that, even if the assumptions were significantly wrong, the redesigned systems would remain within operating temperature limits.

  16. Protecting enzymatic function through directed packaging into bacterial outer membrane vesicles

    PubMed Central

    Alves, Nathan J.; Turner, Kendrick B.; Medintz, Igor L.; Walper, Scott A.

    2016-01-01

    Bacteria possess innate machinery to transport extracellular cargo between cells as well as package virulence factors to infect host cells by secreting outer membrane vesicles (OMVs) that contain small molecules, proteins, and genetic material. These robust proteoliposomes have evolved naturally to be resistant to degradation and provide a supportive environment to extend the activity of encapsulated cargo. In this study, we sought to exploit bacterial OMV formation to package and maintain the activity of an enzyme, phosphotriesterase (PTE), under challenging storage conditions encountered for real world applications. Here we show that OMV packaged PTE maintains activity over free PTE when subjected to elevated temperatures (>100-fold more activity after 14 days at 37 °C), iterative freeze-thaw cycles (3.4-fold post four-cycles), and lyophilization (43-fold). We also demonstrate how lyophilized OMV packaged PTE can be utilized as a cell free reagent for long term environmental remediation of pesticide/chemical warfare contaminated areas. PMID:27117743

  17. Textbook Multigrid Efficiency for Leading Edge Stagnation

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Mineck, Raymond E.

    2004-01-01

    A multigrid solver is defined as having textbook multigrid efficiency (TME) if the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in evaluating the discrete residuals. TME in solving the incompressible inviscid fluid equations is demonstrated for leading-edge stagnation flows. The contributions of this paper include (1) a special formulation of the boundary conditions near stagnation allowing convergence of the Newton iterations on coarse grids, (2) the boundary relaxation technique to facilitate relaxation and residual restriction near the boundaries, (3) a modified relaxation scheme to prevent initial error amplification, and (4) new general analysis techniques for multigrid solvers. Convergence of algebraic errors below the level of discretization errors is attained by a full multigrid (FMG) solver with one full approximation scheme (FAS) cycle per grid. Asymptotic convergence rates of the FAS cycles for the full system of flow equations are very fast, approaching those for scalar elliptic equations.
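
    The flavor of such a cycle can be conveyed by a textbook two-grid sketch for the 1D model problem -u'' = f (illustrative only; the paper's FAS/FMG solver treats the full nonlinear flow equations and recurses over many grid levels):

```python
import numpy as np

def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted-Jacobi smoother for -u'' = f with Dirichlet ends."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h):
    """One two-grid cycle: pre-smooth, restrict the residual by full
    weighting, solve the coarse error equation directly, interpolate
    the correction back, post-smooth."""
    u = smooth(u.copy(), f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)   # fine residual
    nc = (r.size + 1) // 2
    rc = np.zeros(nc)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]  # full weighting
    hc = 2 * h
    Ac = (np.diag(2.0 * np.ones(nc - 2))
          - np.diag(np.ones(nc - 3), 1)
          - np.diag(np.ones(nc - 3), -1)) / (hc * hc)
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])                        # coarse solve
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)   # interpolation
    return smooth(u + e, f, h)
```

    The smoother kills oscillatory error cheaply and the coarse grid handles the smooth remainder, so a few cycles drive the algebraic error below the discretization error, which is the essence of textbook multigrid efficiency.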

  18. Textbook Multigrid Efficiency for Leading Edge Stagnation

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Mineck, Raymond E.

    2004-01-01

    A multigrid solver is defined as having textbook multigrid efficiency (TME) if the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in evaluating the discrete residuals. TME in solving the incompressible inviscid fluid equations is demonstrated for leading-edge stagnation flows. The contributions of this paper include (1) a special formulation of the boundary conditions near stagnation allowing convergence of the Newton iterations on coarse grids, (2) the boundary relaxation technique to facilitate relaxation and residual restriction near the boundaries, (3) a modified relaxation scheme to prevent initial error amplification, and (4) new general analysis techniques for multigrid solvers. Convergence of algebraic errors below the level of discretization errors is attained by a full multigrid (FMG) solver with one full approximation scheme (FAS) cycle per grid. Asymptotic convergence rates of the FAS cycles for the full system of flow equations are very fast, approaching those for scalar elliptic equations.

  19. Thinking like an expert: surgical decision making as a cyclical process of being aware.

    PubMed

    Cristancho, Sayra M; Apramian, Tavis; Vanstone, Meredith; Lingard, Lorelei; Ott, Michael; Forbes, Thomas; Novick, Richard

    2016-01-01

    Education researchers are studying the practices of high-stakes professionals as they learn how to better train for flexibility under uncertainty. This study explores the "Reconciliation Cycle" as the core element of an intraoperative decision-making model of how experienced surgeons assess and respond to challenges. We analyzed 32 semistructured interviews using constructivist grounded theory to develop a model of intraoperative decision making. Using constant comparison analysis, we built on this model with 9 follow-up interviews about the most challenging cases described in our dataset. The Reconciliation Cycle constituted an iterative process of "gaining" and "transforming information." The cyclical nature of surgeons' decision making suggested that transforming information requires a higher degree of awareness, not yet accounted for by current conceptualizations of situation awareness. This study advances the notion of situation awareness in surgery. This characterization will support further investigations on how expert and nonexpert surgeons implement strategies to cope with unexpected events. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Crack propagation in aluminum sheets reinforced with boron-epoxy

    NASA Technical Reports Server (NTRS)

    Roderick, G. L.

    1979-01-01

    An analysis was developed to predict both the crack growth and debond growth in a reinforced system. The analysis was based on the use of complex variable Green's functions for cracked, isotropic sheets and uncracked, orthotropic sheets to calculate inplane and interlaminar stresses, stress intensities, and strain-energy-release rates. An iterative solution was developed that used the stress intensities and strain-energy-release rates to predict crack and debond growths, respectively, on a cycle-by-cycle basis. A parametric study was made of the effects of boron-epoxy composite reinforcement on crack propagation in aluminum sheets. Results show that the size of the debond area has a significant effect on the crack propagation in the aluminum. For small debond areas, the crack propagation rate is reduced significantly, but these small debonds have a strong tendency to enlarge. Debond growth is most likely to occur in reinforced systems that have a cracked metal sheet reinforced with a relatively thin composite sheet.
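
    The cycle-by-cycle idea can be sketched with a generic Paris-law integration (the coefficients and geometry factor below are illustrative placeholders, not the reinforced boron-epoxy system analyzed in the report, which couples crack growth to debond growth):

```python
import math

def crack_growth_cycles(a0, a_final, C, m, dsigma, Y=1.0):
    """Cycle-by-cycle Paris-law integration: da/dN = C*(dK)^m with
    dK = Y*dsigma*sqrt(pi*a).  Each loop pass is one load cycle.
    Units assumed: a in metres, dsigma in MPa, dK in MPa*sqrt(m)."""
    a, n = a0, 0
    while a < a_final:
        dK = Y * dsigma * math.sqrt(math.pi * a)   # stress-intensity range
        a += C * dK ** m                           # growth this cycle
        n += 1
    return n
```

    The report's iterative solution works the same way, except that each cycle also updates the debond area from the strain-energy-release rate, and the two growth laws feed back into each other through the redistributed stresses.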

  1. The Effect of Iteration on the Design Performance of Primary School Children

    ERIC Educational Resources Information Center

    Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.

    2015-01-01

    Iteration during the design process is an essential element. Engineers optimize their design by iteration. Research on iteration in Primary Design Education is however scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practices. Spontaneous playing behavior of children indicates that iteration fits in…

  2. Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Hanquan

    2016-03-01

    In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and compute the iterative control law, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
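
    The structure of value iteration is easiest to see in the finite-state case. The tabular sketch below uses a made-up two-state MDP; the paper's ADP algorithm plays the same fixed-point game in continuous state spaces, with neural networks standing in for the value table.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Classic finite-state value iteration: V <- max_a [R + gamma * P V].
    P: (n_actions, n_states, n_states) transition probabilities,
    R: (n_actions, n_states) expected immediate rewards.
    Returns the converged value function and the greedy policy."""
    V = np.zeros(P.shape[1])             # arbitrary initialization is allowed
    while True:
        Q = R + gamma * (P @ V)          # (n_actions, n_states) action values
        V_new = Q.max(axis=0)            # Bellman optimality backup
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```

    Because the Bellman backup is a gamma-contraction in the sup norm, the iterates converge to the optimal value function from any starting point, mirroring the paper's result that an arbitrary positive semi-definite initialization still converges to the optimum.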

  3. Viral genome analysis and knowledge management.

    PubMed

    Kuiken, Carla; Yoon, Hyejin; Abfalterer, Werner; Gaschen, Brian; Lo, Chienchi; Korber, Bette

    2013-01-01

    One of the challenges of genetic data analysis is to combine information from sources that are distributed around the world and accessible through a wide array of different methods and interfaces. The HIV database and the databases that followed in its footsteps, the hepatitis C virus (HCV) and hemorrhagic fever virus (HFV) databases, have made it their mission to make different data types easily available to their users. This involves a large amount of behind-the-scenes processing, including quality control and analysis of the sequences and their annotation. Gene and protein sequences are distilled from the sequences that are stored in GenBank; to this end, both submitter annotation and script-generated sequences are used. Alignments of both nucleotide and amino acid sequences are generated, manually curated, distilled into an alignment model, and regenerated in an iterative cycle that results in ever-better new alignments. Annotation of epidemiological and clinical information is parsed, checked, and added to the database. User interfaces are updated, and new interfaces are added based upon user requests. Vital to its success, the database staff are heavy users of the system, which enables them to fix bugs and find opportunities for improvement. In this chapter we describe some of the infrastructure that keeps these heavily used analysis platforms alive and vital after nearly 25 years of use. The database/analysis platforms described in this chapter can be accessed at http://hiv.lanl.gov http://hcv.lanl.gov http://hfv.lanl.gov.

  4. High throughput and quantitative approaches for measuring circadian rhythms in cyanobacteria using bioluminescence

    PubMed Central

    Shultzaberger, Ryan K.; Paddock, Mark L.; Katsuki, Takeo; Greenspan, Ralph J.; Golden, Susan S.

    2016-01-01

    The temporal measurement of a bioluminescent reporter has proven to be one of the most powerful tools for characterizing circadian rhythms in the cyanobacterium Synechococcus elongatus. Primarily, two approaches have been used to automate this process: (1) detection of cell culture bioluminescence in 96-well plates by a photomultiplier tube-based plate-cycling luminometer (TopCount Microplate Scintillation and Luminescence Counter, Perkin Elmer) and (2) detection of individual colony bioluminescence by iteratively rotating a Petri dish under a cooled CCD camera using a computer-controlled turntable. Each approach has distinct advantages. The TopCount provides a more quantitative measurement of bioluminescence, enabling the direct comparison of clock output levels among strains. The computer-controlled turntable approach has a shorter set-up time and greater throughput, making it a more powerful phenotypic screening tool. While the latter approach is extremely useful, only a few labs have been able to build such an apparatus because of technical hurdles involved in coordinating and controlling both the camera and the turntable, and in processing the resulting images. This protocol provides instructions on how to construct, use, and process data from a computer-controlled turntable to measure the temporal changes in bioluminescence of individual cyanobacterial colonies. Furthermore, we describe how to prepare samples for use with the TopCount to minimize experimental noise, and generate meaningful quantitative measurements of clock output levels for advanced analysis. PMID:25662451

  5. A metadata reporting framework for standardization and synthesis of ecohydrological field observations

    NASA Astrophysics Data System (ADS)

    Christianson, D. S.; Varadharajan, C.; Detto, M.; Faybishenko, B.; Gimenez, B.; Jardine, K.; Negron Juarez, R. I.; Pastorello, G.; Powell, T.; Warren, J.; Wolfe, B.; McDowell, N. G.; Kueppers, L. M.; Chambers, J.; Agarwal, D.

    2016-12-01

    The U.S. Department of Energy's (DOE) Next Generation Ecosystem Experiment (NGEE) Tropics project aims to develop a process-rich tropical forest ecosystem model that is parameterized and benchmarked by field observations. Thus, data synthesis, quality assurance and quality control (QA/QC), and data product generation of a diverse and complex set of ecohydrological observations, including sapflux, leaf surface temperature, soil water content, and leaf gas exchange from sites across the Tropics, are required to support model simulations. We have developed a metadata reporting framework, implemented in conjunction with the NGEE Tropics Data Archive tool, to enable cross-site and cross-method comparison, data interpretability, and QA/QC. We employed a modified User-Centered Design approach, which involved short development cycles based on user-identified needs, and iterative testing with data providers and users. The metadata reporting framework currently has been implemented for sensor-based observations and leverages several existing metadata protocols. The framework consists of templates that define a multi-scale measurement position hierarchy, descriptions of measurement settings, and details about data collection and data file organization. The framework also enables data providers to define data-access permission settings, provenance, and referencing to enable appropriate data usage, citation, and attribution. In addition to describing the metadata reporting framework, we discuss tradeoffs and impressions from both data providers and users during the development process, focusing on the scalability, usability, and efficiency of the framework.

  6. 15 CFR 783.4 - Deadlines for submission of reports and amendments.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... REGULATIONS CIVIL NUCLEAR FUEL CYCLE-RELATED ACTIVITIES NOT INVOLVING NUCLEAR MATERIALS § 783.4 Deadlines for... location that commenced one or more of the civil nuclear fuel cycle-related activities described in § 783.1... activities involving uranium hard-rock mines must include any such mines that were closed down during...

  7. 15 CFR 783.4 - Deadlines for submission of reports and amendments.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... REGULATIONS CIVIL NUCLEAR FUEL CYCLE-RELATED ACTIVITIES NOT INVOLVING NUCLEAR MATERIALS § 783.4 Deadlines for... location that commenced one or more of the civil nuclear fuel cycle-related activities described in § 783.1... activities involving uranium hard-rock mines must include any such mines that were closed down during...

  8. 15 CFR 783.4 - Deadlines for submission of reports and amendments.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... REGULATIONS CIVIL NUCLEAR FUEL CYCLE-RELATED ACTIVITIES NOT INVOLVING NUCLEAR MATERIALS § 783.4 Deadlines for... location that commenced one or more of the civil nuclear fuel cycle-related activities described in § 783.1... activities involving uranium hard-rock mines must include any such mines that were closed down during...

  9. 15 CFR 783.4 - Deadlines for submission of reports and amendments.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... REGULATIONS CIVIL NUCLEAR FUEL CYCLE-RELATED ACTIVITIES NOT INVOLVING NUCLEAR MATERIALS § 783.4 Deadlines for... location that commenced one or more of the civil nuclear fuel cycle-related activities described in § 783.1... activities involving uranium hard-rock mines must include any such mines that were closed down during...

  10. 15 CFR 783.4 - Deadlines for submission of reports and amendments.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... REGULATIONS CIVIL NUCLEAR FUEL CYCLE-RELATED ACTIVITIES NOT INVOLVING NUCLEAR MATERIALS § 783.4 Deadlines for... location that commenced one or more of the civil nuclear fuel cycle-related activities described in § 783.1... activities involving uranium hard-rock mines must include any such mines that were closed down during...

  11. Method and apparatus for iterative lysis and extraction of algae

    DOEpatents

    Chew, Geoffrey; Boggs, Tabitha; Dykes, Jr., H. Waite H.; Doherty, Stephen J.

    2015-12-01

    A method and system for processing algae involves the use of an ionic liquid-containing clarified cell lysate to lyse algae cells. The resulting crude cell lysate may be clarified and subsequently used to lyse algae cells. The process may be repeated a number of times before a clarified lysate is separated into lipid and aqueous phases for further processing and/or purification of desired products.

  12. LAVA: Large scale Automated Vulnerability Addition

    DTIC Science & Technology

    2016-05-23

    memory copy, e.g., are reasonable attack points. If the goal is to inject divide-by-zero, then arithmetic operations involving division will be...ways. First, it introduces deterministic record and replay, which can be used for iterated and expensive analyses that cannot be performed online... memory. Since our approach records the correspondence between source lines and program basic block execution, it would be just as easy to figure out

  13. AutoMap User’s Guide

    DTIC Science & Technology

    2006-10-01

    Hierarchy of Pre-Processing Techniques 3. NLP (Natural Language Processing) Utilities 3.1 Named-Entity Recognition 3.1.1 Example for Named-Entity... Recognition 3.2 Symbol Removal... N-Gram Identification: Bi-Grams 4. Stemming 4.1 Stemming Example 5. Delete List 5.1 Open a Delete List 5.1.1 Small...iterative and involves several key processes: • Named-Entity Recognition Named-Entity Recognition is an AutoMap feature that allows you to

  14. Finite element analysis of wrinkling membranes

    NASA Technical Reports Server (NTRS)

    Miller, R. K.; Hedgepeth, J. M.; Weingarten, V. I.; Das, P.; Kahyai, S.

    1984-01-01

    The development of a nonlinear numerical algorithm for the analysis of stresses and displacements in partly wrinkled flat membranes, and its implementation on the SAP VII finite-element code are described. A comparison of numerical results with exact solutions of two benchmark problems reveals excellent agreement, with good convergence of the required iterative procedure. An exact solution of a problem involving axisymmetric deformations of a partly wrinkled shallow curved membrane is also reported.

  15. Can the discharge of a hyperconcentrated flow be estimated from paleoflood evidence?

    NASA Astrophysics Data System (ADS)

    Bodoque, Jose M.; Eguibar, Miguel A.; Díez-Herrero, Andrés; Gutiérrez-Pérez, Ignacio; Ruíz-Villanueva, Virginia

    2011-12-01

    Many flood events involving water and sediments have been characterized using classic hydraulics principles, assuming the existence of critical flow and many other simplifications. In this paper, hyperconcentrated flow discharge was evaluated by using paleoflood reconstructions (based on paleostage indicators [PSI]) combined with a detailed hydraulic analysis of the critical flow assumption. The exact location where this condition occurred was established by iteratively determining the corresponding cross section, so that specific energy is at a minimum. In addition, all of the factors and parameters involved in the process were assessed, especially those related to the momentum equation, existing shear stresses in the wetted perimeter, and nonhydrostatic and hydrostatic pressure distributions. The superelevation of the hyperconcentrated flow, due to the flow elevation curvature, was also estimated and calibrated with the PSI. The estimated peak discharge was established once the iterative process was unable to improve the fit between the simulated depth and the depth observed from the PSI. The methodological approach proposed here can be applied to other higher-gradient mountainous torrents with a similar geomorphic configuration to the one studied in this paper. Likewise, results have been derived with fewer uncertainties than those obtained from standard hydraulic approaches, whose simplifying assumptions have not been considered.
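
    The critical-flow condition exploited above (specific energy at a minimum) can be sketched for the simplest case of a rectangular cross section; the channel width and discharge below are invented for illustration:

```python
import math

def specific_energy(y, Q, b, g=9.81):
    """Specific energy E = y + Q^2 / (2 g A^2) for a rectangular channel of width b."""
    A = b * y  # flow area for depth y
    return y + Q**2 / (2.0 * g * A**2)

def critical_depth_by_scan(Q, b, g=9.81, y_min=0.05, y_max=10.0, n=20000):
    """Locate the depth that minimizes specific energy (the critical-flow condition)."""
    best_y, best_E = None, float("inf")
    for i in range(n):
        y = y_min + (y_max - y_min) * i / (n - 1)
        E = specific_energy(y, Q, b, g)
        if E < best_E:
            best_y, best_E = y, E
    return best_y, best_E

# Analytic critical depth for a rectangular section: yc = (Q^2 / (g b^2))^(1/3).
# Q and b are invented values, not data from the studied torrent.
Q, b = 50.0, 8.0
yc = (Q**2 / (9.81 * b**2)) ** (1.0 / 3.0)
y_scan, _ = critical_depth_by_scan(Q, b)
```

In the paper the same minimization is done over surveyed cross sections of an irregular mountain channel rather than over depth in a fixed rectangular section, but the governing idea, that critical flow occurs where specific energy is at a minimum, is identical.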

  16. Fractional Programming for Communication Systems—Part I: Power Control and Beamforming

    NASA Astrophysics Data System (ADS)

    Shen, Kaiming; Yu, Wei

    2018-05-01

    This two-part paper explores the use of fractional programming (FP) in the design and optimization of communication systems. Part I of this paper focuses on FP theory and on solving continuous problems. The main theoretical contribution is a novel quadratic transform technique for tackling the multiple-ratio concave-convex FP problem, in contrast to conventional FP techniques that can mostly deal only with the single-ratio or the max-min-ratio case. Multiple-ratio FP problems are important for the optimization of communication networks, because system-level design often involves multiple signal-to-interference-plus-noise ratio terms. This paper considers the applications of FP to solving continuous problems in communication system design, particularly for power control, beamforming, and energy efficiency maximization. These application cases illustrate that the proposed quadratic transform can greatly facilitate the optimization involving ratios by recasting the original nonconvex problem as a sequence of convex problems. This FP-based problem reformulation gives rise to an efficient iterative optimization algorithm with provable convergence to a stationary point. The paper further demonstrates close connections between the proposed FP approach and other well-known algorithms in the literature, such as the fixed-point iteration and the weighted minimum mean-square-error beamforming. The optimization of discrete problems is discussed in Part II of this paper.
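
    The quadratic transform at the heart of Part I can be illustrated on a toy single-ratio problem; the paper's setting is multiple ratios over beamforming vectors, so the scalar instance and the grid-search inner solver below are deliberate simplifications:

```python
import math

def quadratic_transform_max(A, B, lo, hi, iters=30, grid=30001):
    """Maximize A(x)/B(x) over [lo, hi] (A >= 0, B > 0) with the quadratic
    transform: alternately fix y = sqrt(A(x))/B(x), then maximize the
    concave surrogate 2*y*sqrt(A(x)) - y**2 * B(x) over x."""
    x = hi  # arbitrary feasible starting point
    for _ in range(iters):
        y = math.sqrt(A(x)) / B(x)
        # Grid search stands in for the convex solver a real design would use.
        best_x, best_g = x, -float("inf")
        for i in range(grid):
            xi = lo + (hi - lo) * i / (grid - 1)
            g = 2.0 * y * math.sqrt(A(xi)) - y * y * B(xi)
            if g > best_g:
                best_x, best_g = xi, g
        x = best_x
    return x

# Toy single-ratio instance with an interior optimum: max x / (x^2 + 1)
# on [0.01, 3]; the stationary point is x = 1.
x_star = quadratic_transform_max(lambda x: x, lambda x: x * x + 1.0, 0.01, 3.0)
```

Each surrogate maximization is followed by a closed-form update of the auxiliary variable y, and the objective is non-decreasing across iterations, which is the mechanism behind the convergence guarantee mentioned in the abstract.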

  17. Solving the Sea-Level Equation in an Explicit Time Differencing Scheme

    NASA Astrophysics Data System (ADS)

    Klemann, V.; Hagedoorn, J. M.; Thomas, M.

    2016-12-01

    In preparation of coupling the solid-earth to an ice-sheet compartment in an earth-system model, the dependency of initial topography on the ice-sheet history and viscosity structure has to be analysed. In this study, we discuss this dependency and how it influences the reconstruction of former sea level during a glacial cycle. The modelling is based on the VILMA code in which the field equations are solved in the time domain applying an explicit time-differencing scheme. The sea-level equation is solved simultaneously in the same explicit scheme as the viscoelastic field equations (Hagedoorn et al., 2007). With the assumption of only small changes, we neglect the iterative solution at each time step as suggested by e.g. Kendall et al. (2005). Nevertheless, the prediction of the initial paleo topography in case of moving coastlines remains to be iterated by repeated integration of the whole load history. The sensitivity study sketched at the beginning is accordingly motivated by the question of whether the iteration of the paleo topography can be replaced by a predefined one. This study is part of the German paleoclimate modelling initiative PalMod. Lit: Hagedoorn JM, Wolf D, Martinec Z, 2007. An estimate of global mean sea-level rise inferred from tide-gauge measurements using glacial-isostatic models consistent with the relative sea-level record. Pure appl. Geophys. 164: 791-818, doi:10.1007/s00024-007-0186-7. Kendall RA, Mitrovica JX, Milne GA, 2005. On post-glacial sea level - II. Numerical formulation and comparative results on spherically symmetric models. Geophys. J. Int., 161: 679-706, doi:10.1111/j.1365-246X.2005.02553.x

  18. Metallographic autopsies of full-scale ITER prototype cable-in-conduit conductors after full cyclic testing in SULTAN: III. The importance of strand surface roughness in long twist pitch conductors

    DOE PAGES

    Sanabria, Charlie; Lee, Peter J.; Starch, William; ...

    2016-05-31

    As part of the ITER conductor qualification process, 3 m long Cable-in-Conduit Conductors (CICCs) were tested at the SULTAN facility under conditions simulating ITER operation so as to establish the current sharing temperature, Tcs, as a function of multiple full Lorentz force loading cycles. After a comprehensive evaluation of both the Toroidal Field (TF) and the Central Solenoid (CS) conductors, it was found that Tcs degradation was common in long twist pitch TF conductors while short twist pitch CS conductors showed some Tcs increase. However, one kind of TF conductor containing superconducting strand fabricated by the Bochvar Institute of Inorganic Materials (VNIINM) avoided Tcs degradation despite having long twist pitch. In our earlier metallographic autopsies of long and short twist pitch CS conductors, we observed a substantially greater transverse strand movement under Lorentz force loading for long twist pitch conductors, while short twist pitch conductors had negligible transverse movement. With help from the literature, we concluded that the transverse movement was not the source of Tcs degradation but rather an increase of the compressive strain in the Nb3Sn filaments possibly induced by longitudinal movement of the wires. Like all TF conductors this TF VNIINM conductor showed large transverse motions under Lorentz force loading, but Tcs actually increased, as in all short twist pitch CS conductors. We here propose that the high surface roughness of the VNIINM strand may be responsible for the suppression of the compressive strain enhancement (characteristic of long twist pitch conductors). Furthermore, it appears that increasing strand surface roughness could improve the performance of long twist pitch CICCs.

  19. Marky: a tool supporting annotation consistency in multi-user and iterative document annotation projects.

    PubMed

    Pérez-Pérez, Martín; Glez-Peña, Daniel; Fdez-Riverola, Florentino; Lourenço, Anália

    2015-02-01

    Document annotation is a key task in the development of Text Mining methods and applications. High quality annotated corpora are invaluable, but their preparation requires a considerable amount of resources and time. Although the existing annotation tools offer good user interaction interfaces to domain experts, project management and quality control abilities are still limited. Therefore, the current work introduces Marky, a new Web-based document annotation tool equipped to manage multi-user and iterative projects, and to evaluate annotation quality throughout the project life cycle. At the core, Marky is a Web application based on the open source CakePHP framework. User interface relies on HTML5 and CSS3 technologies. Rangy library assists in browser-independent implementation of common DOM range and selection tasks, and Ajax and JQuery technologies are used to enhance user-system interaction. Marky grants solid management of inter- and intra-annotator work. Most notably, its annotation tracking system supports systematic and on-demand agreement analysis and annotation amendment. Each annotator may work over documents as usual, but all the annotations made are saved by the tracking system and may be further compared. So, the project administrator is able to evaluate annotation consistency among annotators and across rounds of annotation, while annotators are able to reject or amend subsets of annotations made in previous rounds. As a side effect, the tracking system minimises resource and time consumption. Marky is a novel environment for managing multi-user and iterative document annotation projects. Compared to other tools, Marky offers a similar visually intuitive annotation experience while providing unique means to minimise annotation effort and enforce annotation quality, and therefore corpus consistency. Marky is freely available for non-commercial use at http://sing.ei.uvigo.es/marky. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
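
    The agreement analysis that Marky's tracking system supports can be sketched with a standard chance-corrected measure (Cohen's kappa); this is a generic illustration, not Marky's own code, and the entity labels are invented:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if each annotator labeled at random with their own
    # marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    cats = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in cats)
    return (observed - expected) / (1.0 - expected)

# Invented labels from two annotators over six text spans.
a = ["GENE", "GENE", "DRUG", "NONE", "DRUG", "GENE"]
b = ["GENE", "DRUG", "DRUG", "NONE", "DRUG", "GENE"]
kappa = cohens_kappa(a, b)
```

Running this comparison per annotation round, as the tracking system allows, is what lets a project administrator see whether consistency improves as annotators amend earlier rounds.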

  20. A Model of Supervisor Decision-Making in the Accommodation of Workers with Low Back Pain.

    PubMed

    Williams-Whitt, Kelly; Kristman, Vicki; Shaw, William S; Soklaridis, Sophie; Reguly, Paula

    2016-09-01

    Purpose To explore supervisors' perspectives and decision-making processes in the accommodation of back injured workers. Methods Twenty-three semi-structured, in-depth interviews were conducted with supervisors from eleven Canadian organizations about their role in providing job accommodations. Supervisors were identified through an on-line survey and interviews were recorded, transcribed and entered into NVivo software. The initial analyses identified common units of meaning, which were used to develop a coding guide. Interviews were coded, and a model of supervisor decision-making was developed based on the themes, categories and connecting ideas identified in the data. Results The decision-making model includes a process element that is described as iterative "trial and error" decision-making. Medical restrictions are compared to job demands, employee abilities and available alternatives. A feasible modification is identified through brainstorming and then implemented by the supervisor. Resources used for brainstorming include information, supervisor experience and autonomy, and organizational supports. The model also incorporates the experience of accommodation as a job demand that causes strain for the supervisor. Accommodation demands affect the supervisor's attitude, brainstorming and monitoring effort, and communication with returning employees. Resources and demands have a combined effect on accommodation decision complexity, which in turn affects the quality of the accommodation option selected. If the employee is unable to complete the tasks or is reinjured during the accommodation, the decision cycle repeats. More frequent iteration through the trial and error process reduces the likelihood of return to work success. Conclusion A series of propositions is developed to illustrate the relationships among categories in the model. 
The model and propositions show: (a) the iterative, problem solving nature of the RTW process; (b) decision resources necessary for accommodation planning, and (c) the impact accommodation demands may have on supervisors and RTW quality.

  1. Metallographic autopsies of full-scale ITER prototype cable-in-conduit conductors after full cyclic testing in SULTAN: III. The importance of strand surface roughness in long twist pitch conductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanabria, Charlie; Lee, Peter J.; Starch, William

    As part of the ITER conductor qualification process, 3 m long Cable-in-Conduit Conductors (CICCs) were tested at the SULTAN facility under conditions simulating ITER operation so as to establish the current sharing temperature, Tcs, as a function of multiple full Lorentz force loading cycles. After a comprehensive evaluation of both the Toroidal Field (TF) and the Central Solenoid (CS) conductors, it was found that Tcs degradation was common in long twist pitch TF conductors while short twist pitch CS conductors showed some Tcs increase. However, one kind of TF conductor containing superconducting strand fabricated by the Bochvar Institute of Inorganic Materials (VNIINM) avoided Tcs degradation despite having long twist pitch. In our earlier metallographic autopsies of long and short twist pitch CS conductors, we observed a substantially greater transverse strand movement under Lorentz force loading for long twist pitch conductors, while short twist pitch conductors had negligible transverse movement. With help from the literature, we concluded that the transverse movement was not the source of Tcs degradation but rather an increase of the compressive strain in the Nb3Sn filaments possibly induced by longitudinal movement of the wires. Like all TF conductors this TF VNIINM conductor showed large transverse motions under Lorentz force loading, but Tcs actually increased, as in all short twist pitch CS conductors. We here propose that the high surface roughness of the VNIINM strand may be responsible for the suppression of the compressive strain enhancement (characteristic of long twist pitch conductors). Furthermore, it appears that increasing strand surface roughness could improve the performance of long twist pitch CICCs.

  2. PLAN2D - A PROGRAM FOR ELASTO-PLASTIC ANALYSIS OF PLANAR FRAMES

    NASA Technical Reports Server (NTRS)

    Lawrence, C.

    1994-01-01

    PLAN2D is a FORTRAN computer program for the plastic analysis of planar rigid frame structures. Given a structure and loading pattern as input, PLAN2D calculates the ultimate load that the structure can sustain before collapse. Element moments and plastic hinge rotations are calculated for the ultimate load. The locations of hinges required for a collapse mechanism to form are also determined. The program proceeds in an iterative series of linear elastic analyses. After each iteration the resulting elastic moments in each member are compared to the reserve plastic moment capacity of that member. The member or members that have moments closest to their reserve capacity will determine the minimum load factor and the site where the next hinge is to be inserted. Next, hinges are inserted and the structural stiffness matrix is reformulated. This cycle is repeated until the structure becomes unstable. At this point the ultimate collapse load is calculated by accumulating the minimum load factors from each previous iteration and multiplying them by the original input loads. PLAN2D is based on the program STAN, originally written by Dr. E.L. Wilson at U.C. Berkeley. PLAN2D has several limitations: 1) Although PLAN2D will detect unloading of hinges, it does not contain the capability to remove hinges; 2) PLAN2D does not allow the user to input different positive and negative moment capacities; and 3) PLAN2D does not consider the interaction between axial load and plastic moment capacity. Axial yielding and buckling are ignored, as is the reduction in moment capacity due to axial load. PLAN2D is written in FORTRAN and is machine independent. It has been tested on an IBM PC and a DEC MicroVAX. The program was developed in 1988.
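
    The iterative load-factor accumulation described above can be sketched for a textbook case, a propped cantilever with a central point load. The unit-load moment coefficients below are standard beam-theory values for that case, not PLAN2D output, and the two-stage hinge sequence is hard-coded rather than obtained from a stiffness-matrix reformulation:

```python
def collapse_load(stages, capacity):
    """PLAN2D-style accumulation: at each stage, scale the unit-load elastic
    moments until the most-stressed section reaches its remaining plastic
    capacity, insert a hinge there, and continue with the next (softer)
    structure. Returns the accumulated collapse load."""
    moments = {sec: 0.0 for sec in capacity}   # accumulated bending moments
    total = 0.0                                # accumulated collapse load
    for unit_moments in stages:                # one entry per hinge insertion
        factor = min((capacity[s] - moments[s]) / m
                     for s, m in unit_moments.items() if m > 0)
        for s, m in unit_moments.items():
            moments[s] += factor * m
        total += factor
    return total

# Propped cantilever, span L, central point load P; plastic moment Mp.
# Stage 1 (elastic): moments per unit load are 3L/16 at the fixed end and
# 5L/32 at midspan. Stage 2 (hinge at the fixed end): the beam acts simply
# supported, so the added moment per unit load is L/4 at midspan.
L, Mp = 4.0, 100.0
stages = [{"support": 3 * L / 16, "midspan": 5 * L / 32},
          {"midspan": L / 4}]
P_collapse = collapse_load(stages, {"support": Mp, "midspan": Mp})
```

For this case plastic theory gives the collapse load 6Mp/L, which the two-stage accumulation reproduces: the first hinge forms at the fixed end at 16Mp/(3L), and the remaining midspan reserve is exhausted after a further 2Mp/(3L).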

  3. Lecture Notes on Multigrid Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vassilevski, P S

    The Lecture Notes are primarily based on a sequence of lectures given by the author while he was a Fulbright scholar at 'St. Kliment Ohridski' University of Sofia, Sofia, Bulgaria during the winter semester of the 2009-2010 academic year. The notes are a somewhat expanded version of the actual one-semester class he taught there. The material covered is a slightly modified and adapted version of similar topics covered in the author's monograph 'Multilevel Block-Factorization Preconditioners' published in 2008 by Springer. The author tried to keep the notes as self-contained as possible. That is why the lecture notes begin with some basic introductory matrix-vector linear algebra and numerical PDE (finite element) facts, emphasizing the relations between functions in finite dimensional spaces and their coefficient vectors and respective norms. Then, some additional facts on the implementation of finite elements based on relation tables using the popular compressed sparse row (CSR) format are given. Also, typical condition number estimates of stiffness and mass matrices and the global matrix assembly from local element matrices are given as well. Finally, some basic introductory facts about stationary iterative methods, such as Gauss-Seidel and its symmetrized version, are presented. The introductory material ends with the smoothing property of the classical iterative methods and the main definition of two-grid iterative methods. From here on, the second part of the notes begins, which deals with the various aspects of the principal TG and the numerous versions of the MG cycles. At the end, in part III, we briefly introduce algebraic versions of MG referred to as AMG, focusing on classes of AMG specialized for finite element matrices.
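
    The smoothing property mentioned at the end of the introductory material can be demonstrated numerically: a few Gauss-Seidel sweeps on a 1D Poisson problem wipe out a high-frequency error mode while barely touching a smooth one. This is a standard illustration, not code from the notes:

```python
import math

def gauss_seidel_sweep(u, f, h):
    """One Gauss-Seidel sweep for -u'' = f on a uniform grid with
    homogeneous Dirichlet boundary conditions (u[0], u[-1] fixed)."""
    for i in range(1, len(u) - 1):
        u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

n = 64
h = 1.0 / n
f = [0.0] * (n + 1)  # homogeneous problem, so u itself is the error
smooth = [math.sin(math.pi * i * h) for i in range(n + 1)]       # k = 1 mode
rough = [math.sin(31 * math.pi * i * h) for i in range(n + 1)]   # k = 31 mode
for _ in range(5):
    gauss_seidel_sweep(smooth, f, h)
    gauss_seidel_sweep(rough, f, h)
smooth_left = max(abs(v) for v in smooth)  # barely damped
rough_left = max(abs(v) for v in rough)    # strongly damped
```

The surviving smooth error is exactly what the coarse-grid correction of a two-grid (and, recursively, multigrid) cycle is designed to remove.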

  4. Direct Iterative Nonlinear Inversion by Multi-frequency T-matrix Completion

    NASA Astrophysics Data System (ADS)

    Jakobsen, M.; Wu, R. S.

    2016-12-01

    Researchers in the mathematical physics community have recently proposed a conceptually new method for solving nonlinear inverse scattering problems (like FWI) which is inspired by the theory of nonlocality of physical interactions. The conceptually new method, which may be referred to as the T-matrix completion method, is very interesting since it is not based on linearization at any stage. Also, there are no gradient vectors or (inverse) Hessian matrices to calculate. However, the convergence radius of this promising T-matrix completion method is seriously restricted by its use of single-frequency scattering data only. In this study, we have developed a modified version of the T-matrix completion method which we believe is more suitable for applications to nonlinear inverse scattering problems in (exploration) seismology, because it makes use of multi-frequency data. Essentially, we have simplified the single-frequency T-matrix completion method of Levinson and Markel and combined it with the standard sequential frequency inversion (multi-scale regularization) method. For each frequency, we first estimate the experimental T-matrix by using the Moore-Penrose pseudoinverse concept. Then this experimental T-matrix is used to initiate an iterative procedure for successive estimation of the scattering potential and the T-matrix, using the Lippmann-Schwinger equation for the nonlinear relation between these two quantities. The main physical requirements in the basic iterative cycle are that the T-matrix should be data-compatible and the scattering potential operator should be dominantly local, although a non-local scattering potential operator is allowed in the intermediate iterations. In our simplified T-matrix completion strategy, we ensure that the T-matrix updates are always data-compatible simply by adding a suitable correction term in the real-space coordinate representation.
The use of singular-value decomposition representations are not required in our formulation since we have developed an efficient domain decomposition method. The results of several numerical experiments for the SEG/EAGE salt model illustrate the importance of using multi-frequency data when performing frequency domain full waveform inversion in strongly scattering media via the new concept of T-matrix completion.

  5. Equilibrium cycle pin by pin transport depletion calculations with DeCART

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kochunas, B.; Downar, T.; Taiwo, T.

    As the Advanced Fuel Cycle Initiative (AFCI) program has matured it has become more important to utilize more advanced simulation methods. The work reported here was performed as part of the AFCI fellowship program to develop and demonstrate the capability of performing high fidelity equilibrium cycle calculations. As part of the work here, a new multi-cycle analysis capability was implemented in the DeCART code which included modifying the depletion modules to perform nuclide decay calculations, implementing an assembly shuffling pattern description, and modifying iteration schemes. During the work, stability issues were uncovered with respect to converging simultaneously the neutron flux, isotopics, and fluid density and temperature distributions in 3-D. Relaxation factors were implemented which considerably improved the stability of the convergence. To demonstrate the capability two core designs were utilized, a reference UOX core and a CORAIL core. Full core equilibrium cycle calculations were performed on both cores and the discharge isotopics were compared. From this comparison it was noted that the improved modeling capability was not drastically different in its prediction of the discharge isotopics when compared to 2-D single assembly or 2-D core models. For fissile isotopes such as U-235, Pu-239, and Pu-241 the relative differences were 1.91%, 1.88%, and 0.59%, respectively. While this difference may not seem large it translates to mass differences on the order of tens of grams per assembly, which may be significant for the purposes of accounting of special nuclear material. (authors)
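
    The stabilizing effect of relaxation factors on a coupled fixed-point iteration can be shown on a scalar caricature; the map g below is invented purely to make plain iteration diverge while the under-relaxed update converges, and has nothing to do with the actual DeCART physics:

```python
def fixed_point(g, x0, omega=1.0, iters=50):
    """Relaxed fixed-point iteration for x = g(x):
    x <- (1 - omega) * x + omega * g(x).
    omega = 1 recovers plain (unrelaxed) iteration."""
    x = x0
    for _ in range(iters):
        x = (1.0 - omega) * x + omega * g(x)
    return x

# Invented map with fixed point x* = 1 but |g'(x)| = 1.5 > 1, so plain
# iteration diverges while under-relaxation (omega = 0.5) contracts.
g = lambda x: -1.5 * x + 2.5
diverged = fixed_point(g, 0.0, omega=1.0, iters=20)
relaxed = fixed_point(g, 0.0, omega=0.5, iters=50)
```

The relaxed update has effective slope (1 - omega) + omega * g' = -0.25, so the error shrinks by a factor of four per pass; the same damping idea is what tames the simultaneous flux/isotopics/thermal-hydraulics update described in the abstract.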

  6. Difference of nitrogen-cycling microbes between shallow bay and deep-sea sediments in the South China Sea.

    PubMed

    Yu, Tiantian; Li, Meng; Niu, Mingyang; Fan, Xibei; Liang, Wenyue; Wang, Fengping

    2018-01-01

    In marine sediments, microorganisms are known to play important roles in nitrogen cycling; however, the composition and quantity of microbes taking part in each process of nitrogen cycling are currently unclear. In this study, two different types of marine sediment samples (shallow bay and deep-sea sediments) in the South China Sea (SCS) were selected to investigate the microbial community involved in nitrogen cycling. The abundance and composition of prokaryotes and seven key functional genes involved in five processes of the nitrogen cycle [nitrogen fixation, nitrification, denitrification, dissimilatory nitrate reduction to ammonium (DNRA), and anaerobic ammonia oxidation (anammox)] were presented. The results showed that a higher abundance of denitrifiers was detected in shallow bay sediments, while a higher abundance of microbes involved in ammonia oxidation, anammox, and DNRA was found in the deep-sea sediments. Moreover, phylogenetic differentiation of bacterial amoA, nirS, nosZ, and nrfA sequences between the two types of sediments was also presented, suggesting environmental selection of microbes with the same geochemical functions but varying physiological properties.

  7. Tailored and Integrated Web-Based Tools for Improving Psychosocial Outcomes of Cancer Patients: The DoTTI Development Framework

    PubMed Central

    Bryant, Jamie; Sanson-Fisher, Rob; Tzelepis, Flora; Henskens, Frans; Paul, Christine; Stevenson, William

    2014-01-01

    Background Effective communication with cancer patients and their families about their disease, treatment options, and possible outcomes may improve psychosocial outcomes. However, traditional approaches to providing information to patients, including verbal information and written booklets, have a number of shortcomings centered on their limited ability to meet patient preferences and literacy levels. New-generation Web-based technologies offer an innovative and pragmatic solution for overcoming these limitations by providing a platform for interactive information seeking, information sharing, and user-centered tailoring. Objective The primary goal of this paper is to discuss the advantages of comprehensive and iterative Web-based technologies for health information provision and propose a four-phase framework for the development of Web-based information tools. Methods The proposed framework draws on our experience of constructing a Web-based information tool for hematological cancer patients and their families. The framework is based on principles for the development and evaluation of complex interventions and draws on the Agile methodology of software programming that emphasizes collaboration and iteration throughout the development process. Results The DoTTI framework provides a model for a comprehensive and iterative approach to the development of Web-based informational tools for patients. The process involves 4 phases of development: (1) Design and development, (2) Testing early iterations, (3) Testing for effectiveness, and (4) Integration and implementation. At each step, stakeholders (including researchers, clinicians, consumers, and programmers) are engaged in consultations to review progress, provide feedback on versions of the Web-based tool, and based on feedback, determine the appropriate next steps in development. 
Conclusions This 4-phase framework is evidence-informed and consumer-centered and could be applied widely to develop Web-based programs for a diverse range of diseases. PMID:24641991

  8. Tailored and integrated Web-based tools for improving psychosocial outcomes of cancer patients: the DoTTI development framework.

    PubMed

    Smits, Rochelle; Bryant, Jamie; Sanson-Fisher, Rob; Tzelepis, Flora; Henskens, Frans; Paul, Christine; Stevenson, William

    2014-03-14

    Effective communication with cancer patients and their families about their disease, treatment options, and possible outcomes may improve psychosocial outcomes. However, traditional approaches to providing information to patients, including verbal information and written booklets, have a number of shortcomings centered on their limited ability to meet patient preferences and literacy levels. New-generation Web-based technologies offer an innovative and pragmatic solution for overcoming these limitations by providing a platform for interactive information seeking, information sharing, and user-centered tailoring. The primary goal of this paper is to discuss the advantages of comprehensive and iterative Web-based technologies for health information provision and propose a four-phase framework for the development of Web-based information tools. The proposed framework draws on our experience of constructing a Web-based information tool for hematological cancer patients and their families. The framework is based on principles for the development and evaluation of complex interventions and draws on the Agile methodology of software programming that emphasizes collaboration and iteration throughout the development process. The DoTTI framework provides a model for a comprehensive and iterative approach to the development of Web-based informational tools for patients. The process involves 4 phases of development: (1) Design and development, (2) Testing early iterations, (3) Testing for effectiveness, and (4) Integration and implementation. At each step, stakeholders (including researchers, clinicians, consumers, and programmers) are engaged in consultations to review progress, provide feedback on versions of the Web-based tool, and based on feedback, determine the appropriate next steps in development. This 4-phase framework is evidence-informed and consumer-centered and could be applied widely to develop Web-based programs for a diverse range of diseases.

  9. LSRN: A PARALLEL ITERATIVE SOLVER FOR STRONGLY OVER- OR UNDERDETERMINED SYSTEMS*

    PubMed Central

    Meng, Xiangrui; Saunders, Michael A.; Mahoney, Michael W.

    2014-01-01

    We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to min_{x ∈ ℝ^n} ‖Ax − b‖_2, where A ∈ ℝ^{m×n} with m ≫ n or m ≪ n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m, n)⌉ × min(m, n), where γ is moderately larger than 1, e.g., γ = 2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK’s DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster. PMID:25419094
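
    The preconditioning pipeline described here (random normal projection, a small SVD, then LSQR on the preconditioned system) can be sketched in a few lines of serial NumPy. This is a minimal sketch of the overdetermined case (m ≫ n, full column rank assumed), not the authors' parallel implementation; the function name is mine.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lsrn_overdetermined(A, b, gamma=2.0, seed=0):
    """Minimal serial sketch of the LSRN idea for m >> n, full column rank."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    s = int(np.ceil(gamma * n))                # sketch size ~ gamma * min(m, n)
    G = rng.standard_normal((s, m))            # random normal projection
    # SVD of the small s-by-n sketch G @ A yields the right preconditioner.
    _, sigma, Vt = np.linalg.svd(G @ A, full_matrices=False)
    N = Vt.T / sigma                           # N = V * diag(1/sigma)
    # A @ N is well conditioned, so LSQR needs a predictable number of steps.
    y = lsqr(A @ N, b, atol=1e-12, btol=1e-12)[0]
    return N @ y
```

    For the rank-deficient and underdetermined cases the paper obtains the min-length solution via the same projection idea; the sketch above omits those branches.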

  10. Conservative tightly-coupled simulations of stochastic multiscale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taverniers, Søren; Pigarov, Alexander Y.; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu

    2016-05-15

    Multiphysics problems often involve components whose macroscopic dynamics is driven by microscopic random fluctuations. The fidelity of simulations of such systems depends on their ability to propagate these random fluctuations throughout a computational domain, including subdomains represented by deterministic solvers. When the constituent processes take place in nonoverlapping subdomains, system behavior can be modeled via a domain-decomposition approach that couples separate components at the interfaces between these subdomains. Its coupling algorithm has to maintain a stable and efficient numerical time integration even at high noise strength. We propose a conservative domain-decomposition algorithm in which tight coupling is achieved by employing either Picard's or Newton's iterative method. Coupled diffusion equations, one of which has a Gaussian white-noise source term, provide a computational testbed for analysis of these two coupling strategies. Fully-converged (“implicit”) coupling with Newton's method typically outperforms its Picard counterpart, especially at high noise levels. This is because the number of Newton iterations scales linearly with the amplitude of the Gaussian noise, while the number of Picard iterations can scale superlinearly. At large time intervals between two subsequent inter-solver communications, the solution error for single-iteration (“explicit”) Picard's coupling can be several orders of magnitude higher than that for implicit coupling. Increasing the explicit coupling's communication frequency reduces this difference, but the resulting increase in computational cost can make it less efficient than implicit coupling at similar levels of solution error, depending on the communication frequency of the latter and the noise strength. This trend carries over into higher dimensions, although at high noise strength explicit coupling may be the only computationally viable option.
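
    The gap between Picard (fixed-point) and Newton coupling can be seen on a toy two-unknown interface system; the residual below is invented for illustration and is not the paper's stochastic diffusion testbed.

```python
import numpy as np

def F(z):
    """Toy interface residual standing in for two coupled subdomain solvers."""
    u, v = z
    return np.array([u - np.tanh(v) - 1.0,
                     v - 0.5 * np.sin(u)])

def J(z):
    """Analytic Jacobian of F, used by Newton's method."""
    u, v = z
    return np.array([[1.0, -1.0 / np.cosh(v) ** 2],
                     [-0.5 * np.cos(u), 1.0]])

def picard(z, tol=1e-12, max_it=200):
    """Fixed-point (Picard) sweeps: each solver updates from the other's state."""
    for k in range(1, max_it + 1):
        u, v = z
        z = np.array([np.tanh(v) + 1.0, 0.5 * np.sin(u)])
        if np.linalg.norm(F(z)) < tol:
            return z, k
    return z, max_it

def newton(z, tol=1e-12, max_it=50):
    """Newton iterations on the coupled residual (tight, 'implicit' coupling)."""
    for k in range(1, max_it + 1):
        z = z - np.linalg.solve(J(z), F(z))
        if np.linalg.norm(F(z)) < tol:
            return z, k
    return z, max_it
```

    Newton typically converges in a handful of iterations where Picard needs tens, mirroring the scaling difference the abstract reports.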

  11. On Green's function retrieval by iterative substitution of the coupled Marchenko equations

    NASA Astrophysics Data System (ADS)

    van der Neut, Joost; Vasconcelos, Ivan; Wapenaar, Kees

    2015-11-01

    Iterative substitution of the coupled Marchenko equations is a novel methodology to retrieve the Green's functions from a source or receiver array at an acquisition surface to an arbitrary location in an acoustic medium. The methodology requires as input the single-sided reflection response at the acquisition surface and an initial focusing function, being the time-reversed direct wavefield from the acquisition surface to a specified location in the subsurface. We express the iterative scheme that is applied by this methodology explicitly as the successive actions of various linear operators, acting on an initial focusing function. These operators involve multidimensional crosscorrelations with the reflection data and truncations in time. We offer physical interpretations of the multidimensional crosscorrelations by subtracting traveltimes along common ray paths at the stationary points of the underlying integrals. This provides a clear understanding of how individual events are retrieved by the scheme. Our interpretation also exposes some of the scheme's limitations in terms of what can be retrieved in case of a finite recording aperture. Green's function retrieval is only successful if the relevant stationary points are sampled. As a consequence, internal multiples can only be retrieved at a subsurface location with a particular ray parameter if this location is illuminated by the direct wavefield with this specific ray parameter. Several assumptions are required to solve the Marchenko equations. We show that these assumptions are not always satisfied in arbitrary heterogeneous media, which can result in incomplete Green's function retrieval and the emergence of artefacts. Despite these limitations, accurate Green's functions can often be retrieved by the iterative scheme, which is highly relevant for seismic imaging and inversion of internal multiple reflections.
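
    The scheme's structure (successive action of fixed linear operators on an initial function) can be mimicked by the generic substitution f_{k+1} = f_0 + M f_k, which converges to the fixed point (I − M)^{-1} f_0 whenever M is contractive. Here M is just a scaled random matrix standing in for the crosscorrelation-and-truncation operators; it models none of the wave physics.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# M stands in for the combined crosscorrelation/truncation operator, scaled so
# its spectral radius is below 1 and the iterative substitution converges.
M = rng.standard_normal((n, n))
M *= 0.9 / np.abs(np.linalg.eigvals(M)).max()
f0 = rng.standard_normal(n)            # "initial focusing function"

f = f0.copy()
for _ in range(500):
    f = f0 + M @ f                     # one sweep of iterative substitution

f_exact = np.linalg.solve(np.eye(n) - M, f0)   # fixed point (I - M)^-1 f0
```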

  12. Final Report on ITER Task Agreement 81-08

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard L. Moore

    As part of an ITER Implementing Task Agreement (ITA) between the ITER US Participant Team (PT) and the ITER International Team (IT), the INL Fusion Safety Program was tasked to provide the ITER IT with upgrades to the fusion version of the MELCOR 1.8.5 code, including a beryllium dust oxidation model. The purpose of this model is to allow the ITER IT to investigate hydrogen production from beryllium dust layers on hot surfaces inside the ITER vacuum vessel (VV) during in-vessel loss-of-cooling accidents (LOCAs). Also included in the ITER ITA was a task to construct a RELAP5/ATHENA model of the ITER divertor cooling loop to model the draining of the loop during a large ex-vessel pipe break followed by an in-vessel divertor break, and to compare the results to a similar MELCOR model developed by the ITER IT. This report, which is the final report for this agreement, documents the completion of the work scope under this ITER TA, designated as TA 81-08.

  13. ITER Construction—Plant System Integration

    NASA Astrophysics Data System (ADS)

    Tada, E.; Matsuda, S.

    2009-02-01

    This brief paper introduces how ITER will be built through international collaboration. The ITER Organization plays a central role in constructing ITER and leading it into operation. Since most of the ITER components are to be provided in kind by the member countries, integrated project management must be scoped in advance of the actual work; this includes design, procurement, system assembly, testing, licensing and commissioning of ITER.

  14. A review of at rest droplet growth equations for condensing nitrogen in transonic cryogenic wind tunnels

    NASA Technical Reports Server (NTRS)

    Hall, R. M.; Kramer, S. A.

    1979-01-01

    Droplet growth equations are reviewed in the free-molecular, transition, and continuum flow regimes with the assumption that the droplets are at rest with respect to the vapor. As comparison calculations showed, it was important to use a growth equation designed for the flow regime of interest. Otherwise, a serious over-prediction of droplet growth may result. The growth equation by Gyarmathy appeared to be applicable throughout the flow regimes and involved no iteration. His expression also avoided the uncertainty associated with selecting a mass accommodation coefficient and consequently involved less uncertainty in specifying adjustable parameters than many of the other growth equations.

  15. Bringing values and deliberation to science communication

    PubMed Central

    Dietz, Thomas

    2013-01-01

    Decisions always involve both facts and values, whereas most science communication focuses only on facts. If science communication is intended to inform decisions, it must be competent with regard to both facts and values. Public participation inevitably involves both facts and values. Research on public participation suggests that linking scientific analysis to public deliberation in an iterative process can help decision making deal effectively with both facts and values. Thus, linked analysis and deliberation can be an effective tool for science communication. However, challenges remain in conducting such process at the national and global scales, in enhancing trust, and in reconciling diverse values. PMID:23940350

  16. Cell cycle nucleic acids, polypeptides and uses thereof

    DOEpatents

    Gordon-Kamm, William J [Urbandale, IA; Lowe, Keith S [Johnston, IA; Larkins, Brian A [Tucson, AZ; Dilkes, Brian R [Tucson, AZ; Sun, Yuejin [Westfield, IN

    2007-08-14

    The invention provides isolated nucleic acids and their encoded proteins that are involved in cell cycle regulation. The invention further provides recombinant expression cassettes, host cells, transgenic plants, and antibody compositions. The present invention provides methods and compositions relating to altering cell cycle protein content, cell cycle progression, cell number and/or composition of plants.

  17. The role of particular ticks developmental stages in the circulation of tick-borne pathogens in Central Europe. 4. Anaplasmataceae

    PubMed

    Karbowiak, Grzegorz; Biernat, Beata; Stańczak, Joanna; Werszko, Joanna; Wróblewski, Piotr; Szewczyk, Tomasz; Sytykiewicz, Hubert

    Under Central European conditions, two species of Anaplasmataceae have epidemiological significance: Candidatus Neoehrlichia mikurensis and Anaplasma phagocytophilum. The tick Ixodes ricinus is considered their main vector, with wild mammals as the animal reservoir. Transmission in ticks is transstadial; because a transovarial mode is lacking, circulation occurs mainly between immature ticks and their hosts, with the pathogen circulating primarily in the cycle: infected rodent → tick larva → nymph → mammal reservoir → tick larva. The tick stages able to infect humans effectively are nymphs and adult females; males do not participate in further transmission. A summary of the available data on the associations of different A. phagocytophilum strains with different hosts reveals at least a few distinct enzootic cycles involving the same tick species but different mammal hosts. At least three different epidemiological transmission cycles of A. phagocytophilum can be distinguished in Central Europe. The first involves strains pathogenic for humans and identical strains from horses, dogs, cats, wild boars, hedgehogs and possibly red foxes. The second involves deer, European bison and possibly domestic ruminants. The third contains strains from voles, shrews and possibly Apodemus mice. In Western Europe, voles might be involved in a separate enzootic cycle with Ixodes trianguliceps as the vector.

  18. A model of the regulatory network involved in the control of the cell cycle and cell differentiation in the Caenorhabditis elegans vulva.

    PubMed

    Weinstein, Nathan; Ortiz-Gutiérrez, Elizabeth; Muñoz, Stalin; Rosenblueth, David A; Álvarez-Buylla, Elena R; Mendoza, Luis

    2015-03-13

    There are recent experimental reports on the cross-regulation between molecules involved in the control of the cell cycle and the differentiation of the vulval precursor cells (VPCs) of Caenorhabditis elegans. Such discoveries provide novel clues on how the molecular mechanisms involved in the cell cycle and cell differentiation processes are coordinated during vulval development. Dynamic computational models are helpful to understand the integrated regulatory mechanisms affecting these cellular processes. Here we propose a simplified model of the regulatory network that includes sufficient molecules involved in the control of both the cell cycle and cell differentiation in the C. elegans vulva to recover their dynamic behavior. We first infer both the topology and the update rules of the cell cycle module from an expected time series. Next, we use a symbolic algorithmic approach to find which interactions must be included in the regulatory network. Finally, we use a continuous-time version of the update rules for the cell cycle module to validate the cyclic behavior of the network, as well as to rule out the presence of potential artifacts due to the synchronous updating of the discrete model. We analyze the dynamical behavior of the model for the wild type and several mutants, finding that most of the results are consistent with published experimental results. Our model shows that the regulation of Notch signaling by the cell cycle preserves the potential of the VPCs and the three vulval fates to differentiate and de-differentiate, allowing them to remain completely responsive to the concentration of LIN-3 and lateral signal in the extracellular microenvironment.
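
    The synchronous discrete updating and cyclic behaviour discussed above can be illustrated with a toy three-node Boolean network (a simple negative-feedback loop of my own devising, not the authors' vulval network), iterated until a cyclic attractor is found:

```python
from itertools import product

def step(state):
    """One synchronous update: a' = NOT c, b' = a, c' = b (toy feedback loop)."""
    a, b, c = state
    return (int(not c), a, b)

def attractor(state, max_steps=64):
    """Iterate synchronously until a state repeats; return the periodic part."""
    seen, trajectory = {}, []
    for _ in range(max_steps):
        if state in seen:
            return trajectory[seen[state]:]
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = step(state)
    raise RuntimeError("no attractor found")
```

    Starting from (0, 0, 0) this network settles into a six-state cycle, the discrete analogue of the cyclic behaviour the authors validate with a continuous-time version of their update rules.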

  19. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.

    2014-08-21

    In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are carried out during development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies and the practical application of the developed diagnostics on ITER will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry and tritium retention measurements, are discussed.

  20. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    NASA Astrophysics Data System (ADS)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.

    2014-08-01

    In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are carried out during development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies and the practical application of the developed diagnostics on ITER will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry and tritium retention measurements, are discussed.

  1. Hydrodynamics of suspensions of passive and active rigid particles: a rigid multiblob approach

    DOE PAGES

    Usabiaga, Florencio Balboa; Kallemov, Bakytzhan; Delmotte, Blaise; ...

    2016-01-12

    We develop a rigid multiblob method for numerically solving the mobility problem for suspensions of passive and active rigid particles of complex shape in Stokes flow in unconfined, partially confined, and fully confined geometries. As in a number of existing methods, we discretize rigid bodies using a collection of minimally resolved spherical blobs constrained to move as a rigid body, to arrive at a potentially large linear system of equations for the unknown Lagrange multipliers and rigid-body motions. Here we develop a block-diagonal preconditioner for this linear system and show that a standard Krylov solver converges in a modest number of iterations that is essentially independent of the number of particles. Key to the efficiency of the method is a technique for fast computation of the product of the blob-blob mobility matrix and a vector. For unbounded suspensions, we rely on existing analytical expressions for the Rotne-Prager-Yamakawa tensor combined with a fast multipole method (FMM) to obtain linear scaling in the number of particles. For suspensions sedimented against a single no-slip boundary, we use a direct summation on a graphical processing unit (GPU), which gives quadratic asymptotic scaling with the number of particles. For fully confined domains, such as periodic suspensions or suspensions confined in slit and square channels, we extend a recently developed rigid-body immersed boundary method by B. Kallemov, A. P. S. Bhalla, B. E. Griffith, and A. Donev (Commun. Appl. Math. Comput. Sci. 11 (2016), no. 1, 79-141) to suspensions of freely moving passive or active rigid particles at zero Reynolds number. We demonstrate that the iterative solver for the coupled fluid and rigid-body equations converges in a bounded number of iterations regardless of the system size. 
In our approach, each iteration only requires a few cycles of a geometric multigrid solver for the Poisson equation, and an application of the block-diagonal preconditioner, leading to linear scaling with the number of particles. We optimize a number of parameters in the iterative solvers and apply our method to a variety of benchmark problems to carefully assess the accuracy of the rigid multiblob approach as a function of the resolution. We also model the dynamics of colloidal particles studied in recent experiments, such as passive boomerangs in a slit channel, as well as a pair of non-Brownian active nanorods sedimented against a wall.
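
    The effect of a block-diagonal preconditioner on Krylov convergence can be sketched on a toy dense system whose strong diagonal blocks play the role of the per-body blocks; this is a generic illustration, not the blob-blob mobility matrix.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
nb, bs = 20, 5                        # 20 "bodies", 5 unknowns each
n = nb * bs

# Strong block-diagonal part plus weak coupling between bodies.
blocks = [4.0 * np.eye(bs) + 0.5 * rng.standard_normal((bs, bs)) for _ in range(nb)]
A = 0.05 * rng.standard_normal((n, n))
for i, Bi in enumerate(blocks):
    A[i*bs:(i+1)*bs, i*bs:(i+1)*bs] += Bi

# Block-diagonal preconditioner: invert each diagonal block independently.
inv_blocks = [np.linalg.inv(Bi) for Bi in blocks]
def apply_M(x):
    y = np.empty_like(x)
    for i, Bi in enumerate(inv_blocks):
        y[i*bs:(i+1)*bs] = Bi @ x[i*bs:(i+1)*bs]
    return y
M = LinearOperator((n, n), matvec=apply_M)

b = rng.standard_normal(n)
counts = {"plain": 0, "precond": 0}
x1, _ = gmres(A, b, callback=lambda r: counts.__setitem__("plain", counts["plain"] + 1),
              callback_type="pr_norm")
x2, _ = gmres(A, b, M=M, callback=lambda r: counts.__setitem__("precond", counts["precond"] + 1),
              callback_type="pr_norm")
```

    Because the preconditioned operator is close to the identity, the iteration count stays small as the number of blocks grows, which is the qualitative behaviour reported above for the rigid multiblob solver.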

  2. EDITORIAL: Safety aspects of fusion power plants

    NASA Astrophysics Data System (ADS)

    Kolbasov, B. N.

    2007-07-01

    This special issue of Nuclear Fusion contains 13 informative papers that were initially presented at the 8th IAEA Technical Meeting on Fusion Power Plant Safety held in Vienna, Austria, 10-13 July 2006. Following a recommendation from the International Fusion Research Council, the IAEA organizes Technical Meetings on Fusion Safety with the aim of bringing together experts to discuss ongoing work, share new ideas and outline general guidance and recommendations on different issues related to safety and environmental (S&E) aspects of fusion research and power facilities. Previous meetings in this series were held in Vienna, Austria (1980), Ispra, Italy (1983), Culham, UK (1986), Jackson Hole, USA (1989), Toronto, Canada (1993), Naka, Japan (1996) and Cannes, France (2000). Recognized progress in fusion research and technology over the last quarter of a century has raised awareness of the potential of fusion as a practically inexhaustible and clean source of energy. The decision to construct the International Thermonuclear Experimental Reactor (ITER) represents a landmark on the path to fusion power engineering. Ongoing activities to license ITER in France seek an adequate balance between technological and scientific deliverables and compliance with safety requirements. This is the first instance of licensing a representative fusion machine, and it will very likely shape how a common basis for establishing safety standards and policies for licensing future fusion power plants is developed. Now that ITER licensing activities are underway, it is becoming clear that the international fusion community should strengthen its efforts in designing the next generations of fusion power plants, both demonstration and commercial. Therefore, the 8th IAEA Technical Meeting on Fusion Safety focused on the safety aspects of power facilities. 
Some ITER-related safety issues were reported and discussed owing to their potential importance for the fusion power plant research programmes. The objective of this Technical Meeting was to examine in an integrated way all the safety aspects anticipated to be relevant to the first fusion power plant prototype expected to become operational by the middle of the century, leading to the first generation of economically viable fusion power plants with attractive S&E features. After screening by guest editors and consideration by referees, 13 (out of 28) papers were accepted for publication. They are devoted to the following safety topics: power plant safety; fusion specific operational safety approaches; test blanket modules; accident analysis; tritium safety and inventories; decommissioning and waste. The paper `Main safety issues at the transition from ITER to fusion power plants' by W. Gulden et al (EU) highlights the differences between ITER and future fusion power plants with magnetic confinement (off-site dose acceptance criteria, consequences of accidents inside and outside the design basis, occupational radiation exposure, and waste management, including recycling and/or final disposal in repositories) on the basis of the most recent European fusion power plant conceptual study. Ongoing S&E studies within the US inertial fusion energy (IFE) community are focusing on two design concepts. These are the high average power laser (HAPL) programme for development of a dry-wall, laser-driven IFE power plant, and the Z-pinch IFE programme for the production of an economically-attractive power plant using high-yield Z-pinch-driven targets. The main safety issues related to these programmes are reviewed in the paper `Status of IFE safety and environmental activities in the US' by S. Reyes et al (USA). The authors propose future directions of research in the IFE S&E area. 
In the paper `Recent accomplishments and future directions in the US Fusion Safety & Environmental Program' D. Petti et al (USA) state that the US fusion programme has long recognized that the S&E potential of fusion can be attained by prudent materials selection, judicious design choices, and integration of safety requirements into the design of the facility. To achieve this goal, S&E research is focused on understanding the behaviour of the largest sources of radioactive and hazardous materials in a fusion facility, understanding how energy sources in a fusion facility could mobilize those materials, developing integrated state-of-the-art S&E computer codes and risk tools for safety assessment, and evaluating and improving fusion facility design in terms of accident safety, worker safety, and waste disposal. There are three papers considering safety issues of the tritium-producing test blanket modules (TBM) to be installed in ITER. These modules represent different concepts of demonstration fusion power facilities (DEMO). L. Boccaccini et al (Germany) analyse the possibility of jeopardizing ITER safety under specific accidents in the European helium-cooled pebble-bed TBM, e.g. pressurization of the vacuum vessel (VV), hydrogen production from the Be-steam reaction, and a possible interconnection between the port cell and VV causing air ingress. A safety analysis is also presented by Z. Chen et al (China) for a Chinese TBM with a helium-cooled solid breeder to be tested in ITER. Radiological inventories, afterheat, waste disposal ratings, electromagnetic characteristics, LOCA and tritium safety management are considered. An overview of a preliminary safety analysis performed for a US-proposed TBM is presented by B. Merrill et al (USA). This DEMO-relevant dual-coolant liquid lead-lithium TBM has been explored both in the USA and the EU. 
T. Pinna et al (Italy) summarize the six-year development of a failure rate database for fusion-specific components on the basis of data from operating experience gained in various fusion laboratories. The activity began in 2001 with the study of the Joint European Torus vacuum and active gas handling systems. Two years later the neutral beam injectors and the power supply systems were considered. This year the ion cyclotron resonant heating system is under evaluation. I. Cristescu et al (Germany) present the paper `Tritium inventories and tritium safety design principles for the fuel cycle of ITER'. Cristescu and her colleagues developed a dynamic mathematical model (TRIMO) for tritium inventory evaluation within each system of the ITER fuel cycle in various operational scenarios. TRIMO is used as a tool for trade-off studies within the fuel cycle systems with the final goal of minimizing the global tritium inventory. M. Matsuyama et al (Japan) describe a new technique for in situ quantitative measurement of high-level tritium inventory and its distribution in the VV and tritium systems of ITER and future fusion reactors. This technique is based on the utilization of x-rays induced by beta-rays emitted from tritium species. It was applied to high-level tritium in three physical states: gaseous, aqueous and solid tritium retained on/in various materials. Finally, there are four papers devoted to safety issues in fusion reactor decommissioning and waste management. A paper by R. Pampin et al (UK) provides the revised radioactive waste analysis of two models in the PPCS. Another paper, by M. Zucchetti (Italy), S.A. Bartenev (Russia) et al, describes a radiochemical extraction technology, developed and tested under stationary laboratory conditions, for purifying V-Cr-Ti alloy components of activation products down to a dose rate of 10 µSv/h, allowing their clearance or hands-on recycling. L. El-Guebaly (USA) and her colleagues submitted two papers. 
In the first paper she optimistically considers the possibility of replacing the disposal of fusion power reactor waste with recycling and clearance. Her second paper considers the implications of new clearance guidelines for nuclear applications, particularly for slightly irradiated fusion materials.

  3. 77 FR 137 - Applications and Amendments to Facility Operating Licenses Involving Proposed No Significant...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-03

    ... the LSCS, Cycle 15, operation. Cycle 15 will be the first cycle of operation with a mixed core... methodologies. The analyses for LSCS, Unit 1, Cycle 15 have concluded that a two-loop MCPR SL of >= 1.13, based... accident from any accident previously evaluated? Response: No. The GNF2 fuel to be used in Cycle 15 is of a...

  4. Ultra-low-dose computed tomographic angiography with model-based iterative reconstruction compared with standard-dose imaging after endovascular aneurysm repair: a prospective pilot study.

    PubMed

    Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K

    2014-12-01

    An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). 
Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.
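
    Vendor model-based iterative reconstruction algorithms are proprietary, but the underlying iterative idea can be illustrated with a generic Landweber iteration x ← x + λ Aᵀ(b − Ax) on a toy linear forward model; the projector below is a random stand-in, not a CT system matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
npix, nmeas = 64, 96
A = rng.standard_normal((nmeas, npix)) / np.sqrt(nmeas)   # toy forward projector
x_true = rng.standard_normal(npix)                        # "image" to recover
b = A @ x_true                                            # noise-free measurements

lam = 1.0 / np.linalg.norm(A, 2) ** 2    # step size < 2/sigma_max^2 for convergence
x = np.zeros(npix)
for _ in range(5000):
    x = x + lam * A.T @ (b - A @ x)      # gradient step on 0.5*||Ax - b||^2
```

    Real model-based methods add accurate system and noise models plus regularization, which is what buys the dose reduction described in the study.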

  5. Nuclear modules for space electric propulsion

    NASA Technical Reports Server (NTRS)

    Difilippo, F. C.

    1998-01-01

    Analysis of interplanetary cargo and piloted missions requires calculations of the performances and masses of subsystems to be integrated in a final design. In a preliminary and scoping stage the designer needs to evaluate options iteratively by using fast computer simulations. The Oak Ridge National Laboratory (ORNL) has been involved in the development of models and calculational procedures for the analysis (neutronic and thermal hydraulic) of power sources for nuclear electric propulsion. The nuclear modules will be integrated into the whole simulation of the nuclear electric propulsion system. The vehicles use either a Brayton direct-conversion cycle, using the heated helium from a NERVA-type reactor, or a potassium Rankine cycle, with the working fluid heated on the secondary side of a heat exchanger and lithium on the primary side coming from a fast reactor. Given a set of input conditions, the codes calculate composition, dimensions, volumes, and masses of the core, reflector, control system, pressure vessel, neutron and gamma shields, as well as the thermal hydraulic conditions of the coolant, clad and fuel. Input conditions are power, core life, pressure and temperature of the coolant at the inlet of the core, either the temperature of the coolant at the outlet of the core or the coolant mass flow, and the fluences and integrated doses at the cargo area. Using state-of-the-art neutron cross sections and transport codes, a database was created for the neutronic performance of both reactor designs. The free parameters of the models are the moderator/fuel mass ratio for the NERVA reactor and the enrichment and the pitch of the lattice for the fast reactor. Reactivity and energy balance equations are simultaneously solved to find the reactor design. Thermal-hydraulic conditions are calculated by solving the one-dimensional versions of the equations of conservation of mass, energy, and momentum with compressible flow.

  6. Stabilization of business cycles of finance agents using nonlinear optimal control

    NASA Astrophysics Data System (ADS)

    Rigatos, G.; Siano, P.; Ghosh, T.; Sarno, D.

    2017-11-01

    Stabilization of the business cycles of interconnected finance agents is performed with the use of a new nonlinear optimal control method. First, the dynamics of the interacting finance agents and of the associated business cycles is described by a model of coupled nonlinear oscillators. Next, this dynamic model undergoes approximate linearization around a temporary operating point, which is defined by the present value of the system's state vector and the last value of the control inputs vector that was exerted on it. The linearization procedure is based on Taylor series expansion of the dynamic model and on the computation of Jacobian matrices. The modelling error, which is due to the truncation of higher-order terms in the Taylor series expansion, is considered as a disturbance which is compensated by the robustness of the control loop. Next, for the linearized model of the interacting finance agents, an H-infinity feedback controller is designed. The computation of the feedback control gain requires the solution of an algebraic Riccati equation at each iteration of the control algorithm. Through Lyapunov stability analysis it is proven that the control scheme satisfies an H-infinity tracking performance criterion, which signifies elevated robustness against modelling uncertainty and external perturbations. Moreover, under moderate conditions the global asymptotic stability features of the control loop are proven.
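    A minimal numerical sketch of the per-iteration machinery described above, using a Van der Pol-type oscillator as a stand-in for the coupled business-cycle model: the Jacobian linearizes the dynamics at the current operating point, and the Riccati solution is obtained by integrating the Riccati ODE to steady state (a simple stand-in for an algebraic Riccati solver). The oscillator, the weights Q and R, and the integration settings are all assumptions for illustration.

```python
import numpy as np

def jacobian(x, mu=1.0):
    # Jacobian of dx1/dt = x2, dx2/dt = mu*(1 - x1^2)*x2 - x1 at state x.
    x1, x2 = x
    return np.array([[0.0, 1.0],
                     [-1.0 - 2.0 * mu * x1 * x2, mu * (1.0 - x1 ** 2)]])

def solve_care(A, B, Q, R, dt=1e-3, steps=100_000):
    # Integrate P' = A^T P + P A - P B R^-1 B^T P + Q forward from P = 0;
    # it converges to the stabilizing algebraic-Riccati solution.
    P = np.zeros_like(Q)
    Rinv = np.linalg.inv(R)
    for _ in range(steps):
        P = P + dt * (A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P + Q)
    return 0.5 * (P + P.T)                 # symmetrize away roundoff

A = jacobian(np.array([0.5, 0.0]))         # linearize at the current state
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
P = solve_care(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P             # feedback gain, u = -K x
```

    In the paper's scheme this linearization and Riccati step is repeated at every iteration of the control algorithm as the operating point moves.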

  7. Modeling Complex Dynamic Interactions of Nonlinear, Aeroelastic, Multistage, and Localization Phenomena in Turbine Engines

    DTIC Science & Technology

    2011-02-25

    fast method of predicting the number of iterations needed for converged results. A new hybrid technique is proposed to predict the convergence history...interchanging between the modes, whereas a smaller veering (or crossing) region shows fast mode switching. Then, the nonlinear vibration response of the...problems of interest involve dynamic (fast) crack propagation, then the nodes selected by the proposed approach at some time instant might not

  8. Advancing Detached-Eddy Simulation

    DTIC Science & Technology

    2007-01-01

    fluxes leads to an improvement in the stability of the solution. This matrix is solved iteratively using a symmetric Gauss-Seidel procedure. Newton's sub...model (TLM) is a zonal approach, proposed by Balaras and Benocci (5) and Balaras et al. (4). The method involved the solution of filtered Navier...LES mesh. The method was subsequently used by Cabot (6) and Diurno et al. (7) to obtain the solution of the flow over a backward facing step and by

  9. On Dynamics of Spinning Structures

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.; Ibrahim, A.

    2012-01-01

    This paper provides details of developments pertaining to vibration analysis of gyroscopic systems, involving a finite element structural discretization followed by the solution of the resulting matrix eigenvalue problem by a progressive, accelerated simultaneous iteration technique. Thus Coriolis, centrifugal and geometrical stiffness matrices are derived for shell and line elements, followed by the eigensolution details as well as the solution of representative problems that demonstrate the efficacy of the currently developed numerical procedures and tools.
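    The simultaneous iteration idea (without the paper's progressive acceleration) can be sketched for a symmetric matrix: a block of orthonormal vectors is repeatedly multiplied by the matrix and re-orthonormalized, converging to the dominant eigenpairs. The test matrix below is an invented toy; structural problems would use the assembled stiffness and mass matrices, typically with shift-and-invert to target the lowest modes.

```python
import numpy as np

def simultaneous_iteration(A, k, iters=500, seed=0):
    # Iterate a block of k orthonormal vectors; QR re-orthonormalization
    # makes them converge to the k dominant eigenvectors of symmetric A.
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)
    H = Q.T @ A @ Q                        # Rayleigh-Ritz projection
    evals, S = np.linalg.eigh(H)
    return evals, Q @ S

# Toy symmetric matrix with well-separated eigenvalues.
A = np.diag([1.0, 2.0, 3.0, 10.0, 50.0])
evals, vecs = simultaneous_iteration(A, k=2)
```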

  10. Soft Clustering Criterion Functions for Partitional Document Clustering

    DTIC Science & Technology

    2004-05-26

    in the cluster that it already belongs to. The refinement phase ends as soon as we perform an iteration in which no documents moved between...for failing to comply with a collection of information if it does not display a currently valid OMB control number. 1. REPORT DATE 26 MAY 2004 2... it with the one obtained by the hard criterion functions. We present a comprehensive experimental evaluation involving twelve different datasets
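    The refinement phase quoted above (stop as soon as an iteration moves no documents) can be sketched with plain squared-distance k-means on 2-D points standing in for the report's document criterion functions; the data and initial centroids are invented.

```python
# Reassign each point to its best cluster; terminate on the first
# iteration in which nothing moves, as the refinement phase describes.

def refine(points, centroids):
    assign = [0] * len(points)
    while True:
        moved = False
        for i, p in enumerate(points):
            best = min(range(len(centroids)),
                       key=lambda c: (p[0] - centroids[c][0]) ** 2
                                   + (p[1] - centroids[c][1]) ** 2)
            if best != assign[i]:
                assign[i], moved = best, True
        for c in range(len(centroids)):    # recompute centroids as means
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centroids[c] = (sum(x for x, _ in members) / len(members),
                                sum(y for _, y in members) / len(members))
        if not moved:                      # the termination criterion above
            return assign, centroids

points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
assign, centroids = refine(points, [(0.0, 0.1), (4.0, 4.0)])
```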

  11. Fokker-Planck-Based Acceleration for SN Equations with Highly Forward Peaked Scattering in Slab Geometry

    NASA Astrophysics Data System (ADS)

    Patel, Japan

    Short mean free paths are characteristic of charged particles. High energy charged particles often have highly forward peaked scattering cross sections. Transport problems involving such charged particles are also highly optically thick. When problems simultaneously have forward peaked scattering and high optical thickness, their solution, using standard iterative methods, becomes very inefficient. In this dissertation, we explore Fokker-Planck-based acceleration for solving such problems.
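    Why standard iterative methods become inefficient here can be seen in a scalar caricature: in an infinite homogeneous medium, source iteration collapses to the fixed point phi_{k+1} = c*phi_k + q, where c = sigma_s/sigma_t is the scattering ratio, so the error contracts by only a factor c per iteration and c -> 1 means near stagnation. The numbers below are illustrative, not from the dissertation.

```python
# Scalar source iteration: convergence degrades as the scattering ratio
# c approaches 1, motivating Fokker-Planck-based acceleration.

def source_iteration(c, q, tol=1e-8):
    phi, iters = 0.0, 0
    exact = q / (1.0 - c)                  # the converged scalar flux
    while abs(phi - exact) > tol * exact:
        phi = c * phi + q
        iters += 1
    return phi, iters

_, n_fast = source_iteration(c=0.5, q=1.0)     # weak scattering
_, n_slow = source_iteration(c=0.999, q=1.0)   # near-unity scattering ratio
```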

  12. Frequency-domain full-waveform inversion with non-linear descent directions

    NASA Astrophysics Data System (ADS)

    Geng, Yu; Pan, Wenyong; Innanen, Kristopher A.

    2018-05-01

    Full-waveform inversion (FWI) is a highly non-linear inverse problem, normally solved iteratively, with each iteration involving an update constructed through linear operations on the residuals. Incorporating a flexible degree of non-linearity within each update may have important consequences for convergence rates, determination of low model wavenumbers and discrimination of parameters. We examine one approach for doing so, wherein higher order scattering terms are included within the sensitivity kernel during the construction of the descent direction, adjusting it away from that of the standard Gauss-Newton approach. These scattering terms are naturally admitted when we construct the sensitivity kernel by varying not the current but the to-be-updated model at each iteration. Linear and/or non-linear inverse scattering methodologies allow these additional sensitivity contributions to be computed from the current data residuals within any given update. We show that in the presence of pre-critical reflection data, the error in a second-order non-linear update to a background of s0 is, in our scheme, proportional to at most (Δs/s0)3 in the actual parameter jump Δs causing the reflection. In contrast, the error in a standard Gauss-Newton FWI update is proportional to (Δs/s0)2. For numerical implementation of more complex cases, we introduce a non-linear frequency-domain scheme, with an inner and an outer loop. A perturbation is determined from the data residuals within the inner loop, and a descent direction based on the resulting non-linear sensitivity kernel is computed in the outer loop. We examine the response of this non-linear FWI using acoustic single-parameter synthetics derived from the Marmousi model. The inverted results vary depending on data frequency ranges and initial models, but we conclude that the non-linear FWI has the capability to generate high-resolution model estimates in both shallow and deep regions, and to converge rapidly, relative to a benchmark FWI approach involving the standard gradient.
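    The error-order claim above, O((Δs/s0)3) for the second-order update versus O((Δs/s0)2) for Gauss-Newton, can be illustrated with a scalar toy forward model f(s) = 1/s (not the paper's wave-equation operator): the second-order step keeps the next Taylor term of the modelling operator when inverting the residual.

```python
import math

def f(s):  return 1.0 / s          # stand-in forward model (illustrative)
def f1(s): return -1.0 / s ** 2    # first derivative
def f2(s): return 2.0 / s ** 3     # second derivative

def gauss_newton_step(s0, s_true):
    r = f(s_true) - f(s0)          # data residual
    return s0 + r / f1(s0)

def second_order_step(s0, s_true):
    # Solve r = f1*d + 0.5*f2*d^2 for the update d (root nearest zero).
    r = f(s_true) - f(s0)
    a, b = 0.5 * f2(s0), f1(s0)
    d = (-b - math.sqrt(b * b + 4.0 * a * r)) / (2.0 * a)
    return s0 + d

gn_err = abs(gauss_newton_step(1.0, 1.1) - 1.1)   # jump Δs/s0 = 0.1
so_err = abs(second_order_step(1.0, 1.1) - 1.1)
```

    For this 10% model jump the leftover error of the second-order step is several times smaller than the Gauss-Newton error, consistent with the cubic-versus-quadratic scaling.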

  13. Ultimobranchial gland respond in a different way in male and female fresh water teleost Mastacembelus armatus (Lacepede) during reproductive cycle.

    PubMed

    Verma, Sushant Kumar; Alim, Abdul

    2015-05-01

    The present study was carried out to analyze the differences in the activity of the ultimobranchial gland (UBG) between male and female fresh water teleost Mastacembelus armatus during the reproductive cycle. Considerable variations in the nuclear diameter of UBG cells and plasma calcitonin (CT) levels during different reproductive phases of the testicular and ovarian cycles suggested that the activity of the UBG depends upon the sexual maturity of fishes. A positive correlation was observed between plasma CT and sex steroid levels and the gonadosomatic index in both sexes, which further confirmed the involvement of the UBG in the processes related to gonadal development in fishes irrespective of sex. A sudden increase in the level of plasma CT and nuclear diameter of UBG cells after administration of 17 α-methyltestosterone in males and 17 β-estradiol in females during the resting phase of the reproductive cycle clearly showed that the UBG becomes hyperactive with increases in the level of sex steroids. Plasma calcium level was also found to be positively correlated with gonadal maturation in females. However, no such change in plasma calcium level in relation to the testicular cycle was observed. Thus it can be concluded that the UBG becomes hyperactive during gonadal maturation but its role differs between male and female fishes. In females it may be involved in both gonadal maturation and plasma calcium regulation, while in males its involvement in calcium regulation was not justified. Variations in the level of CT during various phases of the testicular cycle evidenced its involvement in gonadal maturation only. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. igun - A program for the simulation of positive ion extraction including magnetic fields

    NASA Astrophysics Data System (ADS)

    Becker, R.; Herrmannsfeldt, W. B.

    1992-04-01

    igun is a program for the simulation of positive ion extraction from plasmas. It is based on the well-known program egun for the calculation of electron and ion trajectories in electron guns and lenses. The mathematical treatment of the plasma sheath is based on a simple analytical model, which provides a numerically stable calculation of the sheath potentials. In contrast to other ion extraction programs, igun is able to determine the extracted ion current in succeeding cycles of iteration by itself. However, it is also possible to set values of current, plasma density, or ion current density. Either axisymmetric or rectangular coordinates can be used, including axisymmetric or transverse magnetic fields.
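    The self-consistent determination of the extracted current "in succeeding cycles of iteration" can be caricatured as an under-relaxed fixed point between a space-charge-limited (Child-Langmuir) current and a sheath position that depends on that current. The sheath-recession model and every constant below are invented placeholders, not igun's actual plasma model, in which the emitting boundary emerges from the 2-D field solve.

```python
def child_langmuir(v, d, k=2.33e-6):
    # Space-charge-limited (Child-Langmuir) current density J = k V^1.5 / d^2.
    return k * v ** 1.5 / d ** 2

def extracted_current(v, d0, j_bohm, omega=0.5, tol=1e-9, max_cycles=10_000):
    # Succeeding cycles: the current sets the sheath position, the sheath
    # position sets the space-charge-limited current, under-relaxed by omega.
    j = child_langmuir(v, d0)
    for _ in range(max_cycles):
        gap = d0 * (1.0 + j / j_bohm)      # toy sheath-recession model
        j_new = (1.0 - omega) * j + omega * child_langmuir(v, gap)
        if abs(j_new - j) < tol:
            return j_new
        j = j_new
    return j

j = extracted_current(v=20000.0, d0=0.005, j_bohm=2000.0)
```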

  15. Deployment of e-health services - a business model engineering strategy.

    PubMed

    Kijl, Björn; Nieuwenhuis, Lambert J M; Huis in 't Veld, Rianne M H A; Hermens, Hermie J; Vollenbroek-Hutten, Miriam M R

    2010-01-01

    We designed a business model for deploying a myofeedback-based teletreatment service. An iterative and combined qualitative and quantitative action design approach was used for developing the business model and the related value network. Insights from surveys, desk research, expert interviews, workshops and quantitative modelling were combined to produce the first business model and then to refine it in three design cycles. The business model engineering strategy provided important insights which led to an improved, more viable and feasible business model and related value network design. Based on this experience, we conclude that the process of early stage business model engineering reduces risk and produces substantial savings in costs and resources related to service deployment.

  16. Progress with new malaria vaccines.

    PubMed Central

    Webster, Daniel; Hill, Adrian V. S.

    2003-01-01

    Malaria is a parasitic disease of major global health significance that causes an estimated 2.7 million deaths each year. In this review we describe the burden of malaria and discuss the complicated life cycle of Plasmodium falciparum, the parasite responsible for most of the deaths from the disease, before reviewing the evidence that suggests that a malaria vaccine is an attainable goal. Significant advances have recently been made in vaccine science, and we review new vaccine technologies and the evaluation of candidate malaria vaccines in human and animal studies worldwide. Finally, we discuss the prospects for a malaria vaccine and the need for iterative vaccine development as well as potential hurdles to be overcome. PMID:14997243

  17. Eliciting design patterns for e-learning systems

    NASA Astrophysics Data System (ADS)

    Retalis, Symeon; Georgiakakis, Petros; Dimitriadis, Yannis

    2006-06-01

    Design pattern creation, especially in the e-learning domain, is a highly complex process that has not been sufficiently studied and formalized. In this paper, we propose a systematic pattern development cycle, whose most important aspects focus on reverse engineering of existing systems in order to elicit features that are cross-validated through the use of appropriate, authentic scenarios. Moreover, an iterative pattern process is proposed that takes advantage of multiple data sources, thus emphasizing a holistic view of the teaching-learning processes. The proposed schema of pattern mining has been extensively validated for Asynchronous Network Supported Collaborative Learning (ANSCL) systems, as well as for other types of tools in a variety of scenarios, with promising results.

  18. The Knowledge-Based Software Assistant: Beyond CASE

    NASA Technical Reports Server (NTRS)

    Carozzoni, Joseph A.

    1993-01-01

    This paper will outline the similarities and differences between two paradigms of software development. Both support the whole software life cycle and provide automation for most of the software development process, but have different approaches. The CASE approach is based on a set of tools linked by a central data repository. This tool-based approach is data driven and views software development as a series of sequential steps, each resulting in a product. The Knowledge-Based Software Assistant (KBSA) approach, a radical departure from existing software development practices, is knowledge driven and centers around a formalized software development process. KBSA views software development as an incremental, iterative, and evolutionary process with development occurring at the specification level.

  19. Numerical analysis of the hemodynamic effect of plaque ulceration in the stenotic carotid artery bifurcation

    NASA Astrophysics Data System (ADS)

    Wong, Emily Y.; Milner, Jaques S.; Steinman, David A.; Poepping, Tamie L.; Holdsworth, David W.

    2009-02-01

    The presence of ulceration in carotid artery plaque is an independent risk factor for thromboembolic stroke. However, the associated pathophysiological mechanisms - in particular the mechanisms related to the local hemodynamics in the carotid artery bifurcation - are not well understood. We investigated the effect of carotid plaque ulceration on the local time-varying three-dimensional flow field using computational fluid dynamics (CFD) models of a stenosed carotid bifurcation geometry, with and without the presence of ulceration. CFD analysis of each model was performed with a spatial finite element discretization of over 150,000 quadratic tetrahedral elements and a temporal discretization of 4800 timesteps per cardiac cycle, to adequately resolve the flow field and pulsatile flow, respectively. Pulsatile flow simulations were iterated for five cardiac cycles to allow for cycle-to-cycle analysis following the damping of initial transients in the solution. Comparison between models revealed differences in flow patterns induced by flow exiting from the region of the ulcer cavity, in particular, to the shape, orientation and helicity of the high velocity jet through the stenosis. The stenotic jet in both models exhibited oscillatory motion, but produced higher levels of phase-ensembled turbulence intensity in the ulcerated model. In addition, enhanced out-of-plane recirculation and helical flow was observed in the ulcerated model. These preliminary results suggest that local fluid behaviour may contribute to the thrombogenic risk associated with plaque ulcerations in the stenotic carotid artery bifurcation.
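    The cycle-to-cycle analysis mentioned above can be sketched as follows: simulate a periodic quantity contaminated by a decaying startup transient, and discard cycles until two successive ones agree to a tolerance. The waveform, decay rate and tolerance are illustrative, not taken from the paper's CFD solution.

```python
import math

def sample_cycle(n, steps=4800, decay=1.5):
    # One cycle of a toy periodic signal plus an exponentially damped
    # startup transient (mimicking initial transients in the simulation).
    return [math.sin(2.0 * math.pi * i / steps)
            + math.exp(-decay * (n + i / steps)) for i in range(steps)]

def first_converged_cycle(tol=1e-3, max_cycles=50):
    # Compare successive cycles point by point; keep going until the
    # transient has damped enough that a cycle repeats the previous one.
    prev = sample_cycle(0)
    for n in range(1, max_cycles):
        cur = sample_cycle(n)
        if max(abs(a - b) for a, b in zip(cur, prev)) < tol:
            return n
        prev = cur
    return max_cycles
```

    Loosening the tolerance lets the analysis start at an earlier cycle, at the cost of more transient contamination.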

  20. The value of usability testing for Internet-based adolescent self-management interventions: "Managing Hemophilia Online".

    PubMed

    Breakey, Vicky R; Warias, Ashley V; Ignas, Danial M; White, Meghan; Blanchette, Victor S; Stinson, Jennifer N

    2013-10-04

    As adolescents with hemophilia approach adulthood, they are expected to assume responsibility for their disease management. A bilingual (English and French) Internet-based self-management program, "Teens Taking Charge: Managing Hemophilia Online," was developed to support adolescents with hemophilia in this transition. This study explored the usability of the website and resulted in refinement of the prototype. A purposive sample (n=18; age 13-18; mean age 15.5 years) was recruited from two tertiary care centers to assess the usability of the program in English and French. Qualitative observations using a "think aloud" usability testing method and semi-structured interviews were conducted in four iterative cycles, with changes to the prototype made as necessary following each cycle. This study was approved by research ethics boards at each site. Teens responded positively to the content and appearance of the website and felt that it was easy to navigate and understand. The multimedia components (videos, animations, quizzes) were felt to enrich the experience. Changes to the presentation of content and the website user-interface were made after the first, second and third cycles of testing in English. Cycle four did not result in any further changes. Overall, teens found the website to be easy to use. Usability testing identified end-user concerns that informed improvements to the program. Usability testing is a crucial step in the development of Internet-based self-management programs to ensure information is delivered in a manner that is accessible and understood by users.
