Sample records for iter engineering design

  1. Iteration in Early-Elementary Engineering Design

    NASA Astrophysics Data System (ADS)

    McFarland Kendall, Amber Leigh

    K-12 standards and curricula are beginning to include engineering design as a key practice within Science Technology Engineering and Mathematics (STEM) education. However, there is little research on how the youngest students engage in engineering design within the elementary classroom. This dissertation focuses on iteration as an essential aspect of engineering design, because research at the college and professional level suggests iteration improves the designer's understanding of problems and the quality of design solutions. My research presents qualitative case studies of students in kindergarten and third grade as they engage in classroom engineering design challenges which integrate with traditional curriculum standards in mathematics, science, and literature. I discuss my results through the lens of activity theory, emphasizing practices, goals, and mediating resources. Through three chapters, I provide insight into how early-elementary students iterate upon their designs by characterizing the ways in which lesson design impacts testing and revision, by analyzing the plan-driven and experimentation-driven approaches that student groups use when solving engineering design challenges, and by investigating how students attend to constraints within the challenge. I connect these findings to teacher practices and curriculum design in order to suggest methods of promoting iteration within open-ended, classroom-based engineering design challenges. This dissertation contributes to the field of engineering education by providing evidence of productive engineering practices in young students and support for the value of engineering design challenges in developing students' participation and agency in these practices.

  2. Iteration in Early-Elementary Engineering Design

    ERIC Educational Resources Information Center

    McFarland Kendall, Amber Leigh

    2017-01-01

    K-12 standards and curricula are beginning to include engineering design as a key practice within Science Technology Engineering and Mathematics (STEM) education. However, there is little research on how the youngest students engage in engineering design within the elementary classroom. This dissertation focuses on iteration as an essential aspect…

  3. The Effect of Iteration on the Design Performance of Primary School Children

    ERIC Educational Resources Information Center

    Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.

    2015-01-01

    Iteration during the design process is an essential element. Engineers optimize their design by iteration. Research on iteration in Primary Design Education is however scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practices. Spontaneous playing behavior of children indicates that iteration fits in…

  4. Multiphysics Engineering Analysis for an Integrated Design of ITER Diagnostic First Wall and Diagnostic Shield Module Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Y.; Loesser, G.; Smith, M.

    ITER diagnostic first walls (DFWs) and diagnostic shield modules (DSMs) inside the port plugs (PPs) are designed to protect diagnostic instruments and components from a harsh plasma environment and to provide structural support while allowing diagnostic access to the plasma. The design of the DFWs and DSMs is driven by (1) plasma radiation and nuclear heating during normal operation and (2) electromagnetic loads during plasma events and the associated structural responses of components. A multiphysics engineering analysis protocol for the design has been established at the Princeton Plasma Physics Laboratory and was used for the design of the ITER DFWs and DSMs. The analyses were performed to address challenging design issues based on the resultant stresses and deflections of the DFW-DSM-PP assembly for the main load cases. The ITER Structural Design Criteria for In-Vessel Components (SDC-IC) required for design by analysis and three major issues driving the mechanical design of the ITER DFWs are discussed. The general guidelines for the DSM design have been established as a result of design parametric studies.

  5. Evolutionary engineering for industrial microbiology.

    PubMed

    Vanee, Niti; Fisher, Adam B; Fong, Stephen S

    2012-01-01

    Superficially, evolutionary engineering is a paradoxical field that balances competing interests. In natural settings, evolution iteratively selects and enriches subpopulations that are best adapted to a particular ecological niche using random processes such as genetic mutation. In engineering, desired approaches utilize rational prospective design to address targeted problems. When considering details of evolutionary and engineering processes, more commonality can be found. Engineering relies on detailed knowledge of the problem parameters and design properties in order to predict design outcomes that would be an optimized solution. When detailed knowledge of a system is lacking, engineers often employ algorithmic search strategies to identify empirical solutions. Evolution epitomizes this iterative optimization by continuously diversifying design options from a parental design, and then selecting the progeny designs that represent satisfactory solutions. In this chapter, the technique of applying the natural principles of evolution to engineer microbes for industrial applications is discussed to highlight the challenges and principles of evolutionary engineering.
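
    The diversify-and-select loop described in this abstract can be illustrated in a few lines of code. The fitness function, mutation model, and parameter values below are hypothetical placeholders rather than anything from the chapter; the sketch only shows the shape of an evolutionary engineering iteration (diversify a parental design, then select the best progeny).

```python
# Minimal sketch of an evolutionary engineering loop, assuming a made-up
# fitness function and mutation model (placeholders, not from the chapter).
import random

random.seed(0)

def fitness(genome):
    # Hypothetical stand-in for a measured phenotype (e.g., product titer).
    return -sum((g - 0.7) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    # Diversify a parental design with random perturbations.
    return [min(1.0, max(0.0, g + random.gauss(0, rate))) for g in genome]

def evolve(pop_size=50, genome_len=5, generations=100):
    population = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(p) for p in population]          # diversify
        pool = sorted(population + offspring, key=fitness, reverse=True)
        population = pool[:pop_size]                         # select
    return max(population, key=fitness)

best = evolve()
print("best genome:", [round(g, 2) for g in best])
```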

  6. A superlinear interior points algorithm for engineering design optimization

    NASA Technical Reports Server (NTRS)

    Herskovits, J.; Asquier, J.

    1990-01-01

    We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization, inasmuch as a feasible design is obtained at each iteration. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.
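
    The key property stressed above, that every iterate remains a feasible design, can be illustrated with a simple log-barrier Newton method. This is not the authors' quasi-Newton primal-dual algorithm; it is a generic interior-point sketch on an invented two-variable toy problem with linear constraints, shown only to make the feasible-iterate idea concrete.

```python
# Generic log-barrier interior-point sketch (not the authors' algorithm): every
# iterate stays strictly feasible, the property highlighted in the abstract.
# The two-variable toy problem below is invented for illustration.
import numpy as np

def barrier_newton(grad_f, hess_f, g_list, grad_g_list, x0, mu=10.0, gap_tol=1e-4):
    """Minimize f(x) s.t. g_i(x) <= 0 (linear g_i), from a strictly feasible x0."""
    x, t, m = np.asarray(x0, dtype=float), 1.0, len(g_list)
    while m / t > gap_tol:                       # outer loop: tighten the barrier
        for _ in range(50):                      # inner Newton loop
            gs = np.array([g(x) for g in g_list])
            J = np.array([gg(x) for gg in grad_g_list])
            # gradient / Hessian of  t*f(x) - sum_i log(-g_i(x)), linear g_i assumed
            grad = t * grad_f(x) + J.T @ (-1.0 / gs)
            hess = t * hess_f(x) + J.T @ np.diag(1.0 / gs**2) @ J
            dx = np.linalg.solve(hess, -grad)
            step = 1.0
            while any(g(x + step * dx) >= 0 for g in g_list):
                step *= 0.5                      # backtrack to stay strictly feasible
            x = x + step * dx
            if np.linalg.norm(dx) < 1e-9:
                break
        t *= mu
    return x

# Toy design problem: minimize (x-2)^2 + (y-1)^2 subject to x + y <= 2, x >= 0, y >= 0.
grad_f = lambda v: np.array([2 * (v[0] - 2), 2 * (v[1] - 1)])
hess_f = lambda v: 2 * np.eye(2)
g_list = [lambda v: v[0] + v[1] - 2, lambda v: -v[0], lambda v: -v[1]]
grad_g_list = [lambda v: np.array([1.0, 1.0]),
               lambda v: np.array([-1.0, 0.0]),
               lambda v: np.array([0.0, -1.0])]
print(barrier_newton(grad_f, hess_f, g_list, grad_g_list, x0=[0.5, 0.5]))  # ~ [1.5, 0.5]
```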

  7. Optimization applications in aircraft engine design and test

    NASA Technical Reports Server (NTRS)

    Pratt, T. K.

    1984-01-01

    Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and more recently, manufacturing processes. By automating design and testing processes, many repetitive and costly trade-off studies have been replaced by optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.

  8. Overview of International Thermonuclear Experimental Reactor (ITER) engineering design activities*

    NASA Astrophysics Data System (ADS)

    Shimomura, Y.

    1994-05-01

    The International Thermonuclear Experimental Reactor (ITER) [International Thermonuclear Experimental Reactor (ITER) (International Atomic Energy Agency, Vienna, 1988), ITER Documentation Series, No. 1] project is a multiphased project, presently proceeding under the auspices of the International Atomic Energy Agency according to the terms of a four-party agreement among the European Atomic Energy Community (EC), the Government of Japan (JA), the Government of the Russian Federation (RF), and the Government of the United States (US), ``the Parties.'' The ITER project is based on the tokamak, a Russian invention that has since been brought to a high level of development in all major fusion programs in the world. The objective of ITER is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. The ITER design is being developed by the Joint Central Team with support from the Parties' four Home Teams. An overview of the ITER design activities is presented.

  9. The Iterative Design Process in Research and Development: A Work Experience Paper

    NASA Technical Reports Server (NTRS)

    Sullivan, George F. III

    2013-01-01

    The iterative design process is one of many strategies used in new product development. Top-down development strategies, like waterfall development, place a heavy emphasis on planning and simulation. The iterative process, on the other hand, is better suited to the management of small to medium scale projects. Over the past four months, I have worked with engineers at Johnson Space Center on a multitude of electronics projects. By describing the work I have done these last few months, analyzing the factors that have driven design decisions, and examining the testing and verification process, I will demonstrate that iterative design is the obvious choice for research and development projects.

  10. Simulation Modeling to Compare High-Throughput, Low-Iteration Optimization Strategies for Metabolic Engineering

    PubMed Central

    Heinsch, Stephen C.; Das, Siba R.; Smanski, Michael J.

    2018-01-01

    Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems. PMID:29535690
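
    A toy version of the kind of simulation study described above might look like the sketch below: two simple search strategies are compared on a synthetic, rugged expression landscape under a fixed build-and-test budget. The landscape model, the two strategies, and all parameter values are illustrative assumptions, not the algorithms or landscapes used in the paper.

```python
# Toy comparison of two search strategies on a synthetic rugged expression
# landscape; the landscape, strategies, and budget are illustrative assumptions.
import itertools, random

random.seed(0)
N_GENES, LEVELS = 4, 5                       # 4 genes, 5 expression levels each

# Rugged landscape: a smooth per-gene term plus random pairwise interactions.
pairs = list(itertools.combinations(range(N_GENES), 2))
pair_terms = {p: [[random.gauss(0, 1) for _ in range(LEVELS)] for _ in range(LEVELS)]
              for p in pairs}

def titer(design):
    smooth = sum(-(lvl - 3) ** 2 for lvl in design)
    rugged = sum(pair_terms[(i, j)][design[i]][design[j]] for i, j in pairs)
    return smooth + 2.0 * rugged

def random_search(budget):
    # Build and test `budget` random strains, keep the best titer observed.
    return max(titer([random.randrange(LEVELS) for _ in range(N_GENES)])
               for _ in range(budget))

def coordinate_sweep(budget):
    # Optimize one gene at a time, one design-build-test round per gene.
    design, spent = [random.randrange(LEVELS) for _ in range(N_GENES)], 0
    while spent + LEVELS <= budget:
        gene = (spent // LEVELS) % N_GENES
        candidates = [design[:gene] + [lvl] + design[gene + 1:] for lvl in range(LEVELS)]
        design = max(candidates, key=titer)
        spent += LEVELS
    return titer(design)

budget = 60                                   # total strains built and tested
print("random search:   ", round(random_search(budget), 2))
print("coordinate sweep:", round(coordinate_sweep(budget), 2))
```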

  11. Teaching Engineering Design Through Paper Rockets

    ERIC Educational Resources Information Center

    Welling, Jonathan; Wright, Geoffrey A.

    2018-01-01

    The paper rocket activity described in this article effectively teaches the engineering design process (EDP) by engaging students in a problem-based learning activity that encourages iterative design. For example, the first rockets the students build typically only fly between 30 and 100 feet. As students test and evaluate their rocket designs,…

  12. From Intent to Action: An Iterative Engineering Process

    ERIC Educational Resources Information Center

    Mouton, Patrice; Rodet, Jacques; Vacaresse, Sylvain

    2015-01-01

    Quite by chance, and over the course of a few haphazard meetings, a Master's degree in "E-learning Design" gradually developed in a Faculty of Economics. Its original and evolving design was the result of an iterative process carried out, not by a single Instructional Designer (ID), but by a full ID team. Over the last 10 years it has…

  13. Breadboard RL10-2B low-thrust operating mode (second iteration) test report

    NASA Technical Reports Server (NTRS)

    Kanic, Paul G.; Kaldor, Raymond B.; Watkins, Pia M.

    1988-01-01

    Cryogenic rocket engines requiring a cooling process to thermally condition the engine to operating temperature can be made more efficient if the cooling propellants can be burned. Tank head idle and pumped idle modes can be used to burn propellants employed for cooling, thereby providing useful thrust. Such idle modes require the use of a heat exchanger to vaporize oxygen prior to injection into the combustion chamber. During December 1988, Pratt and Whitney conducted a series of engine hot firings demonstrating the operation of two new, previously untested oxidizer heat exchanger designs. The program was a second iteration of previous low-thrust testing conducted in 1984, during which a first-generation heat exchanger design was used. Although operation was demonstrated at tank head idle and pumped idle, the engine experienced instability when propellants could not be supplied to the heat exchanger at design conditions.

  14. Multidisciplinary systems optimization by linear decomposition

    NASA Technical Reports Server (NTRS)

    Sobieski, J.

    1984-01-01

    In a typical design process major decisions are made sequentially. An illustrated example is given for an aircraft design in which the aerodynamic shape is usually decided first, then the airframe is sized for strength and so forth. An analogous sequence could be laid out for any other major industrial product, for instance, a ship. The loops in the discipline boxes symbolize iterative design improvements carried out within the confines of a single engineering discipline, or subsystem. The loops spanning several boxes depict multidisciplinary design improvement iterations. Omitted for graphical simplicity is parallelism of the disciplinary subtasks. The parallelism is important in order to develop a broad workfront necessary to shorten the design time. If all the intradisciplinary and interdisciplinary iterations were carried out to convergence, the process could yield a numerically optimal design. However, it usually stops short of that because of time and money limitations. This is especially true for the interdisciplinary iterations.

  15. A Holistic Approach to Systems Development

    NASA Technical Reports Server (NTRS)

    Wong, Douglas T.

    2008-01-01

    Introduces a holistic and iterative design process: a continuous process that can be loosely divided into four stages, with more effort spent early in the design. The approach is human-centered and multidisciplinary, emphasizes life-cycle cost, and makes extensive use of modeling, simulation, mockups, human subjects, and proven technologies. Human-centered design does not mean that the human factors discipline is the most important; many disciplines should be involved in the design: subsystem vendors, configuration management, operations research, manufacturing engineering, simulation/modeling, cost engineering, hardware engineering, software engineering, test and evaluation, human factors, electromagnetic compatibility, integrated logistics support, reliability/maintainability/availability, safety engineering, test equipment, training systems, design-to-cost, life-cycle cost, application engineering, etc.

  16. Engineering aspects of design and integration of ECE diagnostic in ITER

    DOE PAGES

    Udintsev, V. S.; Taylor, G.; Pandya, H. K.B.; ...

    2015-03-12

    The ITER ECE diagnostic [1] needs not only to meet measurement requirements, but also to withstand various loads, such as electromagnetic, mechanical, neutronic and thermal, and to be protected from stray ECH radiation at 170 GHz and other millimeter-wave emission, such as Collective Thomson scattering, which is planned to operate at 60 GHz. The same or similar loads will be applied to other millimeter-wave diagnostics [2], located both in-vessel and in port plugs. These loads must be taken into account throughout the design phases of the ECE and other microwave diagnostics to ensure their structural integrity and maintainability. The integration of microwave diagnostics with other ITER systems is another challenging activity which is currently ongoing through port integration and in-vessel integration work. Port integration has to address the maintenance and safety aspects of diagnostics, too. Engineering solutions which are being developed to support and to operate the ITER ECE diagnostic, whilst complying with safety and maintenance requirements, are discussed in this paper.

  17. Modeling design iteration in product design and development and its solution by a novel artificial bee colony algorithm.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases in product cost and delays in development time as well, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the work transformation matrix (WTM) model are discussed, and a tearing approach together with an inner iteration method is used to complement the classic WTM model. In addition, the artificial bee colony (ABC) algorithm is introduced to find the optimal decoupling schemes. In this paper, firstly, the tearing approach and the inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness.
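
    The classic work transformation matrix (WTM) picture that this paper builds on can be stated compactly: coupled tasks generate rework for one another on each design cycle, and total work converges when the spectral radius of the matrix is below one. The three-task matrix and initial work vector below are invented for illustration; the sketch shows only the baseline WTM calculation, not the hybrid tearing/inner-iteration model or the artificial bee colony algorithm proposed in the paper.

```python
# Classic work transformation matrix (WTM) calculation with an invented 3-task
# example; this is the baseline model the paper extends, not its hybrid method.
import numpy as np

A = np.array([[0.0, 0.3, 0.1],   # A[i, j]: fraction of task j's work redone by task i
              [0.2, 0.0, 0.4],
              [0.1, 0.2, 0.0]])
u0 = np.ones(3)                  # initial work: one full pass per task

rho = max(abs(np.linalg.eigvals(A)))
print("spectral radius:", round(float(rho), 3))      # < 1 means the iteration converges

# Total work over all design iterations: sum_k A^k u0 = (I - A)^{-1} u0
total = np.linalg.solve(np.eye(3) - A, u0)
print("total work per task:", np.round(total, 2))

# Same result by explicitly iterating the design cycles
u, accumulated = u0.copy(), np.zeros(3)
for _ in range(200):
    accumulated += u
    u = A @ u
print("iterated total:     ", np.round(accumulated, 2))
```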

  18. Engineering Design of ITER Prototype Fast Plant System Controller

    NASA Astrophysics Data System (ADS)

    Goncalves, B.; Sousa, J.; Carvalho, B.; Rodrigues, A. P.; Correia, M.; Batista, A.; Vega, J.; Ruiz, M.; Lopez, J. M.; Rojo, R. Castro; Wallander, A.; Utzel, N.; Neto, A.; Alves, D.; Valcarcel, D.

    2011-08-01

    The ITER control, data access and communication (CODAC) design team identified the need for two types of plant systems. A slow control plant system is based on industrial automation technology with maximum sampling rates below 100 Hz, and a fast control plant system is based on embedded technology with higher sampling rates and more stringent real-time requirements than those of slow controllers. The latter is applicable to diagnostics and plant systems in closed-control loops whose cycle times are below 1 ms. Fast controllers will be dedicated industrial controllers with the ability to supervise other fast and/or slow controllers, interface to actuators and sensors and, if necessary, to high-performance networks. Two prototypes of a fast plant system controller specialized for data acquisition and constrained by ITER technological choices are being built using two different form factors. This prototyping activity contributes to the Plant Control Design Handbook effort of standardization, specifically regarding fast controller characteristics. Envisaging a general-purpose fast controller design, diagnostic use cases with specific requirements were analyzed and will be presented along with the interface with CODAC and sensors. The requirements and constraints that real-time plasma control imposes on the design were also taken into consideration. Functional specifications and a technology-neutral architecture, together with its implications on the engineering design, were considered. The detailed engineering design compliant with ITER standards was performed and will be discussed in detail. Emphasis will be given to the integration of the controller in the standard CODAC environment. Requirements for the EPICS IOC providing the interface to the outside world, the prototype decisions on form factor, real-time operating system, and high-performance networks will also be discussed, as well as the requirements for data streaming to CODAC for visualization and archiving.

  19. Computer-Aided Design Of Turbine Blades And Vanes

    NASA Technical Reports Server (NTRS)

    Hsu, Wayne Q.

    1988-01-01

    Quasi-three-dimensional method for determining aerothermodynamic configuration of turbine uses computer-interactive analysis and design and computer-interactive graphics. Design procedure executed rapidly so designer easily repeats it to arrive at best performance, size, structural integrity, and engine life. Sequence of events in aerothermodynamic analysis and design starts with engine-balance equations and ends with boundary-layer analysis and viscous-flow calculations. Analysis-and-design procedure interactive and iterative throughout.

  20. Methane Dual Expander Aerospike Nozzle Rocket Engine

    DTIC Science & Technology

    2012-03-22

    include O/F ratio, thrust, and engine geometry. After thousands of iterations over the design space, the selected MDEAN engine concept has 349 s of...

  1. ITER EDA Newsletter. Volume 3, no. 2

    NASA Astrophysics Data System (ADS)

    1994-02-01

    This issue of the ITER EDA (Engineering Design Activities) Newsletter contains reports on the Fifth ITER Council Meeting held in Garching, Germany, January 27-28, 1994; a visit (January 28, 1994) of an international group of Harvard Fellows to the San Diego Joint Work Site; the Inauguration Ceremony of the EC-hosted ITER joint work site in Garching (January 28, 1994); an ITER Technical Meeting on Assembly and Maintenance held in Garching, Germany, January 19-26, 1994; and a Technical Committee Meeting on radiation effects on in-vessel components held in Garching, Germany, November 15-19, 1993; as well as an ITER Status Report.

  2. LOW-ENGINE-FRICTION TECHNOLOGY FOR ADVANCED NATURAL-GAS RECIPROCATING ENGINES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Victor W. Wong; Tian Tian; Grant Smedley

    2003-08-28

    This program aims at improving the efficiency of advanced natural-gas reciprocating engines (ANGRE) by reducing piston/ring assembly friction without major adverse effects on engine performance, such as increased oil consumption and emissions. A detailed set of piston/ring dynamic and friction models has been developed and applied that illustrates the fundamental relationships between design parameters and friction losses. Various low-friction strategies and concepts have been explored, and engine experiments will validate these concepts. An iterative process of experimentation, simulation, and analysis will be followed with the goal of demonstrating a complete optimized low-friction engine system. As planned, MIT has developed guidelines for an initial set of low-friction piston-ring-pack designs. Current recommendations focus on subtle top-piston-ring and oil-control-ring characteristics. A full-scale Waukesha F18 engine has been installed at Colorado State University and testing of the baseline configuration is in progress. Components for the first design iteration are being procured. Subsequent work includes examining the friction and engine performance data and extending the analyses to other areas to evaluate opportunities for further friction improvement and the impact on oil consumption/emissions and wear, towards demonstrating an optimized reduced-friction engine system.

  3. LOW-ENGINE-FRICTION TECHNOLOGY FOR ADVANCED NATURAL-GAS RECIPROCATING ENGINES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Victor Wong; Tian Tian; Luke Moughon

    2005-09-30

    This program aims at improving the efficiency of advanced natural-gas reciprocating engines (ANGRE) by reducing piston and piston ring assembly friction without major adverse effects on engine performance, such as increased oil consumption and wear. An iterative process of simulation, experimentation and analysis is being followed towards achieving the goal of demonstrating a complete optimized low-friction engine system. To date, a detailed set of piston and piston-ring dynamic and friction models has been developed and applied that illustrates the fundamental relationships between design parameters and friction losses. Low-friction ring designs have already been recommended in a previous phase, with full-scale engine validation partially completed. Current accomplishments include the addition of several additional power cylinder design areas to the overall system analysis. These include analyses of lubricant and cylinder surface finish and a parametric study of piston design. The Waukesha engine was found to be already well optimized in the areas of lubricant, surface skewness and honing cross-hatch angle, where friction reductions of 12% for lubricant and 5% for surface characteristics are projected. For the piston, a friction reduction of up to 50% may be possible by controlling waviness alone, while additional friction reductions are expected when other parameters are optimized. A total power cylinder friction reduction of 30-50% is expected, translating to an engine efficiency increase of two percentage points from its current baseline towards the goal of 50% efficiency. Key elements of the continuing work include further analysis and optimization of the engine piston design, in-engine testing of recommended lubricant and surface designs, design iteration and optimization of previously recommended technologies, and full-engine testing of a complete, optimized, low-friction power cylinder system.

  4. Doppler Lidar System Design via Interdisciplinary Design Concept at NASA Langley Research Center - Part III

    NASA Technical Reports Server (NTRS)

    Barnes, Bruce W.; Sessions, Alaric M.; Beyon, Jeffrey; Petway, Larry B.

    2014-01-01

    Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via the Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration was able to reduce the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky, over-performing components with smaller components custom-designed for the power system. The existing power system was analyzed to rank components in terms of inefficiency, power dissipation, footprint and mass. Design considerations and priorities are compared along with the results of each design iteration. Overall power system improvements are summarized for design implementations.

  5. Design for disassembly and sustainability assessment to support aircraft end-of-life treatment

    NASA Astrophysics Data System (ADS)

    Savaria, Christian

    Gas turbine engine design is a multidisciplinary and iterative process. Many design iterations are necessary to address the challenges among the disciplines. In the creation of a new engine architecture, the design time is crucial in capturing new business opportunities. At the detail design phase, it has proven very difficult to correct an unsatisfactory design. To overcome this difficulty, the concept of Multi-Disciplinary Optimization (MDO) at the preliminary design phase (Preliminary MDO or PMDO) is used, allowing more freedom to perform changes in the design. PMDO also reduces the design time at the preliminary design phase. The concept of PMDO was used to create parametric models and new correlations for high-pressure gas turbine housing and shroud segments towards a new design process. First, dedicated parametric models were created because of their reusability and versatility. Their ease of use compared to non-parameterized models allows more design iterations and thus reduces setup and design time. Second, geometry correlations were created to minimize the number of parameters used in turbine housing and shroud segment design. Since the turbine housing and the shroud segment geometries are required in tip clearance analyses, care was taken not to oversimplify the parametric formulation. In addition, a user interface was developed to interact with the parametric models and improve the design time. Third, the cooling flow predictions require many engine parameters (i.e. geometric and performance parameters and air properties) and a reference shroud segment. A second correlation study was conducted to minimize the number of engine parameters required in the cooling flow predictions and to facilitate the selection of a reference shroud segment. Finally, the parametric models, the geometry correlations, and the user interface resulted in a time saving of 50% and an increase in accuracy of 56% in the new design system compared to the existing design system. Also, regarding the cooling flow correlations, the number of engine parameters was reduced by a factor of 6 to create a simplified prediction model and hence a faster shroud segment selection process.

  6. Genome scale engineering techniques for metabolic engineering.

    PubMed

    Liu, Rongming; Bassalo, Marcelo C; Zeitoun, Ramsey I; Gill, Ryan T

    2015-11-01

    Metabolic engineering has expanded from a focus on designs requiring a small number of genetic modifications to increasingly complex designs driven by advances in genome-scale engineering technologies. Metabolic engineering has been generally defined by the use of iterative cycles of rational genome modifications, strain analysis and characterization, and a synthesis step that fuels additional hypothesis generation. This cycle mirrors the Design-Build-Test-Learn cycle followed throughout various engineering fields that has recently become a defining aspect of synthetic biology. This review will attempt to summarize recent genome-scale design, build, test, and learn technologies and relate their use to a range of metabolic engineering applications. Copyright © 2015 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  7. Modeling Design Iteration in Product Design and Development and Its Solution by a Novel Artificial Bee Colony Algorithm

    PubMed Central

    2014-01-01

    Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases in product cost and delays in development time as well, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the work transformation matrix (WTM) model are discussed, and a tearing approach together with an inner iteration method is used to complement the classic WTM model. In addition, the artificial bee colony (ABC) algorithm is introduced to find the optimal decoupling schemes. In this paper, firstly, the tearing approach and the inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness. PMID:25431584

  8. DART system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.

    2005-08-01

    The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the "once-through" time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the "once-through" time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase "inner loop" iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.
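
    A back-of-the-envelope model makes the three levers mentioned above (once-through time, iteration probability, rework fraction) easy to compare. The geometric-repeat formula and the numbers below are my own simplification for illustration, not DART's community model.

```python
# Geometric-repeat model of one process step (a simplification for illustration,
# not DART's community model): T = once-through time, p = probability of another
# pass, r = rework fraction of T per repeat.  Expected repeats are p / (1 - p).
def expected_step_time(T, p, r):
    return T * (1.0 + r * p / (1.0 - p))

baseline = expected_step_time(T=10.0, p=0.4, r=0.6)          # illustrative hours
faster_pass = expected_step_time(T=8.0, p=0.4, r=0.6)        # 20% faster once-through
fewer_loops = expected_step_time(T=10.0, p=0.3, r=0.6)       # lower iteration probability
less_rework = expected_step_time(T=10.0, p=0.4, r=0.45)      # 25% less rework per loop

for label, value in [("baseline", baseline), ("faster once-through", faster_pass),
                     ("lower iteration prob", fewer_loops), ("less rework", less_rework)]:
    print(f"{label:22s}{value:5.1f} h")
```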

  9. Simulation and Spacecraft Design: Engineering Mars Landings.

    PubMed

    Conway, Erik M

    2015-10-01

    A key issue in history of technology that has received little attention is the use of simulation in engineering design. This article explores the use of both mechanical and numerical simulation in the design of the Mars atmospheric entry phases of the Viking and Mars Pathfinder missions to argue that engineers used both kinds of simulation to develop knowledge of their designs' likely behavior in the poorly known environment of Mars. Each kind of simulation could be used as a warrant of the other's fidelity, in an iterative process of knowledge construction.

  10. Physics and Engineering Design of the ITER Electron Cyclotron Emission Diagnostic

    NASA Astrophysics Data System (ADS)

    Rowan, W. L.; Austin, M. E.; Houshmandyar, S.; Phillips, P. E.; Beno, J. H.; Ouroua, A.; Weeks, D. A.; Hubbard, A. E.; Stillerman, J. A.; Feder, R. E.; Khodak, A.; Taylor, G.; Pandya, H. K.; Danani, S.; Kumar, R.

    2015-11-01

    Electron temperature (Te) measurements and consequent electron thermal transport inferences will be critical to the non-active phases of ITER operation and will take on added importance during the alpha heating phase. Here, we describe our design for the diagnostic that will measure spatial and temporal profiles of Te using electron cyclotron emission (ECE). Other measurement capabilities include high-frequency instabilities (e.g. ELMs, NTMs, and TAEs). Since results from TFTR and JET suggest that Thomson scattering and ECE differ at high Te due to driven non-Maxwellian distributions, non-thermal features of the ITER electron distribution must be documented. The ITER environment presents other challenges including space limitations, vacuum requirements, and very high neutron fluence. Plasma control in ITER will require real-time Te. The diagnostic design that evolved from these sometimes-conflicting needs and requirements will be described component by component, with special emphasis on the integration to form a single effective diagnostic system. Supported by PPPL/US-DA via subcontract S013464-C to UT Austin.

  11. ITER ECE Diagnostic: Design Progress of IN-DA and the diagnostic role for Physics

    NASA Astrophysics Data System (ADS)

    Pandya, H. K. B.; Kumar, Ravinder; Danani, S.; Shrishail, P.; Thomas, Sajal; Kumar, Vinay; Taylor, G.; Khodak, A.; Rowan, W. L.; Houshmandyar, S.; Udintsev, V. S.; Casal, N.; Walsh, M. J.

    2017-04-01

    The ECE diagnostic system in ITER will be used for measuring the electron temperature profile evolution, electron temperature fluctuations, the runaway electron spectrum, and the radiated power in the electron cyclotron frequency range (70-1000 GHz). These measurements will be used for advanced real-time plasma control (e.g. steering the electron cyclotron heating beams) and physics studies. The scope of the Indian Domestic Agency (IN-DA) is to design and develop the polarizer splitter units; the broadband (70 to 1000 GHz) transmission lines; a high-temperature calibration source in the Diagnostics Hall; two Michelson interferometers (70 to 1000 GHz); and a 122-230 GHz radiometer. The remainder of the ITER ECE diagnostic system is the responsibility of the US Domestic Agency and the ITER Organization (IO). The design needs to conform to the ITER Organization's strict requirements for reliability, availability, maintainability and inspectability. Progress in the design and development of various subsystems and components, considering various engineering challenges and solutions, will be discussed in this paper. This paper will also highlight how various ECE measurements can enhance understanding of plasma physics in ITER.

  12. Incorporating Prototyping and Iteration into Intervention Development: A Case Study of a Dining Hall-Based Intervention

    ERIC Educational Resources Information Center

    McClain, Arianna D.; Hekler, Eric B.; Gardner, Christopher D.

    2013-01-01

    Background: Previous research from the fields of computer science and engineering highlight the importance of an iterative design process (IDP) to create more creative and effective solutions. Objective: This study describes IDP as a new method for developing health behavior interventions and evaluates the effectiveness of a dining hall--based…

  13. Engineering design skills coverage in K-12 engineering program curriculum materials in the USA

    NASA Astrophysics Data System (ADS)

    Chabalengula, Vivien M.; Mumba, Frackson

    2017-11-01

    The current K-12 Science Education framework and Next Generation Science Standards (NGSS) in the United States emphasise the integration of engineering design in science instruction to promote scientific literacy and engineering design skills among students. As such, many engineering education programmes have developed curriculum materials that are being used in K-12 settings. However, little is known about the nature and extent to which engineering design skills outlined in NGSS are addressed in these K-12 engineering education programme curriculum materials. We analysed nine K-12 engineering education programmes for the nature and extent of engineering design skills coverage. Results show that developing possible solutions and actual designing of prototypes were the most highly covered engineering design skills; specification of clear goals, criteria, and constraints received medium coverage; and defining and identifying an engineering problem, optimising the design solution, demonstrating how a prototype works, and making iterations to improve designs received low coverage. These trends were similar across grade levels and across discipline-specific curriculum materials. These results have implications for engineering design-integrated science teaching and learning in K-12 settings.

  14. ITER's Tokamak Cooling Water System and the Use of ASME Codes to Comply with French Regulations of Nuclear Pressure Equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Jan; Ferrada, Juan J; Curd, Warren

    During inductive plasma operation of ITER, fusion power will reach 500 MW with an energy multiplication factor of 10. The heat will be transferred by the Tokamak Cooling Water System (TCWS) to the environment using the secondary cooling system. Plasma operations are inherently safe even under the most severe postulated accident condition, a large in-vessel break that results in a loss-of-coolant accident. A functioning cooling water system is not required to ensure safe shutdown. Even though ITER is inherently safe, TCWS equipment (e.g., heat exchangers, piping, pressurizers) is classified as safety important components. This is because the water is predicted to contain low levels of radionuclides (e.g., activated corrosion products, tritium) with activity levels high enough to require the design of components to be in accordance with French regulations for nuclear pressure equipment, i.e., the French Order dated 12 December 2005 (ESPN). ESPN has extended the practical application of the methodology established by the Pressure Equipment Directive (97/23/EC) to nuclear pressure equipment, under French Decree 99-1046 dated 13 December 1999, and Order dated 21 December 1999 (ESP). ASME codes and supplementary analyses (e.g., Failure Modes and Effects Analysis) will be used to demonstrate that the TCWS equipment meets these essential safety requirements. TCWS is being designed to provide not only cooling, with a capacity of approximately 1 GW energy removal, but also elevated-temperature baking of the first-wall/blanket, vacuum vessel, and divertor. Additional TCWS functions include chemical control of water, draining and drying for maintenance, and facilitation of leak detection/localization. The TCWS interfaces with the majority of ITER systems, including the secondary cooling system. U.S. ITER is responsible for design, engineering, and procurement of the TCWS with industry support from an Engineering Services Organization (ESO) (AREVA Federal Services, with support from Northrop Grumman and OneCIS). The ITER International Organization (ITER-IO) is responsible for design oversight and equipment installation in Cadarache, France. TCWS equipment will be fabricated using ASME design codes with quality assurance and oversight by an Agreed Notified Body (approved by the French regulator) that will ensure regulatory compliance. This paper describes the TCWS design and how U.S. ITER and fabricators will use ASME codes to comply with EU Directives and French Orders and Decrees.

  15. LATUX: An Iterative Workflow for Designing, Validating, and Deploying Learning Analytics Visualizations

    ERIC Educational Resources Information Center

    Martinez-Maldonado, Roberto; Pardo, Abelardo; Mirriahi, Negin; Yacef, Kalina; Kay, Judy; Clayphan, Andrew

    2015-01-01

    Designing, validating, and deploying learning analytics tools for instructors or students is a challenge that requires techniques and methods from different disciplines, such as software engineering, human-computer interaction, computer graphics, educational design, and psychology. Whilst each has established its own design methodologies, we now…

  16. Application of IPAD to missile design

    NASA Technical Reports Server (NTRS)

    Santa, J. E.; Whiting, T. R.

    1974-01-01

    The application of an integrated program for aerospace-vehicle design (IPAD) to the design of a tactical missile is examined. The feasibility of modifying a proposed IPAD system for aircraft design work for use in missile design is evaluated. The tasks, cost, and schedule for the modification are presented. The basic engineering design process is described, explaining how missile design is achieved through iteration of six logical problem solving functions throughout the system studies, preliminary design, and detailed design phases of a new product. Existing computer codes used in various engineering disciplines are evaluated for their applicability to IPAD in missile design.

  17. Reducing Design Cycle Time and Cost Through Process Resequencing

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    2004-01-01

    In today's competitive environment, companies are under enormous pressure to reduce the time and cost of their design cycle. One method for reducing both time and cost is to develop an understanding of the flow of the design processes and the effects of the iterative subcycles that are found in complex design projects. Once these aspects are understood, the design manager can make decisions that take advantage of decomposition, concurrent engineering, and parallel processing techniques to reduce the total time and the total cost of the design cycle. One software tool that can aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). The DeMAID software minimizes the feedback couplings that create iterative subcycles, groups processes into iterative subcycles, and decomposes the subcycles into a hierarchical structure. The real benefits of producing the best design in the least time and at a minimum cost are obtained from sequencing the processes in the subcycles.
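
    The resequencing idea can be reduced to a tiny example: couplings that point upstream in the chosen ordering are feedbacks that create iterative subcycles, and reordering the processes can minimize them. The five-process coupling matrix below is invented and the search is exhaustive; DeMAID's knowledge-based heuristics are not reproduced here.

```python
# Count feedback couplings (marks above the diagonal of a design structure
# matrix) and find the ordering that minimizes them; the 5-process matrix is
# invented and the search is exhaustive, unlike DeMAID's heuristics.
from itertools import permutations

# dsm[i][j] = 1 means process i needs output from process j
dsm = [[0, 1, 0, 0, 0],
       [0, 0, 0, 1, 0],
       [1, 1, 0, 0, 0],
       [0, 0, 0, 0, 1],
       [0, 0, 1, 0, 0]]
n = len(dsm)

def feedback_count(order):
    # A coupling is a feedback when a process needs output from one placed later.
    pos = {p: k for k, p in enumerate(order)}
    return sum(dsm[i][j] for i in range(n) for j in range(n) if pos[j] > pos[i])

best = min(permutations(range(n)), key=feedback_count)
print("original order feedbacks:", feedback_count(tuple(range(n))))
print("best order:", best, "feedbacks:", feedback_count(best))
```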

  18. Physics and engineering design of the accelerator and electron dump for SPIDER

    NASA Astrophysics Data System (ADS)

    Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Marconato, N.; Marcuzzi, D.; Pilan, N.; Serianni, G.; Sonato, P.; Veltri, P.; Zaccaria, P.

    2011-06-01

    The ITER Neutral Beam Test Facility (PRIMA) is planned to be built at Consorzio RFX (Padova, Italy). PRIMA includes two experimental devices: a full size ion source with low voltage extraction called SPIDER and a full size neutral beam injector at full beam power called MITICA. SPIDER is the first experimental device to be built and operated, aiming at testing the extraction of a negative ion beam (made of H- and in a later stage D- ions) from an ITER size ion source. The main requirements of this experiment are a H-/D- extracted current density larger than 355/285 A m-2, an energy of 100 keV and a pulse duration of up to 3600 s. Several analytical and numerical codes have been used for the design optimization process, some of which are commercial codes, while some others were developed ad hoc. The codes are used to simulate the electrical fields (SLACCAD, BYPO, OPERA), the magnetic fields (OPERA, ANSYS, COMSOL, PERMAG), the beam aiming (OPERA, IRES), the pressure inside the accelerator (CONDUCT, STRIP), the stripping reactions and transmitted/dumped power (EAMCC), the operating temperature, stress and deformations (ALIGN, ANSYS) and the heat loads on the electron dump (ED) (EDAC, BACKSCAT). An integrated approach, taking into consideration at the same time physics and engineering aspects, has been adopted all along the design process. Particular care has been taken in investigating the many interactions between physics and engineering aspects of the experiment. According to the 'robust design' philosophy, a comprehensive set of sensitivity analyses was performed, in order to investigate the influence of the design choices on the most relevant operating parameters. The design of the SPIDER accelerator, here described, has been developed in order to satisfy with reasonable margin all the requirements given by ITER, from the physics and engineering points of view. In particular, a new approach to the compensation of unwanted beam deflections inside the accelerator and a new concept for the ED have been introduced.

  19. Integration of rocket turbine design and analysis through computer graphics

    NASA Technical Reports Server (NTRS)

    Hsu, Wayne; Boynton, Jim

    1988-01-01

    An interactive approach with engineering computer graphics is used to integrate the design and analysis processes of a rocket engine turbine into a progressive and iterative design procedure. The processes are interconnected through pre- and postprocessors. The graphics are used to generate the blade profiles, their stacking, finite element generation, and analysis presentation through color graphics. Steps of the design process discussed include pitch-line design, axisymmetric hub-to-tip meridional design, and quasi-three-dimensional analysis. The viscous two- and three-dimensional analysis codes are executed after acceptable designs are achieved and estimates of initial losses are confirmed.

  20. Systems Engineering of Electric and Hybrid Vehicles

    NASA Technical Reports Server (NTRS)

    Kurtz, D. W.; Levin, R. R.

    1986-01-01

    Technical paper notes systems engineering principles applied to development of electric and hybrid vehicles such that system performance requirements support overall program goal of reduced petroleum consumption. Paper discusses iterative design approach dictated by systems analyses. In addition to obvious performance parameters of range, acceleration rate, and energy consumption, systems engineering also considers such major factors as cost, safety, reliability, comfort, necessary supporting infrastructure, and availability of materials.

  1. U.S. Seismic Design Maps Web Application

    NASA Astrophysics Data System (ADS)

    Martinez, E.; Fee, J.

    2015-12-01

    The application computes earthquake ground motion design parameters compatible with the International Building Code and other seismic design provisions. It is the primary method for design engineers to obtain ground motion parameters for multiple building codes across the country. When designing new buildings and other structures, engineers around the country use the application. Users specify the design code of interest, location, and other parameters to obtain necessary ground motion information consisting of a high-level executive summary as well as detailed information including maps, data, and graphs. Results are formatted such that they can be directly included in a final engineering report. In addition to single-site analysis, the application supports a batch mode for simultaneous consideration of multiple locations. Finally, an application programming interface (API) is available which allows other application developers to integrate this application's results into larger applications for additional processing. Development on the application has proceeded in an iterative manner working with engineers through email, meetings, and workshops. Each iteration provided new features, improved performance, and usability enhancements. This development approach positioned the application to be integral to the structural design process and is now used to produce over 1800 reports daily. Recent efforts have enhanced the application to be a data-driven, mobile-first, responsive web application. Development is ongoing, and source code has recently been published into the open-source community on GitHub. Open-sourcing the code facilitates improved incorporation of user feedback to add new features ensuring the application's continued success.

  2. OVERVIEW OF NEUTRON MEASUREMENTS IN JET FUSION DEVICE.

    PubMed

    Batistoni, P; Villari, R; Obryk, B; Packer, L W; Stamatelatos, I E; Popovichev, S; Colangeli, A; Colling, B; Fonnesu, N; Loreti, S; Klix, A; Klosowski, M; Malik, K; Naish, J; Pillon, M; Vasilopoulou, T; De Felice, P; Pimpinella, M; Quintieri, L

    2017-10-05

    The design and operation of the ITER experimental fusion reactor require the development of neutron measurement techniques and numerical tools to derive the fusion power and the radiation field in the device and in the surrounding areas. Nuclear analyses provide essential input to the conceptual design, optimisation, engineering and safety case in ITER and power plant studies. The required radiation transport calculations are extremely challenging because of the large physical extent of the reactor plant, the complexity of the geometry, and the combination of deep penetration and streaming paths. This article reports the experimental activities which are carried out at JET to validate the neutronics measurement methods and numerical tools used in ITER and power plant design. A new deuterium-tritium campaign is proposed in 2019 at JET: the unique 14 MeV neutron yields produced will be exploited as much as possible to validate measurement techniques, codes, procedures and data currently used in ITER design, thus reducing the related uncertainties and the associated risks in machine operation. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  3. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
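
    A minimal permutation-encoded evolutionary search in the spirit described above is sketched below. It uses swap mutation and tournament selection with elitism (crossover is omitted for brevity), and the seven-process dependency set is invented; this is not DeMAID's genetic algorithm, only an illustration of ordering processes to reduce feedback couplings.

```python
# Permutation-encoded evolutionary search for a low-feedback process ordering
# (swap mutation + tournament selection with elitism; crossover omitted).
# The 7-process dependency set is invented; this is not DeMAID's own GA.
import random

random.seed(1)
n = 7
deps = {(0, 3), (1, 0), (2, 1), (3, 5), (4, 2), (5, 6), (6, 4), (2, 5), (4, 0)}  # (i, j): i needs j

def feedbacks(order):
    pos = {p: k for k, p in enumerate(order)}
    return sum(1 for i, j in deps if pos[j] > pos[i])

def mutate(order):
    a, b = random.sample(range(n), 2)
    child = list(order)
    child[a], child[b] = child[b], child[a]
    return tuple(child)

def tournament(pop, k=3):
    return min(random.sample(pop, k), key=feedbacks)

pop = [tuple(random.sample(range(n), n)) for _ in range(30)]
for _ in range(200):
    elite = min(pop, key=feedbacks)                              # keep the current best
    pop = [elite] + [mutate(tournament(pop)) for _ in range(len(pop) - 1)]

best = min(pop, key=feedbacks)
print("best ordering:", best, "feedback couplings:", feedbacks(best))
```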

  4. GLobal Integrated Design Environment

    NASA Technical Reports Server (NTRS)

    Kunkel, Matthew; McGuire, Melissa; Smith, David A.; Gefert, Leon P.

    2011-01-01

    The GLobal Integrated Design Environment (GLIDE) is a collaborative engineering application built to resolve the design session issues of real-time passing of data between multiple discipline experts in a collaborative environment. Utilizing Web protocols and multiple programming languages, GLIDE allows engineers to use the applications to which they are accustomed (in this case, Excel) to send and receive datasets via the Internet to a database-driven Web server. Traditionally, a collaborative design session consists of one or more engineers representing each discipline meeting together in a single location. The discipline leads exchange parameters and iterate through their respective processes to converge on an acceptable dataset. In cases in which the engineers are unable to meet, their parameters are passed via e-mail, telephone, facsimile, or even postal mail. The result of this slow process of data exchange would elongate a design session to weeks or even months. While the iterative process remains in place, software can now exchange parameters securely and efficiently, while at the same time allowing for much more information about a design session to be made available. GLIDE is written in a compilation of several programming languages, including REALbasic, PHP, and Microsoft Visual Basic. GLIDE client installers are available to download for both Microsoft Windows and Macintosh systems. The GLIDE client software is compatible with Microsoft Excel 2000 or later on Windows systems, and with Microsoft Excel X or later on Macintosh systems. GLIDE follows the client-server paradigm, transferring encrypted and compressed data via standard Web protocols. Currently, the engineers use Excel as a front end to the GLIDE Client, as many of their custom tools run in Excel.

  5. Deployment of e-health services - a business model engineering strategy.

    PubMed

    Kijl, Björn; Nieuwenhuis, Lambert J M; Huis in 't Veld, Rianne M H A; Hermens, Hermie J; Vollenbroek-Hutten, Miriam M R

    2010-01-01

    We designed a business model for deploying a myofeedback-based teletreatment service. An iterative and combined qualitative and quantitative action design approach was used for developing the business model and the related value network. Insights from surveys, desk research, expert interviews, workshops and quantitative modelling were combined to produce the first business model and then to refine it in three design cycles. The business model engineering strategy provided important insights which led to an improved, more viable and feasible business model and related value network design. Based on this experience, we conclude that the process of early stage business model engineering reduces risk and produces substantial savings in costs and resources related to service deployment.

  6. Design Features of the Neutral Particle Diagnostic System for the ITER Tokamak

    NASA Astrophysics Data System (ADS)

    Petrov, S. Ya.; Afanasyev, V. I.; Melnik, A. D.; Mironov, M. I.; Navolotsky, A. S.; Nesenevich, V. G.; Petrov, M. P.; Chernyshev, F. V.; Kedrov, I. V.; Kuzmin, E. G.; Lyublin, B. V.; Kozlovski, S. S.; Mokeev, A. N.

    2017-12-01

    Control of the deuterium-tritium (DT) fuel isotopic ratio is required to ensure the best performance of the ITER thermonuclear fusion reactor. The diagnostic system described in this paper allows this ratio to be measured by analyzing the hydrogen isotope fluxes, i.e. by performing neutral particle analysis (NPA). The development and supply of the NPA diagnostics for ITER was delegated to the Russian Federation, and the diagnostic is being developed at the Ioffe Institute. The system consists of two analyzers, viz., LENPA (Low Energy Neutral Particle Analyzer) with a 10-200 keV energy range and HENPA (High Energy Neutral Particle Analyzer) with a 0.1-4.0 MeV energy range. Simultaneous operation of both analyzers in different energy ranges enables researchers to measure the DT fuel ratio both in the central burning plasma (the thermonuclear burn zone) and at the edge. When developing the diagnostic complex, it was necessary to account for the impact of several factors: high levels of neutron and gamma radiation, the direct vacuum connection to the ITER vessel (implying high tritium containment), strict requirements on the reliability of all units and mechanisms, and the limited space available for accommodation of the diagnostic hardware at the ITER tokamak. The paper describes the design of the diagnostic complex and the engineering solutions that make it possible to conduct measurements under tokamak reactor conditions. The proposed engineering solutions provide a common vacuum channel, safe with respect to thermal and mechanical loads, for hydrogen isotope atoms to pass to the analyzers; ensure efficient shielding of the analyzers from the ITER stray magnetic field (up to 1 kG); provide remote control of the NPA diagnostic complex, in particular connection/disconnection of the NPA vacuum beamline from the ITER vessel; meet the ITER radiation safety requirements; and ensure measurements of the fuel isotopic ratio under high levels of neutron and gamma radiation.

  7. Preliminary consideration of CFETR ITER-like case diagnostic system.

    PubMed

    Li, G S; Yang, Y; Wang, Y M; Ming, T F; Han, X; Liu, S C; Wang, E H; Liu, Y K; Yang, W J; Li, G Q; Hu, Q S; Gao, X

    2016-11-01

    The Chinese Fusion Engineering Test Reactor (CFETR) is a new superconducting tokamak device being designed in China, which aims to bridge the gap between ITER and DEMO, where DEMO is a tokamak demonstration fusion reactor. Two diagnostic cases, an ITER-like case and a towards-DEMO case, have been considered for the CFETR early and later operating phases, respectively. In this paper, some preliminary considerations of the ITER-like case are presented. Based on the ITER diagnostic system, three versions of increased complexity and coverage of the ITER-like case diagnostic system have been developed with different goals and functions. Version A aims only at machine protection and basic control. Both version B and version C are mainly for machine protection and basic and advanced control, but version C has an increased level of redundancy necessary for improved measurement capability. The performance of these versions and the needed R&D work are outlined.

  8. Preliminary consideration of CFETR ITER-like case diagnostic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, G. S.; Liu, Y. K.; Gao, X.

    2016-11-15

    The Chinese Fusion Engineering Test Reactor (CFETR) is a new superconducting tokamak device being designed in China, which aims to bridge the gap between ITER and DEMO, where DEMO is a tokamak demonstration fusion reactor. Two diagnostic cases, an ITER-like case and a towards-DEMO case, have been considered for the CFETR early and later operating phases, respectively. In this paper, some preliminary considerations of the ITER-like case are presented. Based on the ITER diagnostic system, three versions of increased complexity and coverage of the ITER-like case diagnostic system have been developed with different goals and functions. Version A aims only at machine protection and basic control. Both version B and version C are mainly for machine protection and basic and advanced control, but version C has an increased level of redundancy necessary for improved measurement capability. The performance of these versions and the needed R&D work are outlined.

  9. Usability engineering for augmented reality: employing user-based studies to inform design.

    PubMed

    Gabbard, Joseph L; Swan, J Edward

    2008-01-01

    A major challenge, and thus opportunity, in the field of human-computer interaction and specifically usability engineering is designing effective user interfaces for emerging technologies that have no established design guidelines or interaction metaphors or introduce completely new ways for users to perceive and interact with technology and the world around them. Clearly, augmented reality is one such emerging technology. We propose a usability engineering approach that employs user-based studies to inform design, by iteratively inserting a series of user-based studies into a traditional usability engineering lifecycle to better inform initial user interface designs. We present an exemplar user-based study conducted to gain insight into how users perceive text in outdoor augmented reality settings and to derive implications for design in outdoor augmented reality. We also describe lessons learned from our experiences conducting user-based studies as part of the design process.

  10. Cell-free synthetic biology for in vitro prototype engineering.

    PubMed

    Moore, Simon J; MacDonald, James T; Freemont, Paul S

    2017-06-15

    Cell-free transcription-translation is an expanding field in synthetic biology as a rapid prototyping platform for blueprinting the design of synthetic biological devices. Exemplar efforts include translation of prototype designs into medical test kits for on-site identification of viruses (Zika and Ebola), while gene circuit cascades can be tested, debugged and re-designed within rapid turnover times. Coupled with mathematical modelling, this discipline lends itself towards the precision engineering of new synthetic life. The next stages of cell-free look set to unlock new microbial hosts that remain slow to engineer and unsuited to rapid iterative design cycles. It is hoped that the development of such systems will provide new tools to aid the transition from cell-free prototype designs to functioning synthetic genetic circuits and engineered natural product pathways in living cells. © 2017 The Author(s).

  11. Cell-free synthetic biology for in vitro prototype engineering

    PubMed Central

    Moore, Simon J.; MacDonald, James T.

    2017-01-01

    Cell-free transcription–translation is an expanding field in synthetic biology as a rapid prototyping platform for blueprinting the design of synthetic biological devices. Exemplar efforts include translation of prototype designs into medical test kits for on-site identification of viruses (Zika and Ebola), while gene circuit cascades can be tested, debugged and re-designed within rapid turnover times. Coupled with mathematical modelling, this discipline lends itself towards the precision engineering of new synthetic life. The next stages of cell-free look set to unlock new microbial hosts that remain slow to engineer and unsuited to rapid iterative design cycles. It is hoped that the development of such systems will provide new tools to aid the transition from cell-free prototype designs to functioning synthetic genetic circuits and engineered natural product pathways in living cells. PMID:28620040

  12. ITER Cryoplant Infrastructures

    NASA Astrophysics Data System (ADS)

    Fauve, E.; Monneret, E.; Voigt, T.; Vincent, G.; Forgeas, A.; Simon, M.

    2017-02-01

    The ITER Tokamak requires an average of 75 kW of refrigeration power at 4.5 K and 600 kW of refrigeration power at 80 K to maintain the nominal operating conditions of the ITER thermal shields, superconducting magnets and cryopumps. This is produced by the ITER Cryoplant, a complex cluster of refrigeration systems including, in particular, three identical Liquid Helium Plants and two identical Liquid Nitrogen Plants. Beyond the equipment directly forming part of the Cryoplant, extensive infrastructure is required. This infrastructure accounts for a large part of the Cryoplant's layout, budget and engineering effort. It is the ITER Organization's responsibility to ensure that all infrastructure is adequately sized and designed to interface with the Cryoplant. This paper presents the overall architecture of the Cryoplant and provides orders of magnitude for the Cryoplant building and utilities: electricity, cooling water, and heating, ventilation and air conditioning (HVAC).

  13. Computational methods of robust controller design for aerodynamic flutter suppression

    NASA Technical Reports Server (NTRS)

    Anderson, L. R.

    1981-01-01

    The development of Riccati iteration, a tool for the design and analysis of linear control systems, is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time-scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th-order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated for a set of eighth-order random examples. A literature review of robust controller design methods follows, which includes a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
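
    For readers unfamiliar with Riccati iteration, the sketch below shows one classical variant, the Kleinman (Newton) iteration for the continuous-time LQR Riccati equation, in which each step solves a Lyapunov equation. It is not the specific algorithm of the report, and the system matrices are an arbitrary stable example chosen so that K = 0 is a valid starting gain.

      # Kleinman/Newton iteration for A'P + PA - PBR^{-1}B'P + Q = 0 (illustrative).
      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      A = np.array([[-1.0, 1.0],
                    [ 0.0, -2.0]])      # stable example system
      B = np.array([[0.0],
                    [1.0]])
      Q = np.eye(2)
      R = np.array([[1.0]])

      K = np.zeros((1, 2))              # initial stabilizing gain (A is stable)
      for _ in range(20):
          Ak = A - B @ K
          # Solve Ak' P + P Ak = -(Q + K' R K) for the current policy
          P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
          K_new = np.linalg.solve(R, B.T @ P)   # policy improvement step
          if np.allclose(K_new, K, atol=1e-10):
              break
          K = K_new

      print("Riccati solution P:\n", P)
      print("optimal gain K:", K)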

  14. Physics and engineering studies on the MITICA accelerator: comparison among possible design solutions

    NASA Astrophysics Data System (ADS)

    Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Pilan, N.; Marcuzzi, D.; Serianni, G.; Veltri, P.

    2011-09-01

    Consorzio RFX in Padova is currently using a comprehensive set of numerical and analytical codes for the physics and engineering design of the SPIDER (Source for Production of Ion of Deuterium Extracted from RF plasma) and MITICA (Megavolt ITER Injector Concept Advancement) experiments, planned to be built at Consorzio RFX. This paper presents a set of studies on different possible geometries for the MITICA accelerator, with the objective of comparing different design concepts and choosing the most suitable one (or ones) to be further developed and possibly adopted in the experiment. Different design solutions are discussed and compared, taking into account their advantages and drawbacks from both the physics and engineering points of view.

  15. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  16. Application of iterative robust model-based optimal experimental design for the calibration of biocatalytic models.

    PubMed

    Van Daele, Timothy; Gernaey, Krist V; Ringborg, Rolf H; Börner, Tim; Heintz, Søren; Van Hauwermeiren, Daan; Grey, Carl; Krühne, Ulrich; Adlercreutz, Patrick; Nopens, Ingmar

    2017-09-01

    The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimize the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase catalyzed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is not only more accurate but also a computationally more expensive method. As a result, an important deviation between both approaches is found, confirming that linearization methods should be applied with care for nonlinear models. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:1278-1293, 2017. © 2017 American Institute of Chemical Engineers.
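
    The core idea, choosing the experiments that maximize the information content predicted by the Fisher Information Matrix, can be illustrated with a toy D-optimal design for a Michaelis-Menten-type initial-rate model. The model form, parameter values, noise level and candidate substrate levels below are assumptions for illustration, not the paper's ω-transaminase system.

      # Toy D-optimal experimental design from the Fisher Information Matrix.
      import numpy as np
      from itertools import combinations

      Vmax, Km, sigma = 1.0, 2.0, 0.05           # current parameter estimates (assumed)

      def sensitivities(S):
          """d(rate)/d(Vmax) and d(rate)/d(Km) at substrate concentration S."""
          return np.array([S / (Km + S), -Vmax * S / (Km + S) ** 2])

      candidates = np.linspace(0.1, 10.0, 25)     # feasible substrate levels

      def fim(design):
          J = np.array([sensitivities(S) for S in design])
          return J.T @ J / sigma ** 2             # Fisher Information Matrix

      best = max(combinations(candidates, 3), key=lambda d: np.linalg.det(fim(d)))
      print("D-optimal 3-point design:", np.round(best, 2))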

  17. Constructing integrable high-pressure full-current free-boundary stellarator magnetohydrodynamic equilibrium solutions

    NASA Astrophysics Data System (ADS)

    Hudson, S. R.; Monticello, D. A.; Reiman, A. H.; Strickler, D. J.; Hirshman, S. P.; Ku, L.-P.; Lazarus, E.; Brooks, A.; Zarnstorff, M. C.; Boozer, A. H.; Fu, G.-Y.; Neilson, G. H.

    2003-10-01

    For the (non-axisymmetric) stellarator class of plasma confinement devices to be feasible candidates for fusion power stations it is essential that, to a good approximation, the magnetic field lines lie on nested flux surfaces; however, the inherent lack of a continuous symmetry implies that magnetic islands responsible for breaking the smooth topology of the flux surfaces are guaranteed to exist. Thus, the suppression of magnetic islands is a critical issue for stellarator design, particularly for small aspect ratio devices. Pfirsch-Schlüter currents, diamagnetic currents and resonant coil fields contribute to the formation of magnetic islands, and the challenge is to design the plasma and coils such that these effects cancel. Magnetic islands in free-boundary high-pressure full-current stellarator magnetohydrodynamic equilibria are suppressed using a procedure based on the Princeton Iterative Equilibrium Solver (Reiman and Greenside 1986 Comput. Phys. Commun. 43 157) which iterates the equilibrium equations to obtain the plasma equilibrium. At each iteration, changes to a Fourier representation of the coil geometry are made to cancel resonant fields produced by the plasma. The changes are constrained to preserve certain measures of engineering acceptability and to preserve the stability of ideal kink modes. As the iterations continue, the coil geometry and the plasma simultaneously converge to an equilibrium in which the island content is negligible, the plasma is stable to ideal kink modes, and the coils satisfy engineering constraints. The method is applied to a candidate plasma and coil design for the National Compact Stellarator eXperiment (Reiman et al 2001 Phys. Plasma 8 2083).

  18. Concise Review: Organ Engineering: Design, Technology, and Integration.

    PubMed

    Kaushik, Gaurav; Leijten, Jeroen; Khademhosseini, Ali

    2017-01-01

    Engineering complex tissues and whole organs has the potential to dramatically impact translational medicine in several avenues. Organ engineering is a discipline that integrates biological knowledge of embryological development, anatomy, physiology, and cellular interactions with enabling technologies including biocompatible biomaterials and biofabrication platforms such as three-dimensional bioprinting. When engineering complex tissues and organs, core design principles must be taken into account, such as the structure-function relationship, biochemical signaling, mechanics, gradients, and spatial constraints. Technological advances in biomaterials, biofabrication, and biomedical imaging allow for in vitro control of these factors to recreate in vivo phenomena. Finally, organ engineering emerges as an integration of biological design and technical rigor. An overall workflow for organ engineering and guiding technology to advance biology as well as a perspective on necessary future iterations in the field is discussed. Stem Cells 2017;35:51-60. © 2016 AlphaMed Press.

  19. Principles of Biomimetic Vascular Network Design Applied to a Tissue-Engineered Liver Scaffold

    PubMed Central

    Hoganson, David M.; Pryor, Howard I.; Spool, Ira D.; Burns, Owen H.; Gilmore, J. Randall

    2010-01-01

    Branched vascular networks are a central component of scaffold architecture for solid organ tissue engineering. In this work, seven biomimetic principles were established as the major guiding technical design considerations of a branched vascular network for a tissue-engineered scaffold. These biomimetic design principles were applied to a branched radial architecture to develop a liver-specific vascular network. Iterative design changes and computational fluid dynamic analysis were used to optimize the network before mold manufacturing. The vascular network mold was created using a new mold technique that achieves a 1:1 aspect ratio for all channels. In vitro blood flow testing confirmed the physiologic hemodynamics of the network as predicted by computational fluid dynamic analysis. These results indicate that this biomimetic liver vascular network design will provide a foundation for developing complex vascular networks for solid organ tissue engineering that achieve physiologic blood flow. PMID:20001254

  20. Principles of biomimetic vascular network design applied to a tissue-engineered liver scaffold.

    PubMed

    Hoganson, David M; Pryor, Howard I; Spool, Ira D; Burns, Owen H; Gilmore, J Randall; Vacanti, Joseph P

    2010-05-01

    Branched vascular networks are a central component of scaffold architecture for solid organ tissue engineering. In this work, seven biomimetic principles were established as the major guiding technical design considerations of a branched vascular network for a tissue-engineered scaffold. These biomimetic design principles were applied to a branched radial architecture to develop a liver-specific vascular network. Iterative design changes and computational fluid dynamic analysis were used to optimize the network before mold manufacturing. The vascular network mold was created using a new mold technique that achieves a 1:1 aspect ratio for all channels. In vitro blood flow testing confirmed the physiologic hemodynamics of the network as predicted by computational fluid dynamic analysis. These results indicate that this biomimetic liver vascular network design will provide a foundation for developing complex vascular networks for solid organ tissue engineering that achieve physiologic blood flow.

  1. Comparing Freshman and doctoral engineering students in design: mapping with a descriptive framework

    NASA Astrophysics Data System (ADS)

    Carmona Marques, P.

    2017-11-01

    This paper reports the results of a study of engineering students' approaches to an open-ended design problem. To carry this out, sketches and interviews were collected from 9 freshman (first-year) and 10 doctoral engineering students as they designed solutions for orange squeezers. Sketches and interviews were analysed and mapped with a descriptive 'ideation framework' (IF) of the design process, to document and compare their design creativity (Carmona Marques, P., A. Silva, E. Henriques, and C. Magee. 2014. "A Descriptive Framework of the Design Process from a Dual Cognitive Engineering Perspective." International Journal of Design Creativity and Innovation 2 (3): 142-164). The results show that the designers worked in a manner largely consistent with the IF for generalisation and specialisation loops. Doctoral students produced more alternative solutions during the ideation process. In addition, compared to the freshmen, the doctoral students used the generalisation loop of the IF, working at higher levels of abstraction. The iterative nature of design is highlighted throughout this study, a potential contribution to decreasing the gap between the two groups in engineering education.

  2. Engineering design in the primary school: applying stem concepts to build an optical instrument

    NASA Astrophysics Data System (ADS)

    King, Donna; English, Lyn D.

    2016-12-01

    Internationally there is a need for research that focuses on STEM (Science, Technology, Engineering and Mathematics) education to equip students with the skills needed for a rapidly changing future. One way to do this is through designing engineering activities that reflect real-world problems and contextualise students' learning of STEM concepts. As such, this study examined the learning that occurred when fifth-grade students completed an optical engineering activity using an iterative engineering design model. Through a qualitative methodology using a case study design, we analysed multiple data sources including students' design sketches from eight focus groups. Three key findings emerged: first, the collaborative process of the first design sketch enabled students to apply core STEM concepts to model construction; second, during the construction stage students used experimentation for the positioning of lenses, mirrors and tubes resulting in a simpler 'working' model; and third, the redesign process enabled students to apply structural changes to their design. The engineering design model was useful for structuring stages of design, construction and redesign; however, we suggest a more flexible approach for advanced applications of STEM concepts in the future.

  3. A generalized computer code for developing dynamic gas turbine engine models (DIGTEM)

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.

    1984-01-01

    This paper describes DIGTEM (digital turbofan engine model), a computer program that simulates two spool, two stream (turbofan) engines. DIGTEM was developed to support the development of a real time multiprocessor based engine simulator being designed at the Lewis Research Center. The turbofan engine model in DIGTEM contains steady state performance maps for all the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. DIGTEM features an implicit integration scheme for integrating stiff systems and trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off design points and iterates to a balanced engine condition. Transients are generated by defining the engine inputs as functions of time in a user written subroutine (TMRSP). Closed loop controls can also be simulated. DIGTEM is generalized in the aerothermodynamic treatment of components. This feature, along with DIGTEM's trimming at a design point, make it a very useful tool for developing a model of a specific turbofan engine.
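
    The trimming idea, computing correction coefficients at the design point so that the dynamic equations balance and then reusing them off-design while the model iterates to a new balance, can be sketched with a one-rotor toy model. The component "maps" and numbers below are invented and bear no relation to DIGTEM's data.

      # Toy illustration (not DIGTEM) of design-point trimming and off-design rebalancing.
      def turbine_torque(speed):          # crude stand-in for a component map
          return 120.0 - 0.8 * speed

      def compressor_torque(speed):       # crude stand-in for a component map
          return 25.0 + 0.25 * speed

      design_speed = 100.0
      # correction coefficient chosen so the rotor equation balances at the design point
      c_trim = compressor_torque(design_speed) / turbine_torque(design_speed)

      def dN_dt(speed, inertia=5.0):
          """Rotor acceleration with the same trim coefficient reused off-design."""
          return (c_trim * turbine_torque(speed) - compressor_torque(speed)) / inertia

      # off-design transient: explicit Euler integration until the engine rebalances
      speed, dt = 90.0, 0.01
      for _ in range(20000):
          speed += dN_dt(speed) * dt

      print("trim coefficient :", round(c_trim, 4))
      print("rebalanced speed :", round(speed, 2))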

  4. A generalized computer code for developing dynamic gas turbine engine models (DIGTEM)

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.

    1983-01-01

    This paper describes DIGTEM (digital turbofan engine model), a computer program that simulates two spool, two stream (turbofan) engines. DIGTEM was developed to support the development of a real time multiprocessor based engine simulator being designed at the Lewis Research Center. The turbofan engine model in DIGTEM contains steady state performance maps for all the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. DIGTEM features an implicit integration scheme for integrating stiff systems and trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off design points and iterates to a balanced engine condition. Transients are generated by defining the engine inputs as functions of time in a user written subroutine (TMRSP). Closed loop controls can also be simulated. DIGTEM is generalized in the aerothermodynamic treatment of components. This feature, along with DIGTEM's trimming at a design point, make it a very useful tool for developing a model of a specific turbofan engine.

  5. Establishing Physical and Engineering Science Base to Bridge from ITER to Demo

    NASA Astrophysics Data System (ADS)

    Peng, Y.-K. Martin; Abdou, M.; Gates, D.; Hegna, C.; Hill, D.; Najmabadi, F.; Navratil, G.; Parker, R.

    2007-11-01

    A Nuclear Component Testing (NCT) Discussion Group emerged recently to clarify how "a lowered-risk, reduced-cost approach can provide a progressive fusion environment beyond the ITER level to explore, discover, and help establish the remaining, critically needed physical and engineering sciences knowledge base for Demo." The group, assuming success of ITER and other contemporary projects, identified critical "gap-filling" investigations: plasma startup, tritium self-sufficiency, plasma facing surface performance and maintainability, first wall/blanket/divertor materials defect control and lifetime management, and remote handling. Only standard or spherical tokamak plasma conditions below the advanced regime are assumed, to lower the anticipated physics risk to continuous operation (~2 weeks). Modular designs and remote handling capabilities are included to mitigate the risk of component failure and ease replacement. Aspect ratio should be varied to lower the cost, accounting for the contending physics risks and the near-term R&D. Cost- and time-effective staging from H-H, D-D, to D-T will also be considered. *Work supported by USDOE.

  6. An application generator for rapid prototyping of Ada real-time control software

    NASA Technical Reports Server (NTRS)

    Johnson, Jim; Biglari, Haik; Lehman, Larry

    1990-01-01

    The need to increase engineering productivity and decrease software life cycle costs in real-time system development establishes a motivation for a method of rapid prototyping. The design by iterative rapid prototyping technique is described. A tool which facilitates such a design methodology for the generation of embedded control software is described.

  7. Software Estimates Costs of Testing Rocket Engines

    NASA Technical Reports Server (NTRS)

    Smith, C. L.

    2003-01-01

    Simulation-Based Cost Model (SiCM), a discrete event simulation developed in Extend, simulates pertinent aspects of the testing of rocket propulsion test articles for the purpose of estimating the costs of such testing during time intervals specified by its users. A user enters input data for control of simulations; information on the nature of, and activity in, a given testing project; and information on resources. Simulation objects are created on the basis of this input. Costs of the engineering-design, construction, and testing phases of a given project are estimated from the numbers and labor rates of engineers and technicians employed in each phase; the duration of each phase; the costs of materials used in each phase; and, for the testing phase, the rate of maintenance of the testing facility. The three main outputs of SiCM are (1) a curve, updated at each iteration of the simulation, that shows overall expenditures vs. time during the interval specified by the user; (2) a histogram of the total costs from all iterations of the simulation; and (3) a table displaying means and variances of cumulative costs for each phase from all iterations. Other outputs include spending curves for each phase.
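
    A highly simplified stand-in for this kind of cost roll-up is a Monte Carlo loop over the three phases, producing the mean, variance, and distribution of cumulative costs. The staffing levels, labor rates, durations, and material costs below are invented, and SiCM itself is a discrete-event model built in Extend, not this script.

      # Monte Carlo phase-cost roll-up (illustrative numbers only, not SiCM data).
      import random, statistics

      def phase_cost(weeks_lo, weeks_hi, engineers, technicians, materials):
          weeks = random.uniform(weeks_lo, weeks_hi)
          labor = weeks * 40 * (engineers * 95.0 + technicians * 55.0)   # assumed $/h rates
          return labor + materials

      def one_iteration():
          design       = phase_cost(8, 14,  engineers=4, technicians=1, materials=20e3)
          construction = phase_cost(12, 20, engineers=2, technicians=6, materials=250e3)
          testing      = phase_cost(6, 16,  engineers=3, technicians=4, materials=80e3)
          return design, construction, testing

      runs = [one_iteration() for _ in range(5000)]
      totals = [sum(r) for r in runs]
      print("mean total cost : %.0f" % statistics.mean(totals))
      print("std dev         : %.0f" % statistics.stdev(totals))
      for name, col in zip(("design", "construction", "testing"), zip(*runs)):
          print("%-12s mean %.0f  variance %.3g"
                % (name, statistics.mean(col), statistics.variance(col)))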

  8. Nozzle Numerical Analysis Of The Scimitar Engine

    NASA Astrophysics Data System (ADS)

    Battista, F.; Marini, M.; Cutrone, L.

    2011-05-01

    This work describes part of the activities on the LAPCAT-II A2 vehicle, in which, starting from the available conceptual vehicle design and the related pre-cooled turbo-ramjet engine called SCIMITAR, the well-thought-out assumptions made for the performance figures of different components during the LAPCAT-I iteration process are assessed in more detail. This paper presents a numerical analysis aimed at optimizing the nozzle contour of the LAPCAT A2 SCIMITAR engine designed by Reaction Engines Ltd. (REL) (see Figure 1). In particular, the nozzle shape optimization process is presented for cruise conditions. All computations have been carried out using the CIRA C3NS code under non-equilibrium conditions. The effect of considering detailed or reduced chemical kinetic schemes has been analyzed, with a particular focus on the production of pollutants. An analysis of engine performance parameters, such as thrust and combustion efficiency, has been carried out.

  9. Understanding Biological Regulation Through Synthetic Biology.

    PubMed

    Bashor, Caleb J; Collins, James J

    2018-05-20

    Engineering synthetic gene regulatory circuits proceeds through iterative cycles of design, building, and testing. Initial circuit designs must rely on often-incomplete models of regulation established by the fields of reductive inquiry: biochemistry and molecular and systems biology. As differences in designed and experimentally observed circuit behavior are inevitably encountered, investigated, and resolved, each turn of the engineering cycle can force a resynthesis in understanding of natural network function. Here, we outline research that uses the process of gene circuit engineering to advance biological discovery. Synthetic gene circuit engineering research has not only refined our understanding of cellular regulation but furnished biologists with a toolkit that can be directed at natural systems to exact precision manipulation of network structure. As we discuss, using circuit engineering to predictively reorganize, rewire, and reconstruct cellular regulation serves as the ultimate means of testing and understanding how cellular phenotype emerges from systems-level network function.

  10. Experiences with a generator tool for building clinical application modules.

    PubMed

    Kuhn, K A; Lenz, R; Elstner, T; Siegele, H; Moll, R

    2003-01-01

    To elaborate the main system characteristics and relevant deployment experiences for the health information system (HIS) Orbis/OpenMed, which is in widespread use in Germany, Austria, and Switzerland. During a three-year deployment in a 1,200-bed university hospital, in which the system underwent significant improvements, the system's functionality and its software design were analyzed in detail. We focus on an integrated CASE tool for generating embedded clinical applications and for incremental system evolution. We present a participatory and iterative software engineering process developed for efficient utilization of such a tool. The system's functionality is comparable to that of other commercial products; its components are embedded in a vendor-specific application framework, and standard interfaces are used for connecting subsystems. The integrated generator tool is a remarkable feature; it became a key factor of our project. Tool-generated applications are workflow-enabled and embedded into the overall database schema. Rapid prototyping and iterative refinement are supported, so application modules can be adapted to the users' work practice. We consider tools supporting an iterative and participatory software engineering process highly relevant for health information system architects. The potential of a system to continuously evolve and to be effectively adapted to changing needs may be more important than sophisticated but hard-coded HIS functionality. More work will focus on HIS software design and on software engineering. Methods and tools are needed for quick and robust adaptation of systems to health care processes and changing requirements.

  11. On the Development of a Computing Infrastructure that Facilitates IPPD from a Decision-Based Design Perspective

    NASA Technical Reports Server (NTRS)

    Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.

    1995-01-01

    Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. Georgia Tech has proposed the development of an Integrated Design Engineering Simulator that will merge Integrated Product and Process Development with interdisciplinary analysis techniques and state-of-the-art computational technologies. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. The current status of development is given and future directions are outlined.

  12. Engineering and manufacturing of ITER first mirror mock-ups.

    PubMed

    Joanny, M; Travère, J M; Salasca, S; Corre, Y; Marot, L; Thellier, C; Gallay, G; Cammarata, C; Passier, B; Fermé, J J

    2010-10-01

    Most of the ITER optical diagnostics aimed at viewing and monitoring plasma facing components will use in-vessel metallic mirrors. These mirrors will be exposed to a severe plasma environment, which imposes important tradeoffs on their design and manufacturing. As a consequence, investigations are being carried out on diagnostic mirrors toward the development of optimal and reliable solutions. The goals are to assess the manufacturing feasibility of the mirror coatings, evaluate the manufacturing capability and associated performance for mirror cooling and polishing, and finally determine the costs and delivery times of the first prototypes with diameters of 200 and 500 mm. Three kinds of ITER candidate mock-ups are being designed and manufactured: rhodium films on a stainless steel substrate, molybdenum on a TZM substrate, and silver films on a stainless steel substrate. The status of the project is presented in this paper.

  13. Doppler Lidar System Design via Interdisciplinary Design Concept at NASA Langley Research Center - Part II

    NASA Technical Reports Server (NTRS)

    Crasner, Aaron I.; Scola,Salvatore; Beyon, Jeffrey Y.; Petway, Larry B.

    2014-01-01

    Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration was able to reduce the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components with excessive performance with smaller components custom-designed for the power system. Thermal modeling software was used to run steady state thermal analyses, which were used to both validate the designs and recommend further changes. Analyses were run on each redesign, as well as the original system. Thermal Desktop was used to run trade studies to account for uncertainty and assumptions about fan performance and boundary conditions. The studies suggested that, even if the assumptions were significantly wrong, the redesigned systems would remain within operating temperature limits.

  14. Model-Based Systems Engineering in Concurrent Engineering Centers

    NASA Technical Reports Server (NTRS)

    Iwata, Curtis; Infeld, Samantha; Bracken, Jennifer Medlin; McGuire; McQuirk, Christina; Kisdi, Aron; Murphy, Jonathan; Cole, Bjorn; Zarifian, Pezhman

    2015-01-01

    Concurrent Engineering Centers (CECs) are specialized facilities with a goal of generating and maturing engineering designs by enabling rapid design iterations. This is accomplished by co-locating a team of experts (either physically or virtually) in a room with a focused design goal and a limited timeline of a week or less. The systems engineer uses a model of the system to capture the relevant interfaces and manage the overall architecture. A single model that integrates other design information and modeling allows the entire team to visualize the concurrent activity and identify conflicts more efficiently, potentially resulting in a systems model that will continue to be used throughout the project lifecycle. Performing systems engineering using such a system model is the definition of model-based systems engineering (MBSE); therefore, CECs evolving their approach to incorporate advances in MBSE are more successful in reducing time and cost needed to meet study goals. This paper surveys space mission CECs that are in the middle of this evolution, and the authors share their experiences in order to promote discussion within the community.

  15. Model-Based Systems Engineering in Concurrent Engineering Centers

    NASA Technical Reports Server (NTRS)

    Iwata, Curtis; Infeld, Samatha; Bracken, Jennifer Medlin; McGuire, Melissa; McQuirk, Christina; Kisdi, Aron; Murphy, Jonathan; Cole, Bjorn; Zarifian, Pezhman

    2015-01-01

    Concurrent Engineering Centers (CECs) are specialized facilities with a goal of generating and maturing engineering designs by enabling rapid design iterations. This is accomplished by co-locating a team of experts (either physically or virtually) in a room with a narrow design goal and a limited timeline of a week or less. The systems engineer uses a model of the system to capture the relevant interfaces and manage the overall architecture. A single model that integrates other design information and modeling allows the entire team to visualize the concurrent activity and identify conflicts more efficiently, potentially resulting in a systems model that will continue to be used throughout the project lifecycle. Performing systems engineering using such a system model is the definition of model-based systems engineering (MBSE); therefore, CECs evolving their approach to incorporate advances in MBSE are more successful in reducing time and cost needed to meet study goals. This paper surveys space mission CECs that are in the middle of this evolution, and the authors share their experiences in order to promote discussion within the community.

  16. Unsteady Probabilistic Analysis of a Gas Turbine System

    NASA Technical Reports Server (NTRS)

    Brown, Marilyn

    2003-01-01

    In this work, we have considered an annular cascade configuration subjected to unsteady inflow conditions. The unsteady response calculation has been implemented into the time marching CFD code, MSUTURBO. The computed steady state results for the pressure distribution demonstrated good agreement with experimental data. We have computed results for the amplitudes of the unsteady pressure over the blade surfaces. With the increase in gas turbine engine structural complexity and performance over the past 50 years, structural engineers have created an array of safety nets to ensure against component failures in turbine engines. In order to reduce what is now considered to be excessive conservatism and yet maintain the same adequate margins of safety, there is a pressing need to explore methods of incorporating probabilistic design procedures into engine development. Probabilistic methods combine and prioritize the statistical distributions of each design variable, generate an interactive distribution and offer the designer a quantified relationship between robustness, endurance and performance. The designer can therefore iterate between weight reduction, life increase, engine size reduction, speed increase etc.
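
    The probabilistic-design idea sketched in the abstract, assigning distributions to the design variables and propagating them to a quantified margin, can be illustrated with a crude Monte Carlo stress calculation. The stress surrogate, the distributions, and the allowable value below are illustrative assumptions, not engine data.

      # Crude probabilistic-design illustration: propagate design-variable scatter.
      import math, random, statistics

      def blade_stress(speed_rpm, density, radius):
          """Very rough centrifugal stress surrogate (Pa) -- illustration only."""
          omega = speed_rpm * 2 * math.pi / 60.0
          return density * (omega * radius) ** 2 / 2.0

      allowable = 900e6                           # Pa, assumed material limit
      samples = []
      for _ in range(20000):
          speed   = random.gauss(12000, 200)      # rpm
          density = random.gauss(8190, 80)        # kg/m^3 (Inconel-like, assumed)
          radius  = random.gauss(0.35, 0.002)     # m
          samples.append(blade_stress(speed, density, radius))

      p_exceed = sum(s > allowable for s in samples) / len(samples)
      print("mean stress : %.1f MPa" % (statistics.mean(samples) / 1e6))
      print("P(exceed)   : %.4f" % p_exceed)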

  17. A user-centered, iterative engineering approach for advanced biomass cookstove design and development

    NASA Astrophysics Data System (ADS)

    Shan, Ming; Carter, Ellison; Baumgartner, Jill; Deng, Mengsi; Clark, Sierra; Schauer, James J.; Ezzati, Majid; Li, Jiarong; Fu, Yu; Yang, Xudong

    2017-09-01

    Unclean combustion of solid fuel for cooking and other household energy needs leads to severe household air pollution and adverse health impacts in adults and children. Replacing traditional solid fuel stoves with high efficiency, low-polluting semi-gasifier stoves can potentially contribute to addressing this global problem. The success of semi-gasifier cookstove implementation initiatives depends not only on the technical performance and safety of the stove, but also the compatibility of the stove design with local cooking practices, the needs and preferences of stove users, and community economic structures. Many past stove design initiatives have failed to address one or more of these dimensions during the design process, resulting in failure of stoves to achieve long-term, exclusive use and market penetration. This study presents a user-centered, iterative engineering design approach to developing a semi-gasifier biomass cookstove for rural Chinese homes. Our approach places equal emphasis on stove performance and meeting the preferences of individuals most likely to adopt the clean stove technology. Five stove prototypes were iteratively developed following energy market and policy evaluation, laboratory and field evaluations of stove performance and user experience, and direct interactions with stove users. The most current stove prototype achieved high performance in the field on thermal efficiency (ISO Tier 3) and pollutant emissions (ISO Tier 4), and was received favorably by rural households in the Sichuan province of Southwest China. Among household cooks receiving the final prototype of the intervention stove, 88% reported lighting and using it at least once. At five months post-intervention, the semi-gasifier stoves were used at least once on an average of 68% [95% CI: 43, 93] of days. Our proposed design strategy can be applied to other stove development initiatives in China and other countries.

  18. A Framework for Automating Cost Estimates in Assembly Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calton, T.L.; Peters, R.R.

    1998-12-09

    When a product concept emerges, the manufacturing engineer is asked to sketch out a production strategy and estimate its cost. The engineer is given an initial product design, along with a schedule of expected production volumes. The engineer then determines the best approach to manufacturing the product, comparing a variety of alternative production strategies. The engineer must consider capital cost, operating cost, lead time, and other issues in an attempt to maximize profits. After making these basic choices and sketching the design of overall production, the engineer produces estimates of the required capital, operating costs, and production capacity. This process may iterate as the product design is refined in order to improve its performance or manufacturability. The focus of this paper is on the development of computer tools to aid manufacturing engineers in their decision-making processes. This computer software tool provides a framework in which accurate cost estimates can be seamlessly derived from design requirements at the start of any engineering project. The result is faster cycle times through first-pass success and lower life cycle cost due to requirements-driven design and accurate cost estimates derived early in the process.

  19. Redesign and Rehost of the BIG STICK Nuclear Wargame Simulation

    DTIC Science & Technology

    1988-12-01

    The 4GT software development approach described by Pressman [16] consists of four iterative phases: the requirements gathering phase, the design strategy phase, ...

  20. Active spectroscopic measurements using the ITER diagnostic system.

    PubMed

    Thomas, D M; Counsell, G; Johnson, D; Vasu, P; Zvonkov, A

    2010-10-01

    Active (beam-based) spectroscopic measurements are intended to provide a number of crucial parameters for the ITER device being built in Cadarache, France. These measurements include the determination of impurity ion temperatures, absolute densities, and velocity profiles, as well as the determination of the plasma current density profile. Because ITER will be the first experiment to study long timescale (∼1 h) fusion burn plasmas, of particular interest is the ability to study the profile of the thermalized helium ash resulting from the slowing down and confinement of the fusion alphas. These measurements will utilize both the 1 MeV heating neutral beams and a dedicated 100 keV hydrogen diagnostic neutral beam. A number of separate instruments are being designed and built by several of the ITER partners to meet the different spectroscopic measurement needs and to provide the maximum physics information. In this paper, we describe the planned measurements, the intended diagnostic ensemble, and we will discuss specific physics and engineering challenges for these measurements in ITER.

  1. The MEOW lunar project for education and science based on concurrent engineering approach

    NASA Astrophysics Data System (ADS)

    Roibás-Millán, E.; Sorribes-Palmer, F.; Chimeno-Manguán, M.

    2018-07-01

    The use of concurrent engineering in the design of space missions makes it possible to take into account, in an interrelated methodology, the high level of coupling and iteration among mission subsystems during the preliminary conceptual phase. This work presents the result of applying concurrent engineering over a short time span to design the main elements of the preliminary design of a lunar exploration mission, developed within the ESA Academy Concurrent Engineering Challenge 2017. During this programme, students of the Master in Space Systems at the Technical University of Madrid designed a low-cost satellite to find water at the Moon's south pole as a prospect for a future human lunar base. The resulting mission, The Moon Explorer And Observer of Water/Ice (MEOW), comprises a 262 kg spacecraft to be launched into a Geostationary Transfer Orbit as a secondary payload in the 2023/2025 time frame. A three-month Weak Stability Boundary transfer via the Sun-Earth L1 Lagrange point allows for high launch-window flexibility. The different aspects of the mission (orbit analysis, spacecraft design and payload) and the possibilities of concurrent engineering are described.

  2. Millstone: software for multiplex microbial genome analysis and engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodman, Daniel B.; Kuznetsov, Gleb; Lajoie, Marc J.

    Inexpensive DNA sequencing and advances in genome editing have made computational analysis a major rate-limiting step in adaptive laboratory evolution and microbial genome engineering. Here, we describe Millstone, a web-based platform that automates genotype comparison and visualization for projects with up to hundreds of genomic samples. To enable iterative genome engineering, Millstone allows users to design oligonucleotide libraries and create successive versions of reference genomes. Millstone is open source and easily deployable to a cloud platform, local cluster, or desktop, making it a scalable solution for any lab.

  3. Millstone: software for multiplex microbial genome analysis and engineering.

    PubMed

    Goodman, Daniel B; Kuznetsov, Gleb; Lajoie, Marc J; Ahern, Brian W; Napolitano, Michael G; Chen, Kevin Y; Chen, Changping; Church, George M

    2017-05-25

    Inexpensive DNA sequencing and advances in genome editing have made computational analysis a major rate-limiting step in adaptive laboratory evolution and microbial genome engineering. We describe Millstone, a web-based platform that automates genotype comparison and visualization for projects with up to hundreds of genomic samples. To enable iterative genome engineering, Millstone allows users to design oligonucleotide libraries and create successive versions of reference genomes. Millstone is open source and easily deployable to a cloud platform, local cluster, or desktop, making it a scalable solution for any lab.

  4. Millstone: software for multiplex microbial genome analysis and engineering

    DOE PAGES

    Goodman, Daniel B.; Kuznetsov, Gleb; Lajoie, Marc J.; ...

    2017-05-25

    Inexpensive DNA sequencing and advances in genome editing have made computational analysis a major rate-limiting step in adaptive laboratory evolution and microbial genome engineering. Here, we describe Millstone, a web-based platform that automates genotype comparison and visualization for projects with up to hundreds of genomic samples. To enable iterative genome engineering, Millstone allows users to design oligonucleotide libraries and create successive versions of reference genomes. Millstone is open source and easily deployable to a cloud platform, local cluster, or desktop, making it a scalable solution for any lab.

  5. New design of cable-in-conduit conductor for application in future fusion reactors

    NASA Astrophysics Data System (ADS)

    Qin, Jinggang; Wu, Yu; Li, Jiangang; Liu, Fang; Dai, Chao; Shi, Yi; Liu, Huajun; Mao, Zhehua; Nijhuis, Arend; Zhou, Chao; Yagotintsev, Konstantin A.; Lubkemann, Ruben; Anvar, V. A.; Devred, Arnaud

    2017-11-01

    The China Fusion Engineering Test Reactor (CFETR) is a new tokamak device whose magnet system includes toroidal field, central solenoid (CS) and poloidal field coils. The main goal is to build a fusion engineering tokamak reactor with about 1 GW fusion power and self-sufficiency by blanket. In order to reach this high performance, the magnet field target is 15 T. However, the huge electromagnetic load caused by high field and current is a threat for conductor degradation under cycling. The conductor with a short-twist-pitch (STP) design has large stiffness, which enables a significant performance improvement in view of load and thermal cycling. But the conductor with STP design has a remarkable disadvantage: it can easily cause severe strand indentation during cabling. The indentation can reduce the strand performance, especially under high load cycling. In order to overcome this disadvantage, a new design is proposed. The main characteristic of this new design is an updated layout in the triplet. The triplet is made of two Nb3Sn strands and one soft copper strand. The twist pitch of the two Nb3Sn strands is large and cabled first. The copper strand is then wound around the two superconducting strands (CWS) with a shorter twist pitch. The following cable stages layout and twist pitches are similar to the ITER CS conductor with STP design. One short conductor sample with a similar scale to the ITER CS was manufactured and tested with the Twente Cable Press to investigate the mechanical properties, AC loss and internal inspection by destructive examination. The results are compared to the STP conductor (ITER CS and CFETR CSMC) tests. The results show that the new conductor design has similar stiffness, but much lower strand indentation than the STP design. The new design shows potential for application in future fusion reactors.

  6. Constructing Integrable High-pressure Full-current Free-boundary Stellarator Magnetohydrodynamic Equilibrium Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S.R. Hudson; D.A. Monticello; A.H. Reiman

    For the (non-axisymmetric) stellarator class of plasma confinement devices to be feasible candidates for fusion power stations it is essential that, to a good approximation, the magnetic field lines lie on nested flux surfaces; however, the inherent lack of a continuous symmetry implies that magnetic islands responsible for breaking the smooth topology of the flux surfaces are guaranteed to exist. Thus, the suppression of magnetic islands is a critical issue for stellarator design, particularly for small aspect ratio devices. Pfirsch-Schlüter currents, diamagnetic currents, and resonant coil fields contribute to the formation of magnetic islands, and the challenge is to design the plasma and coils such that these effects cancel. Magnetic islands in free-boundary high-pressure full-current stellarator magnetohydrodynamic equilibria are suppressed using a procedure based on the Princeton Iterative Equilibrium Solver [Reiman and Greenside, Comput. Phys. Commun. 43 (1986) 157], which iterates the equilibrium equations to obtain the plasma equilibrium. At each iteration, changes to a Fourier representation of the coil geometry are made to cancel resonant fields produced by the plasma. The changes are constrained to preserve certain measures of engineering acceptability and to preserve the stability of ideal kink modes. As the iterations continue, the coil geometry and the plasma simultaneously converge to an equilibrium in which the island content is negligible, the plasma is stable to ideal kink modes, and the coils satisfy engineering constraints. The method is applied to a candidate plasma and coil design for the National Compact Stellarator Experiment [Reiman, et al., Phys. Plasmas 8 (May 2001) 2083].

  7. Effects of Discovery, Iteration, and Collaboration in Laboratory Courses on Undergraduates' Research Career Intentions Fully Mediated by Student Ownership.

    PubMed

    Corwin, Lisa A; Runyon, Christopher R; Ghanem, Eman; Sandy, Moriah; Clark, Greg; Palmer, Gregory C; Reichler, Stuart; Rodenbusch, Stacia E; Dolan, Erin L

    2018-06-01

    Course-based undergraduate research experiences (CUREs) provide a promising avenue to attract a larger and more diverse group of students into research careers. CUREs are thought to be distinctive in offering students opportunities to make discoveries, collaborate, engage in iterative work, and develop a sense of ownership of their lab course work. Yet how these elements affect students' intentions to pursue research-related careers remains unexplored. To address this knowledge gap, we collected data on three design features thought to be distinctive of CUREs (discovery, iteration, collaboration) and on students' levels of ownership and career intentions from ∼800 undergraduates who had completed CURE or inquiry courses, including courses from the Freshman Research Initiative (FRI), which has a demonstrated positive effect on student retention in college and in science, technology, engineering, and mathematics. We used structural equation modeling to test relationships among the design features and student ownership and career intentions. We found that discovery, iteration, and collaboration had small but significant effects on students' intentions; these effects were fully mediated by student ownership. Students in FRI courses reported significantly higher levels of discovery, iteration, and ownership than students in other CUREs. FRI research courses alone had a significant effect on students' career intentions.

  8. Doppler Lidar System Design via Interdisciplinary Design Concept at NASA Langley Research Center - Part I

    NASA Technical Reports Server (NTRS)

    Boyer, Charles M.; Jackson, Trevor P.; Beyon, Jeffrey Y.; Petway, Larry B.

    2013-01-01

    Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration was able to reduce the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components with excessive performance with smaller components custom-designed for the power system. Mechanical placement collaboration reduced potential electromagnetic interference (EMI). Through application of newly selected electrical components and thermal analysis data, a total electronic chassis redesign was accomplished. Use of an innovative forced convection tunnel heat sink was employed to meet and exceed project requirements for cooling, mass reduction, and volume reduction. Functionality was a key concern to make efficient use of airflow, and accessibility was also imperative to allow for servicing of chassis internals. The collaborative process provided for accelerated design maturation with substantiated function.

  9. Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.

    2004-01-01

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial and error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either yields a new local optimum or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with increasing dimensionality. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.
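
    The database/surrogate loop described above can be sketched in a few lines; the snippet below is a toy illustration (not the paper's Cartesian/CAD workflow) in which an RBF surrogate of a sampled objective is optimized and then refined with direct evaluations. The test objective and settings are invented for the example.

```python
# Toy database/surrogate optimization loop (not the paper's Cartesian/CAD workflow):
# fit an RBF surrogate to a database of sampled designs, optimize the surrogate,
# then evaluate the candidate with the "high-fidelity" model and grow the database.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def high_fidelity(x):
    """Stand-in for an expensive high-fidelity simulation (hypothetical test function)."""
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(5.0 * np.sum(x))

rng = np.random.default_rng(1)
dim = 2
X = rng.uniform(-1.0, 1.0, size=(10, dim))          # initial design database
y = np.array([high_fidelity(x) for x in X])

for it in range(8):
    surrogate = RBFInterpolator(X, y, smoothing=1e-9)   # database approximation model
    x0 = X[np.argmin(y)]                                # start from the current best design
    res = minimize(lambda x: surrogate(x[None, :])[0], x0, method="Nelder-Mead")
    # small jitter keeps the database free of exact duplicates
    x_new = np.clip(res.x + rng.normal(scale=1e-6, size=dim), -1.0, 1.0)
    y_new = high_fidelity(x_new)                        # direct evaluation refines the database
    X, y = np.vstack([X, x_new]), np.append(y, y_new)
    print(f"iteration {it}: best objective so far = {y.min():.4f}")
```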

  10. Hardware architecture design of image restoration based on time-frequency domain computation

    NASA Astrophysics Data System (ADS)

    Wen, Bo; Zhang, Jing; Jiao, Zipeng

    2013-10-01

    Image restoration algorithms based on time-frequency domain computation (TFDC) are highly mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. First, the main module is designed by analyzing the common processing and numerical calculations. Then, to improve commonality, an iteration control module is planned for iterative algorithms. In addition, to reduce the computational cost and memory requirements, optimizations are suggested for the time-consuming modules, which include the two-dimensional FFT/IFFT and complex-number calculations. Finally, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The results show that the TFDC hardware architecture and its optimizations can be applied to image restoration algorithms based on TFDC, with good algorithm commonality, hardware realizability, and high efficiency.
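
    As a software-level illustration of the kind of iterative time-frequency domain computation such an architecture targets (not the proposed hardware itself), the sketch below runs a Landweber-style deconvolution built entirely from two-dimensional FFT/IFFT operations on a synthetic blurred image.

```python
# Landweber-style iterative deconvolution built entirely from 2-D FFT/IFFT operations,
# as a software stand-in for the time/frequency-domain processing the architecture
# targets (illustrative only; image and blur kernel are synthetic).
import numpy as np

def landweber_deconvolve(blurred, psf, beta=1.0, n_iter=50):
    """Iteratively estimate f from g = h * f in the frequency domain."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.zeros_like(G)
    for _ in range(n_iter):
        # F_{k+1} = F_k + beta * conj(H) * (G - H * F_k)
        F = F + beta * np.conj(H) * (G - H * F)
    return np.real(np.fft.ifft2(F))

# Synthetic test: Gaussian blur of a random image
x = np.linspace(-1.0, 1.0, 64)
X, Y = np.meshgrid(x, x)
psf = np.exp(-(X**2 + Y**2) / 0.01)
psf /= psf.sum()
truth = np.random.default_rng(2).random((64, 64))
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = landweber_deconvolve(blurred, psf)
print("relative restoration error:", np.linalg.norm(restored - truth) / np.linalg.norm(truth))
```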

  11. New Parallel Algorithms for Structural Analysis and Design of Aerospace Structures

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1998-01-01

    Subspace and Lanczos iterations have been developed, well documented, and widely accepted as efficient methods for obtaining p-lowest eigen-pair solutions of large-scale, practical engineering problems. The focus of this paper is to incorporate recent developments in vectorized sparse technologies in conjunction with Subspace and Lanczos iterative algorithms for computational enhancements. Numerical performance, in terms of accuracy and efficiency of the proposed sparse strategies for Subspace and Lanczos algorithm, is demonstrated by solving for the lowest frequencies and mode shapes of structural problems on the IBM-R6000/590 and SunSparc 20 workstations.
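
    A minimal sketch of a Lanczos solve for the p lowest eigenpairs of a sparse generalized problem K x = λ M x is shown below; it uses SciPy's shift-invert eigsh on a toy stiffness/mass pair rather than the authors' vectorized sparse implementation.

```python
# Lanczos solve (SciPy eigsh, shift-invert) for the p lowest eigenpairs of
# K x = lambda M x on a toy 1-D finite-difference stiffness/mass pair; this is a
# generic sketch, not the authors' vectorized sparse implementation.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n, p = 500, 4
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
K = sp.diags([off, main, off], [-1, 0, 1], format="csc")   # stiffness-like matrix
M = sp.identity(n, format="csc")                           # lumped mass matrix

# sigma=0 with which="LM" targets eigenvalues nearest zero, i.e. the lowest modes
vals, vecs = eigsh(K, k=p, M=M, sigma=0.0, which="LM")
analytic = [4.0 * np.sin(np.pi * j / (2 * (n + 1))) ** 2 for j in range(1, p + 1)]
print("computed lowest eigenvalues:", np.sort(vals))
print("analytic lowest eigenvalues:", analytic)
```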

  12. Task toward a Realization of Commercial Tokamak Fusion Plants in 2050 -The Role of ITER and the Succeeding Developments- 4.Technology and Material Research in Fusion Power Plant Development

    NASA Astrophysics Data System (ADS)

    Akiba, Masato; Matsui, Hideki; Takatsu, Hideyuki; Konishi, Satoshi

    Technical issues regarding the fusion power plant that are required to be developed in the period of ITER construction and operation, both with ITER and with other facilities that complement ITER, are described in this section. Three major fields are considered to be important in fusion technology. Section 4.1 summarizes blanket study and ITER Test Blanket Module (TBM) development that focuses its effort on the first generation power blanket to be installed in DEMO. ITER will be equipped with 6 TBMs which are developed under each party's fusion program. In Japan, the solid breeder using water as a coolant is the primary candidate, and He-cooled pebble bed is the alternative. Other liquid options such as LiPb, Li or molten salt are developed by other parties' initiatives. The Test Blanket Working Group (TBWG) is coordinating these efforts. Japanese universities are investigating advanced concepts and fundamental crosscutting technologies. Section 4.2 introduces material development and particularly, the international irradiation facility, IFMIF. Reduced activation ferritic/martensitic steels are identified as promising candidates for the structural material of the first generation fusion blanket, while vanadium alloys and SiC/SiC composites are pursued as advanced options. The IFMIF is currently planning the next phase of joint activity, EVEDA (Engineering Validation and Engineering Design Activity), that encompasses construction. Material studies together with the ITER TBM will provide essential technical information for development of the fusion power plant. Other technical issues to be addressed regarding the first generation fusion power plant are summarized in section 4.3. Development of components for ITER has made remarkable progress on major essential technologies that are also necessary for future fusion plants; however, many still require further improvement for power plant application. Such areas include the divertor, plasma heating/current drive, magnets, tritium, and remote handling. There remain many other technical issues for the power plant which require integrated efforts.

  13. CANISTER HANDLING FACILITY DESCRIPTION DOCUMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J.F. Beesley

    The purpose of this facility description document (FDD) is to establish requirements and associated bases that drive the design of the Canister Handling Facility (CHF), which will allow the design effort to proceed to license application. This FDD will be revised at strategic points as the design matures. This FDD identifies the requirements and describes the facility design, as it currently exists, with emphasis on attributes of the design provided to meet the requirements. This FDD is an engineering tool for design control; accordingly, the primary audience and users are design engineers. This FDD is part of an iterative design process. It leads the design process with regard to the flowdown of upper tier requirements onto the facility. Knowledge of these requirements is essential in performing the design process. The FDD follows the design with regard to the description of the facility. The description provided in this FDD reflects the current results of the design process.

  14. The Complex Dynamics of Student Engagement in Novel Engineering Design Activities

    NASA Astrophysics Data System (ADS)

    McCormick, Mary

    In engineering design, making sense of "messy," design situations is at the heart of the discipline (Schon, 1983); engineers in practice bring structure to design situations by organizing, negotiating, and coordinating multiple aspects (Bucciarelli, 1994; Stevens, Johri, & O'Connor, 2014). In classroom settings, however, students are more often given well-defined, content-focused engineering tasks (Jonassen, 2014). These tasks are based on the assumption that elementary students are unable to grapple with the complexity or open-endedness of engineering design (Crismond & Adams, 2012). The data I present in this dissertation suggest the opposite. I show that students are not only able to make sense of, or frame (Goffman, 1974), complex design situations, but that their framings dynamically involve their nascent abilities for engineering design. The context of this work is Novel Engineering, a larger research project that explores using children's literature as an access point for engineering design. Novel Engineering activities are inherently messy: there are characters with needs, settings with implicit constraints, and rich design situations. In a series of three studies, I show how students' framings of Novel Engineering design activities involve their reasoning and acting as beginning engineers. In the first study, I show two students whose caring for the story characters contributes to their stability in framing the task: they identify the needs of their fictional clients and iteratively design a solution to meet their clients' needs. In the second, I show how students' shifting and negotiating framings influence their engineering assumptions and evaluation criteria. In the third, I show how students' coordinating framings involve navigating a design process to meet clients' needs, classroom expectations, and technical requirements. Collectively, these studies contribute to literature by documenting students' productive beginnings in engineering design. The implications span research and practice, specifically targeting how we attend to and support students as they engage in engineering design.

  15. The Design and Development of a Web-Interface for the Software Engineering Automation System

    DTIC Science & Technology

    2001-09-01

    ...application on the Internet. SUBJECT TERMS: Computer Aided Prototyping, Real Time Systems, Java. ...difficult. Developing the entire system only to find it does not meet the customer's needs is a tremendous waste of time. Real-time systems need a... Software prototyping is an iterative software development methodology utilized to improve the analysis and design of real-time systems [2]. One...

  16. Optimization of the ITER electron cyclotron equatorial launcher for improved heating and current drive functional capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farina, D.; Figini, L.; Henderson, M.

    2014-06-15

    The design of the ITER Electron Cyclotron Heating and Current Drive (EC H and CD) system has evolved in the last years both in goals and functionalities by considering an expanded range of applications. A large effort has been devoted to a better integration of the equatorial and the upper launchers, both from the point of view of the performance and of the design impact on the engineering constraints. However, from the analysis of the ECCD performance in two reference H-mode scenarios at burn (the inductive H-mode and the advanced non-inductive scenario), it was clear that the EC power deposition was not optimal for steady-state applications in the plasma region around mid radius. An optimization study of the equatorial launcher is presented here aiming at removing this limitation of the EC system capabilities. Changing the steering of the equatorial launcher from toroidal to poloidal ensures EC power deposition out to the normalized toroidal radius ρ ≈ 0.6, and nearly doubles the EC driven current around mid radius, without significant performance degradation in the core plasma region. In addition to the improved performance, the proposed design change is able to relax some engineering design constraints on both launchers.

  17. Performance Limiting Flow Processes in High-State Loading High-Mach Number Compressors

    DTIC Science & Technology

    2008-03-13

    ...A strong incentive exists to reduce airfoil count in aircraft engines... (Advanced Turbine Engine). A basic constraint on blade reduction is seen from the Euler turbine equation, which shows that, although a design can be carried... on the vane-to-rotor blade ratio of 8:11). Within the MSU Turbo code, specifying a small number of time steps requires more iteration at each time...
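
    For reference, the Euler turbine equation mentioned in the excerpt is commonly written as below, where U is the blade speed and c_θ the tangential (swirl) flow velocity at the rotor exit (2) and inlet (1).

```latex
% Euler turbine (turbomachinery) equation: the stagnation enthalpy change per unit
% mass across a rotor equals the change in U * c_theta between exit and inlet.
\Delta h_0 = U_2\, c_{\theta 2} - U_1\, c_{\theta 1}
```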

  18. Quiet Clean Short-haul Experimental Engine (QCSEE) composite fan frame design report

    NASA Technical Reports Server (NTRS)

    Mitchell, S. C.

    1978-01-01

    An advanced composite frame which is flight-weight and integrates the functions of several structures was developed for the over the wing (OTW) engine and for the under the wing (UTW) engine. The composite material system selected as the basic material for the frame is Type AS graphite fiber in a Hercules 3501 epoxy resin matrix. The frame was analyzed using a finite element digital computer program. This program was used in an iterative fashion to arrive at practical thicknesses and ply orientations to achieve a final design that met all strength and stiffness requirements for critical conditions. Using this information, the detail design of each of the individual parts of the frame was completed and released. On the basis of these designs, the required tooling was designed to fabricate the various component parts of the frame. To verify the structural integrity of the critical joint areas, a full-scale test was conducted on the frame before engine testing. The testing of the frame established critical spring constants and subjected the frame to three critical load cases. The successful static load test was followed by 153 and 58 hours respectively of successful running on the UTW and OTW engines.

  19. Low heat transfer oxidizer heat exchanger design and analysis

    NASA Technical Reports Server (NTRS)

    Kanic, P. G.; Kmiec, T. D.; Peckham, R. J.

    1987-01-01

    The RL10-IIB engine, a derivative of the RL10, is capable of multi-mode thrust operation. This engine operates at two low thrust levels: tank head idle (THI), which is approximately 1 to 2 percent of full thrust, and pumped idle (PI), which is 10 percent of full thrust. Operation at THI provides vehicle propellant settling thrust and efficient engine thermal conditioning; PI operation provides vehicle tank pre-pressurization and maneuver thrust for low-g deployment. Stable combustion of the RL10-IIB engine at THI and PI thrust levels can be accomplished by providing gaseous oxygen at the propellant injector. Using gaseous hydrogen from the thrust chamber jacket as an energy source, a heat exchanger can be used to vaporize liquid oxygen without creating flow instability. This report summarizes the design and analysis of a United Aircraft Products (UAP) low-rate heat transfer heat exchanger concept for the RL10-IIB rocket engine. The design represents a second iteration of the RL10-IIB heat exchanger investigation program. The design and analysis of the first heat exchanger effort is presented in more detail in NASA CR-174857. Testing of the previous design is detailed in NASA CR-179487.

  20. Arc detection for the ICRF system on ITER

    NASA Astrophysics Data System (ADS)

    D'Inca, R.

    2011-12-01

    The ICRF system for ITER is designed to respect the high voltage breakdown limits. However, arcs can still statistically happen and must be quickly detected and suppressed by shutting the RF power down. For the conception of a reliable and efficient detector, the analysis of the mechanism of arcs is necessary to find their unique signature. Numerous systems have been conceived to address the issue of arc detection: VSWR-based detectors, RF noise detectors, sound detectors, optical detectors, and S-matrix-based detectors. Until now, none of them has succeeded in demonstrating the fulfillment of all requirements, and the studies for ITER now follow three directions: improvement of the existing concepts to fix their flaws, development of new theoretically fully compliant detectors (like the GUIDAR) and combination of several detectors to benefit from the advantages of each of them. Together with the physical and engineering challenges, the development of an arc detection system for ITER raises methodological concerns to extrapolate the results from basic experiments and present machines to the ITER scale ICRF system and to conduct a relevant risk analysis.
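
    As an illustration of the first class of detectors listed above, the sketch below implements a simple VSWR-threshold trip from forward/reflected power readings; the threshold, sample counts, and power levels are invented and are not ITER design values.

```python
# Simple VSWR-threshold arc trip from directional-coupler forward/reflected power
# readings (the first detector class listed above); thresholds, sample counts, and
# power levels are invented, not ITER design values.
import math

def vswr(p_forward_w, p_reflected_w):
    """Voltage standing wave ratio from forward and reflected power."""
    gamma = math.sqrt(p_reflected_w / p_forward_w)          # |reflection coefficient|
    return (1.0 + gamma) / (1.0 - gamma) if gamma < 1.0 else float("inf")

def arc_trip(samples, threshold=2.0, consecutive=3):
    """Return the sample index at which RF power should be shut down, or None."""
    count = 0
    for i, (p_fwd, p_ref) in enumerate(samples):
        count = count + 1 if vswr(p_fwd, p_ref) > threshold else 0
        if count >= consecutive:
            return i
    return None

# Example: a simulated arc raises the reflected power from sample 5 onwards
readings = [(1e6, 1e4)] * 5 + [(1e6, 4e5)] * 5
print("trip at sample:", arc_trip(readings))
```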

  1. Design, Manufacture, and Experimental Serviceability Validation of ITER Blanket Components

    NASA Astrophysics Data System (ADS)

    Leshukov, A. Yu.; Strebkov, Yu. S.; Sviridenko, M. N.; Safronov, V. M.; Putrik, A. B.

    2017-12-01

    In 2014, the Russian Federation and the ITER International Organization signed two Procurement Arrangements (PAs) for ITER blanket components: 1.6.P1ARF.01 "Blanket First Wall" of February 14, 2014, and 1.6.P3.RF.01 "Blanket Module Connections" of December 19, 2014. The first PA stipulates development, manufacture, testing, and delivery to the ITER site of 179 Enhanced Heat Flux (EHF) First Wall (FW) Panels intended for withstanding the heat flux from the plasma up to 4.7MW/m2. Two Russian institutions, NIIEFA (Efremov Institute) and NIKIET, are responsible for the implementation of this PA. NIIEFA manufactures plasma-facing components (PFCs) of the EHF FW panels and performs the final assembly and testing of the panels, and NIKIET manufactures FW beam structures, load-bearing structures of PFCs, and all elements of the panel attachment system. As for the second PA, NIKIET is the sole official supplier of flexible blanket supports, electrical insulation key pads (EIKPs), and blanket module/vacuum vessel electrical connectors. Joint activities of NIKIET and NIIEFA for implementing PA 1.6.P1ARF.01 are briefly described, and information on implementation of PA 1.6.P3.RF.01 is given. Results of the engineering design and research efforts in the scope of the above PAs in 2015-2016 are reported, and results of developing the technology for manufacturing ITER blanket components are presented.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaupa, M.; Sartori, E.

    Megavolt ITER Injector Concept Advancement is the full scale prototype of the heating and current drive neutral beam injectors for ITER, to be built at Consorzio RFX (Padova). The engineering design of its components is challenging: the total heat loads they will be subjected to (expected between 2 and 19 MW), the high heat fluxes (up to 20 MW/m²), and the beam pulse duration up to 1 h, set demanding requirements for reliable active cooling circuits. In support of the design, the thermo-hydraulic behavior of each cooling circuit under steady state condition has been investigated by using one-dimensional models. The final results, obtained considering a number of optimizations for the cooling circuits, show that all the requirements in terms of flow rate, temperature, and pressure drop are properly fulfilled.
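
    The kind of one-dimensional steady-state check described above can be illustrated with a Darcy-Weisbach estimate of velocity, Reynolds number, and pressure drop for a single cooling channel; the geometry and flow numbers below are hypothetical, not values from the injector prototype above.

```python
# Rough one-dimensional steady-state check of a single water-cooled channel
# (Darcy-Weisbach with the Swamee-Jain friction factor), of the kind used to verify
# flow rate, velocity, and pressure drop against requirements. Geometry and flow
# numbers are hypothetical, not values from the injector prototype above.
import math

def channel_check(m_dot, length, diameter, rho=998.0, mu=1.0e-3, roughness=1.5e-6):
    """Return (pressure drop [Pa], velocity [m/s], Reynolds number) for a round channel."""
    area = math.pi * diameter ** 2 / 4.0
    velocity = m_dot / (rho * area)
    reynolds = rho * velocity * diameter / mu
    friction = 0.25 / math.log10(roughness / (3.7 * diameter) + 5.74 / reynolds ** 0.9) ** 2
    dp = friction * (length / diameter) * 0.5 * rho * velocity ** 2
    return dp, velocity, reynolds

dp, v, re = channel_check(m_dot=0.2, length=10.0, diameter=0.008)
print(f"velocity = {v:.2f} m/s, Re = {re:.0f}, pressure drop = {dp / 1e5:.2f} bar")
```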

  3. Engineering Design Theory: Applying the Success of the Modern World to Campaign Creation

    DTIC Science & Technology

    2009-05-21

    ...and school of thought) to the simple methods of design. This progression is analogous to Peter Senge's levels of learning disciplines. Senge... iterative learning and adaptive action that develops and employs critical and creative thinking, enabling leaders to apply the necessary logic to... overcome mental rigidity and develop group insight, the Army must learn to utilize group learning and thinking, through a fluid and creative open process...

  4. A Burning Plasma Experiment: the role of international collaboration

    NASA Astrophysics Data System (ADS)

    Prager, Stewart

    2003-04-01

    The world effort to develop fusion energy is at the threshold of a new stage in its research: the investigation of burning plasmas. A burning plasma is self-heated. The 100 million degree temperature of the plasma is maintained by the heat generated by the fusion reactions themselves, as occurs in burning stars. The fusion-generated alpha particles produce new physical phenomena that are strongly coupled together as a nonlinear complex system, posing a major plasma physics challenge. Two attractive options are being considered by the US fusion community as burning plasma facilities: the international ITER experiment and the US-based FIRE experiment. ITER (the International Thermonuclear Experimental Reactor) is a large, power-plant scale facility. It was conceived and designed by a partnership of the European Union, Japan, the Soviet Union, and the United States. At the completion of the first engineering design in 1998, the US discontinued its participation. FIRE (the Fusion Ignition Research Experiment) is a smaller, domestic facility that is at an advanced pre-conceptual design stage. Each facility has different scientific, programmatic and political implications. Selecting the optimal path for burning plasma science is itself a challenge. Recently, the Fusion Energy Sciences Advisory Committee recommended a dual path strategy in which the US seek to rejoin ITER, but be prepared to move forward with FIRE if the ITER negotiations do not reach fruition by July, 2004. Either the ITER or FIRE experiment would reveal the behavior of burning plasmas, generate large amounts of fusion power, and be a huge step in establishing the potential of fusion energy to contribute to the world's energy security.

  5. Design advances of the Core Plasma Thomson Scattering diagnostic for ITER

    NASA Astrophysics Data System (ADS)

    Scannell, R.; Maslov, M.; Naylor, G.; O'Gorman, T.; Kempenaars, M.; Carr, M.; Bilkova, P.; Bohm, P.; Giudicotti, L.; Pasqualotto, R.; Bassan, M.; Vayakis, G.; Walsh, M.; Huxford, R.

    2017-11-01

    The Core Plasma Thomson Scattering (CPTS) diagnostic on ITER performs measurements of the electron temperature and density profiles which are critical to the understanding of the ITER plasma. The diagnostic must satisfy the ITER project requirements, which translate to requirements on performance as well as reliability, safety and engineering. The implications are particularly challenging for beam dump lifetime, the need for continuous active alignment of the diagnostic during operation, allowable neutron flux in the interspace and the protection of the first mirror from plasma deposition. The CPTS design has been evolving over a number of years. One recent improvement is that the collection optics have been modified to include freeform surfaces. These freeform surfaces introduce extra complexity to the manufacturing but provide greater flexibility in the design. The greater flexibility allows, for example, a lower neutron throughput or the use of fewer surfaces while improving optical performance. Performance assessment has shown that scattering from a 1064 nm laser will be sufficient to meet the measurement requirements, at least for the system at the start of operations. Optical transmission at λ < 600 nm is expected to degrade over the ITER lifetime due to fibre darkening and deposition on the first mirror. For this reason, it is proposed that the diagnostic should additionally include measurements of TS 'depolarised light' and a 1319 nm laser system. These additional techniques have different spectral and polarisation dependencies compared to scattering from a 1064 nm laser and hence provide greater robustness in the inferred measurements of Te and ne in the core.

  6. Consistent design schematics for biological systems: standardization of representation in biological engineering

    PubMed Central

    Matsuoka, Yukiko; Ghosh, Samik; Kitano, Hiroaki

    2009-01-01

    The discovery by design paradigm driving research in synthetic biology entails the engineering of de novo biological constructs with well-characterized input–output behaviours and interfaces. The construction of biological circuits requires iterative phases of design, simulation and assembly, leading to the fabrication of a biological device. In order to represent engineered models in a consistent visual format and further simulating them in silico, standardization of representation and model formalism is imperative. In this article, we review different efforts for standardization, particularly standards for graphical visualization and simulation/annotation schemata adopted in systems biology. We identify the importance of integrating the different standardization efforts and provide insights into potential avenues for developing a common framework for model visualization, simulation and sharing across various tools. We envision that such a synergistic approach would lead to the development of global, standardized schemata in biology, empowering deeper understanding of molecular mechanisms as well as engineering of novel biological systems. PMID:19493898

  7. User engineering: A new look at system engineering

    NASA Technical Reports Server (NTRS)

    Mclaughlin, Larry L.

    1987-01-01

    User Engineering is a new System Engineering perspective responsible for defining and maintaining the user view of the system. Its elements are a process to guide the project and customer, a multidisciplinary team including hard and soft sciences, rapid prototyping tools to build user interfaces quickly and modify them frequently at low cost, and a prototyping center for involving users and designers in an iterative way. The main consideration is reducing the risk that the end user will not or cannot effectively use the system. The process begins with user analysis to produce cognitive and work style models, and task analysis to produce user work functions and scenarios. These become major drivers of the human computer interface design which is presented and reviewed as an interactive prototype by users. Feedback is rapid and productive, and user effectiveness can be measured and observed before the system is built and fielded. Requirements are derived via the prototype and baselined early to serve as an input to the architecture and software design.

  8. Application of computational fluid dynamics to the design of the Space Transportation Main Engine subscale nozzle

    NASA Technical Reports Server (NTRS)

    Garrett, J. L.; Syed, S. A.

    1992-01-01

    CFD analyses of the Space Transportation Main Engine film/dump cooled subscale nozzle are presented, with an emphasis on the timely impact of CFD in the design of the subscale nozzle secondary coolant system. Calculations were performed with the Generalized Aerodynamic Simulation Program (GASP), using a Baldwin-Lomax turbulence model and finite-rate hydrogen-oxygen chemistry. Design iterations for both the secondary coolant cavity passage and the secondary coolant lip are presented. In addition, validation of the GASP chemistry and turbulence models by comparison with data and other CFD codes is presented for a hypersonic laminar separation corner, a backward facing step, and a 2D scramjet nozzle with hydrogen-oxygen kinetics.

  9. Development and Application of an Integrated Approach toward NASA Airspace Systems Research

    NASA Technical Reports Server (NTRS)

    Barhydt, Richard; Fong, Robert K.; Abramson, Paul D.; Koenke, Ed

    2008-01-01

    The National Aeronautics and Space Administration's (NASA) Airspace Systems Program is contributing air traffic management research in support of the 2025 Next Generation Air Transportation System (NextGen). Contributions support research and development needs provided by the interagency Joint Planning and Development Office (JPDO). These needs generally call for integrated technical solutions that improve system-level performance and work effectively across multiple domains and planning time horizons. In response, the Airspace Systems Program is pursuing an integrated research approach and has adapted systems engineering best practices for application in a research environment. Systems engineering methods aim to enable researchers to methodically compare different technical approaches, consider system-level performance, and develop compatible solutions. Systems engineering activities are performed iteratively as the research matures. Products of this approach include a demand and needs analysis, system-level descriptions focusing on NASA research contributions, system assessment and design studies, and common system-level metrics, scenarios, and assumptions. Results from the first systems engineering iteration include a preliminary demand and needs analysis; a functional modeling tool; and initial system-level metrics, scenario characteristics, and assumptions. Demand and needs analysis results suggest that several advanced concepts can mitigate demand/capacity imbalances for NextGen, but fall short of enabling three-times current-day capacity at the nation's busiest airports and airspace. Current activities are focusing on standardizing metrics, scenarios, and assumptions, conducting system-level performance assessments of integrated research solutions, and exploring key system design interfaces.

  10. Assessing students' performance in software requirements engineering education using scoring rubrics

    NASA Astrophysics Data System (ADS)

    Mkpojiogu, Emmanuel O. C.; Hussain, Azham

    2017-10-01

    The study investigates how helpful the use of scoring rubrics is in the performance assessment of software requirements engineering students and whether its use can lead to students' performance improvement in the development of software requirements artifacts and models. Scoring rubrics were used by two instructors to assess the cognitive performance of a student in the design and development of software requirements artifacts. The study results indicate that the use of scoring rubrics is very helpful in objectively assessing the performance of software requirements or software engineering students. Furthermore, the results revealed that the use of scoring rubrics can also give a good indication of achievement trends, showing whether or not a student is improving across repeated or iterative assessments. In a nutshell, its use leads to the performance improvement of students. The results provided some insights for further investigation and will be beneficial to researchers, requirements engineers, system designers, developers and project managers.
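
    A toy illustration of rubric-based scoring across iterative assessments is sketched below; the criteria, weights, and levels are invented and do not come from the study's instrument.

```python
# Toy rubric scoring across two iterative assessments (criteria, weights, and levels
# are hypothetical, not the study's instrument): a weighted score per attempt shows
# whether a student's requirements artifacts are improving.
RUBRIC_WEIGHTS = {"completeness": 0.4, "consistency": 0.3, "traceability": 0.3}

def rubric_score(levels):
    """Weighted score from per-criterion levels on a 1-4 scale."""
    return sum(RUBRIC_WEIGHTS[c] * levels[c] for c in RUBRIC_WEIGHTS)

first_attempt = {"completeness": 2, "consistency": 3, "traceability": 2}
second_attempt = {"completeness": 3, "consistency": 3, "traceability": 4}

s1, s2 = rubric_score(first_attempt), rubric_score(second_attempt)
print(f"iteration 1: {s1:.2f}, iteration 2: {s2:.2f}, improving: {s2 > s1}")
```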

  11. Model-Eliciting Activities (MEAs) as a Bridge between Engineering Education Research and Mathematics Education Research

    ERIC Educational Resources Information Center

    Hamilton, Eric; Lesh, Richard; Lester, Frank; Brilleslyper, Michael

    2008-01-01

    This article introduces Model-Eliciting Activities (MEAs) as a form of case study team problem-solving. MEA design focuses on eliciting from students conceptual models that they iteratively revise in problem-solving. Though developed by mathematics education researchers to study the evolution of mathematical problem-solving expertise in middle…

  12. Human Engineering of Space Vehicle Displays and Controls

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Holden, Kritina L.; Boyer, Jennifer; Stephens, John-Paul; Ezer, Neta; Sandor, Aniko

    2010-01-01

    Proper attention to the integration of the human needs in the vehicle displays and controls design process creates a safe and productive environment for crew. Although this integration is critical for all phases of flight, for crew interfaces that are used during dynamic phases (e.g., ascent and entry), the integration is particularly important because of demanding environmental conditions. This panel addresses the process of how human engineering involvement ensures that human-system integration occurs early in the design and development process and continues throughout the lifecycle of a vehicle. This process includes the development of requirements and quantitative metrics to measure design success, research on fundamental design questions, human-in-the-loop evaluations, and iterative design. Processes and results from research on displays and controls; the creation and validation of usability, workload, and consistency metrics; and the design and evaluation of crew interfaces for NASA's Crew Exploration Vehicle are used as case studies.

  13. Flutter optimization in fighter aircraft design

    NASA Technical Reports Server (NTRS)

    Triplett, W. E.

    1984-01-01

    The efficient design of aircraft structure involves a series of compromises among various engineering disciplines. These compromises are necessary to ensure the best overall design. To effectively reconcile the various technical constraints requires a number of design iterations, with the accompanying long elapsed time. Automated procedures can reduce the elapsed time, improve productivity and hold the promise of optimum designs which may be missed by batch processing. Several examples are given of optimization applications including aeroelastic constraints. Particular attention is given to the success or failure of each example and the lessons learned. The specific applications are shown. The final two applications were made recently.

  14. The ICARE Method

    NASA Technical Reports Server (NTRS)

    Henke, Luke

    2010-01-01

    The ICARE method is a flexible, widely applicable method for systems engineers to solve problems and resolve issues in a complete and comprehensive manner. The method can be tailored by diverse users for direct application to their function (e.g. system integrators, design engineers, technical discipline leads, analysts, etc.). The clever acronym, ICARE, instills the attitude of accountability, safety, technical rigor and engagement in the problem resolution: Identify, Communicate, Assess, Report, Execute (ICARE). This method was developed through observation of Space Shuttle Propulsion Systems Engineering and Integration (PSE&I) office personnel approach in an attempt to succinctly describe the actions of an effective systems engineer. Additionally it evolved from an effort to make a broadly-defined checklist for a PSE&I worker to perform their responsibilities in an iterative and recursive manner. The National Aeronautics and Space Administration (NASA) Systems Engineering Handbook states, engineering of NASA systems requires a systematic and disciplined set of processes that are applied recursively and iteratively for the design, development, operation, maintenance, and closeout of systems throughout the life cycle of the programs and projects. ICARE is a method that can be applied within the boundaries and requirements of NASA s systems engineering set of processes to provide an elevated sense of duty and responsibility to crew and vehicle safety. The importance of a disciplined set of processes and a safety-conscious mindset increases with the complexity of the system. Moreover, the larger the system and the larger the workforce, the more important it is to encourage the usage of the ICARE method as widely as possible. According to the NASA Systems Engineering Handbook, elements of a system can include people, hardware, software, facilities, policies and documents; all things required to produce system-level results, qualities, properties, characteristics, functions, behavior and performance. The ICARE method can be used to improve all elements of a system and, consequently, the system-level functional, physical and operational performance. Even though ICARE was specifically designed for a systems engineer, any person whose job is to examine another person, product, or process can use the ICARE method to improve effectiveness, implementation, usefulness, value, capability, efficiency, integration, design, and/or marketability. This paper provides the details of the ICARE method, emphasizing the method s application to systems engineering. In addition, a sample of other, non-systems engineering applications are briefly discussed to demonstrate how ICARE can be tailored to a variety of diverse jobs (from project management to parenting).

  15. Defining Gas Turbine Engine Performance Requirements for the Large Civil TiltRotor (LCTR2)

    NASA Technical Reports Server (NTRS)

    Snyder, Christopher A.

    2013-01-01

    Defining specific engine requirements is a critical part of identifying technologies and operational models for potential future rotary wing vehicles. NASA's Fundamental Aeronautics Program, Subsonic Rotary Wing Project has identified the Large Civil TiltRotor (LCTR) as the configuration to best meet technology goals. This notional vehicle concept has evolved with more clearly defined mission and operational requirements to the LCTR-iteration 2 (LCTR2). This paper reports on efforts to further review and refine the LCTR2 analyses to ascertain specific engine requirements and propulsion sizing criteria. The baseline mission and other design or operational requirements are reviewed. Analysis tools are described to help understand their interactions and underlying assumptions. Various design and operational conditions are presented and explained for their contribution to defining operational and engine requirements. These identified engine requirements are discussed to suggest which are most critical to the engine sizing and operation. The most-critical engine requirements are compared to in-house NASA engine simulations to try to ascertain which operational requirements define engine requirements versus points within the available engine operational capability. Finally, results are summarized with suggestions for future efforts to improve analysis capabilities, and better define and refine mission and operational requirements.

  16. Application of Elements of Numerical Methods in the Analysis of Journal Bearings in AC Induction Motors: An Industry Case Study

    ERIC Educational Resources Information Center

    Ahrens, Fred; Mistry, Rajendra

    2005-01-01

    In product engineering there often arise design analysis problems for which a commercial software package is either unavailable or cost prohibitive. Further, these calculations often require successive iterations that can be time intensive when performed by hand, thus development of a software application is indicated. This case relates to the…

  17. Design consideration for a nuclear electric propulsion system

    NASA Technical Reports Server (NTRS)

    Phillips, W. M.; Pawlik, E. V.

    1978-01-01

    A study is currently underway to design a nuclear electric propulsion vehicle capable of performing detailed exploration of the outer planets. Primary emphasis is on the power subsystem. Secondary emphasis includes integration into a spacecraft, and integration with the thrust subsystem and science package or payload. The results of several design iterations indicate an all-heat-pipe system offers greater reliability, elimination of many technology development areas and a specific weight of under 20 kg/kWe at the 400 kWe power level. The system is compatible with a single Shuttle launch and provides greater safety than could be obtained with designs using pumped liquid metal cooling. Two configurations, one with the reactor and power conversion forward on the spacecraft with the ion engines aft and the other with reactor, power conversion and ion engines aft, were selected as dual baseline designs based on minimum weight, minimum required technology development and maximum growth potential and flexibility.
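
    As a quick check on the quoted figures, the stated specific mass and power level imply a power-subsystem mass of roughly eight metric tons:

```latex
% Worked arithmetic from the quoted specific mass and power level
m \approx \alpha P_e = 20\ \mathrm{kg/kW_e} \times 400\ \mathrm{kW_e} = 8000\ \mathrm{kg}
```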

  18. Scientific and technical challenges on the road towards fusion electricity

    NASA Astrophysics Data System (ADS)

    Donné, A. J. H.; Federici, G.; Litaudon, X.; McDonald, D. C.

    2017-10-01

    The goal of the European Fusion Roadmap is to deliver fusion electricity to the grid early in the second half of this century. It breaks the quest for fusion energy into eight missions, and for each of them it describes a research and development programme to address all the open technical gaps in physics and technology and estimates the required resources. It points out the needs to intensify industrial involvement and to seek all opportunities for collaboration outside Europe. The roadmap covers three periods: the short term, which runs parallel to the European Research Framework Programme Horizon 2020, the medium term and the long term. ITER is the key facility of the roadmap as it is expected to achieve most of the important milestones on the path to fusion power. Thus, the vast majority of present resources are dedicated to ITER and its accompanying experiments. The medium term is focussed on taking ITER into operation and bringing it to full power, as well as on preparing the construction of a demonstration power plant DEMO, which will for the first time demonstrate fusion electricity to the grid around the middle of this century. Building and operating DEMO is the subject of the last roadmap phase: the long term. Clearly, the Fusion Roadmap is tightly connected to the ITER schedule. Three key milestones are the first operation of ITER, the start of the DT operation in ITER and reaching the full performance at which the thermal fusion power is 10 times the power put in to the plasma. The Engineering Design Activity of DEMO needs to start a few years after the first ITER plasma, while the start of the construction phase will be a few years after ITER reaches full performance. In this way ITER can give viable input to the design and development of DEMO. Because the neutron fluence in DEMO will be much higher than in ITER, it is important to develop and validate materials that can handle these very high neutron loads. For the testing of the materials, a dedicated 14 MeV neutron source is needed. This DEMO Oriented Neutron Source (DONES) is therefore an important facility to support the fusion roadmap.

  19. The ITER project construction status

    NASA Astrophysics Data System (ADS)

    Motojima, O.

    2015-10-01

    The pace of the ITER project in St Paul-lez-Durance, France is accelerating rapidly into its peak construction phase. With the completion of the B2 slab in August 2014, which will support about 400 000 metric tons of the tokamak complex structures and components, the construction is advancing on a daily basis. Magnet, vacuum vessel, cryostat, thermal shield, first wall and divertor structures are under construction or in prototype phase in the ITER member states of China, Europe, India, Japan, Korea, Russia, and the United States. Each of these member states has its own domestic agency (DA) to manage their procurements of components for ITER. Plant systems engineering is being transformed to fully integrate the tokamak and its auxiliary systems in preparation for the assembly and operations phase. CODAC, diagnostics, and the three main heating and current drive systems are also progressing, including the construction of the neutral beam test facility building in Padua, Italy. The conceptual design of the Chinese test blanket module system for ITER has been completed and those of the EU are well under way. Significant progress has been made addressing several outstanding physics issues including disruption load characterization, prediction, avoidance, and mitigation, first wall and divertor shaping, edge pedestal and SOL plasma stability, fuelling and plasma behaviour during confinement transients and W impurity transport. Further development of the ITER Research Plan has included a definition of the required plant configuration for 1st plasma and subsequent phases of ITER operation, as well as the major plasma commissioning activities and the needs of the R&D program accompanying ITER construction by the ITER parties.

  20. Engineering the on-axis intensity of Bessel beam by a feedback tuning loop

    NASA Astrophysics Data System (ADS)

    Li, Runze; Yu, Xianghua; Yang, Yanlong; Peng, Tong; Yao, Baoli; Zhang, Chunmin; Ye, Tong

    2018-02-01

    The Bessel beam belongs to a typical class of non-diffractive optical fields that are characterized by their invariant focal profiles along the propagation direction. However, ideal Bessel beams only rigorously exist in theory; Bessel beams generated in the lab are quasi-Bessel beams with finite focal extensions and varying intensity profiles along the propagation axis. The ability to engineer the on-axis intensity profile to the desired shape is essential for many applications. Here we demonstrate an iterative optimization-based approach to engineering the on-axis intensity of Bessel beams. The genetic algorithm is used to demonstrate this approach. Starting with a traditional axicon phase mask, in the design process, the computed on-axis beam profile is fed into a feedback tuning loop of an iterative optimization process, which searches for an optimal radial phase distribution that can generate a generalized Bessel beam with the desired on-axis intensity profile. The experimental implementation involves a fine-tuning process that adjusts the originally targeted profile so that the optimization process can optimize the phase mask to yield an improved on-axis profile. Our proposed method has been demonstrated in engineering several zeroth-order Bessel beams with customized on-axis profiles. High accuracy and high energy throughput merit its use in many applications.
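
    The feedback tuning loop described above can be mimicked with a basic genetic algorithm; the sketch below (not the authors' implementation) evolves radial phase-mask coefficients so that an on-axis intensity profile computed from a scalar on-axis Fresnel integral approaches a flat target. The wavelength, aperture, and GA settings are illustrative assumptions.

```python
# Sketch of the feedback tuning loop idea (not the authors' implementation): a basic
# genetic algorithm evolves radial phase-mask coefficients so that the on-axis
# intensity, computed from a scalar on-axis Fresnel integral, approaches a flat
# target over a chosen z-range. Wavelength, aperture, and GA settings are assumptions.
import numpy as np

wavelength = 632.8e-9
k = 2 * np.pi / wavelength
R = 2e-3                                     # aperture radius (m)
r = np.linspace(1e-6, R, 1000)
dr = r[1] - r[0]
z = np.linspace(0.05, 0.4, 50)               # on-axis sampling range (m)
target = np.ones_like(z)                     # desired flat on-axis profile

def on_axis_intensity(coeffs):
    """Normalized |U(0, z)|^2 for a radial phase sum_j c_j (r/R)^(j+1)."""
    phi = sum(c * (r / R) ** (j + 1) for j, c in enumerate(coeffs))
    U0 = np.exp(1j * phi)
    quad = np.exp(1j * k * r[None, :] ** 2 / (2 * z[:, None]))
    U = (k / z[:, None]) * np.sum(U0[None, :] * quad * r[None, :], axis=1) * dr
    intensity = np.abs(U) ** 2
    return intensity / intensity.max()

def fitness(coeffs):
    return -np.mean((on_axis_intensity(coeffs) - target) ** 2)

rng = np.random.default_rng(3)
pop = rng.uniform(-300.0, 0.0, size=(30, 3))               # axicon-like linear phases
for gen in range(20):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]           # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(3) < 0.5, a, b)        # uniform crossover
        children.append(child + rng.normal(scale=5.0, size=3))  # Gaussian mutation
    pop = np.vstack([parents, children])
print("best RMS deviation from target:", np.sqrt(-fitness(pop[0])))
```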

  1. Usability engineering: domain analysis activities for augmented-reality systems

    NASA Astrophysics Data System (ADS)

    Gabbard, Joseph; Swan, J. E., II; Hix, Deborah; Lanzagorta, Marco O.; Livingston, Mark; Brown, Dennis B.; Julier, Simon J.

    2002-05-01

    This paper discusses our usability engineering process for the Battlefield Augmented Reality System (BARS). Usability engineering is a structured, iterative, stepwise development process. Like the related disciplines of software and systems engineering, usability engineering is a combination of management principles and techniques, formal and semi-formal evaluation techniques, and computerized tools. BARS is an outdoor augmented reality system that displays heads-up battlefield intelligence information to a dismounted warrior. The paper discusses our general usability engineering process. We originally developed the process in the context of virtual reality applications, but in this work we are adapting the procedures to an augmented reality system. The focus of this paper is our work on domain analysis, the first activity of the usability engineering process. We describe our plans for and our progress to date on our domain analysis for BARS. We give results in terms of a specific urban battlefield use case we have designed.

  2. FENDL: International reference nuclear data library for fusion applications

    NASA Astrophysics Data System (ADS)

    Pashchenko, A. B.; Wienke, H.; Ganesan, S.

    1996-10-01

    The IAEA Nuclear Data Section, in co-operation with several national nuclear data centres and research groups, has created the first version of an internationally available Fusion Evaluated Nuclear Data Library (FENDL-1). The FENDL library has been selected to serve as a comprehensive source of processed and tested nuclear data tailored to the requirements of the engineering design activity (EDA) of the ITER project and other fusion-related development projects. The present version of FENDL consists of the following sublibraries covering the necessary nuclear input for all physics and engineering aspects of the material development, design, operation and safety of the ITER project in its current EDA phase: FENDL/A-1.1: neutron activation cross-sections, selected from different available sources, for 636 nuclides; FENDL/D-1.0: nuclear decay data for 2900 nuclides in ENDF-6 format; FENDL/DS-1.0: neutron activation data for dosimetry by foil activation; FENDL/C-1.0: data for the fusion reactions D(d,n), D(d,p), T(d,n), T(t,2n), He-3(d,p) extracted from ENDF/B-6 and processed; FENDL/E-1.0: data for coupled neutron-photon transport calculations, including a data library for neutron interaction and photon production for 63 elements or isotopes, selected from ENDF/B-6, JENDL-3, or BROND-2, and a photon-atom interaction data library for 34 elements. The benchmark validation of FENDL-1 as required by the customer, i.e. the ITER team, is considered to be a task of high priority in the coming months. The well-tested and validated processed nuclear data libraries of FENDL-2 are expected to be ready by mid-1996 for use by the ITER team in the final phase of ITER EDA after extensive benchmarking and integral validation studies in the 1995-1996 period. The FENDL data files can be electronically transferred to users from the IAEA nuclear data section online system through INTERNET. A grand total of 54 (sub)directories with 845 files, with a total size of about 2 million blocks or about 1 Gigabyte (1 block = 512 bytes) of numerical data, is currently available on-line.

  3. Laser-Etched Designs for Molding Hydrogel-Based Engineered Tissues

    PubMed Central

    Munarin, Fabiola; Kaiser, Nicholas J.; Kim, Tae Yun; Choi, Bum-Rak

    2017-01-01

    Rapid prototyping and fabrication of elastomeric molds for sterile culture of engineered tissues allow for the development of tissue geometries that can be tailored to different in vitro applications and customized as implantable scaffolds for regenerative medicine. Commercially available molds offer minimal capabilities for adaptation to unique conditions or applications versus those for which they are specifically designed. Here we describe a replica molding method for the design and fabrication of poly(dimethylsiloxane) (PDMS) molds from laser-etched acrylic negative masters with ∼0.2 mm resolution. Examples of the variety of mold shapes, sizes, and patterns obtained from laser-etched designs are provided. We use the patterned PDMS molds for producing and culturing engineered cardiac tissues with cardiomyocytes derived from human-induced pluripotent stem cells. We demonstrate that tight control over tissue morphology and anisotropy results in modulation of cell alignment and tissue-level conduction properties, including the appearance and elimination of reentrant arrhythmias, or circular electrical activation patterns. Techniques for handling engineered cardiac tissues during implantation in vivo in a rat model of myocardial infarction have been developed and are presented herein to facilitate development and adoption of surgical techniques for use with hydrogel-based engineered tissues. In summary, the method presented herein for engineered tissue mold generation is straightforward and low cost, enabling rapid design iteration and adaptation to a variety of applications in tissue engineering. Furthermore, the burden of equipment and expertise is low, allowing the technique to be accessible to all. PMID:28457187

  4. Bi-Level Integrated System Synthesis (BLISS)

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Agte, Jeremy S.; Sandusky, Robert R., Jr.

    1998-01-01

    BLISS is a method for optimization of engineering systems by decomposition. It separates the system level optimization, having a relatively small number of design variables, from the potentially numerous subsystem optimizations that may each have a large number of local design variables. The subsystem optimizations are autonomous and may be conducted concurrently. Subsystem and system optimizations alternate, linked by sensitivity data, producing a design improvement in each iteration. Starting from a best guess initial design, the method improves that design in iterative cycles, each cycle comprised of two steps. In step one, the system level variables are frozen and the improvement is achieved by separate, concurrent, and autonomous optimizations in the local variable subdomains. In step two, further improvement is sought in the space of the system level variables. Optimum sensitivity data link the second step to the first. The method prototype was implemented using MATLAB and iSIGHT programming software and tested on a simplified, conceptual level supersonic business jet design, and a detailed design of an electronic device. Satisfactory convergence and favorable agreement with the benchmark results were observed. Modularity of the method is intended to fit the human organization and map well on the computing technology of concurrent processing.
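
    The two-step cycle described above can be illustrated with a toy two-level loop that alternates frozen-shared subsystem optimizations with a system-level step; the sketch below omits BLISS's optimum-sensitivity coupling and uses an invented separable objective.

```python
# Toy two-level cycle in the spirit of the alternation described above: step one
# freezes the shared (system-level) variable and optimizes each subsystem's local
# variables; step two freezes the locals and improves the shared variable. The
# optimum-sensitivity linkage of real BLISS is omitted; the objective is invented.
import numpy as np
from scipy.optimize import minimize

def subsystem_obj(local, shared, offset):
    """One subsystem's contribution, coupling its locals to the shared design variable."""
    return np.sum((local - shared) ** 2) + np.sum((local - offset) ** 2)

shared = np.array([5.0])                  # system-level design variable
locals_ = [np.zeros(2), np.zeros(2)]      # local variables of two subsystems
offsets = [1.0, 3.0]

for cycle in range(6):
    # Step 1: concurrent-capable, autonomous subsystem optimizations (shared frozen)
    for i in range(2):
        locals_[i] = minimize(lambda x, s=shared, o=offsets[i]: subsystem_obj(x, s, o),
                              locals_[i]).x
    # Step 2: system-level improvement in the shared-variable space (locals frozen)
    shared = minimize(lambda s: sum(subsystem_obj(locals_[j], s, offsets[j])
                                    for j in range(2)), shared).x
    total = sum(subsystem_obj(locals_[j], shared, offsets[j]) for j in range(2))
    print(f"cycle {cycle}: shared = {shared[0]:.3f}, total objective = {total:.4f}")
```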

  5. Preliminary Design of a Helium-Cooled Ceramic Breeder Blanket for CFETR Based on the BIT Concept

    NASA Astrophysics Data System (ADS)

    Ma, Xuebin; Liu, Songlin; Li, Jia; Pu, Yong; Chen, Xiangcun

    2014-04-01

    CFETR is the “ITER-like” China fusion engineering test reactor. The design of the breeding blanket is one of the key issues in achieving the required tritium breeding ratio for the self-sufficiency of tritium as a fuel. As one option, a BIT (breeder inside tube) type helium cooled ceramic breeder blanket (HCCB) was designed. This paper presents the design of the BIT-HCCB blanket configuration inside a reactor and its structure, along with neutronics, thermo-hydraulics and thermal stress analyses. Such preliminary performance analyses indicate that the design satisfies the requirements and the material allowable limits.

  6. Viscous Aerodynamic Shape Optimization with Installed Propulsion Effects

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Seidel, Jonathan A.; Rallabhandi, Sriram K.

    2017-01-01

    Aerodynamic shape optimization is demonstrated to tailor the under-track pressure signature of a conceptual low-boom supersonic aircraft. Primarily, the optimization reduces nearfield pressure waveforms induced by propulsion integration effects. For computational efficiency, gradient-based optimization is used and coupled to the discrete adjoint formulation of the Reynolds-averaged Navier Stokes equations. The engine outer nacelle, nozzle, and vertical tail fairing are axi-symmetrically parameterized, while the horizontal tail is shaped using a wing-based parameterization. Overall, 48 design variables are coupled to the geometry and used to deform the outer mold line. During the design process, an inequality drag constraint is enforced to avoid major compromise in aerodynamic performance. Linear elastic mesh morphing is used to deform volume grids between design iterations. The optimization is performed at Mach 1.6 cruise, assuming standard day altitude conditions at 51,707-ft. To reduce uncertainty, a coupled thermodynamic engine cycle model is employed that captures installed inlet performance effects on engine operation.

  7. Multi-Attribute Tradespace Exploration in Space System Design

    NASA Astrophysics Data System (ADS)

    Ross, A. M.; Hastings, D. E.

    2002-01-01

    The complexity inherent in space systems necessarily requires intense expenditures of resources both human and monetary. The high level of ambiguity present in the early design phases of these systems causes long, highly iterative, and costly design cycles. This paper looks at incorporating decision theory methods into the early design processes to streamline communication of wants and needs among stakeholders and between levels of design. Communication channeled through formal utility interviews and analysis enables engineers to better understand the key drivers for the system and allows a more thorough exploration of the design tradespace. Multi-Attribute Tradespace Exploration (MATE), an evolving process incorporating decision theory into model- and simulation-based design, has been applied to several space system case studies at MIT. Preliminary results indicate that this process can improve the quality of communication to more quickly resolve project ambiguity, and enable the engineer to discover better value designs for multiple stakeholders. MATE is also being integrated into a concurrent design environment to facilitate the transfer of knowledge about important drivers into higher-fidelity design phases. Formal utility theory provides a mechanism to bridge the language barrier between experts of different backgrounds and differing needs (e.g. scientists, engineers, managers, etc.). MATE with concurrent design couples decision makers more closely to the design, and most importantly, maintains their presence between formal reviews.
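
    In the spirit of the multi-attribute utility analysis described above (not the MATE process itself), the sketch below scales single-attribute utilities to [0, 1] and combines them with stakeholder weights to rank two invented design alternatives.

```python
# Minimal multi-attribute utility ranking of design alternatives (illustrative only,
# not the MATE process itself): single-attribute utilities are scaled to [0, 1] and
# combined with stakeholder weights. Attributes, ranges, and designs are made up.
def linear_utility(value, worst, best):
    """Map an attribute value to [0, 1] between stakeholder-elicited worst/best levels."""
    u = (value - worst) / (best - worst)
    return min(max(u, 0.0), 1.0)

WEIGHTS = {"data_rate": 0.5, "coverage": 0.3, "lifetime": 0.2}
RANGES = {"data_rate": (1, 100), "coverage": (10, 90), "lifetime": (1, 10)}  # worst, best

designs = {
    "small_sat": {"data_rate": 20, "coverage": 40, "lifetime": 3},
    "large_sat": {"data_rate": 80, "coverage": 70, "lifetime": 8},
}

def total_utility(attrs):
    return sum(WEIGHTS[a] * linear_utility(attrs[a], *RANGES[a]) for a in WEIGHTS)

for name, attrs in sorted(designs.items(), key=lambda kv: -total_utility(kv[1])):
    print(f"{name}: utility = {total_utility(attrs):.2f}")
```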

  8. ITER-FEAT operation

    NASA Astrophysics Data System (ADS)

    Shimomura, Y.; Aymar, R.; Chuyanov, V. A.; Huguet, M.; Matsumoto, H.; Mizoguchi, T.; Murakami, Y.; Polevoi, A. R.; Shimada, M.; ITER Joint Central Team; ITER Home Teams

    2001-03-01

    ITER is planned to be the first fusion experimental reactor in the world operating for research in physics and engineering. The first ten years of operation will be devoted primarily to physics issues at low neutron fluence and the following ten years of operation to engineering testing at higher fluence. ITER can accommodate various plasma configurations and plasma operation modes, such as inductive high Q modes, long pulse hybrid modes and non-inductive steady state modes, with large ranges of plasma current, density, beta and fusion power, and with various heating and current drive methods. This flexibility will provide an advantage for coping with uncertainties in the physics database, in studying burning plasmas, in introducing advanced features and in optimizing the plasma performance for the different programme objectives. Remote sites will be able to participate in the ITER experiment. This concept will provide an advantage not only in operating ITER for 24 hours a day but also in involving the worldwide fusion community and in promoting scientific competition among the ITER Parties.

  9. Variable aperture-based ptychographical iterative engine method.

    PubMed

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in a wide range of scientific research. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
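
    The core of a PIE-type reconstruction is a two-step update per diffraction pattern: enforce the measured modulus in the Fourier plane, then correct the object estimate with a probe-weighted step. The sketch below shows that update on synthetic arrays; the array sizes, the assumed-known uniform probe, and the simulated measurement are placeholders, not the authors' vaPIE implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 64
      obj = np.ones((n, n), dtype=complex)      # current object estimate
      probe = np.ones((n, n), dtype=complex)    # aperture illumination (assumed known here)
      # simulated measurement: diffraction amplitude of a random phase object
      measured_amp = np.abs(np.fft.fft2(probe * np.exp(1j * rng.uniform(0, 1, (n, n)))))

      alpha = 1.0
      for _ in range(50):
          psi = probe * obj                                  # exit wave
          Psi = np.fft.fft2(psi)
          Psi = measured_amp * np.exp(1j * np.angle(Psi))    # modulus constraint
          psi_new = np.fft.ifft2(Psi)
          # PIE object update with the usual probe-weighted step
          obj += alpha * np.conj(probe) / (np.abs(probe).max() ** 2) * (psi_new - psi)
      # after the sweep, `obj` holds the refined object estimate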

  10. Helping System Engineers Bridge the Peaks

    NASA Technical Reports Server (NTRS)

    Rungta, Neha; Tkachuk, Oksana; Person, Suzette; Biatek, Jason; Whalen, Michael W.; Castle, Joseph; Gundy-Burlet, Karen

    2014-01-01

    In our experience at NASA, system engineers generally follow the Twin Peaks approach when developing safety-critical systems. However, iterations between the peaks require considerable manual, and in some cases duplicate, effort. A significant part of the manual effort stems from the fact that requirements are written in English natural language rather than a formal notation. In this work, we propose an approach that enables system engineers to leverage formal requirements and automated test generation to streamline iterations, effectively "bridging the peaks". The key to the approach is a formal language notation that a) system engineers are comfortable with, b) is supported by a family of automated V&V tools, and c) is semantically rich enough to describe the requirements of interest. We believe the combination of formalizing requirements and providing tool support to automate the iterations will lead to a more efficient Twin Peaks implementation at NASA.

  11. Status of Europe's contribution to the ITER EC system

    NASA Astrophysics Data System (ADS)

    Albajar, F.; Aiello, G.; Alberti, S.; Arnold, F.; Avramidis, K.; Bader, M.; Batista, R.; Bertizzolo, R.; Bonicelli, T.; Braunmueller, F.; Brescan, C.; Bruschi, A.; von Burg, B.; Camino, K.; Carannante, G.; Casarin, V.; Castillo, A.; Cauvard, F.; Cavalieri, C.; Cavinato, M.; Chavan, R.; Chelis, J.; Cismondi, F.; Combescure, D.; Darbos, C.; Farina, D.; Fasel, D.; Figini, L.; Gagliardi, M.; Gandini, F.; Gantenbein, G.; Gassmann, T.; Gessner, R.; Goodman, T. P.; Gracia, V.; Grossetti, G.; Heemskerk, C.; Henderson, M.; Hermann, V.; Hogge, J. P.; Illy, S.; Ioannidis, Z.; Jelonnek, J.; Jin, J.; Kasparek, W.; Koning, J.; Krause, A. S.; Landis, J. D.; Latsas, G.; Li, F.; Mazzocchi, F.; Meier, A.; Moro, A.; Nousiainen, R.; Purohit, D.; Nowak, S.; Omori, T.; van Oosterhout, J.; Pacheco, J.; Pagonakis, I.; Platania, P.; Poli, E.; Preis, A. K.; Ronden, D.; Rozier, Y.; Rzesnicki, T.; Saibene, G.; Sanchez, F.; Sartori, F.; Sauter, O.; Scherer, T.; Schlatter, C.; Schreck, S.; Serikov, A.; Siravo, U.; Sozzi, C.; Spaeh, P.; Spichiger, A.; Strauss, D.; Takahashi, K.; Thumm, M.; Tigelis, I.; Vaccaro, A.; Vomvoridis, J.; Tran, M. Q.; Weinhorst, B.

    2015-03-01

    The electron cyclotron (EC) system of ITER for the initial configuration is designed to provide 20 MW of RF power into the plasma for 3600 s pulses with a duty cycle of up to 25%, for heating and (co and counter) non-inductive current drive, and for control of MHD plasma instabilities. The EC system is being procured by 5 domestic agencies plus the ITER Organization (IO). F4E has the largest fraction of the EC procurements, which includes 8 high voltage power supplies (HVPS), 6 gyrotrons, the ex-vessel waveguides (including isolation valves and diamond windows) for all launchers, 4 upper launchers and the main control system. F4E is working with IO to improve the overall design of the EC system by integrating consolidated technological advances, simplifying the interfaces, and performing global engineering analyses and assessments of EC heating and current drive physics and technology capabilities. Examples are the optimization of the HVPS and gyrotron requirements and performance relative to power modulation for MHD control, common qualification programs for diamond window procurements, assessment of the EC grounding system, and the optimization of the launcher steering angles for improved EC access. Here we provide an update on the status of Europe's contribution to the ITER EC system, and a summary of the global activities underway by F4E in collaboration with IO for the optimization of the subsystems.

  12. Iterative procedures for space shuttle main engine performance models

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1989-01-01

    Performance models of the Space Shuttle Main Engine (SSME) contain iterative strategies for determining approximate solutions to nonlinear equations reflecting fundamental mass, energy, and pressure balances within engine flow systems. Both univariate and multivariate Newton-Raphson algorithms are employed in the current version of the engine Test Information Program (TIP). The computational efficiency and reliability of these procedures are examined. A modified trust-region form of the multivariate Newton-Raphson method is implemented and shown to be superior for off-nominal engine performance predictions. A heuristic form of Broyden's rank-one method is also tested, and favorable results based on this algorithm are presented.
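
    For reference, the sketch below shows a damped multivariate Newton-Raphson iteration of the general type described, with a crude step-length cap standing in for a trust-region safeguard; the two-equation residual system is a toy stand-in for the engine mass/energy/pressure balances, not the TIP equations.

      import numpy as np

      def residual(x):
          # toy nonlinear "balance" equations r(x) = 0
          return np.array([x[0] ** 2 + x[1] - 2.0,
                           x[0] + x[1] ** 2 - 2.0])

      def jacobian(x):
          return np.array([[2.0 * x[0], 1.0],
                           [1.0, 2.0 * x[1]]])

      x = np.array([2.0, 0.5])
      max_step = 0.5                        # trust-region-like bound on the Newton step
      for k in range(25):
          r = residual(x)
          if np.linalg.norm(r) < 1e-10:
              break
          dx = np.linalg.solve(jacobian(x), -r)
          norm = np.linalg.norm(dx)
          if norm > max_step:               # shrink overly aggressive steps
              dx *= max_step / norm
          x += dx
      print(k, x, residual(x))              # converges to x = [1, 1]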

  13. Digital computer program for generating dynamic turbofan engine models (DIGTEM)

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Krosel, S. M.; Szuch, J. R.; Westerkamp, E. J.

    1983-01-01

    This report describes DIGTEM, a digital computer program that simulates two-spool, two-stream turbofan engines. The turbofan engine model in DIGTEM contains steady-state performance maps for all of the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. Altogether there are 16 state variables and state equations. DIGTEM features a backward-difference integration scheme for integrating stiff systems. It trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off-design points and iterates to a balanced engine condition. Transients can also be run; they are generated by defining controls as a function of time (open-loop control) in a user-written subroutine (TMRSP). DIGTEM has run on the IBM 370/3033 computer using implicit integration with time steps ranging from 1.0 msec to 1.0 sec. DIGTEM is generalized in the aerothermodynamic treatment of components.
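
    The backward-difference idea that makes stiff engine dynamics tractable can be shown on a single stiff state: each time step solves the implicit relation x(n+1) = x(n) + h*f(x(n+1)) with a few Newton corrections. The scalar lag model below is illustrative only and is not DIGTEM's 16-state turbofan model.

      def f(x, u=1.0):
          return -50.0 * (x - u)            # stiff first-order lag toward the input u

      def dfdx(x):
          return -50.0

      def implicit_euler_step(x, h):
          x_new = x                         # predictor: previous state
          for _ in range(5):                # Newton corrections on g(x_new) = 0
              g = x_new - x - h * f(x_new)
              dg = 1.0 - h * dfdx(x_new)
              x_new -= g / dg
          return x_new

      x, h = 0.0, 0.05                      # stable even though h >> 1/50
      for n in range(20):
          x = implicit_euler_step(x, h)
      print(x)                              # approaches 1.0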

  14. Neutronics Comparison Analysis of the Water Cooled Ceramics Breeding Blanket for CFETR

    NASA Astrophysics Data System (ADS)

    Li, Jia; Zhang, Xiaokang; Gao, Fangfang; Pu, Yong

    2016-02-01

    China Fusion Engineering Test Reactor (CFETR) is an ITER-like fusion engineering test reactor that is intended to fill the scientific and technical gaps between ITER and DEMO. One of the main missions of CFETR is to achieve a tritium breeding ratio of no less than 1.2 to ensure tritium self-sufficiency. A concept design for a water cooled ceramics breeding blanket (WCCB) is presented based on a scheme with the breeder and the multiplier located in separate panels for CFETR. Based on this concept, a one-dimensional (1D) radial build of the breeding blanket was first designed, and then several three-dimensional models were developed with various neutron source definitions and breeding blanket module arrangements based on the 1D radial build. A set of nuclear analyses has been carried out to compare the differences in neutronics characteristics given by the different calculation models, addressing neutron wall loading (NWL), tritium breeding ratio (TBR), fast neutron flux on the inboard side and nuclear heating deposition on the main in-vessel components. The impact of differences in modeling on the nuclear performance has been analyzed and summarized for the WCCB concept design. Supported by the National Special Project for Magnetic Confined Nuclear Fusion Energy (Nos. 2013GB108004, 2014GB122000, and 2014GB119000), and National Natural Science Foundation of China (No. 11175207)

  15. Additive Manufacturing and High-Performance Computing: a Disruptive Latent Technology

    NASA Astrophysics Data System (ADS)

    Goodwin, Bruce

    2015-03-01

    This presentation will discuss the relationship between recent advances in Additive Manufacturing (AM) technology, High-Performance Computing (HPC) simulation and design capabilities, and related advances in Uncertainty Quantification (UQ), and then examine their impacts upon national and international security. The presentation surveys how AM accelerates the fabrication process, while HPC combined with UQ provides a fast track for the engineering design cycle. The combination of AM and HPC/UQ almost eliminates the iterative engineering design and prototype cycle, thereby dramatically reducing the cost of production and time-to-market. These methods present significant benefits for US national interests, both civilian and military, in an age of austerity. Finally, considering cyber security issues and the advent of the "cloud," these disruptive, currently latent technologies may well enable proliferation and so challenge both nuclear and non-nuclear aspects of international security.

  16. A Fully Non-Metallic Gas Turbine Engine Enabled by Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Grady, Joseph E.

    2015-01-01

    The Non-Metallic Gas Turbine Engine project, funded by the NASA Aeronautics Research Institute, represents the first comprehensive evaluation of emerging materials and manufacturing technologies that will enable fully nonmetallic gas turbine engines. This will be achieved by assessing the feasibility of using additive manufacturing technologies to fabricate polymer matrix composite (PMC) and ceramic matrix composite (CMC) turbine engine components. The benefits include a 50% weight reduction compared to metallic parts, reduced manufacturing costs, reduced part count and rapid design iterations. Two high-payoff metallic components have been identified for replacement with PMCs and will be fabricated using fused deposition modeling (FDM) with high-temperature polymer filaments. The CMC effort uses a binder jet process to fabricate silicon carbide test coupons and demonstration articles. Microstructural analysis and mechanical testing will be conducted on the PMC and CMC materials. System studies will assess the benefits of a fully nonmetallic gas turbine engine in terms of fuel burn, emissions, reduction of part count, and cost. The research project includes a multidisciplinary, multiorganization NASA-industry team that includes experts in ceramic materials and CMCs, polymers and PMCs, structural engineering, additive manufacturing, engine design and analysis, and system analysis.

  17. Overview of the preliminary design of the ITER plasma control system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snipes, J. A.; Albanese, R.; Ambrosino, G.

    An overview of the Preliminary Design of the ITER Plasma Control System (PCS) is described here, which focusses on the needs for 1st plasma and early plasma operation in hydrogen/helium (H/He) up to a plasma current of 15 MA with moderate auxiliary heating power in low confinement mode (L-mode). Candidate control schemes for basic magnetic control, including divertor operation and kinetic control of the electron density with gas puffing and pellet injection, were developed. Commissioning of the auxiliary heating systems is included as well as support functions for stray field topology and real-time plasma boundary reconstruction. Initial exception handling schemes for faults of essential plant systems and for disruption protection were developed. The PCS architecture was also developed to be capable of handling basic control for early commissioning and the advanced control functions that will be needed for future high performance operation. A plasma control simulator is also being developed to test and validate control schemes. To handle the complexity of the ITER PCS, a systems engineering approach has been adopted with the development of a plasma control database to keep track of all control requirements.

  18. Overview of the preliminary design of the ITER plasma control system

    NASA Astrophysics Data System (ADS)

    Snipes, J. A.; Albanese, R.; Ambrosino, G.; Ambrosino, R.; Amoskov, V.; Blanken, T. C.; Bremond, S.; Cinque, M.; de Tommasi, G.; de Vries, P. C.; Eidietis, N.; Felici, F.; Felton, R.; Ferron, J.; Formisano, A.; Gribov, Y.; Hosokawa, M.; Hyatt, A.; Humphreys, D.; Jackson, G.; Kavin, A.; Khayrutdinov, R.; Kim, D.; Kim, S. H.; Konovalov, S.; Lamzin, E.; Lehnen, M.; Lukash, V.; Lomas, P.; Mattei, M.; Mineev, A.; Moreau, P.; Neu, G.; Nouailletas, R.; Pautasso, G.; Pironti, A.; Rapson, C.; Raupp, G.; Ravensbergen, T.; Rimini, F.; Schneider, M.; Travere, J.-M.; Treutterer, W.; Villone, F.; Walker, M.; Welander, A.; Winter, A.; Zabeo, L.

    2017-12-01

    An overview of the preliminary design of the ITER plasma control system (PCS) is described here, which focusses on the needs for 1st plasma and early plasma operation in hydrogen/helium (H/He) up to a plasma current of 15 MA with moderate auxiliary heating power in low confinement mode (L-mode). Candidate control schemes for basic magnetic control, including divertor operation and kinetic control of the electron density with gas puffing and pellet injection, were developed. Commissioning of the auxiliary heating systems is included as well as support functions for stray field topology and real-time plasma boundary reconstruction. Initial exception handling schemes for faults of essential plant systems and for disruption protection were developed. The PCS architecture was also developed to be capable of handling basic control for early commissioning and the advanced control functions that will be needed for future high performance operation. A plasma control simulator is also being developed to test and validate control schemes. To handle the complexity of the ITER PCS, a systems engineering approach has been adopted with the development of a plasma control database to keep track of all control requirements.

  19. Overview of the preliminary design of the ITER plasma control system

    DOE PAGES

    Snipes, J. A.; Albanese, R.; Ambrosino, G.; ...

    2017-09-11

    An overview of the Preliminary Design of the ITER Plasma Control System (PCS) is described here, which focusses on the needs for 1st plasma and early plasma operation in hydrogen/helium (H/He) up to a plasma current of 15 MA with moderate auxiliary heating power in low confinement mode (L-mode). Candidate control schemes for basic magnetic control, including divertor operation and kinetic control of the electron density with gas puffing and pellet injection, were developed. Commissioning of the auxiliary heating systems is included as well as support functions for stray field topology and real-time plasma boundary reconstruction. Initial exception handling schemes for faults of essential plant systems and for disruption protection were developed. The PCS architecture was also developed to be capable of handling basic control for early commissioning and the advanced control functions that will be needed for future high performance operation. A plasma control simulator is also being developed to test and validate control schemes. To handle the complexity of the ITER PCS, a systems engineering approach has been adopted with the development of a plasma control database to keep track of all control requirements.

  20. Pollution Reduction Technology Program for Small Jet Aircraft Engines, Phase 2

    NASA Technical Reports Server (NTRS)

    Bruce, T. W.; Davis, F. G.; Kuhn, T. E.; Mongia, H. C.

    1978-01-01

    A series of iterative combustor pressure rig tests was conducted on two combustor concepts applied to the AiResearch TFE731-2 turbofan engine combustion system for the purpose of optimizing combustor performance and operating characteristics consistent with low emissions. The two concepts were an axial air-assisted airblast fuel injection configuration with variable-geometry air swirlers and a staged premix/prevaporization configuration. The iterative rig testing and modification sequence on both concepts was intended to provide operational compatibility with the engine and to determine which concept would be carried forward for further evaluation in a TFE731-2 engine.

  1. Optimized bio-inspired stiffening design for an engine nacelle.

    PubMed

    Lazo, Neil; Vodenitcharova, Tania; Hoffman, Mark

    2015-11-04

    Structural efficiency is a common engineering goal in which an ideal solution provides a structure with optimized performance at minimized weight, with consideration of material mechanical properties, structural geometry, and manufacturability. This study addresses that goal by developing high-performance, lightweight, stiff mechanical components through optimization of a biologically inspired template. The approach is implemented in the optimization of rib stiffeners along an aircraft engine nacelle. The helical and angled arrangements of cellulose fibres in plants were chosen as the bio-inspired template. Optimization of total displacement and weight was carried out using a genetic algorithm (GA) coupled with finite element analysis. Iterations showed a gradual convergence in normalized fitness. Because displacement was weighted more heavily in the optimization, the GA tended towards designs with weights near the mass constraint. Dominant features of the resulting designs were helical ribs with rectangular cross-sections having a large height-to-width ratio. Displacement was reduced by 73% compared to an unreinforced nacelle, a reduction attributed to the geometric features and layout of the stiffeners, while mass was maintained within the constraint.
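
    The GA-plus-analysis loop described above can be sketched by replacing the finite element solve with an analytic surrogate that maps rib height, width, and helix angle to displacement and mass, and by handling the mass limit as a penalty; all parameter names, bounds, and constants below are hypothetical.

      import numpy as np

      rng = np.random.default_rng(1)
      MASS_LIMIT = 60.0                                   # kg, assumed constraint

      def evaluate(design):                               # stand-in for an FEA run
          h, w, angle = design                            # rib height, width, helix angle
          mass = 2700.0 * h * w * 30.0                    # assumed density and rib length
          disp = 1.0 / (h ** 2 * w * (1.0 + np.cos(np.radians(angle))))
          return disp, mass

      def fitness(design):
          disp, mass = evaluate(design)
          penalty = 1e3 * max(0.0, mass - MASS_LIMIT)     # mass handled as a penalty
          return disp + penalty                           # lower is better

      lo, hi = [0.02, 0.005, 0.0], [0.10, 0.02, 60.0]
      pop = rng.uniform(lo, hi, size=(40, 3))
      for gen in range(100):
          scores = np.array([fitness(p) for p in pop])
          parents = pop[np.argsort(scores)[:10]]          # truncation selection
          children = parents[rng.integers(0, 10, 30)] + rng.normal(0.0, [0.005, 0.001, 2.0], (30, 3))
          pop = np.vstack([parents, np.clip(children, lo, hi)])
      best = pop[np.argmin([fitness(p) for p in pop])]
      print(best, evaluate(best))                         # mass ends up near the limit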

  2. Use of agents to implement an integrated computing environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hale, M.A.; Craig, J.I.

    Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. Agents are used to implement the overall infrastructure on the computer. Successful agent utilization requires that they be made of three components: the resource, the model, and the wrap. Current work is focused on the development of generalized agent schemes and associated demonstration projects. When in place, the technology-independent computing infrastructure will aid the designer in systematically generating knowledge used to facilitate decision-making.

  3. Use of agents to implement an integrated computing environment

    NASA Technical Reports Server (NTRS)

    Hale, Mark A.; Craig, James I.

    1995-01-01

    Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. Agents are used to implement the overall infrastructure on the computer. Successful agent utilization requires that they be made of three components: the resource, the model, and the wrap. Current work is focused on the development of generalized agent schemes and associated demonstration projects. When in place, the technology-independent computing infrastructure will aid the designer in systematically generating knowledge used to facilitate decision-making.

  4. Designing magnetic systems for reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitzenroeder, P.J.

    1991-01-01

    Designing magnetic systems is an iterative process in which the requirements are set, a design is developed, materials and manufacturing processes are defined, interrelationships with the various elements of the system are established, engineering analyses are performed, and fault modes and effects are studied. Reliability requires that all elements of the design process, from the seemingly most straightforward, such as utilities connection design and implementation, to the most sophisticated, such as advanced finite element analyses, receive a balanced and appropriate level of attention. D.B. Montgomery's study of magnet failures has shown that the predominance of magnet failures tend not to be in the most intensively engineered areas, but are associated with insulation, leads, and unanticipated conditions. TFTR, JET, JT-60, and PBX are all major tokamaks which have suffered loss of reliability due to water leaks. Similarly, the majority of the causes of loss of magnet reliability at PPPL have not been in the sophisticated areas of the design but are due to difficulties associated with coolant connections, bus connections, and external structural connections. Looking towards the future, the major next devices such as BPX and ITER are more costly and complex than any of their predecessors and are pressing the bounds of operating levels, materials, and fabrication. Emphasis on reliability is a must as the fusion program enters a phase where there are fewer, but very costly, devices with the goal of reaching a reactor prototype stage in the next two or three decades. This paper reviews some of the magnet reliability issues which PPPL has faced over the years, the lessons learned from them, and magnet design and fabrication practices which have been found to contribute to magnet reliability.

  5. Failure is an option: Reactions to failure in elementary engineering design projects

    NASA Astrophysics Data System (ADS)

    Johnson, Matthew M.

    Recent reform documents in science education have called for teachers to use epistemic practices of science and engineering researchers to teach disciplinary content (NRC, 2007; NRC, 2012; NGSS Lead States, 2013). Although this creates challenges for classroom teachers unfamiliar with engineering, it has created a need for high quality research about how students and teachers engage in engineering activities to improve curriculum development and teaching pedagogy. While framers of the Next Generation Science Standards (NRC, 2012; NGSS Lead States 2013) focused on the similarities of the practices of science researchers and engineering designers, some have proposed that engineering has a unique set of epistemic practices, including improving from failure (Cunningham & Carlsen, 2014; Cunningham & Kelly, in review). While no one will deny failures occur in science, failure in engineering is thought of in fundamentally different ways. In the study presented here, video data from eight classes of elementary students engaged in one of two civil engineering units were analyzed using methods borrowed from psychology, anthropology, and sociolinguistics to investigate: 1) the nature of failure in elementary engineering design; 2) the ways in which teachers react to failure; and 3) how the collective actions of students and teachers support or constrain improvement in engineering design. I propose new ways of considering the types and causes of failure, and note three teacher reactions to failure: the manager, the cheerleader, and the strategic partner. Because the goal of iteration in engineering is improvement, I also studied improvement. Students only systematically improve when they have the opportunity, productive strategies, and fair comparisons between prototypes. I then investigate the use of student engineering journals to assess learning from the process of improvement after failure. After discussion, I consider implications from this work as well as future research to advance our understanding in this area.

  6. RAMI Analysis for Designing and Optimizing Tokamak Cooling Water System (TCWS) for the ITER's Fusion Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrada, Juan J; Reiersen, Wayne T

    U.S.-ITER is responsible for the design, engineering, and procurement of the Tokamak Cooling Water System (TCWS). TCWS is designed to provide cooling and baking for client systems that include the first wall/blanket, vacuum vessel, divertor, and neutral beam injector. Additional operations that support these primary functions include chemical control of water provided to client systems, draining and drying for maintenance, and leak detection/localization. TCWS interfaces with 27 systems including the secondary cooling system, which rejects this heat to the environment. TCWS transfers heat generated in the Tokamak during nominal pulsed operation - 850 MW at up to 150 C and 4.2 MPa water pressure. Impurities are diffused from in-vessel components and the vacuum vessel by water baking at 200-240 C at up to 4.4 MPa. TCWS is complex because it serves vital functions for four primary clients whose performance is critical to ITER's success and interfaces with more than 20 additional ITER systems. Conceptual design of this one-of-a-kind cooling system has been completed; however, several issues remain that must be resolved before moving to the next stage of the design process. The 2004 baseline design indicated cooling loops that have no fault tolerance for component failures. During plasma operation, each cooling loop relies on a single pump, a single pressurizer, and one heat exchanger. Consequently, failure of any of these would render TCWS inoperable, resulting in plasma shutdown. The application of reliability, availability, maintainability, and inspectability (RAMI) tools during the different stages of TCWS design is crucial for optimization purposes and for maintaining compliance with project requirements. RAMI analysis will indicate appropriate equipment redundancy that provides graceful degradation in the event of an equipment failure. This analysis helps demonstrate that using proven, commercially available equipment is better than using custom-designed equipment with no field experience and lowers specific costs while providing higher reliability. This paper presents a brief description of the TCWS conceptual design and the application of RAMI tools to optimize the design at different stages during the project.
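
    The redundancy trade at the heart of that RAMI argument reduces to series/parallel availability arithmetic: a loop that needs every component is a product of availabilities, while redundant units only fail together. The sketch below compares a single-pump loop against a redundant-pump loop using made-up MTBF/MTTR figures, not TCWS data.

      def availability(mtbf_h, mttr_h):
          return mtbf_h / (mtbf_h + mttr_h)

      def series(*avails):                  # all components needed -> product
          a = 1.0
          for x in avails:
              a *= x
          return a

      def parallel(*avails):                # loop survives if any redundant unit works
          u = 1.0
          for x in avails:
              u *= (1.0 - x)
          return 1.0 - u

      pump = availability(20_000, 200)      # hypothetical pump MTBF/MTTR, hours
      pressurizer = availability(50_000, 400)
      heat_exchanger = availability(80_000, 600)

      single_pump_loop = series(pump, pressurizer, heat_exchanger)
      redundant_pump_loop = series(parallel(pump, pump), pressurizer, heat_exchanger)
      print(f"single pump:    {single_pump_loop:.4f}")
      print(f"redundant pump: {redundant_pump_loop:.4f}")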

  7. Advanced Gas Turbine (AGT) powertrain system

    NASA Technical Reports Server (NTRS)

    Helms, H. E.; Kaufeld, J.; Kordes, R.

    1981-01-01

    A 74.5 kW (100 hp) advanced automotive gas turbine engine is described. A design iteration to improve the weight and production cost associated with the original concept is discussed. Major rig tests included 15 hours of compressor testing to 80% design speed, and the results are presented. Approximately 150 hours of cold flow testing showed duct loss to be less than the design goal. Combustor test results are presented for initial checkout tests. Turbine design and rig fabrication are discussed. From a materials study of six methods to fabricate rotors, two have been selected for further effort. A discussion of all six methods is given.

  8. Elementary students' engagement in failure-prone engineering design tasks

    NASA Astrophysics Data System (ADS)

    Andrews, Chelsea Joy

    Although engineering education has been practiced at the undergraduate level for over a century, only fairly recently has the field broadened to include the elementary level; the pre-college division of the American Society of Engineering Education was established in 2003. As a result, while recent education standards require engineering in elementary schools, current studies are still filling in basic research on how best to design and implement elementary engineering activities. One area in need of investigation is how students engage with physical failure in design tasks. In this dissertation, I explore how upper elementary students engage in failure-prone engineering design tasks in an out-of-school environment. In a series of three empirical case studies, I look closely at how students evaluate failed tests and decide on changes to their design constructions, how their reasoning evolves as they repeatedly encounter physical failure, and how students and facilitators co-construct testing norms where repetitive failure is manageable. I also briefly investigate how students' engagement differs in a task that features near-immediate success. By closely examining student groups' discourse and their interactions with their design constructions, I found that these students: are able to engage in iteration and see failure-as-feedback with minimal externally-imposed structure; seem to be designing in a more sophisticated manner, attending to multiple causal factors, after experiencing repetitive failure; and are able to manage the stress and frustration of repetitive failure, provided the co-constructed testing norms of the workshop environment are supportive of failure management. These results have both pedagogical implications, in terms of how to create and facilitate design tasks, and methodological implications--namely, I highlight the particular insights afforded by a case study approach for analyzing engagement in design tasks.

  9. Hierarchical Engine for Large-scale Infrastructure Co-Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-04-24

    HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.
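
    The co-iteration feature amounts to a fixed-point loop between coupled federates at each time step. The sketch below couples a toy feeder voltage model to a toy voltage-dependent load model and iterates until both agree; it illustrates the idea only and does not use the HELICS API.

      def grid_voltage(load_kw):            # toy feeder model: voltage sags with load
          return 1.05 - 0.0005 * load_kw

      def end_use_load(voltage_pu):         # toy voltage-dependent load model
          return 100.0 * voltage_pu ** 2

      for t in range(3):                    # outer time loop
          v, load = 1.05, 0.0
          for k in range(50):               # co-iterate to convergence at this step
              load_new = end_use_load(v)
              v_new = grid_voltage(load_new)
              if abs(v_new - v) < 1e-9 and abs(load_new - load) < 1e-9:
                  break
              v, load = v_new, load_new
          print(f"t={t}  iterations={k}  V={v:.4f} pu  load={load:.2f} kW")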

  10. Towards Single-Step Biofabrication of Organs on a Chip via 3D Printing.

    PubMed

    Knowlton, Stephanie; Yenilmez, Bekir; Tasoglu, Savas

    2016-09-01

    Organ-on-a-chip engineering employs microfabrication of living tissues within microscale fluid channels to create constructs that closely mimic human organs. With the advent of 3D printing, we predict that single-step fabrication of these devices will enable rapid design and cost-effective iterations in the development stage, facilitating rapid innovation in this field. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Re-typograph phase I: a proof-of-concept for typeface parameter extraction from historical documents

    NASA Astrophysics Data System (ADS)

    Lamiroy, Bart; Bouville, Thomas; Blégean, Julien; Cao, Hongliu; Ghamizi, Salah; Houpin, Romain; Lloyd, Matthias

    2015-01-01

    This paper reports on the first phase of an attempt to create a full retro-engineering pipeline that aims to construct a complete set of coherent typographic parameters defining the typefaces used in a printed homogeneous text. It should be stressed that this process cannot reasonably be expected to be fully automatic and that it is designed to include human interaction. Although font design is governed by a set of quite robust and formal geometric rulesets, it still heavily relies on subjective human interpretation. Furthermore, different parameters applied to the generic rulesets may actually result in quite similar and visually difficult to distinguish typefaces, making the retro-engineering an inverse problem that is ill-conditioned once shape distortions (related to the printing and/or scanning process) come into play. This work is the first phase of a long iterative process, in which we will progressively study and assess the techniques from the state of the art that are most suited to our problem and investigate new directions when they prove inadequate. As a first step, this is more of a feasibility proof-of-concept that will allow us to clearly pinpoint the items that will require more in-depth research over the next iterations.

  12. Computational Investigation of a Boundary-Layer Ingesting Propulsion System for the Common Research Model

    NASA Technical Reports Server (NTRS)

    Blumenthal, Brennan T.; Elmiligui, Alaa; Geiselhart, Karl A.; Campbell, Richard L.; Maughmer, Mark D.; Schmitz, Sven

    2016-01-01

    The present paper examines potential propulsive and aerodynamic benefits of integrating a Boundary-Layer Ingestion (BLI) propulsion system into a typical commercial aircraft using the Common Research Model (CRM) geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment is used to generate engine conditions for CFD analysis. Improvements to the BLI geometry are made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model is shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method, and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs are presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2 deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.4% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from Boundary-Layer Ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.

  13. Computational Investigation of a Boundary-Layer Ingestion Propulsion System for the Common Research Model

    NASA Technical Reports Server (NTRS)

    Blumenthal, Brennan

    2016-01-01

    This thesis will examine potential propulsive and aerodynamic benefits of integrating a boundary-layer ingestion (BLI) propulsion system with a typical commercial aircraft using the Common Research Model geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment will be used to generate engine conditions for CFD analysis. Improvements to the BLI geometry will be made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model is shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs are presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2 deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.3% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from boundary-layer ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.

  14. Engineering a Healthier Watershed: Middle School Students Use Engineering Design to Lessen the Impact of Their Campus' Impervious Surfaces on Their Local Watershed

    NASA Astrophysics Data System (ADS)

    Gardner, Elizabeth Claire

    It is important that students understand not only how their local watershed functions, but also how it is being impacted by impervious surfaces. Additionally, students need experience exploring the scientific and engineering practices that are necessary for a strong STEM background. With this knowledge students can be empowered to tackle this real and local problem using engineering design, a powerful practice gaining momentum and clarity through its prominence in the recent Framework for K-12 Science Education. Twenty classes of suburban sixth-graders participated in a new five-week Watershed Engineering Design Unit taught by their regular science teachers. Students engaged in scientific inquiry to learn about the structure, function, and health of their local watersheds, focusing on the effects of impervious surfaces. In small groups, students used the engineering design process to propose solutions to lessen the impact of runoff from their school campuses. The goal of this evaluation was to determine the effectiveness of the curriculum in terms of student gains in understanding of (1) watershed function, (2) the impact of impervious surfaces, and (3) the engineering design process. To determine the impact of this curriculum on their learning, students took multiple-choice pre- and post-assessments made up of items covering the three categories above. This data was analyzed for statistical significance using a lower-tailed paired sample t-test. All three objectives showed statistically significant learning gains and the results were used to recommend improvements to the curriculum and the assessment instrument for future iterations.
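
    The reported analysis is a standard lower-tailed paired t-test on matched pre/post scores; the sketch below shows that test with fabricated placeholder scores, not the study's data (the alternative keyword requires SciPy 1.6 or later).

      from scipy import stats

      pre  = [4, 6, 5, 7, 3, 5, 6, 4, 5, 6]     # pre-assessment scores (placeholders)
      post = [6, 7, 7, 8, 5, 6, 8, 6, 7, 7]     # post-assessment scores (placeholders)

      # H0: mean(pre - post) >= 0;  H1: mean(pre - post) < 0 (i.e., scores improved)
      t_stat, p_value = stats.ttest_rel(pre, post, alternative="less")
      print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.4f}")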

  15. Performance of a Laser Ignited Multicylinder Lean Burn Natural Gas Engine

    DOE PAGES

    Almansour, Bader; Vasu, Subith; Gupta, Sreenath B.; ...

    2017-06-06

    Market demands for lower fueling costs and higher specific powers in stationary natural gas engines have engine designs trending towards higher in-cylinder pressures and leaner combustion operation. However, ignition remains the main limiting factor in achieving further performance improvements in these engines. Addressing this concern, while incorporating various recent advances in optics and laser technologies, laser igniters were designed and developed through numerous iterations. Final designs incorporated water-cooled, passively Q-switched, Nd:YAG micro-lasers that were optimized for stable operation under harsh engine conditions. Subsequently, the micro-lasers were installed in the individual cylinders of a lean-burn, 350 kW, inline 6-cylinder, open-chamber, spark-ignited engine and tests were conducted. To the best of our knowledge, this is the world's first demonstration of a laser ignited multi-cylinder natural gas engine. The engine was operated at high-load (298 kW) and rated speed (1800 rpm) conditions. Ignition timing sweeps and excess-air ratio (λ) sweeps were performed while keeping the NOx emissions below the USEPA regulated value (BSNOx < 1.34 g/kW-hr) and while maintaining ignition stability at industry acceptable values (COV_IMEP < 5%). Through such engine tests, the relative merits of (i) a standard electrical ignition system and (ii) a laser ignition system were determined. In conclusion, a rigorous combustion data analysis was performed and the main reasons leading to improved performance in the case of laser ignition were identified.
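
    The ignition-stability criterion quoted above (COV_IMEP < 5%) is the coefficient of variation of the indicated mean effective pressure over consecutive cycles. The sketch below computes it from a list of made-up per-cycle IMEP values, not measured data.

      import statistics

      imep_bar = [14.2, 14.5, 13.9, 14.4, 14.1, 14.6, 14.0, 14.3]   # per-cycle IMEP, bar

      cov_imep = 100.0 * statistics.stdev(imep_bar) / statistics.mean(imep_bar)
      print(f"COV_IMEP = {cov_imep:.2f} %   (industry target quoted in the record: < 5 %)")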

  16. Performance of a Laser Ignited Multicylinder Lean Burn Natural Gas Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almansour, Bader; Vasu, Subith; Gupta, Sreenath B.

    Market demands for lower fueling costs and higher specific powers in stationary natural gas engines have engine designs trending towards higher in-cylinder pressures and leaner combustion operation. However, ignition remains the main limiting factor in achieving further performance improvements in these engines. Addressing this concern, while incorporating various recent advances in optics and laser technologies, laser igniters were designed and developed through numerous iterations. Final designs incorporated water-cooled, passively Q-switched, Nd:YAG micro-lasers that were optimized for stable operation under harsh engine conditions. Subsequently, the micro-lasers were installed in the individual cylinders of a lean-burn, 350 kW, inline 6-cylinder, open-chamber, spark-ignited engine and tests were conducted. To the best of our knowledge, this is the world's first demonstration of a laser ignited multi-cylinder natural gas engine. The engine was operated at high-load (298 kW) and rated speed (1800 rpm) conditions. Ignition timing sweeps and excess-air ratio (λ) sweeps were performed while keeping the NOx emissions below the USEPA regulated value (BSNOx < 1.34 g/kW-hr) and while maintaining ignition stability at industry acceptable values (COV_IMEP < 5%). Through such engine tests, the relative merits of (i) a standard electrical ignition system and (ii) a laser ignition system were determined. In conclusion, a rigorous combustion data analysis was performed and the main reasons leading to improved performance in the case of laser ignition were identified.

  17. Design Issues of the Pre-Compression Rings of Iter

    NASA Astrophysics Data System (ADS)

    Knaster, J.; Baker, W.; Bettinali, L.; Jong, C.; Mallick, K.; Nardi, C.; Rajainmaki, H.; Rossi, P.; Semeraro, L.

    2010-04-01

    The pre-compression system is the keystone of ITER. A centripetal force of ~30 MN will be applied under cryogenic conditions at the top and bottom of each TF coil. It will prevent the 'breathing effect' caused by the bursting forces occurring during plasma operation that would otherwise compromise the machine design life of 30000 cycles. Different alternatives have been studied over the years. Two major design requirements limit the engineering possibilities: 1) the limited available space and 2) the need to hamper eddy currents flowing in the structures. Six unidirectionally wound glass-fibre composite rings (~5 m diameter and ~300 mm cross section) are the final design choice. The rings must withstand maximum hoop stresses of <500 MPa at room temperature conditions. Although retightening or replacing the pre-compression rings in case of malfunctioning is possible, they have to sustain the load during the entire 20 years of machine operation. The present paper summarizes the pre-compression ring R&D carried out over several years. In particular, we address the composite choice and mechanical characterization, the assessment of creep and stress relaxation phenomena, sub-sized ring testing, and the optimal ring fabrication processes that have led to the present final design.
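
    The quoted hoop-stress limit can be related to the coil loads through the thin-ring relation: a ring of radius R reacting a uniformly distributed total radial load F carries a hoop force F/(2*pi), and the hoop stress is that force divided by the cross-sectional area. The load split among rings and the cross-section area in the sketch below are illustrative assumptions, not design values.

      import math

      n_tf_coils = 18
      force_per_coil = 30e6           # N, ~30 MN centripetal load per coil (from the text)
      rings_per_end = 3               # assumed: six rings split between top and bottom

      F_total = n_tf_coils * force_per_coil / rings_per_end   # radial load taken by one ring
      N_hoop = F_total / (2.0 * math.pi)                      # hoop force in the ring
      A = 0.30 * 0.20                                         # m^2, assumed cross section
      print(f"hoop stress ~ {N_hoop / A / 1e6:.0f} MPa")      # same order as the quoted limit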

  18. ITER in-vessel system design and performance

    NASA Astrophysics Data System (ADS)

    Parker, R. R.

    2000-03-01

    The article reviews the design and performance of the in-vessel components of ITER as developed for the Engineering Design Activities (EDA) Final Design Report. The double walled vacuum vessel is the first confinement boundary and is designed to maintain its integrity under all normal and off-normal conditions, e.g. the most intense vertical displacement events (VDEs) and seismic events. The shielding blanket consists of modules connected to a toroidal backplate by flexible connectors which allow differential displacements due to temperature non-uniformities. Breeding blanket modules replace the shield modules for the Enhanced Performance Phase. The divertor concept is based on a cassette structure which is convenient for remote installation and removal. High heat flux (HHF) components are mechanically attached and can be removed and replaced in the hot cell. Operation of the divertor is based on achieving partially detached plasma conditions along and near the separatrix. Nominal heat loads of 5-10 MW/m2 are expected on the target. These are accommodated by HHF technology developed during the EDA. Disruptions and VDEs can lead to melting of the first wall armour but no damage to the underlying structure. Stresses in the main structural components remain within allowable ranges for all postulated disruption and seismic events.

  19. High-Level Performance Modeling of SAR Systems

    NASA Technical Reports Server (NTRS)

    Chen, Curtis

    2006-01-01

    SAUSAGE (Still Another Utility for SAR Analysis that's General and Extensible) is a computer program for modeling the performance of synthetic-aperture radar (SAR) or interferometric synthetic-aperture radar (InSAR or IFSAR) systems. The user is assumed to be familiar with the basic principles of SAR imaging and interferometry. Given design parameters (e.g., altitude, power, and bandwidth) that characterize a radar system, the software predicts various performance metrics (e.g., signal-to-noise ratio and resolution). SAUSAGE is intended to be a general software tool for quick, high-level evaluation of radar designs; it is not meant to capture all the subtleties, nuances, and particulars of specific systems. SAUSAGE was written to facilitate the exploration of engineering tradeoffs within the multidimensional space of design parameters. Typically, this space is examined through an iterative process of adjusting the values of the design parameters and examining the effects of the adjustments on the overall performance of the system at each iteration. The software is designed to be modular and extensible to enable consideration of a variety of operating modes and antenna beam patterns, including, for example, strip-map and spotlight SAR acquisitions, polarimetry, burst modes, and squinted geometries.
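
    At the level of fidelity such a tool targets, the headline metrics follow from textbook relations: slant-range resolution c/(2B), stripmap azimuth resolution of roughly half the antenna length, and a point-target signal-to-noise ratio from the radar equation. The sketch below rolls these up for illustrative parameter values; real SAR budgets add processing gains and loss terms omitted here, and none of the numbers describe a specific system.

      import math

      c, k = 3.0e8, 1.38e-23
      P_t, G_dB, wavelength = 1.0e3, 35.0, 0.24      # W, dBi, m (L-band example)
      B, R, sigma = 80.0e6, 700.0e3, 10.0            # Hz, m, m^2
      T_sys, L_dB, antenna_len = 300.0, 3.0, 10.0    # K, dB, m

      G = 10 ** (G_dB / 10)
      L = 10 ** (L_dB / 10)

      range_res = c / (2.0 * B)                      # slant-range resolution
      azimuth_res = antenna_len / 2.0                # stripmap azimuth resolution limit
      snr = (P_t * G ** 2 * wavelength ** 2 * sigma) / (
            (4.0 * math.pi) ** 3 * R ** 4 * k * T_sys * B * L)

      print(f"range res = {range_res:.2f} m, azimuth res = {azimuth_res:.2f} m")
      print(f"single-pulse SNR = {10 * math.log10(snr):.1f} dB")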

  20. Designing synthetic biology.

    PubMed

    Agapakis, Christina M

    2014-03-21

    Synthetic biology is frequently defined as the application of engineering design principles to biology. Such principles are intended to streamline the practice of biological engineering, to shorten the time required to design, build, and test synthetic gene networks. This streamlining of iterative design cycles can facilitate the future construction of biological systems for a range of applications in the production of fuels, foods, materials, and medicines. The promise of these potential applications as well as the emphasis on design has prompted critical reflection on synthetic biology from design theorists and practicing designers from many fields, who can bring valuable perspectives to the discipline. While interdisciplinary connections between biologists and engineers have built synthetic biology via the science and the technology of biology, interdisciplinary collaboration with artists, designers, and social theorists can provide insight on the connections between technology and society. Such collaborations can open up new avenues and new principles for research and design, as well as shed new light on the challenging context-dependence-both biological and social-that face living technologies at many scales. This review is inspired by the session titled "Design and Synthetic Biology: Connecting People and Technology" at Synthetic Biology 6.0 and covers a range of literature on design practice in synthetic biology and beyond. Critical engagement with how design is used to shape the discipline opens up new possibilities for how we might design the future of synthetic biology.

  1. An Exploratory Study of Cost Engineering in Axiomatic Design: Creation of the Cost Model Based on an FR-DP Map

    NASA Technical Reports Server (NTRS)

    Lee, Taesik; Jeziorek, Peter

    2004-01-01

    Large complex projects cost large sums of money throughout their life cycle for a variety of reasons and causes. For such large programs, the credible estimation of the project cost, a quick assessment of the cost of making changes, and the management of the project budget with effective cost reduction determine the viability of the project. Cost engineering that deals with these issues requires a rigorous method and systematic processes. This paper introduces a logical framework to achieve effective cost engineering. The framework is built upon the Axiomatic Design process. The structure in the Axiomatic Design process provides a good foundation for closely tying engineering design and cost information together. The cost framework presented in this paper is a systematic link between the functional domain (FRs), the physical domain (DPs), the cost domain (CUs), and a task/process-based model. The FR-DP map relates a system's functional requirements to design solutions across all levels and branches of the decomposition hierarchy. DPs are mapped into CUs, which provides a means to estimate the cost of design solutions - DPs - from the cost of the physical entities in the system - CUs. The task/process model describes the iterative process of developing each of the CUs, and is used to estimate the cost of CUs. By linking the four domains, this framework provides superior traceability from requirements to cost information.
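
    The FR-DP-CU traceability described above can be held in plain mappings, with cost rolled up from cost units to design parameters and then to functional requirements. The names and dollar figures in the sketch below are invented solely to show the linkage; in the paper's framework the CU costs would come from the task/process model.

      fr_to_dp = {"FR1 transmit data": ["DP1 antenna", "DP2 transmitter"],
                  "FR2 store data":    ["DP3 solid-state recorder"]}

      dp_to_cu = {"DP1 antenna":              ["CU1 reflector", "CU2 feed"],
                  "DP2 transmitter":          ["CU3 amplifier"],
                  "DP3 solid-state recorder": ["CU4 memory board"]}

      cu_cost = {"CU1 reflector": 2.0, "CU2 feed": 0.5,          # $M per cost unit,
                 "CU3 amplifier": 1.2, "CU4 memory board": 0.8}  # placeholders only

      def dp_cost(dp):
          # cost of a design parameter = sum of its cost units
          return sum(cu_cost[cu] for cu in dp_to_cu[dp])

      def fr_cost(fr):
          # cost of a functional requirement = sum of its design parameters
          return sum(dp_cost(dp) for dp in fr_to_dp[fr])

      for fr in fr_to_dp:
          print(f"{fr}: ${fr_cost(fr):.1f}M")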

  2. System engineering techniques for establishing balanced design and performance guidelines for the advanced telerobotic testbed

    NASA Technical Reports Server (NTRS)

    Zimmerman, W. F.; Matijevic, J. R.

    1987-01-01

    Novel system engineering techniques have been developed and applied to establish structured design and performance objectives for the Telerobotics Testbed that reduce technical risk while still allowing the testbed to demonstrate an advancement in state-of-the-art robotic technologies. To establish the appropriate tradeoff structure and balance of technology performance against technical risk, an analytical database was developed which drew on: (1) automation/robot-technology availability projections, (2) typical or potential application mission task sets, (3) performance simulations, (4) project schedule constraints, and (5) project funding constraints. Design tradeoffs and configuration/performance iterations were conducted by comparing feasible technology/task set configurations against schedule/budget constraints as well as original program target technology objectives. The final system configuration, task set, and technology set reflected a balanced advancement in state-of-the-art robotic technologies, while meeting programmatic objectives and schedule/cost constraints.

  3. A Systems Engineering Approach to Architecture Development

    NASA Technical Reports Server (NTRS)

    Di Pietro, David A.

    2014-01-01

    Architecture development is conducted prior to system concept design when there is a need to determine the best-value mix of systems that works collectively in specific scenarios and time frames to accomplish a set of mission area objectives. While multiple architecture frameworks exist, they often require use of unique taxonomies and data structures. In contrast, this presentation characterizes architecture development using terminology widely understood within the systems engineering community. Using a notional civil space architecture example, it employs a multi-tier framework to describe the enterprise level architecture and illustrates how results of lower tier, mission area architectures integrate into the enterprise architecture. It also presents practices for conducting effective mission area architecture studies, including establishing the trade space, developing functions and metrics, evaluating the ability of potential design solutions to meet the required functions, and expediting study execution through the use of iterative design cycles.

  4. A Systems Engineering Approach to Architecture Development

    NASA Technical Reports Server (NTRS)

    Di Pietro, David A.

    2015-01-01

    Architecture development is often conducted prior to system concept design when there is a need to determine the best-value mix of systems that works collectively in specific scenarios and time frames to accomplish a set of mission area objectives. While multiple architecture frameworks exist, they often require use of unique taxonomies and data structures. In contrast, this paper characterizes architecture development using terminology widely understood within the systems engineering community. Using a notional civil space architecture example, it employs a multi-tier framework to describe the enterprise level architecture and illustrates how results of lower tier, mission area architectures integrate into the enterprise architecture. It also presents practices for conducting effective mission area architecture studies, including establishing the trade space, developing functions and metrics, evaluating the ability of potential design solutions to meet the required functions, and expediting study execution through the use of iterative design cycles.

  5. A Systems Engineering Approach to Architecture Development

    NASA Technical Reports Server (NTRS)

    Di Pietro, David A.

    2015-01-01

    Architecture development is often conducted prior to system concept design when there is a need to determine the best-value mix of systems that works collectively in specific scenarios and time frames to accomplish a set of mission area objectives. While multiple architecture frameworks exist, they often require use of unique taxonomies and data structures. In contrast, this paper characterizes architecture development using terminology widely understood within the systems engineering community. Using a notional civil space architecture example, it employs a multi-tier framework to describe the enterprise level architecture and illustrates how results of lower tier, mission area architectures integrate into the enterprise architecture. It also presents practices for conducting effective mission area architecture studies, including establishing the trade space, developing functions and metrics, evaluating the ability of potential design solutions to meet the required functions, and expediting study execution through the use of iterative design cycles.

  6. Variable aperture-based ptychographical iterative engine method

    NASA Astrophysics Data System (ADS)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step, and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and since the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; the technique can therefore potentially be applied in a wide range of scientific research.
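
    The modified PIE reconstruction loops over the recorded diffraction patterns, enforcing the measured modulus in the detector plane and updating the sample estimate in the illumination plane. The NumPy sketch below shows a generic ePIE-style object update in that spirit; it is only an illustration of this algorithm family, not the authors' vaPIE code, and the function and variable names are hypothetical.

        import numpy as np

        def pie_reconstruct(intensities, probes, n_iter=100, alpha=1.0):
            """Minimal ePIE-style object update (illustrative, not the authors' vaPIE code).

            intensities : list of measured far-field intensity patterns (2-D arrays)
            probes      : list of known/estimated illumination functions, one per pattern
                          (for vaPIE these would model the aperture at each size)
            """
            obj = np.ones_like(probes[0], dtype=complex)       # initial guess for the sample
            for _ in range(n_iter):
                for I_meas, P in zip(intensities, probes):
                    psi = P * obj                               # exit wave behind the sample
                    Psi = np.fft.fft2(psi)                      # propagate to the detector plane
                    Psi = np.sqrt(I_meas) * np.exp(1j * np.angle(Psi))  # enforce measured modulus
                    psi_new = np.fft.ifft2(Psi)                 # propagate back
                    # object update driven by the difference of exit waves
                    obj += alpha * np.conj(P) / (np.abs(P).max() ** 2) * (psi_new - psi)
            return obj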

  7. LOW-ENGINE-FRICTION TECHNOLOGY FOR ADVANCED NATURAL-GAS RECIPROCATING ENGINES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Victor W. Wong; Tian Tian; Grant Smedley

    2004-09-30

    This program aims at improving the efficiency of advanced natural-gas reciprocating engines (ANGRE) by reducing piston/ring assembly friction without major adverse effects on engine performance, such as increased oil consumption and emissions. An iterative process of simulation, experimentation, and analysis is being followed towards the goal of demonstrating a complete optimized low-friction engine system. To date, a detailed set of piston/ring dynamic and friction models has been developed and applied, illustrating the fundamental relationships between design parameters and friction losses. Various low-friction strategies and ring-design concepts have been explored, and engine experiments have been done on a full-scale Waukesha VGF F18 in-line 6-cylinder power generation engine rated at 370 kW at 1800 rpm. Current accomplishments include designing and testing ring-packs using a subtle top-compression-ring profile (skewed barrel design), lowering the tension of the oil-control ring, and employing a negative twist on the scraper ring to control oil consumption. Initial test data indicate that piston ring-pack friction was reduced by 35% by lowering the oil-control ring tension alone, which corresponds to a 1.5% improvement in fuel efficiency. Although small in magnitude, this improvement represents a first step towards anticipated aggregate improvements from other strategies. Other ring-pack design strategies to lower friction have been identified, including a reduced axial distance between the top two rings and a tilted top-ring groove. Some of these configurations have been tested and some await further evaluation. Colorado State University performed the tests and Waukesha Engine Dresser, Inc. provided technical support. Key elements of the continuing work include optimizing the engine piston design, application of surface and material developments in conjunction with improved lubricant properties, system modeling and analysis, and continued technology demonstration in an actual full-sized reciprocating natural-gas engine.

  8. Installation and Testing of ITER Integrated Modeling and Analysis Suite (IMAS) on DIII-D

    NASA Astrophysics Data System (ADS)

    Lao, L.; Kostuk, M.; Meneghini, O.; Smith, S.; Staebler, G.; Kalling, R.; Pinches, S.

    2017-10-01

    A critical objective of the ITER Integrated Modeling Program is the development of IMAS to support ITER plasma operation and research activities. An IMAS framework has been established based on the earlier work carried out within the EU. It consists of a physics data model and a workflow engine. The data model is capable of representing both simulation and experimental data and is applicable to ITER and other devices. IMAS has been successfully installed on a local DIII-D server using a flexible installer capable of managing the core data access tools (Access Layer and Data Dictionary) and optionally the Kepler workflow engine and coupling tools. A general adaptor for OMFIT (a workflow engine) is being built for adaptation of any analysis code to IMAS using a new IMAS universal access layer (UAL) interface developed from an existing OMFIT EU Integrated Tokamak Modeling UAL. Ongoing work includes development of a general adaptor for EFIT and TGLF based on this new UAL that can be readily extended for other physics codes within OMFIT. Work supported by US DOE under DE-FC02-04ER54698.
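
    The abstract describes the framework as a physics data model plus workflow engines reached through a universal access layer, with per-code adaptors. The sketch below illustrates that adaptor pattern in Python; the class and method names are hypothetical stand-ins and do not reproduce the real IMAS Access Layer or OMFIT APIs.

        from abc import ABC, abstractmethod

        class UALInterface(ABC):
            """Hypothetical stand-in for a universal access layer; the real IMAS
            Access Layer API is not reproduced here."""
            @abstractmethod
            def get(self, ids_name: str) -> dict: ...
            @abstractmethod
            def put(self, ids_name: str, data: dict) -> None: ...

        class EquilibriumCodeAdaptor:
            """Sketch of a generic adaptor: pull inputs from the data store,
            run a physics code (e.g. an equilibrium solver), write results back."""
            def __init__(self, ual: UALInterface, run_code):
                self.ual = ual
                self.run_code = run_code          # callable wrapping the physics code

            def execute(self):
                inputs = self.ual.get("equilibrium")   # IDS-like name, illustrative only
                outputs = self.run_code(inputs)
                self.ual.put("equilibrium", outputs)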

  9. Sensitivity based coupling strengths in complex engineering systems

    NASA Technical Reports Server (NTRS)

    Bloebaum, C. L.; Sobieszczanski-Sobieski, J.

    1993-01-01

    The iterative design scheme necessary for complex engineering systems is generally time consuming and difficult to implement. Although a decomposition approach results in a more tractable problem, the inherent couplings make establishing the interdependencies of the various subsystems difficult. Another difficulty lies in identifying the most efficient order of execution for the subsystem analyses. The paper describes an approach for determining the dependencies that could be suspended during the system analysis with minimal accuracy losses, thereby reducing the system complexity. A new multidisciplinary testbed is presented, involving the interaction of structures, aerodynamics, and performance disciplines. Results are presented to demonstrate the effectiveness of the system reduction scheme.
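
    The core idea is to rank inter-subsystem couplings by normalized sensitivities and temporarily suspend the weak ones during system analysis. A minimal NumPy sketch of that ranking step is given below; the normalization and threshold are illustrative assumptions, not the paper's exact metric.

        import numpy as np

        def weak_couplings(sensitivity, ref_outputs, ref_inputs, threshold=0.05):
            """Rank inter-subsystem couplings by normalized sensitivity and flag
            candidates for temporary suspension (illustrative sketch only).

            sensitivity[i, j] ~ d(output_i)/d(coupling_input_j) from a system
            sensitivity analysis; ref_* are reference magnitudes used to
            non-dimensionalize.
            """
            norm = np.abs(sensitivity) * np.abs(ref_inputs)[None, :] / np.abs(ref_outputs)[:, None]
            weak = np.argwhere(norm < threshold)   # couplings with little system-level effect
            return norm, [tuple(idx) for idx in weak]

        # example: 3 subsystem outputs coupled through 3 exchanged variables
        S = np.array([[0.90, 0.01, 0.30],
                      [0.02, 1.10, 0.04],
                      [0.25, 0.03, 0.80]])
        strength, suspendable = weak_couplings(S, ref_outputs=np.ones(3), ref_inputs=np.ones(3))
        print(suspendable)   # the weak couplings that could be suspended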

  10. Performance evaluation approach for the supercritical helium cold circulators of ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaghela, H.; Sarkar, B.; Bhattacharya, R.

    2014-01-29

    The ITER project design foresees Supercritical Helium (SHe) forced flow cooling for the main cryogenic components, namely, the superconducting (SC) magnets and cryopumps (CP). Therefore, cold circulators have been selected to provide the required SHe mass flow rate to cope with specific operating conditions and technical requirements. Considering the availability impacts of such machines, it has been decided to perform evaluation tests of the cold circulators at operating conditions prior to the series production in order to minimize the project technical risks. A proposal has been conceptualized, evaluated and simulated to perform representative tests of the full scale SHe cold circulators. The objectives of the performance tests include the validation of normal operating condition, transient and off-design operating modes as well as the efficiency measurement. A suitable process and instrumentation diagram of the test valve box (TVB) has been developed to implement the tests at the required thermodynamic conditions. The conceptual engineering design of the TVB has been developed along with the required thermal analysis for the normal operating conditions to support the performance evaluation of the SHe cold circulator.
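
    The abstract lists efficiency measurement among the test objectives but does not state the definition used. For a circulator or compressor, a common choice is the isentropic efficiency computed from inlet and outlet states, as in the small sketch below (the enthalpy values are purely illustrative, not test data).

        def isentropic_efficiency(h_in, h_out, h_out_ideal):
            """Isentropic efficiency of a circulator/compressor from specific
            enthalpies (J/kg): ideal (isentropic) enthalpy rise over actual rise.
            A common definition; the abstract does not state which one the ITER
            tests use."""
            return (h_out_ideal - h_in) / (h_out - h_in)

        # purely illustrative numbers for supercritical helium, not test data
        eta = isentropic_efficiency(h_in=12.0e3, h_out=13.0e3, h_out_ideal=12.65e3)
        print(f"isentropic efficiency ~ {eta:.2f}")   # ~0.65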

  11. Data Integration Tool: Permafrost Data Debugging

    NASA Astrophysics Data System (ADS)

    Wilcox, H.; Schaefer, K. M.; Jafarov, E. E.; Pulsifer, P. L.; Strawhacker, C.; Yarmey, L.; Basak, R.

    2017-12-01

    We developed a Data Integration Tool (DIT) to significantly speed up the time of manual processing needed to translate inconsistent, scattered historical permafrost data into files ready to ingest directly into the Global Terrestrial Network-Permafrost (GTN-P). The United States National Science Foundation funded this project through the National Snow and Ice Data Center (NSIDC) with the GTN-P to improve permafrost data access and discovery. We leverage these data to support science research and policy decisions. DIT is a workflow manager that divides data preparation and analysis into a series of steps or operations called widgets (https://github.com/PermaData/DIT). Each widget does a specific operation, such as read, multiply by a constant, sort, plot, and write data. DIT allows the user to select and order the widgets as desired to meet their specific needs, incrementally interact with and evolve the widget workflows, and save those workflows for reproducibility. Taking ideas from visual programming found in the art and design domain, debugging and iterative design principles from software engineering, and the scientific data processing and analysis power of Fortran and Python, DIT was written for interactive, iterative data manipulation, quality control, processing, and analysis of inconsistent data in an easily installable application. DIT was used to completely translate one dataset (133 sites) that was successfully added to GTN-P, to nearly translate three datasets (270 sites), and is scheduled to translate 10 more datasets (~1000 sites) from the legacy inactive site data holdings of the Frozen Ground Data Center (FGDC). Iterative development has provided the permafrost and wider scientific community with an extendable tool designed specifically for the iterative process of translating unruly data.
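
    A widget-based workflow of this kind can be captured in a few lines: each widget wraps one operation and the workflow applies them in the user-chosen order. The Python sketch below is a minimal illustration of that pattern, not the actual DIT code available at the repository linked above.

        class Widget:
            """One step in a data-preparation workflow: wraps a function so steps
            can be selected, ordered, and re-run (minimal sketch, not the real DIT)."""
            def __init__(self, name, func):
                self.name, self.func = name, func
            def __call__(self, data):
                return self.func(data)

        def run_workflow(widgets, data):
            for w in widgets:
                data = w(data)        # each widget transforms the data and passes it on
            return data

        # example workflow with hypothetical permafrost-style records:
        # convert temperatures, then sort by depth
        records = [{"depth": 1.0, "temp_c": -2.1}, {"depth": 0.2, "temp_c": -0.7}]
        workflow = [
            Widget("to_kelvin", lambda rows: [{**r, "temp_k": r["temp_c"] + 273.15} for r in rows]),
            Widget("sort_by_depth", lambda rows: sorted(rows, key=lambda r: r["depth"])),
        ]
        print(run_workflow(workflow, records))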

  12. Evaluation of ITER MSE Viewing Optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, S; Lerner, S; Morris, K

    2007-03-26

    The Motional Stark Effect (MSE) diagnostic on ITER determines the local plasma current density by measuring the polarization angle of light resulting from the interaction of a high energy neutral heating beam and the tokamak plasma. This light signal has to be transmitted from the edge and core of the plasma to a polarization analyzer located in the port plug. The optical system should either preserve the polarization information, or it should be possible to reliably calibrate any changes induced by the optics. This LLNL Work for Others project for the US ITER Project Office (USIPO) is focused on the design of the viewing optics for both the edge and core MSE systems. Several design constraints were considered, including: image quality, lack of polarization aberrations, ease of construction and cost of mirrors, neutron shielding, and geometric layout in the equatorial port plugs. The edge MSE optics are located in ITER equatorial port 3 and view Heating Beam 5, and the core system is located in equatorial port 1 viewing heating beam 4. The current work is an extension of previous preliminary design work completed by the ITER central team (ITER resources were not available to complete a detailed optimization of this system, and then the MSE was assigned to the US). The optimization of the optical systems at this level was done with the ZEMAX optical ray tracing code. The final LLNL designs decreased the "blur" in the optical system by nearly an order of magnitude, and the polarization blur was reduced by a factor of 3. The mirror sizes were reduced with an estimated cost savings of a factor of 3. The throughput of the system was greater than or equal to the previous ITER design. It was found that optical ray tracing was necessary to accurately measure the throughput. Metal mirrors, while they can introduce polarization aberrations, were used close to the plasma because of the anticipated high heat, particle, and neutron loads. These mirrors formed an intermediate image that then was relayed out of the port plug with more ideal (dielectric) mirrors. Engineering models of the optics, port plug, and neutral beam geometry were also created, using the CATIA ITER models. Two video conference calls with the USIPO provided valuable design guidelines, such as the minimum distance of the first optic from the plasma. A second focus of the project was the calibration of the system. Several different techniques are proposed, both before and during plasma operation. Fixed and rotatable polarizers would be used to characterize the system in the no-plasma case. Obtaining the full modulation spectrum from the polarization analyzer allows measurement of polarization effects and also MHD plasma phenomena. Light from neutral beam interaction with deuterium gas (no plasma) has been found useful to determine the wavelength of each spatial channel. The status of the optical design for the edge (upper) and core (lower) systems is included in the following figure. Several issues should be addressed by a follow-on study, including whether the optical labyrinth has sufficient neutron shielding and a detailed polarization characterization of actual mirrors.

  13. The engineering design of the Tokamak Physics Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, J.A.

    A mission and supporting physics objectives have been developed, which establish an important role for the Tokamak Physics Experiment (TPX) in developing the physics basis for a future fusion reactor. The design of TPX includes advanced physics features, such as shaping and profile control, along with the capability of operating for very long pulses. The development of the superconducting magnets, actively cooled internal hardware, and remote maintenance will be an important technology contribution to future fusion projects, such as ITER. The Conceptual Design and Management Systems for TPX have been developed and reviewed, and the project is beginning Preliminary Design. If adequately funded, the construction project should be completed in the year 2000.

  14. New trends in radiology workstation design

    NASA Astrophysics Data System (ADS)

    Moise, Adrian; Atkins, M. Stella

    2002-05-01

    In radiology workstation design, the race for adding more features is now morphing into an iterative, user-centric design process focused on ergonomics and usability. The extent of the feature list used to be one of the most significant factors in a Picture Archiving and Communication System (PACS) vendor's ability to sell a radiology workstation. Not anymore: the feature list is now very much the same between the major players in the PACS market. How these features work together is what distinguishes different radiology workstations. Integration (with the PACS/Radiology Information System (RIS), the 3D tool, the reporting tool, etc.), usability (user-specific preferences, advanced display protocols, smart activation of tools, etc.), and efficiency (the output a radiologist can generate with the workstation) are now core factors for selecting a workstation. This paper discusses these new trends in radiology workstation design. We demonstrate the importance of the interaction between the PACS vendor (software engineers) and the customer (radiologists) during radiology workstation design. We focus on iterative aspects of workstation development, such as the presentation of early prototypes to as many representative users as possible during the software development cycle, and we present the results of a survey of 8 radiologists on designing a radiology workstation.

  15. Predicting Silk Fiber Mechanical Properties through Multiscale Simulation and Protein Design.

    PubMed

    Rim, Nae-Gyune; Roberts, Erin G; Ebrahimi, Davoud; Dinjaski, Nina; Jacobsen, Matthew M; Martín-Moldes, Zaira; Buehler, Markus J; Kaplan, David L; Wong, Joyce Y

    2017-08-14

    Silk is a promising material for biomedical applications, and much research is focused on how application-specific, mechanical properties of silk can be designed synthetically through proper amino acid sequences and processing parameters. This protocol describes an iterative process between research disciplines that combines simulation, genetic synthesis, and fiber analysis to better design silk fibers with specific mechanical properties. Computational methods are used to assess the protein polymer structure as it forms an interconnected fiber network through shearing and how this process affects fiber mechanical properties. Model outcomes are validated experimentally with the genetic design of protein polymers that match the simulation structures, fiber fabrication from these polymers, and mechanical testing of these fibers. Through iterative feedback between computation, genetic synthesis, and fiber mechanical testing, this protocol will enable a priori prediction capability of recombinant material mechanical properties via insights from the resulting molecular architecture of the fiber network based entirely on the initial protein monomer composition. This style of protocol may be applied to other fields where a research team seeks to design a biomaterial with biomedical application-specific properties. This protocol highlights when and how the three research groups (simulation, synthesis, and engineering) should be interacting to arrive at the most effective method for predictive design of their material.

  16. Aerospace engineering design by systematic decomposition and multilevel optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Barthelemy, J. F. M.; Giles, G. L.

    1984-01-01

    A method is described for systematic analysis and optimization of large engineering systems by decomposition of a large task into a set of smaller subtasks that are solved concurrently. The subtasks may be arranged in hierarchical levels. Analyses are carried out in each subtask using inputs received from other subtasks and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by analysis of its sensitivity to the inputs received from other subtasks, to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization.
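
    The scheme alternates subsystem analyses (top-down, using coupling data from the other subtasks) with sensitivity-augmented optimizations (bottom-up) until the coupled system converges. The Python skeleton below sketches that iteration under simplifying assumptions; the subsystem interface and convergence test are hypothetical, not the paper's implementation.

        def multilevel_iterate(subsystems, gather_inputs, n_cycles=20, tol=1e-4):
            """Skeleton of the alternating analysis/optimization scheme described
            above (an illustrative sketch, not the paper's implementation).

            subsystems    : list of objects providing
                            .analyze(inputs) -> outputs (dict of coupling variables)
                            .optimize(inputs)           (updates the local design in place)
            gather_inputs : callable(name, state) -> inputs for that subsystem
            """
            state = {s.name: s.analyze(gather_inputs(s.name, {})) for s in subsystems}
            for _ in range(n_cycles):
                prev = {k: dict(v) for k, v in state.items()}
                for s in subsystems:               # analyses with current coupling data
                    state[s.name] = s.analyze(gather_inputs(s.name, state))
                for s in reversed(subsystems):     # optimizations from the bottom up
                    s.optimize(gather_inputs(s.name, state))
                change = max(abs(state[n][k] - prev[n][k])
                             for n in state for k in state[n])
                if change < tol:                   # coupled system design has converged
                    break
            return state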

  17. Developing sustainable software solutions for bioinformatics by the “Butterfly” paradigm

    PubMed Central

    Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas

    2014-01-01

    Software design and sustainable software engineering are essential for the long-term development of bioinformatics software. Typical challenges in an academic environment are short-term contracts, island solutions, pragmatic approaches and loose documentation. Upcoming new challenges are big data, complex data sets, software compatibility and rapid changes in data representation. Our approach to cope with these challenges consists of iterative intertwined cycles of development (the “Butterfly” paradigm) for key steps in scientific software engineering. User feedback is valued as well as software planning in a sustainable and interoperable way. Tool usage should be easy and intuitive. A middleware supports a user-friendly Graphical User Interface (GUI) as well as database/tool development independently. We validated the approach in our own software development and compared the different design paradigms in various software solutions. PMID:25383181

  18. Acceleration of MCNP calculations for small pipes configurations by using Weight Windows importance cards created by the SN-3D ATTILA

    NASA Astrophysics Data System (ADS)

    Castanier, Eric; Paterne, Loic; Louis, Céline

    2017-09-01

    In nuclear engineering, one has to manage both time and precision. This is especially true in shielding design, where greater accuracy and efficiency are needed to reduce cost (shielding thickness optimization), and 3D codes are used for this purpose. In this paper, we want to see whether the CADIS method can easily be applied to the shielding design of small pipes that go through large concrete walls. We assess the impact of the weight windows (WW) generated by the 3D deterministic code ATTILA versus WW generated directly by MCNP (an iterative and manual process). The comparison is based on the quality of the convergence (estimated relative error (σ), variance of variance (VOV) and figure of merit (FOM)), on time (computer time + modelling), and on the effort required from the engineer.
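
    The figure of merit quoted in the comparison is the usual Monte Carlo measure FOM = 1/(R^2 T), where R is the estimated relative error and T the computing time, so a well-tuned weight-window map shows up as a higher FOM at the same precision. A small illustration (with made-up numbers, not the paper's results):

        def figure_of_merit(rel_error, minutes):
            """MCNP-style figure of merit, FOM = 1 / (R^2 * T): higher is better,
            and it should stay roughly constant as a well-behaved tally converges."""
            return 1.0 / (rel_error ** 2 * minutes)

        # illustrative comparison: the same tally reaching 5% relative error
        # without and with deterministic weight windows (hypothetical run times)
        print(figure_of_merit(rel_error=0.05, minutes=600))   # unaccelerated run: ~0.67
        print(figure_of_merit(rel_error=0.05, minutes=60))    # WW-accelerated run: ~6.7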

  19. Development Testing and Subsequent Failure Investigation of a Spring Strut Mechanism

    NASA Technical Reports Server (NTRS)

    Dervan, Jared; Robertson, Brandan; Staab, Lucas; Culberson, Michael; Pellicciotti, Joseph

    2014-01-01

    The NASA Engineering and Safety Center (NESC) and Lockheed Martin (LM) performed random vibration testing on a single spring strut development unit to assess its ability to withstand qualification level random vibration environments. Failure of the strut while exposed to random vibration resulted in a follow-on failure investigation, design changes, and additional development tests. This paper focuses on the results of the failure investigations referenced in detail in the NESC final report [1] including identified lessons learned to aid in future design iterations of the spring strut and to help other mechanism developers avoid similar pitfalls.

  20. Development Testing and Subsequent Failure Investigation of a Spring Strut Mechanism

    NASA Technical Reports Server (NTRS)

    Dervan, Jared; Robertson, Brandon; Staab, Lucas; Culberson, Michael; Pellicciotti, Joseph

    2014-01-01

    The NASA Engineering and Safety Center (NESC) and Lockheed Martin (LM) performed random vibration testing on a single spring strut development unit to assess its ability to withstand qualification level random vibration environments. Failure of the strut while exposed to random vibration resulted in a follow-on failure investigation, design changes, and additional development tests. This paper focuses on the results of the failure investigations referenced in detail in the NESC final report including identified lessons learned to aid in future design iterations of the spring strut and to help other mechanism developers avoid similar pitfalls.

  1. Sizing of complex structure by the integration of several different optimal design algorithms

    NASA Technical Reports Server (NTRS)

    Sobieszczanski, J.

    1974-01-01

    Practical design of large-scale structures can be accomplished with the aid of the digital computer by bringing together in one computer program algorithms of nonlinear mathematical programming and optimality criteria with weight-strength and other so-called engineering methods. Applications of this approach to aviation structures are discussed, with a detailed description of how the total problem of structural sizing can be broken down into subproblems for best utilization of each algorithm and for efficient organization of the program into iterative loops. Typical results are examined for a number of examples.
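
    One of the optimality-criteria building blocks typically combined in such a program is the fully-stressed-design resize rule, in which each member area is scaled by its stress ratio between analysis passes. The sketch below shows that rule only as an example of the kind of iterative loop being integrated; it is not claimed to be this paper's specific algorithm.

        def fully_stressed_resize(areas, stresses, allowable, min_area=1e-6):
            """One optimality-criteria step: scale each member area by the ratio of
            its current stress to the allowable stress (classic fully-stressed-design
            rule, shown only to illustrate the kind of loop the paper integrates)."""
            return [max(min_area, A * abs(s) / allowable) for A, s in zip(areas, stresses)]

        # illustrative iteration with hypothetical members: resize, then the
        # structural analysis would be re-run to get updated stresses
        areas = [2.0, 1.0, 0.5]              # cm^2
        stresses = [120.0, 40.0, 260.0]      # MPa from the last analysis
        areas = fully_stressed_resize(areas, stresses, allowable=200.0)
        print(areas)                         # [1.2, 0.2, 0.65]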

  2. Research in computer science

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1985-01-01

    Synopses are given for NASA supported work in computer science at the University of Virginia. Some areas of research include: error seeding as a testing method; knowledge representation for engineering design; analysis of faults in a multi-version software experiment; implementation of a parallel programming environment; two computer graphics systems for visualization of pressure distribution and convective density particles; task decomposition for multiple robot arms; vectorized incomplete conjugate gradient; and iterative methods for solving linear equations on the Flex/32.

  3. A new model for graduate education and innovation in medical technology.

    PubMed

    Yazdi, Youseph; Acharya, Soumyadipta

    2013-09-01

    We describe a new model of graduate education in bioengineering innovation and design: a year-long Master's degree program that educates engineers in the process of healthcare technology innovation for both advanced and low-resource global markets. Students are trained in an iterative "Spiral Innovation" approach that ensures early, staged, and repeated examination of all key elements of a successful medical device. This includes clinical immersion based problem identification and assessment (at Johns Hopkins Medicine and abroad), team based concept and business model development, and project planning based on iterative technical and business plan de-risking. The experiential, project based learning process is closely supported by several core courses in business, design, and engineering. Students in the program work on two team based projects, one focused on addressing healthcare needs in advanced markets and a second focused on low-resource settings. The program recently completed its fourth year of existence and has graduated 61 students, who have continued on to industry or startups (one half), additional graduate education or medical school (one third), or our own Global Health Innovation Fellowships. Over the 4 years, the program has sponsored 10 global health teams and 14 domestic/advanced market medtech teams, and launched 5 startups, of which 4 are still active. Projects have attracted over US$2.5M in follow-on awards and grants that are supporting the continued development of over a dozen projects.

  4. Fusion materials: Technical evaluation of the technology of vanadium alloys for use as blanket structural materials in fusion power systems

    NASA Astrophysics Data System (ADS)

    1993-08-01

    The Committee's evaluation of vanadium alloys as a structural material for fusion reactors was constrained by limited data and time. The design of the International Thermonuclear Experimental Reactor is still in the concept stage, so meaningful design requirements were not available. The data on the effect of environment and irradiation on vanadium alloys were sparse, and interpolation of these data was made to select the V-5Cr-5Ti alloy. With an aggressive, fully funded program it is possible to qualify a vanadium alloy as the principal structural material for the ITER blanket in the available 5 to 8-year window. However, the data base for V-5Cr-5Ti is limited and will require an extensive development and test program. Because of the chemical reactivity of vanadium, the alloy will be less tolerant of system failures, accidents, and off-normal events than most other candidate blanket structural materials and will require more careful handling during fabrication of hardware. Because of the cost of the material, more stringent requirements on processes, and minimal historical working experience, it will cost an order of magnitude more to qualify a vanadium alloy for ITER blanket structures than other candidate materials. The use of vanadium is difficult and uncertain; therefore, other options should be explored more thoroughly before a final selection of vanadium is confirmed. The Committee views the risk as being too high to rely solely on vanadium alloys. In viewing the state and nature of the design of the ITER blanket as presented to the Committee, it is obvious that there is a need to move toward integrating fabrication, welding, and materials engineers into the ITER design team. If the vanadium alloy option is to be pursued, a large program needs to be started immediately. The commitment of funding and other resources needs to be firm and consistent with a realistic program plan.

  5. IDC Re-Engineering Phase 2 Iteration E2 Use Case Realizations Version 1.2.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamlet, Benjamin R.; Harris, James M.; Burns, John F.

    2016-12-01

    This document contains 4 use case realizations generated from the model contained in Rational Software Architect. These use case realizations are the current versions of the realizations originally delivered in Elaboration Iteration 2.

  6. Human Factors Interface with Systems Engineering for NASA Human Spaceflights

    NASA Technical Reports Server (NTRS)

    Wong, Douglas T.

    2009-01-01

    This paper summarizes the past and present successes of the Habitability and Human Factors Branch (HHFB) at NASA Johnson Space Center's Space Life Sciences Directorate (SLSD) in including the Human-As-A-System (HAAS) model in many NASA programs, and the steps to be taken to integrate the Human-Centered Design Philosophy (HCDP) into NASA's Systems Engineering (SE) process. The HAAS model stresses that systems are ultimately designed for humans; the humans should therefore be considered as a system within the systems. Therefore, the model places strong emphasis on human factors engineering. Since 1987, the HHFB has been engaging with many major NASA programs with much success. The HHFB helped create the NASA Standard 3000 (a human factors engineering practice guide) and the Human Systems Integration Requirements document. These efforts resulted in the HAAS model being included in many NASA programs. As an example, the HAAS model has been successfully introduced into the programmatic and systems engineering structures of the International Space Station Program (ISSP). Success in the ISSP caused other NASA programs to recognize the importance of the HAAS concept. Also due to this success, the HHFB helped update NASA's Systems Engineering Handbook in December 2007 to include HAAS as a recommended practice. Nonetheless, the HAAS model has yet to become an integral part of the NASA SE process. Besides continuing to integrate HAAS into current and future NASA programs, the HHFB will investigate incorporating the Human-Centered Design Philosophy (HCDP) into the NASA SE Handbook. The HCDP goes further than the HAAS model by emphasizing a holistic and iterative human-centered systems design concept.

  7. Iter

    NASA Astrophysics Data System (ADS)

    Iotti, Robert

    2015-04-01

    ITER is an international experimental facility being built by seven Parties to demonstrate the long term potential of fusion energy. The ITER Joint Implementation Agreement (JIA) defines the structure and governance model of such cooperation. There are a number of necessary conditions for such international projects to be successful: a complete design, strong systems engineering working with an agreed set of requirements, an experienced organization with systems and plans in place to manage the project, a cost estimate backed by industry, and someone in charge. Unfortunately for ITER many of these conditions were not present. The paper discusses the priorities in the JIA which led to setting up the project with a Central Integrating Organization (IO) in Cadarache, France as the ITER HQ, and seven Domestic Agencies (DAs) located in the countries of the Parties, responsible for delivering 90%+ of the project hardware as Contributions-in-Kind and also financial contributions to the IO, as "Contributions-in-Cash." Theoretically the Director General (DG) is responsible for everything. In practice the DG does not have the power to control the work of the DAs, and there is not an effective management structure enabling the IO and the DAs to arbitrate disputes, so the project is not really managed, but is a loose collaboration of competing interests. Any DA can effectively block a decision reached by the DG. Inefficiencies in completing the design while setting up a competent organization from scratch contributed to the delays and cost increases during the initial few years. So did the fact that the original estimate was not developed from industry input. Unforeseen inflation and market demand on certain commodities/materials further exacerbated the cost increases. Since then, improvements are debatable. Does this mean that the governance model of ITER is a wrong model for international scientific cooperation? I do not believe so. Had the necessary conditions for success been present at the beginning, ITER would be in far better shape. As is, it can provide good lessons to avoid the same problems in the future. The ITER Council is now applying those lessons. A very experienced new Director General has just been appointed. He has instituted a number of drastic changes, but still within the governance of the JIA. Will these changes be effective? Only time will tell, but I am optimistic.

  8. Definition of optical systems payloads

    NASA Technical Reports Server (NTRS)

    Downey, J. A., III

    1981-01-01

    The various phases in the formulation of a major NASA project include the inception of the project, planning of the concept, and the project definition. A baseline configuration is established during the planning stage, which serves as a basis for engineering trade studies. Basic technological problems should be recognized early, and a technological verification plan prepared before development of a project begins. A progressive series of iterations is required during the definition phase, illustrating the complex interdependence of existing subsystems. A systems error budget should be established to assess the overall systems performance, identify key performance drivers, and guide performance trades and iterations around these drivers, thus decreasing final systems requirements. Unnecessary interfaces should be avoided, and reasonable design and cost margins maintained. Certain aspects of the definition of the Advanced X-ray Astrophysics Facility are used as an example.
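
    A systems error budget of the kind recommended here is often rolled up by root-sum-squaring independent contributors; the short sketch below illustrates that common convention with hypothetical numbers (the abstract itself does not prescribe a combination rule).

        import math

        def rss_budget(contributors):
            """Root-sum-square roll-up of independent error contributors
            (a common convention; the abstract does not prescribe one)."""
            return math.sqrt(sum(v ** 2 for v in contributors.values()))

        # hypothetical pointing-error budget in arcseconds, purely illustrative
        budget = {"sensor noise": 0.8, "thermal drift": 0.5, "structural jitter": 0.3}
        print(f"total (RSS) = {rss_budget(budget):.2f} arcsec")   # ~0.99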

  9. Incorporating Multi-criteria Optimization and Uncertainty Analysis in the Model-Based Systems Engineering of an Autonomous Surface Craft

    DTIC Science & Technology

    2009-09-01

    Acronyms: SAS, Statistical Analysis Software; SE, Systems Engineering; SEP, Systems Engineering Process; SHP, Shaft Horsepower; SIGINT, Signals Intelligence. ...management occurs (OSD 2002). The Systems Engineering Process (SEP), displayed in Figure 2, is a comprehensive, iterative and recursive problem

  10. NASA advanced design program. Design and analysis of a radio-controlled flying wing aircraft

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The main challenge of this project was to design an aircraft that would achieve stability while flying without a horizontal tail. The project focused on the design, analysis, and construction of a remotely piloted, elliptically shaped flying wing. The design team was composed of four sub-groups, each of which dealt with a different aspect of the design, namely aerodynamics, stability and control, propulsion, and structures. Each member of the team initially researched the background information pertaining to specific facets of the project. Since previous work on this topic was limited, most of the focus of the project was directed towards developing an understanding of the natural instability of the aircraft. Once the design team entered the conceptual stage of the project, a series of compromises had to be made to satisfy the unique requirements of each sub-group. As a result of the numerous calculations and iterations necessary, computers were utilized extensively. In order to visualize the design and layout of the wing, engines, and control surfaces, a solid modeling package was used to evaluate optimum design placements. When the design was finalized, construction began with the help of all the members of the project team. The nature of the carbon composite construction process demanded long hours of manual labor. The assembly of the engine systems also required precision hand work. The final product of this project is the Elang, a one-of-a-kind remotely piloted aircraft of composite construction powered by two ducted fan engines.

  11. A Path to Planetary Protection Requirements for Human Exploration: A Literature Review and Systems Engineering Approach

    NASA Technical Reports Server (NTRS)

    Johnson, James E.; Conley, Cassie; Siegel, Bette

    2015-01-01

    As systems, technologies, and plans for the human exploration of Mars and other destinations beyond low Earth orbit begin to coalesce, it is imperative that frequent and early consideration is given to how planetary protection practices and policy will be upheld. While the development of formal planetary protection requirements for future human space systems and operations may still be a few years from fruition, guidance to appropriately influence mission and system design will be needed soon to avoid costly design and operational changes. The path to constructing such requirements is a journey that espouses key systems engineering practices of understanding shared goals, objectives and concerns, identifying key stakeholders, and iterating a draft requirement set to gain community consensus. This paper traces through each of these practices, beginning with a literature review of nearly three decades of publications addressing planetary protection concerns with respect to human exploration. Key goals, objectives and concerns, particularly with respect to notional requirements, required studies and research, and technology development needs have been compiled and categorized to provide a current 'state of knowledge'. This information, combined with the identification of key stakeholders in upholding planetary protection concerns for human missions, has yielded a draft requirement set that might feed future iteration among space system designers, exploration scientists, and the mission operations community. Combining the information collected with a proposed forward path will hopefully yield a mutually agreeable set of timely, verifiable, and practical requirements for human space exploration that will uphold international commitment to planetary protection.

  12. Development of a Mobile Clinical Prediction Tool to Estimate Future Depression Severity and Guide Treatment in Primary Care: User-Centered Design

    PubMed Central

    2018-01-01

    Background Around the world, depression is both under- and overtreated. The diamond clinical prediction tool was developed to assist with appropriate treatment allocation by estimating the 3-month prognosis among people with current depressive symptoms. Delivering clinical prediction tools in a way that will enhance their uptake in routine clinical practice remains challenging; however, mobile apps show promise in this respect. To increase the likelihood that an app-delivered clinical prediction tool can be successfully incorporated into clinical practice, it is important to involve end users in the app design process. Objective The aim of the study was to maximize patient engagement in an app designed to improve treatment allocation for depression. Methods An iterative, user-centered design process was employed. Qualitative data were collected via 2 focus groups with a community sample (n=17) and 7 semistructured interviews with people with depressive symptoms. The results of the focus groups and interviews were used by the computer engineering team to modify subsequent prototypes of the app. Results Iterative development resulted in 3 prototypes and a final app. The areas requiring the most substantial changes following end-user input were related to the iconography used and the way that feedback was provided. In particular, communicating risk of future depressive symptoms proved difficult; these messages were consistently misinterpreted and negatively viewed and were ultimately removed. All participants felt positively about seeing their results summarized after completion of the clinical prediction tool, but there was a need for a personalized treatment recommendation made in conjunction with a consultation with a health professional. Conclusions User-centered design led to valuable improvements in the content and design of an app designed to improve allocation of and engagement in depression treatment. Iterative design allowed us to develop a tool that lets users feel hope and engage in self-reflection, and that motivates them to engage in treatment. The tool is currently being evaluated in a randomized controlled trial. PMID:29685864

  13. Status of DEMO-FNS development

    NASA Astrophysics Data System (ADS)

    Kuteev, B. V.; Shpanskiy, Yu. S.; DEMO-FNS Team

    2017-07-01

    A fusion-fission hybrid facility based on the superconducting tokamak DEMO-FNS is being developed in Russia for integrated commissioning of steady-state and nuclear fusion technologies at a power level of up to 40 MW for fusion and 400 MW for fission reactions. The project status corresponds to the transition from a conceptual design to an engineering one. This facility is considered, in the Russian Federation, as the main source of technological and nuclear science information, which should complement the ITER research results in the fields of burning plasma physics and control.

  14. Scale-Up: Improving Large Enrollment Physics Courses

    NASA Astrophysics Data System (ADS)

    Beichner, Robert

    1999-11-01

    The Student-Centered Activities for Large Enrollment University Physics (SCALE-UP) project is working to establish a learning environment that will promote increased conceptual understanding, improved problem-solving performance, and greater student satisfaction, while still maintaining class sizes of approximately 100. We are also addressing the new ABET engineering accreditation requirements for inquiry-based learning along with communication and team-oriented skills development. Results of studies of our latest classroom design, plans for future classroom space, and the current iteration of instructional materials will be discussed.

  15. Object-oriented technologies in a multi-mission data system

    NASA Technical Reports Server (NTRS)

    Murphy, Susan C.; Miller, Kevin J.; Louie, John J.

    1993-01-01

    The Operations Engineering Laboratory (OEL) at JPL is developing new technologies that can provide more efficient and productive ways of doing business in flight operations. Over the past three years, we have worked closely with the Multi-Mission Control Team to develop automation tools, providing technology transfer into operations and resulting in substantial cost savings and error reduction. The OEL development philosophy is characterized by object-oriented design, extensive reusability of code, and an iterative development model with active participation of the end users. Through our work, the benefits of object-oriented design became apparent for use in mission control data systems. Object-oriented technologies and how they can be used in a mission control center to improve efficiency and productivity are explained. The current research and development efforts in the JPL Operations Engineering Laboratory are also discussed to architect and prototype a new paradigm for mission control operations based on object-oriented concepts.

  16. Engineering central metabolism - a grand challenge for plant biologists.

    PubMed

    Sweetlove, Lee J; Nielsen, Jens; Fernie, Alisdair R

    2017-05-01

    The goal of increasing crop productivity and nutrient-use efficiency is being addressed by a number of ambitious research projects seeking to re-engineer photosynthetic biochemistry. Many of these projects will require the engineering of substantial changes in fluxes of central metabolism. However, as has been amply demonstrated in simpler systems such as microbes, central metabolism is extremely difficult to rationally engineer. This is because of multiple layers of regulation that operate to maintain metabolic steady state and because of the highly connected nature of central metabolism. In this review we discuss new approaches for metabolic engineering that have the potential to address these problems and dramatically improve the success with which we can rationally engineer central metabolism in plants. In particular, we advocate the adoption of an iterative 'design-build-test-learn' cycle using fast-to-transform model plants as test beds. This approach can be realised by coupling new molecular tools to incorporate multiple transgenes in nuclear and plastid genomes with computational modelling to design the engineering strategy and to understand the metabolic phenotype of the engineered organism. We also envisage that mutagenesis could be used to fine-tune the balance between the endogenous metabolic network and the introduced enzymes. Finally, we emphasise the importance of considering the plant as a whole system and not isolated organs: the greatest increase in crop productivity will be achieved if both source and sink metabolism are engineered. © 2016 The Authors The Plant Journal © 2016 John Wiley & Sons Ltd.

  17. CRISPR/Cas9-coupled recombineering for metabolic engineering of Corynebacterium glutamicum.

    PubMed

    Cho, Jae Sung; Choi, Kyeong Rok; Prabowo, Cindy Pricilia Surya; Shin, Jae Ho; Yang, Dongsoo; Jang, Jaedong; Lee, Sang Yup

    2017-07-01

    Genome engineering of Corynebacterium glutamicum, an important industrial microorganism for amino acids production, currently relies on random mutagenesis and inefficient double crossover events. Here we report a rapid genome engineering strategy to scarlessly knock out one or more genes in C. glutamicum in sequential and iterative manner. Recombinase RecT is used to incorporate synthetic single-stranded oligodeoxyribonucleotides into the genome and CRISPR/Cas9 to counter-select negative mutants. We completed the system by engineering the respective plasmids harboring CRISPR/Cas9 and RecT for efficient curing such that multiple gene targets can be done iteratively and final strains will be free of plasmids. To demonstrate the system, seven different mutants were constructed within two weeks to study the combinatorial deletion effects of three different genes on the production of γ-aminobutyric acid, an industrially relevant chemical of much interest. This genome engineering strategy will expedite metabolic engineering of C. glutamicum. Copyright © 2017 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  18. A tool to convert CAD models for importation into Geant4

    NASA Astrophysics Data System (ADS)

    Vuosalo, C.; Carlsmith, D.; Dasu, S.; Palladino, K.; LUX-ZEPLIN Collaboration

    2017-10-01

    The engineering design of a particle detector is usually performed in a Computer Aided Design (CAD) program, and simulation of the detector’s performance can be done with a Geant4-based program. However, transferring the detector design from the CAD program to Geant4 can be laborious and error-prone. SW2GDML is a tool that reads a design in the popular SOLIDWORKS CAD program and outputs Geometry Description Markup Language (GDML), used by Geant4 for importing and exporting detector geometries. Other methods for outputting CAD designs are available, such as the STEP format, and tools exist to convert these formats into GDML. However, these conversion methods produce very large and unwieldy designs composed of tessellated solids that can reduce Geant4 performance. In contrast, SW2GDML produces compact, human-readable GDML that employs standard geometric shapes rather than tessellated solids. This paper will describe the development and current capabilities of SW2GDML and plans for its enhancement. The aim of this tool is to automate importation of detector engineering models into Geant4-based simulation programs to support rapid, iterative cycles of detector design, simulation, and optimization.
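
    The compactness argument can be seen directly in the markup: a standard GDML solid is a single element, whereas a tessellated equivalent needs explicit vertices and facets. The Python snippet below writes one such solid with xml.etree; the element and attribute names follow the GDML schema as commonly documented and should be checked against the schema in use.

        import xml.etree.ElementTree as ET

        # A standard GDML solid is one short element; a tessellated version of the
        # same box would need 8 <position> vertices plus 12 triangular facets.
        solids = ET.Element("solids")
        ET.SubElement(solids, "box", name="detector_housing",
                      x="100", y="50", z="20", lunit="mm")   # attribute names per the GDML schema (approximate)
        print(ET.tostring(solids, encoding="unicode"))
        # prints something like:
        # <solids><box name="detector_housing" x="100" y="50" z="20" lunit="mm" /></solids>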

  19. The opto-mechanical design process: from vision to reality

    NASA Astrophysics Data System (ADS)

    Kvamme, E. Todd; Stubbs, David M.; Jacoby, Michael S.

    2017-08-01

    The design process for an opto-mechanical sub-system is discussed from requirements development through test. The process begins with a proper mission understanding and the development of requirements for the system. Preliminary design activities are then discussed with iterative analysis and design work being shared between the design, thermal, and structural engineering personnel. Readiness for preliminary review and the path to a final design review are considered. The value of prototyping and risk mitigation testing is examined with a focus on when it makes sense to execute a prototype test program. System level margin is discussed in general terms, and the practice of trading margin in one area of performance to meet another area is reviewed. Requirements verification and validation is briefly considered. Testing and its relationship to requirements verification concludes the design process.

  20. Multi-Mission System Architecture Platform: Design and Verification of the Remote Engineering Unit

    NASA Technical Reports Server (NTRS)

    Sartori, John

    2005-01-01

    The Multi-Mission System Architecture Platform (MSAP) represents an effort to bolster efficiency in the spacecraft design process. By incorporating essential spacecraft functionality into a modular, expandable system, the MSAP provides a foundation on which future spacecraft missions can be developed. Once completed, the MSAP will provide support for missions with varying objectives, while maintaining a level of standardization that will minimize redesign of general system components. One subsystem of the MSAP, the Remote Engineering Unit (REU), functions by gathering engineering telemetry from strategic points on the spacecraft and providing these measurements to the spacecraft's Command and Data Handling (C&DH) subsystem. Before the MSAP Project reaches completion, all hardware, including the REU, must be verified. However, the speed and complexity of the REU circuitry rules out the possibility of physical prototyping. Instead, the MSAP hardware is designed and verified using the Verilog Hardware Definition Language (HDL). An increasingly popular means of digital design, HDL programming provides a level of abstraction, which allows the designer to focus on functionality while logic synthesis tools take care of gate-level design and optimization. As verification of the REU proceeds, errors are quickly remedied, preventing costly changes during hardware validation. After undergoing the careful, iterative processes of verification and validation, the REU and MSAP will prove their readiness for use in a multitude of spacecraft missions.

  1. Control system design for flexible structures using data models

    NASA Technical Reports Server (NTRS)

    Irwin, R. Dennis; Frazier, W. Garth; Mitchell, Jerrel R.; Medina, Enrique A.; Bukley, Angelia P.

    1993-01-01

    The dynamics and control of flexible aerospace structures exercise many of the engineering disciplines. In recent years there has been considerable research in the developing and tailoring of control system design techniques for these structures. This problem involves designing a control system for a multi-input, multi-output (MIMO) system that satisfies various performance criteria, such as vibration suppression, disturbance and noise rejection, attitude control, and slewing control. Considerable progress has been made and demonstrated in control system design techniques for these structures. The key to designing control systems for these structures that meet stringent performance requirements is an accurate model. It has become apparent that theoretically derived and finite-element-generated models do not provide the needed accuracy; almost all successful demonstrations of control system design techniques have involved using test results for fine-tuning a model or for extracting a model using system ID techniques. This paper describes past and ongoing efforts at Ohio University and NASA MSFC to design controllers using "data models." The basic philosophy of this approach is to start with a stabilizing controller and frequency response data that describe the plant; then, iteratively vary the free parameters of the controller so that performance measures become closer to satisfying design specifications. The frequency response data can be either experimentally derived or analytically derived. One "design-with-data" algorithm presented in this paper is called the Compensator Improvement Program (CIP). The current CIP designs controllers for MIMO systems so that classical gain, phase, and attenuation margins are achieved. The centerpiece of the CIP algorithm is the constraint improvement technique, which is used to calculate a parameter change vector that guarantees an improvement in all unsatisfied, feasible performance metrics from iteration to iteration. The paper also presents a recently demonstrated CIP-type algorithm, called the Model and Data Oriented Computer-Aided Design System (MADCADS), developed for achieving H∞-type design specifications using data models. Control system design for the NASA/MSFC Single Structure Control Facility is demonstrated for both CIP and MADCADS. Advantages of design-with-data algorithms over techniques that require analytical plant models are also presented.
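
    The constraint improvement idea is to compute, at each iteration, a parameter change direction along which every unsatisfied, feasible performance metric improves. The sketch below shows a heavily simplified version of such a step using finite-difference sensitivities; it illustrates the concept only and is not the CIP or MADCADS implementation.

        import numpy as np

        def improvement_step(params, metrics, targets, step=0.05, eps=1e-4):
            """One constraint-improvement-style update: find a parameter change that
            increases every unsatisfied performance metric (a crude sketch of the
            idea; the actual CIP technique computes this direction differently).

            params          : 1-D array of free controller parameters
            metrics(params) : returns an array of performance measures, larger is better
            targets         : required values for those measures
            """
            m0 = metrics(params)
            unsatisfied = np.where(m0 < targets)[0]
            if unsatisfied.size == 0:
                return params                              # all specs already met
            # finite-difference sensitivities of the unsatisfied metrics
            grads = []
            for i in unsatisfied:
                g = np.array([(metrics(params + eps * e)[i] - m0[i]) / eps
                              for e in np.eye(len(params))])
                grads.append(g / (np.linalg.norm(g) + 1e-12))
            direction = np.mean(grads, axis=0)             # crude "improve them all" direction
            return params + step * direction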

  2. Engineering of synthetic, stress-responsive yeast promoters

    PubMed Central

    Rajkumar, Arun S.; Liu, Guodong; Bergenholm, David; Arsovska, Dushica; Kristensen, Mette; Nielsen, Jens; Jensen, Michael K.; Keasling, Jay D.

    2016-01-01

    Advances in synthetic biology and our understanding of the rules of promoter architecture have led to the development of diverse synthetic constitutive and inducible promoters in eukaryotes and prokaryotes. However, the design of promoters inducible by specific endogenous or environmental conditions is still rarely undertaken. In this study, we engineered and characterized a set of strong, synthetic promoters for budding yeast Saccharomyces cerevisiae that are inducible under acidic conditions (pH ≤ 3). Using available expression and transcription factor binding data, literature on transcriptional regulation, and known rules of promoter architecture we improved the low-pH performance of the YGP1 promoter by modifying transcription factor binding sites in its upstream activation sequence. The engineering strategy outlined for the YGP1 promoter was subsequently applied to create a response to low pH in the unrelated CCW14 promoter. We applied our best promoter variants to low-pH fermentations, enabling ten-fold increased production of lactic acid compared to titres obtained with the commonly used, native TEF1 promoter. Our findings outline and validate a general strategy to iteratively design and engineer synthetic yeast promoters inducible to environmental conditions or stresses of interest. PMID:27325743

  3. Novel Framework for Reduced Order Modeling of Aero-engine Components

    NASA Astrophysics Data System (ADS)

    Safi, Ali

    The present study focuses on the popular dynamic reduction methods used in the design of complex assemblies (millions of degrees of freedom) where numerous iterations are involved to achieve the final design. Aerospace manufacturers such as Rolls Royce and Pratt & Whitney are actively seeking techniques that reduce computational time while maintaining accuracy of the models. This involves modal analysis of components with complex geometries to determine the dynamic behavior due to non-linearity and complicated loading conditions. In such a case the sub-structuring and dynamic reduction techniques prove to be an efficient tool to reduce design cycle time. The components whose designs are finalized can be dynamically reduced to mass and stiffness matrices at the boundary nodes in the assembly. These matrices conserve the dynamics of the component in the assembly, and thus avoid repeated calculations during the analysis runs for design modification of other components. This thesis presents a novel framework in terms of modeling and meshing of any complex structure, in this case an aero-engine casing. In this study the effect of meshing techniques on the run time is highlighted. The modal analysis is carried out using an extremely fine mesh to ensure all minor details in the structure are captured correctly in the Finite Element (FE) model. This is used as the reference model, to compare against the results of the reduced model. The study also shows the conditions/criteria under which dynamic reduction can be implemented effectively, proving the accuracy of the Craig-Bampton (C.B.) method and the limitations of static condensation. The study highlights the longer runtime needed to produce the reduced matrices of components compared to the overall runtime of the complete unreduced model, although once the components are reduced, the assembly run time is significantly shorter. Hence the decision to use Component Mode Synthesis (CMS) is to be taken judiciously, considering the number of iterations that may be required during the design cycle.
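
    For reference, the Craig-Bampton reduction discussed in the thesis keeps the boundary degrees of freedom plus a truncated set of fixed-interface modes. The NumPy/SciPy sketch below is the textbook form of that transformation, included as an illustration rather than the thesis code.

        import numpy as np
        from scipy.linalg import eigh

        def craig_bampton(M, K, boundary, n_modes):
            """Standard Craig-Bampton reduction to boundary DOFs plus a few
            fixed-interface modes (textbook form, shown for illustration).

            M, K     : full mass/stiffness matrices
            boundary : indices of DOFs kept at the interface
            n_modes  : number of fixed-interface normal modes to retain
            """
            n = M.shape[0]
            b = np.array(boundary)
            i = np.setdiff1d(np.arange(n), b)
            Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
            Mii = M[np.ix_(i, i)]
            # constraint modes: static response of interior DOFs to unit boundary motion
            Psi = -np.linalg.solve(Kii, Kib)
            # fixed-interface normal modes of the interior partition (lowest first)
            vals, Phi = eigh(Kii, Mii)
            Phi = Phi[:, :n_modes]
            # transformation u = T @ [u_b; q]
            T = np.zeros((n, len(b) + n_modes))
            T[b, :len(b)] = np.eye(len(b))
            T[np.ix_(i, np.arange(len(b)))] = Psi
            T[np.ix_(i, np.arange(len(b), len(b) + n_modes))] = Phi
            return T.T @ M @ T, T.T @ K @ T   # reduced mass and stiffness matrices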

  4. Two conceptual designs of helical fusion reactor FFHR-d1A based on ITER technologies and challenging ideas

    NASA Astrophysics Data System (ADS)

    Sagara, A.; Miyazawa, J.; Tamura, H.; Tanaka, T.; Goto, T.; Yanagi, N.; Sakamoto, R.; Masuzaki, S.; Ohtani, H.; The FFHR Design Group

    2017-08-01

    The Fusion Engineering Research Project (FERP) at the National Institute for Fusion Science (NIFS) is conducting conceptual design activities for the LHD-type helical fusion reactor FFHR-d1A. This paper newly defines two design options, ‘basic’ and ‘challenging.’ Conservative technologies, including those that will be demonstrated in ITER, are chosen in the basic option in which two helical coils are made of continuously wound cable-in-conduit superconductors of Nb3Sn strands, the divertor is composed of water-cooled tungsten monoblocks, and the blanket is composed of water-cooled ceramic breeders. In contrast, new ideas that would possibly be beneficial for making the reactor design more attractive are boldly included in the challenging option in which the helical coils are wound by connecting high-temperature REBCO superconductors using mechanical joints, the divertor is composed of a shower of molten tin jets, and the blanket is composed of molten salt FLiNaBe including Ti powder to increase hydrogen solubility. The main targets of the challenging option are early construction and easy maintenance of a large and three-dimensionally complicated helical structure, high thermal efficiency, and, in particular, realistic feasibility of the helical reactor.

  5. Applications of a direct/iterative design method to complex transonic configurations

    NASA Technical Reports Server (NTRS)

    Smith, Leigh Ann; Campbell, Richard L.

    1992-01-01

    The current study explores the use of an automated direct/iterative design method for the reduction of drag in transport configurations, including configurations with engine nacelles. The method requires the user to choose a proper target-pressure distribution and then develops a corresponding airfoil section. It can be applied to two-dimensional airfoil sections or to three-dimensional wings. The three cases presented show successful application of the method for reducing drag from various sources. The first two cases demonstrate the use of the method to reduce induced drag by designing to an elliptic span-load distribution and to reduce wave drag by decreasing the shock strength for a given lift. In the third case, a body-mounted nacelle is added, and the method is used to eliminate the increase in wing drag associated with the nacelle by redesigning the wing, in combination with the given underwing nacelle, to the clean-wing target-pressure distributions. These cases illustrate several possible uses of the method for reducing different types of drag. The magnitude of the obtainable drag reduction varies with the constraints of the problem and the configuration to be modified.
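
    The direct/iterative idea in the abstract (analyze the current geometry, compare the computed pressures with the chosen target distribution, and correct the geometry in proportion to the mismatch) can be sketched as a simple residual-correction loop. In the sketch below the flow analysis is replaced by a stand-in linear response so the loop is runnable; the under-relaxation factor, the 8-parameter shape vector, and the random target are assumptions for illustration only, not the NASA method's actual design module.

```python
import numpy as np

# Stand-in "analysis": maps a shape-parameter vector to a surface-pressure-like
# response.  In the real method this would be a CFD solution of the current geometry.
rng = np.random.default_rng(0)
A = np.eye(8) + 0.1 * rng.standard_normal((8, 8))
def analysis(shape):
    return A @ shape

target_cp = rng.standard_normal(8)      # chosen target-pressure distribution
shape = np.zeros(8)                     # initial geometry parameters
relax = 0.5                             # under-relaxation factor

for it in range(200):
    residual = target_cp - analysis(shape)
    if np.max(np.abs(residual)) < 1e-6: # converged to the target pressures
        break
    shape += relax * residual           # geometry correction proportional to mismatch
print(it, np.max(np.abs(residual)))
```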

  6. Design of the helium cooled lithium lead breeding blanket in CEA: from TBM to DEMO

    NASA Astrophysics Data System (ADS)

    Aiello, G.; Aubert, J.; Forest, L.; Jaboulay, J.-C.; Li Puma, A.; Boccaccini, L. V.

    2017-04-01

    The helium cooled lithium lead (HCLL) blanket concept was originally developed in CEA at the beginning of 2000: it is one of the two European blanket concepts to be tested in ITER in the form of a test blanket module (TBM) and one of the four blanket concepts currently being considered for the DEMOnstration reactor that will follow ITER. The TBM is a highly optimized component for the ITER environment that will provide crucial information for the development of the DEMO blanket, but its design needs to be adapted to the DEMO reactor. With respect to the TBM design, reduction of the steel content in the breeding zone (BZ) is sought in order to maximize tritium breeding reactions. Different options are being studied, with the potential of reaching tritium breeding ratio (TBR) values up to 1.21. At the same time, the design of the back supporting structure (BSS), which is a DEMO specific component that has to support the blanket modules inside the vacuum vessel (VV), is ongoing with the aim of maximizing the shielding power and minimizing pumping power. This implies a re-engineering of the modules’ attachment system. Design changes, however, will have an impact on the manufacturing and assembly sequences that are being developed for the HCLL-TBM. Due to the differences in joint configurations, thicknesses to be welded, heat dissipation and the various technical constraints related to the accessibility of the welding tools and implementation of non-destructive examination (NDE), the manufacturing procedure should be adapted and optimized for the DEMO design. Laser welding instead of TIG could be an option to reduce distortions. The time-of-flight diffraction (TOFD) technique is being investigated for NDE. Finally, essential information expected from the HCLL-TBM program that will be needed to finalize the DEMO design is discussed.

  7. Cell-free metabolic engineering: Biomanufacturing beyond the cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dudley, QM; Karim, AS; Jewett, MC

    2014-10-15

    Industrial biotechnology and microbial metabolic engineering are poised to help meet the growing demand for sustainable, low-cost commodity chemicals and natural products, yet the fraction of biochemicals amenable to commercial production remains limited. Common problems afflicting the current state-of-the-art include low volumetric productivities, build-up of toxic intermediates or products, and byproduct losses via competing pathways. To overcome these limitations, cell-free metabolic engineering (CFME) is expanding the scope of the traditional bioengineering model by using in vitro ensembles of catalytic proteins prepared from purified enzymes or crude lysates of cells for the production of target products. In recent years, the unprecedented level of control and freedom of design, relative to in vivo systems, has inspired the development of engineering foundations for cell-free systems. These efforts have led to activation of long enzymatic pathways (>8 enzymes), near theoretical conversion yields, productivities greater than 100 mg L−1 h−1, reaction scales of >100 L, and new directions in protein purification, spatial organization, and enzyme stability. In the coming years, CFME will offer exciting opportunities to: (i) debug and optimize biosynthetic pathways; (ii) carry out design-build-test iterations without re-engineering organisms; and (iii) perform molecular transformations when bioconversion yields, productivities, or cellular toxicity limit commercial feasibility.

  8. Cell-Free Metabolic Engineering: Biomanufacturing beyond the cell

    PubMed Central

    Dudley, Quentin M.; Karim, Ashty S.; Jewett, Michael C.

    2014-01-01

    Industrial biotechnology and microbial metabolic engineering are poised to help meet the growing demand for sustainable, low-cost commodity chemicals and natural products, yet the fraction of biochemicals amenable to commercial production remains limited. Common problems afflicting the current state-of-the-art include low volumetric productivities, build-up of toxic intermediates or products, and byproduct losses via competing pathways. To overcome these limitations, cell-free metabolic engineering (CFME) is expanding the scope of the traditional bioengineering model by using in vitro ensembles of catalytic proteins prepared from purified enzymes or crude lysates of cells for the production of target products. In recent years, the unprecedented level of control and freedom of design, relative to in vivo systems, has inspired the development of engineering foundations for cell-free systems. These efforts have led to activation of long enzymatic pathways (>8 enzymes), near theoretical conversion yields, productivities greater than 100 mg L−1 hr−1, reaction scales of >100L, and new directions in protein purification, spatial organization and enzyme stability. In the coming years, CFME will offer exciting opportunities to (i) debug and optimize biosynthetic pathways, (ii) carry out design-build-test iterations without re-engineering organisms, and (iii) perform molecular transformations when bioconversion yields, productivities, or cellular toxicity limit commercial feasibility. PMID:25319678

  9. ITER Magnet Feeder: Design, Manufacturing and Integration

    NASA Astrophysics Data System (ADS)

    CHEN, Yonghua; ILIN, Y.; M., SU; C., NICHOLAS; BAUER, P.; JAROMIR, F.; LU, Kun; CHENG, Yong; SONG, Yuntao; LIU, Chen; HUANG, Xiongyi; ZHOU, Tingzhi; SHEN, Guang; WANG, Zhongwei; FENG, Hansheng; SHEN, Junsong

    2015-03-01

    The International Thermonuclear Experimental Reactor (ITER) feeder procurement is now well underway. The feeder design has been improved by the feeder teams at the ITER Organization (IO) and the Institute of Plasma Physics, Chinese Academy of Sciences (ASIPP) over the last two years, along with analyses and qualification activities, and is being progressively finalized. In addition, the preparation for qualification and manufacturing is well scheduled at ASIPP. This paper mainly presents the design, an overview of manufacturing, and the status of integration of the ITER magnet feeders. Supported by the National Special Support for R&D on Science and Technology for ITER (Ministry of Public Security of the People's Republic of China-MPS) (No. 2008GB102000)

  10. The Study the Vibration Condition of the Blade of the Gas Turbine Engine with an All-metal Wire Rope Damper in the Area Mount of the Blade to the Disk

    NASA Astrophysics Data System (ADS)

    Melentjev, Vladimir S.; Gvozdev, Alexander S.

    2018-01-01

    Improving the reliability of modern turbine engines is an important task, achieved by preventing vibration damage to the operating blades. The Department of Structure and Design of Aircraft Engines has accumulated a large body of experimental data on protecting gas turbine engine blades from vibration. In this paper we propose a method for calculating the characteristics of wire rope dampers in the root attachment of a gas turbine engine blade. The method is based on the finite element method and transient analysis. Contact interaction (Lagrange-Euler method) between the compressor blade and the rotor disc is taken into account, and the contribution of the contact interaction between parts to the damping of the system is quantified. The proposed method provides a convenient way to iteratively select the required parameters of the wire rope elastic-damping element, which can provide the necessary vibration protection for the blade of a gas turbine engine.

  11. Structures for handling high heat fluxes

    NASA Astrophysics Data System (ADS)

    Watson, R. D.

    1990-12-01

    The divertor is recognized as one of the main performance-limiting components for ITER. This paper reviews the critical issues for structures that are designed to withstand heat fluxes > 5 MW/m². High-velocity, sub-cooled water with twisted-tape inserts for enhanced heat transfer provides a critical heat flux limit of 40-60 MW/m². Uncertainties in physics and engineering heat-flux peaking factors require that the design heat flux not exceed 10 MW/m² to maintain an adequate burnout safety margin. Armor tiles and heat sink materials must have well-matched thermal expansion coefficients to minimize stresses. The divertor lifetime under sputtering erosion is highly uncertain. The number of disruptions specified for ITER must be reduced to achieve a credible design. In-situ plasma-spray repair with thick metallic coatings may reduce the problems of erosion. Runaway electrons in ITER have the potential to melt actively cooled components in a single event. A water leak is a serious accident because steam reactions with hot carbon, beryllium, or tungsten can mobilize large amounts of tritium and radioactive elements. If the plasma does not shut down immediately, the divertor can melt within 1-10 s after a loss-of-coolant accident. Very high reliability of carbon tile braze joints will be required to achieve adequate safety and performance goals. Most of these critical issues will be addressed in the near future by operation of the Tore Supra pump limiters and the JET pumped divertor. An accurate understanding of the power flow out of the edge of a DT burning plasma is essential to the successful design of high heat flux components.
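
    The burnout-margin argument in the abstract is simple enough to show as a worked calculation: the design heat flux is the nominal flux multiplied by physics and engineering peaking factors, and the margin is the critical heat flux divided by that design value. The critical-heat-flux and nominal-flux figures below come from the abstract; the two peaking factors are assumed values chosen only to illustrate why a 10 MW/m² ceiling follows.

```python
# Illustrative burnout-margin arithmetic using the figures quoted in the abstract.
# The peaking factors below are assumed values for the sketch, not ITER design data.
chf_limit = 40.0          # MW/m^2, lower end of the quoted critical-heat-flux range
nominal_flux = 5.0        # MW/m^2, nominal divertor heat flux
physics_peaking = 1.5     # assumed uncertainty in plasma heat-flux peaking
engineering_peaking = 1.3 # assumed uncertainty from geometry, flow and joints

design_flux = nominal_flux * physics_peaking * engineering_peaking
burnout_margin = chf_limit / design_flux
print(f"design heat flux = {design_flux:.1f} MW/m^2, burnout margin = {burnout_margin:.1f}")
# With these assumptions the design flux sits near the 10 MW/m^2 ceiling and the
# margin against the 40 MW/m^2 CHF limit is about 4.
```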

  12. Human factors engineering verification and validation for APR1400 computerized control room

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Y. C.; Moon, H. K.; Kim, J. H.

    2006-07-01

    This paper introduces the Advanced Power Reactor 1400 (APR1400) HFE V&V activities that the Korea Hydro & Nuclear Power Co., Ltd. (KHNP) has performed over the last 10 years and some of the lessons learned through these activities. The features of the APR1400 main control room include a large display panel, redundant compact workstations, computer-based procedures, and a safety console. Several iterations of human factors evaluations have been performed, from small-scale proof-of-concept tests to large-scale integrated system tests, to identify human engineering deficiencies in the human-system interface design. Evaluations in the proof-of-concept test focused on checking for the presence of any show-stopper problems in the design concept. Later evaluations were mostly for finding design problems and for assuring the resolution of human factors issues of the advanced control room. The results of the design evaluations were useful not only for refining the control room design but also for licensing the standard design. Several versions of APR1400 mock-ups with dynamic simulation models of the currently operating Korea Standard Nuclear Plant (KSNP) have been used for the evaluations, with the participation of operators from KSNP plants. (authors)

  13. Tritium proof-of-principle pellet injector: Phase 2

    NASA Astrophysics Data System (ADS)

    Fisher, P. W.; Gouge, M. J.

    1995-03-01

    As part of the International Thermonuclear Experimental Reactor (ITER) plasma fueling development program, Oak Ridge National Laboratory (ORNL) has fabricated a pellet injection system to test the mechanical and thermal properties of extruded tritium. This repeating, single-stage, pneumatic injector, called the Tritium-Proof-of-Principle Phase-2 (TPOP-2) Pellet Injector, has a piston-driven mechanical extruder and is designed to extrude hydrogenic pellets sized for the ITER device. The TPOP-2 program has the following development goals: evaluate the feasibility of extruding tritium and DT mixtures for use in future pellet injection systems; determine the mechanical and thermal properties of tritium and DT extrusions; integrate, test and evaluate the extruder in a repeating, single-stage light gas gun sized for the ITER application (pellet diameter approximately 7-8 mm); evaluate options for recycling propellant and extruder exhaust gas; evaluate operability and reliability of ITER prototypical fueling systems in an environment of significant tritium inventory requiring secondary and room containment systems. In initial tests with deuterium feed at ORNL, up to thirteen pellets have been extruded at rates up to 1 Hz and accelerated to speeds of order 1.0-1.1 km/s using hydrogen propellant gas at a supply pressure of 65 bar. The pellets are typically 7.4 mm in diameter and up to 11 mm in length and are the largest cryogenic pellets produced by the fusion program to date. These pellets represent about an 11% density perturbation to ITER. Hydrogenic pellets will be used in ITER to sustain the fusion power in the plasma core and may be crucial in reducing first wall tritium inventories by a process called isotopic fueling, in which tritium-rich pellets fuel the burning plasma core and deuterium gas fuels the edge.

  14. Aerospace engineering design by systematic decomposition and multilevel optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Giles, G. L.; Barthelemy, J.-F. M.

    1984-01-01

    This paper describes a method for systematic analysis and optimization of large engineering systems, e.g., aircraft, by decomposition of a large task into a set of smaller, self-contained subtasks that can be solved concurrently. The subtasks may be arranged in many hierarchical levels with the assembled system at the top level. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by analysis of its sensitivity to the inputs received from other subtasks to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization. It is pointed out that the method is intended to be compatible with the typical engineering organization and the modern technology of distributed computing.

  15. AAL service development loom--from the idea to a marketable business model.

    PubMed

    Kriegel, Johannes; Auinger, Klemens

    2015-01-01

    The Ambient Assisted Living (AAL) market is still in an early stage of development. Previous approaches to comprehensive AAL services are mostly supply-side driven and focused on hardware and software. Usually this type of AAL solution does not lead to sustainable success on the market. Research and development increasingly focus on demand and customer requirements in addition to the social and legal framework. The question is: how can a systematic performance measurement strategy along a service development process support the market-ready design of a concrete business model for an AAL service? Within the EU-funded research project DALIA (Assistant for Daily Life Activities at Home), an iterative service development process uses an adapted Osterwalder business model canvas. The application of a performance measurement index (PMI) to support the process has been developed and tested, resulting in an iterative service development model supported by the PMI. The PMI framework is developed throughout the engineering of a virtual assistant (AVATAR) as a modular interface to connect informal carers with necessary and useful services. Future research should seek to ensure that the PMI enables meaningful transparency regarding targeting (e.g. innovative AAL services), design (e.g. functional hybrid AAL services) and implementation (e.g. marketable AAL support services); to this end, further testing in practice is required. The aim must be to develop a weighted PMI in the context of further research, which supports both the service engineering and the subsequent service management process.

  16. solveTruss v1.0: Static, global buckling and frequency analysis of 2D and 3D trusses with Mathematica

    NASA Astrophysics Data System (ADS)

    Ozbasaran, Hakan

    Trusses have an important place amongst engineering structures due to advantages such as high structural efficiency, fast assembly and easy maintenance. Iterative truss design procedures that require analysis of a large number of candidate structural systems, such as size, shape and topology optimization with stochastic methods, mostly lead the engineer to establish a link between the development platform and external structural analysis software. As the number of structural analyses grows, this (possibly slow-response) link may climb to the top of the list of performance issues. This paper introduces software for static, global member buckling and frequency analysis of 2D and 3D trusses to overcome this problem for Mathematica users.
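
    For readers unfamiliar with what such a truss solver does internally, the sketch below assembles and solves a small 2D pin-jointed truss by the direct stiffness method. It is a generic textbook illustration in Python rather than the Mathematica package described above; the two-bar geometry, material values, and loading are invented for the example.

```python
import numpy as np

def solve_truss_2d(nodes, elements, E, A, loads, fixed_dofs):
    """Static analysis of a 2D pin-jointed truss by the direct stiffness method.

    nodes      : (n, 2) array of joint coordinates
    elements   : list of (i, j) node-index pairs
    E, A       : Young's modulus and cross-sectional area (uniform here for brevity)
    loads      : (2n,) global load vector
    fixed_dofs : indices of constrained displacement components
    """
    ndof = 2 * len(nodes)
    K = np.zeros((ndof, ndof))
    for i, j in elements:
        dx, dy = nodes[j] - nodes[i]
        L = np.hypot(dx, dy)
        c, s = dx / L, dy / L
        # 4x4 global-axes stiffness of an axial bar element
        k = (E * A / L) * np.outer([-c, -s, c, s], [-c, -s, c, s])
        dofs = [2 * i, 2 * i + 1, 2 * j, 2 * j + 1]
        K[np.ix_(dofs, dofs)] += k
    free = np.setdiff1d(np.arange(ndof), fixed_dofs)
    u = np.zeros(ndof)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], loads[free])
    return u

# Two-bar example: nodes 0 and 1 fixed, node 2 loaded downward by 10 kN.
nodes = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0]])
elements = [(0, 2), (1, 2)]
loads = np.zeros(6); loads[5] = -1e4
u = solve_truss_2d(nodes, elements, E=210e9, A=1e-4,
                   loads=loads, fixed_dofs=[0, 1, 2, 3])
print(u.reshape(-1, 2))   # nodal displacements (m)
```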

  17. Detail design of empennage of an unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Sarker, Md. Samad; Panday, Shoyon; Rasel, Md; Salam, Md. Abdus; Faisal, Kh. Md.; Farabi, Tanzimul Hasan

    2017-12-01

    In order to maintain the operational continuity of air defense systems, autonomous or remotely controlled unmanned aerial vehicles (UAVs) play an important role as targets for anti-aircraft weapons. The aerial vehicle must comply with the requirements of high speed, remotely controlled tracking and navigational aids, operational sustainability and sufficient loiter time. It can also be used for aerial reconnaissance, ground surveillance and other intelligence operations. This paper aims to develop a complete tail design of an unmanned aerial vehicle using a systems engineering approach. The design fulfils the requirements of longitudinal and directional trim, stability and control provided by the horizontal and vertical tail. Tail control surfaces are designed to provide sufficient control of the aircraft in critical conditions. Design parameters obtained from the wing design are utilized in the tail design process as required. Through sequential calculations and successive iterations, optimum values of 26 tail design parameters are determined.

  18. Application of CFD to the analysis and design of high-speed inlets

    NASA Technical Reports Server (NTRS)

    Rose, William C.

    1995-01-01

    Over the past seven years, efforts under the present grant have been aimed at applying modern computational fluid dynamics to the design of high-speed engine inlets. In this report, a review of previous design capabilities (prior to the advent of functioning CFD) is presented, and the NASA 'Mach 5 inlet' design is given as the premier example of the historical approach to inlet design. The philosophy used in the Mach 5 inlet design was carried forward in the present study, in which CFD was used to design a new Mach 10 inlet. An example of an inlet redesign is also shown. These latter efforts were carried out using today's state-of-the-art, full computational fluid dynamics codes applied in an iterative, man-in-the-loop technique. The potential usefulness of an automated machine design capability using an optimizer code is also discussed.

  19. ECRH System For ITER

    NASA Astrophysics Data System (ADS)

    Darbos, C.; Henderson, M.; Albajar, F.; Bigelow, T.; Bomcelli, T.; Chavan, R.; Denisov, G.; Farina, D.; Gandini, F.; Heidinger, R.; Goodman, T.; Hogge, J. P.; Kajiwara, K.; Kasugai, A.; Kern, S.; Kobayashi, N.; Oda, Y.; Ramponi, G.; Rao, S. L.; Rasmussen, D.; Rzesnicki, T.; Saibene, G.; Sakamoto, K.; Sauter, O.; Scherer, T.; Strauss, D.; Takahashi, K.; Zohm, H.

    2009-11-01

    A 26 MW Electron Cyclotron Heating and Current Drive (EC H&CD) system is to be installed for ITER. The main objectives are to provide start-up assist, central H&CD, and control of MHD activity. These are achieved by a combination of two types of launchers, one located in an equatorial port and the second type in four upper ports. The physics applications are partitioned between the two launchers, based on the deposition location and driven current profiles. The equatorial launcher (EL) will access from the plasma axis to mid radius with a relatively broad profile useful for central heating and current drive applications, while the upper launchers (ULs) will access roughly the outer half of the plasma radius with a very narrow peaked profile for the control of the Neoclassical Tearing Modes (NTM) and sawtooth oscillations. The EC power can be switched between launchers on a time scale as needed by the immediate physics requirements. A revision of the injection angles of all launchers is under consideration for increased EC physics capabilities while relaxing the engineering constraints of both the EL and ULs. A series of design reviews is being planned with the five parties (EU, IN, JA, RF, US) procuring the EC system, the EC community and the ITER Organization (IO). The review meetings qualify the design and provide an environment for enhancing performance while reducing costs, simplifying interfaces, and predicting technology upgrades and commercial availability. In parallel, the test programs for critical components are being supported by the IO and performed by the Domestic Agencies (DAs) to minimize risks. The wide participation of the DAs provides a broad representation from the EC community, with the aim of collecting all expertise in guiding the EC system optimization. Still, a strong relationship between the IO and the DAs is essential for optimizing the design of the EC system and for the installation and commissioning of all ex-vessel components, as several teams from several DAs will be involved together in the tests on the ITER site.

  20. Evolving Reliability and Maintainability Allocations for NASA Ground Systems

    NASA Technical Reports Server (NTRS)

    Munoz, Gisela; Toon, T.; Toon, J.; Conner, A.; Adams, T.; Miranda, D.

    2016-01-01

    This paper describes the methodology and value of modifying allocations to reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program's subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. This iterative process provided an opportunity for the reliability engineering team to reevaluate allocations as systems moved beyond their conceptual and preliminary design phases. These new allocations are based on updated designs and maintainability characteristics of the components. It was found that trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper discusses the results of reliability and maintainability reallocations made for the GSDO subsystems as the program nears the end of its design phase.

  1. Evolving Reliability and Maintainability Allocations for NASA Ground Systems

    NASA Technical Reports Server (NTRS)

    Munoz, Gisela; Toon, Troy; Toon, Jamie; Conner, Angelo C.; Adams, Timothy C.; Miranda, David J.

    2016-01-01

    This paper describes the methodology and value of modifying allocations to reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program’s subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. This iterative process provided an opportunity for the reliability engineering team to reevaluate allocations as systems moved beyond their conceptual and preliminary design phases. These new allocations are based on updated designs and maintainability characteristics of the components. It was found that trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper discusses the results of reliability and maintainability reallocations made for the GSDO subsystems as the program nears the end of its design phase.

  2. Evolving Reliability and Maintainability Allocations for NASA Ground Systems

    NASA Technical Reports Server (NTRS)

    Munoz, Gisela; Toon, Jamie; Toon, Troy; Adams, Timothy C.; Miranda, David J.

    2016-01-01

    This paper describes the methodology that was developed to allocate reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program's subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. Allocating is an iterative process; as systems moved beyond their conceptual and preliminary design phases this provided an opportunity for the reliability engineering team to reevaluate allocations based on updated designs and maintainability characteristics of the components. Trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper will discuss the value of modifying reliability and maintainability allocations made for the GSDO subsystems as the program nears the end of its design phase.
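
    To make the notion of "reallocating" concrete, the sketch below shows a generic ARINC-style apportionment for a series system: subsystem failure-rate targets are re-derived in proportion to current estimates each time updated design data arrive. This is a textbook allocation scheme offered only as an illustration; the subsystem names and numbers are invented, and the GSDO team's actual allocation method may differ.

```python
# Generic ARINC-style apportionment sketch (series system): subsystem failure-rate
# targets are rebalanced in proportion to current estimates whenever updated design
# data become available.  The names and numbers below are illustrative only.
estimated_rates = {            # failures per 1e6 hours, from current design data
    "cryogenics": 40.0,
    "propellant_servicing": 25.0,
    "command_and_control": 10.0,
}
system_target = 50.0           # allocated system-level failure rate, same units

total = sum(estimated_rates.values())
allocation = {name: system_target * rate / total
              for name, rate in estimated_rates.items()}
for name, target in allocation.items():
    print(f"{name:22s} allocated {target:5.1f} per 1e6 h")
# Re-running this step each design cycle is the "iterative reallocation" the paper
# describes: as estimates change, subsystem targets are rebalanced so the
# system-level requirement stays intact.
```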

  3. A WEB based approach in biomedical engineering design education.

    PubMed

    Enderle, J D; Browne, A F; Hallowell, M B

    1997-01-01

    As part of the accreditation process for university engineering programs, students are required to complete a minimum number of design credits in their course of study, typically at the senior level. Many call this the capstone course. Engineering design is a course or series of courses that brings together concepts and principles that students learn in their field of study; it involves the integration and extension of material learned in their major toward a specific project. Most often, the student is exposed to system-wide analysis, critique and evaluation for the first time. Design is an iterative, decision-making process in which the student optimally applies previously learned material to meet a stated objective. At the University of Connecticut, students work in teams of 3-4 members on externally sponsored projects. To facilitate working with sponsors, a WEB based approach is used for reporting the progress on projects. Students are responsible for creating their own WEB sites that support both html and pdf formats. Students provide the following deliverables: weekly progress reports, project statement, specifications, project proposal, interim report, and final report. A senior design homepage also provides links to data books and other resources for use by students. We are also planning distance learning experiences between two campuses so students can work on projects that involve the use of video conferencing.

  4. Evaluating a Web-Based Interface for Internet Telemedicine

    NASA Technical Reports Server (NTRS)

    Lathan, Corinna E.; Newman, Dava J.; Sebrechts, Marc M.; Doarn, Charles R.

    1997-01-01

    The objective is to introduce the usability engineering methodology of heuristic evaluation to the design and development of a web-based telemedicine system. Using a set of usability criteria, or heuristics, one evaluator examined the Spacebridge to Russia web site for usability problems. Thirty-four usability problems were found in this preliminary study, and all were assigned a severity rating. The value of heuristic analysis in the iterative design of a system is shown because the problems can be fixed before deployment of a system, and the problems are of a different nature than those found by actual users of the system. It was therefore determined that heuristic evaluation paired with user testing has potential value as a strategy for designing systems with optimal performance.

  5. Prototype of a computer method for designing and analyzing heating, ventilating and air conditioning proportional, electronic control systems

    NASA Astrophysics Data System (ADS)

    Barlow, Steven J.

    1986-09-01

    The Air Force needs a better method of designing new and retrofit heating, ventilating and air conditioning (HVAC) control systems. Air Force engineers currently use manual design/predict/verify procedures taught at the Air Force Institute of Technology, School of Civil Engineering, HVAC Control Systems course. These existing manual procedures are iterative and time-consuming. The objectives of this research were to: (1) Locate and, if necessary, modify an existing computer-based method for designing and analyzing HVAC control systems that is compatible with the HVAC Control Systems manual procedures, or (2) Develop a new computer-based method of designing and analyzing HVAC control systems that is compatible with the existing manual procedures. Five existing computer packages were investigated in accordance with the first objective: MODSIM (for modular simulation), HVACSIM (for HVAC simulation), TRNSYS (for transient system simulation), BLAST (for building load and system thermodynamics) and Elite Building Energy Analysis Program. None were found to be compatible or adaptable to the existing manual procedures, and consequently, a prototype of a new computer method was developed in accordance with the second research objective.

  6. DYNGEN: A program for calculating steady-state and transient performance of turbojet and turbofan engines

    NASA Technical Reports Server (NTRS)

    Sellers, J. F.; Daniele, C. J.

    1975-01-01

    The DYNGEN, a digital computer program for analyzing the steady state and transient performance of turbojet and turbofan engines, is described. The DYNGEN is based on earlier computer codes (SMOTE, GENENG, and GENENG 2) which are capable of calculating the steady state performance of turbojet and turbofan engines at design and off-design operating conditions. The DYNGEN has the combined capabilities of GENENG and GENENG 2 for calculating steady state performance; to these the further capability for calculating transient performance was added. The DYNGEN can be used to analyze one- and two-spool turbojet engines or two- and three-spool turbofan engines without modification to the basic program. A modified Euler method is used by DYNGEN to solve the differential equations which model the dynamics of the engine. This new method frees the programmer from having to minimize the number of equations which require iterative solution. As a result, some of the approximations normally used in transient engine simulations can be eliminated. This tends to produce better agreement when answers are compared with those from purely steady state simulations. The modified Euler method also permits the user to specify large time steps (about 0.10 sec) to be used in the solution of the differential equations. This saves computer execution time when long transients are run. Examples of the use of the program are included, and program results are compared with those from an existing hybrid-computer simulation of a two-spool turbofan.
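
    A modified Euler (predictor-corrector) step of the kind the abstract credits for tolerating large time steps can be sketched in a few lines. The one-spool first-order lag below is a toy stand-in for the engine dynamics, and the time constant, demanded speed, and 0.1 s step are assumptions for illustration; DYNGEN's actual component models are far more elaborate.

```python
def modified_euler(f, y, t, dt):
    """One modified-Euler (Heun) step: forward-Euler predictor followed by a
    corrector using the averaged slope.  This is the flavor of scheme credited
    above with tolerating large time steps; the model below is only a toy."""
    y_pred = y + dt * f(t, y)                             # predictor
    return y + 0.5 * dt * (f(t, y) + f(t + dt, y_pred))   # corrector

# Toy one-spool lag: rotor speed chasing a demanded speed with time constant tau.
tau, n_demand = 0.8, 1.0
f = lambda t, n: (n_demand - n) / tau

n, t, dt = 0.0, 0.0, 0.10      # a comparatively large step for this time constant
history = []
while t < 3.0:
    n = modified_euler(f, n, t, dt)
    t += dt
    history.append((round(t, 2), n))
print(history[-1])              # speed approaches n_demand ~ 1.0
```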

  7. Assessment and selection of materials for ITER in-vessel components

    NASA Astrophysics Data System (ADS)

    Kalinin, G.; Barabash, V.; Cardella, A.; Dietz, J.; Ioki, K.; Matera, R.; Santoro, R. T.; Tivey, R.; ITER Home Teams

    2000-12-01

    During the international thermonuclear experimental reactor (ITER) engineering design activities (EDA) significant progress has been made in the selection of materials for the in-vessel components of the reactor. This progress is a result of the worldwide collaboration of material scientists and industries which focused their effort on the optimisation of material and component manufacturing and on the investigation of the most critical material properties. Austenitic stainless steels 316L(N)-IG and 316L, nickel-based alloys Inconel 718 and Inconel 625, Ti-6Al-4V alloy and two copper alloys, CuCrZr-IG and CuAl25-IG, have been proposed as reference structural materials, and ferritic steel 430, and austenitic steel 304B7 with the addition of boron have been selected for some specific parts of the ITER in-vessel components. Beryllium, tungsten and carbon fibre composites are considered as plasma facing armour materials. The data base on the properties of all these materials is critically assessed and briefly reviewed in this paper together with the justification of the material selection (e.g., effect of neutron irradiation on the mechanical properties of materials, effect of manufacturing cycle, etc.).

  8. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
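
    As a deterministic stand-in for the domain decomposition iterations described above, the sketch below applies a two-subdomain alternating Schwarz iteration to a 1D Poisson problem. The grid size, overlap, and source term are arbitrary choices for illustration; the paper's solvers handle coupled spatial and stochastic degrees of freedom with preconditioned sparse iterative methods, which this toy does not attempt.

```python
import numpy as np

# Alternating (multiplicative) Schwarz for -u'' = f on (0,1), u(0) = u(1) = 0,
# discretized with second-order finite differences.
n = 99                                 # interior grid points
h = 1.0 / (n + 1)
f = np.ones(n)                         # constant source term
u = np.zeros(n)                        # global iterate

dom1 = np.arange(0, 60)                # two overlapping index blocks
dom2 = np.arange(40, n)

def local_solve(u, dom):
    """Solve the local tridiagonal problem with Dirichlet data taken from u."""
    m = len(dom)
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    rhs = f[dom].copy()
    left = u[dom[0] - 1] if dom[0] > 0 else 0.0          # data from the neighbor
    right = u[dom[-1] + 1] if dom[-1] < n - 1 else 0.0
    rhs[0] += left / h**2
    rhs[-1] += right / h**2
    u[dom] = np.linalg.solve(A, rhs)

x = np.linspace(h, 1 - h, n)
exact = 0.5 * x * (1 - x)              # the FD solution is exact at the nodes here
for k in range(40):
    local_solve(u, dom1)
    local_solve(u, dom2)
    if np.max(np.abs(u - exact)) < 1e-8:
        break
print(k, np.max(np.abs(u - exact)))    # converges in a few tens of sweeps
```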

  9. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  10. A finite element solver for 3-D compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Reddy, K. C.; Reddy, J. N.; Nayani, S.

    1990-01-01

    Computation of the flow field inside a space shuttle main engine (SSME) requires the application of state-of-the-art computational fluid dynamic (CFD) technology. Several computer codes are under development to solve 3-D flow through the hot gas manifold. Some algorithms were designed to solve the unsteady compressible Navier-Stokes equations, either by implicit or explicit factorization methods, using several hundred or thousands of time steps to reach a steady state solution. A new iterative algorithm is being developed for the solution of the implicit finite element equations without assembling global matrices. It is an efficient iteration scheme based on a modified nonlinear Gauss-Seidel iteration with symmetric sweeps. The algorithm is analyzed for a model equation and is shown to be unconditionally stable. Results from a series of test problems are presented. The finite element code was tested for Couette flow, which is flow under a pressure gradient between two parallel plates in relative motion. Another problem that was solved is viscous laminar flow over a flat plate. The general 3-D finite element code was used to compute the flow in an axisymmetric turnaround duct at low Mach numbers.
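
    The symmetric-sweep idea can be illustrated with a linear stand-in: a Gauss-Seidel iteration that sweeps the unknowns forward and then backward each cycle. The dense, diagonally dominant test matrix below is an assumption made so the snippet stays short and converges quickly; the paper's scheme is nonlinear and works without assembling a global matrix, which this toy does not attempt to reproduce.

```python
import numpy as np

def sym_gauss_seidel(A, b, x, sweeps=50, tol=1e-10):
    """Gauss-Seidel with symmetric (forward then backward) sweeps on Ax = b.
    A linear, dense-matrix stand-in used only to show the sweep pattern."""
    n = len(b)
    for k in range(sweeps):
        for i in range(n):                     # forward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        for i in reversed(range(n)):           # backward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            return x, k
    return x, sweeps

# Diagonally dominant test matrix, so the sweeps converge in a handful of cycles.
n = 50
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = sym_gauss_seidel(A, b, np.zeros(n))
print(iters, np.linalg.norm(b - A @ x))
```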

  11. Fusion energy

    NASA Astrophysics Data System (ADS)

    1990-09-01

    The main purpose of the International Thermonuclear Experimental Reactor (ITER) is to develop an experimental fusion reactor through the united efforts of many technologically advanced countries. The ITER terms of reference, issued jointly by the European Community, Japan, the USSR, and the United States, call for an integrated international design activity and constitute the basis of current activities. Joint work on ITER is carried out under the auspices of the International Atomic Energy Agency (IAEA), according to the terms of the quadripartite agreement reached between the European Community, Japan, the USSR, and the United States. The site for joint technical work sessions is the Max Planck Institute of Plasma Physics, Garching, Federal Republic of Germany. The ITER activities have two phases: a definition phase performed in 1988 and the present design phase (1989 to 1990). During the definition phase, a set of ITER technical characteristics and supporting research and development (R and D) activities were developed and reported. The present conceptual design phase of ITER lasts until the end of 1990. The objectives of this phase are to develop the design of ITER, perform a safety and environmental analysis, develop site requirements, define future R and D needs, and estimate cost, manpower, and schedule for construction and operation. A final report will be submitted at the end of 1990. This paper summarizes progress in the ITER program during the 1989 design phase.

  12. Learning to Teach Elementary Science through Iterative Cycles of Enactment in Culturally and Linguistically Diverse Contexts

    ERIC Educational Resources Information Center

    Bottoms, SueAnn I.; Ciechanowski, Kathryn M.; Hartman, Brian

    2015-01-01

    Iterative cycles of enactment embedded in culturally and linguistically diverse contexts provide rich opportunities for preservice teachers (PSTs) to enact core practices of science. This study is situated in the larger Families Involved in Sociocultural Teaching and Science, Technology, Engineering and Mathematics (FIESTAS) project, which weaves…

  13. Fractional watt Vuillemier cryogenic refrigerator program engineering notebook. Volume 1: Thermal analysis

    NASA Technical Reports Server (NTRS)

    Miller, W. S.

    1974-01-01

    The cryogenic refrigerator thermal design calculations establish the design approach and basic sizing of the machine's elements. After the basic design is defined, effort concentrates on matching the thermodynamic design with that of the heat transfer devices (heat exchangers and regenerators). Typically, the heat transfer device configurations and volumes are adjusted to improve their heat transfer and pressure drop characteristics. These adjustments imply that changes be made to the active displaced volumes, compensating for the influence of the heat transfer devices on the thermodynamic processes of the working fluid. Then, once the active volumes are changed, the heat transfer devices require adjustment to account for the variations in flows, pressure levels, and heat loads. This iterative process is continued until the thermodynamic cycle parameters match the design of the heat transfer devices. By examining several matched designs, a near-optimum refrigerator is selected.

  14. Measurement of the complex transmittance of large optical elements with Ptychographical Iterative Engine.

    PubMed

    Wang, Hai-Yan; Liu, Cheng; Veetil, Suhas P; Pan, Xing-Chen; Zhu, Jian-Qiang

    2014-01-27

    Wavefront control is a significant requirement in inertial confinement fusion (ICF). The complex transmittance of large optical elements that are often used in ICF is obtained by computing the phase difference of the illuminating and transmitted fields using the Ptychographical Iterative Engine (PIE). This approach can accurately and effectively measure the transmittance of large optical elements with irregular surface profiles, which are otherwise not measurable using commonly used interferometric techniques due to the lack of a standard reference plate. Experiments are performed with a Continuous Phase Plate (CPP) to illustrate the feasibility of this method.
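
    The update at the heart of this kind of reconstruction can be shown compactly. The sketch below simulates far-field intensities for a known probe scanned over a weak phase object and then recovers the object with an ePIE-style object update (measured modulus imposed in the Fourier plane, object corrected in proportion to the conjugate probe). Grid size, probe shape, scan step, and the feedback parameter are all assumptions for illustration, not the authors' experimental settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ground truth: a weak phase object and a localized Gaussian probe.
N = 64
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x, indexing="ij")
obj_true = np.exp(1j * 0.5 * np.exp(-(X**2 + Y**2) / 300.0))
probe = np.exp(-(X**2 + Y**2) / 50.0).astype(complex)

# Scan positions with generous overlap (probe shifts in pixels).
shifts = [(dx, dy) for dx in range(-16, 17, 8) for dy in range(-16, 17, 8)]
def shifted_probe(dx, dy):
    return np.roll(np.roll(probe, dx, axis=0), dy, axis=1)

# Recorded far-field intensities for each scan position (the "measurements").
intensities = [np.abs(np.fft.fft2(shifted_probe(dx, dy) * obj_true))**2
               for dx, dy in shifts]

# Iterative reconstruction with an ePIE-style object update (probe assumed known).
obj = np.ones((N, N), complex)       # flat starting guess
beta = 1.0                           # feedback parameter
for it in range(50):
    for (dx, dy), I in zip(shifts, intensities):
        P = shifted_probe(dx, dy)
        exit_wave = P * obj
        F = np.fft.fft2(exit_wave)
        F = np.sqrt(I) * np.exp(1j * np.angle(F))   # impose measured modulus
        corrected = np.fft.ifft2(F)
        obj += beta * np.conj(P) / np.max(np.abs(P))**2 * (corrected - exit_wave)

# Compare recovered and true phase where the probe illuminated the object.
mask = np.abs(probe) > 0.1
err = np.angle(obj * np.conj(obj_true))[mask]
print("rms phase error:", np.sqrt(np.mean((err - err.mean())**2)))
```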

  15. LOW-ENGINE-FRICTION TECHNOLOGY FOR ADVANCED NATURAL-GAS RECIPROCATING ENGINES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Victor Wong; Tian Tian; Luke Moughon

    This program aims at improving the efficiency of advanced natural-gas reciprocating engines (ANGRE) by reducing piston and piston ring assembly friction without major adverse effects on engine performance, such as increased oil consumption and wear. An iterative process of simulation, experimentation and analysis is being followed towards achieving the goal of demonstrating a complete optimized low-friction engine system. To date, a detailed set of piston and piston-ring dynamic and friction models have been developed and applied that illustrate the fundamental relationships among mechanical, surface/material and lubricant design parameters and friction losses. Demonstration of low-friction ring-pack designs in the Waukesha VGF 18GL engine confirmed total engine FMEP (friction mean effective pressure) reduction of 7-10% from the baseline configuration without significantly increasing oil consumption or blow-by flow. This represents a substantial (30-40%) reduction of the ring-pack friction alone. The measured FMEP reductions were in good agreement with the model predictions. Further improvements via piston, lubricant, and surface designs offer additional opportunities. Tests of low-friction lubricants are in progress and preliminary results are very promising. The combined analysis of lubricant and surface design indicates that low-viscosity lubricants can be very effective in reducing friction, subject to component wear for extremely thin oils, which can be mitigated with further lubricant formulation and/or engineered surfaces. Hence a combined approach of lubricant design and appropriate wear reduction offers improved potential for minimum engine friction loss. Piston friction studies indicate that a flatter piston with a more flexible skirt, together with optimizing the waviness and film thickness on the piston skirt, offers significant friction reduction. Combined with low-friction ring-pack, material and lubricant parameters, a total power cylinder friction reduction of 30-50% is expected, translating to an engine efficiency increase of two percentage points from its current baseline towards the goal of 50% ARES engine efficiency. The design strategies developed in this study have promising potential for application in all modern reciprocating engines as they represent simple, low-cost methods to extract significant fuel savings. The current program has possible spinoffs and applications in other industries as well, including transportation, CHP, and diesel power generation. The progress made in this program has wide engine efficiency implications, and potential deployment of low-friction engine components or lubricants in the near term is possible as current investigations continue.

  16. Eliciting design patterns for e-learning systems

    NASA Astrophysics Data System (ADS)

    Retalis, Symeon; Georgiakakis, Petros; Dimitriadis, Yannis

    2006-06-01

    Design pattern creation, especially in the e-learning domain, is a highly complex process that has not been sufficiently studied and formalized. In this paper, we propose a systematic pattern development cycle, whose most important aspects focus on reverse engineering of existing systems in order to elicit features that are cross-validated through the use of appropriate, authentic scenarios. However, an iterative pattern process is proposed that takes advantage of multiple data sources, thus emphasizing a holistic view of the teaching learning processes. The proposed schema of pattern mining has been extensively validated for Asynchronous Network Supported Collaborative Learning (ANSCL) systems, as well as for other types of tools in a variety of scenarios, with promising results.

  17. Three-D Flow Analysis of the Alternate SSME HPOT TAD

    NASA Technical Reports Server (NTRS)

    Kubinski, Cheryl A.

    1993-01-01

    This paper describes the results of numerical flow analyses performed in support of design development of the Space Shuttle Main Engine Alternate High Pressure Oxidizer Turbine Turn-around duct (TAD). The flow domain has been modeled using a 3D, Navier-Stokes, general purpose flow solver. The goal of this effort is to achieve an alternate TAD exit flow distribution which closely matches that of the baseline configuration. 3D Navier Stokes CFD analyses were employed to evaluate numerous candidate geometry modifications to the TAD flowpath in order to achieve this goal. The design iterations are summarized, as well as a description of the computational model, numerical results and the conclusions based on these calculations.

  18. Airbreathing engine selection criteria for SSTO propulsion system

    NASA Astrophysics Data System (ADS)

    Ohkami, Yoshiaki; Maita, Masataka

    1995-02-01

    This paper presents airbreathing engine selection criteria to be applied to the propulsion system of a Single Stage To Orbit (SSTO) vehicle. To establish the criteria, a relation among three major parameters, i.e., delta-V capability, weight penalty, and effective specific impulse of the engine subsystem, is derived and compared with the corresponding parameters of the LH2/LOX rocket engine. The effective specific impulse is a function of the engine I(sub sp) and the vehicle thrust-to-drag ratio, which is approximated by a function of the vehicle velocity. The weight penalty includes the engine dry weight and the cooling subsystem weight. The delta-V capability is defined by the velocity region from the minimum operating velocity up to the maximum velocity. The vehicle feasibility is investigated in terms of the structural and propellant weights, which requires an iteration process adjusting the system parameters. The system parameters are computed by iteration based on the Newton-Raphson method. It has been concluded that performance in the higher velocity region is extremely important, so the airbreathing engines are required to operate beyond the velocity equivalent to the rocket engine exhaust velocity (approximately 4500 m/s).
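
    The closure of the vehicle parameters by Newton-Raphson iteration can be illustrated with a generic implementation applied to a deliberately simple two-equation balance. The delta-V requirement, the effective exhaust velocity (the roughly 4500 m/s figure quoted above), and the payload fraction in the toy residual are placeholder numbers; the real SSTO model involves many more coupled relations.

```python
import numpy as np

def newton_raphson(residual, x0, tol=1e-8, max_iter=50):
    """Generic Newton-Raphson with a finite-difference Jacobian, the kind of
    iteration used to close a coupled set of sizing equations."""
    x = np.array(x0, float)
    for k in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x, k
        J = np.empty((len(r), len(x)))
        for j in range(len(x)):                 # one-sided finite differences
            xp = x.copy()
            h = 1e-6 * max(abs(x[j]), 1.0)
            xp[j] += h
            J[:, j] = (residual(xp) - r) / h
        x -= np.linalg.solve(J, r)
    return x, max_iter

# Toy closure: find propellant fraction fp and structure fraction fs such that
# (1) the ideal rocket equation meets the required delta-V and (2) mass fractions
# sum to one with a fixed payload fraction.  All numbers are placeholders.
ve, dv_req, payload = 4500.0, 9000.0, 0.05      # m/s, m/s, payload mass fraction
def residual(x):
    fp, fs = x
    return np.array([
        ve * np.log(1.0 / (1.0 - fp)) - dv_req, # delta-V requirement
        fp + fs + payload - 1.0,                # mass bookkeeping
    ])

x, iters = newton_raphson(residual, [0.8, 0.2])
print(iters, x)   # propellant and structure fractions that close the toy design
```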

  19. Development of FAST.Farm: A New Multiphysics Engineering Tool for Wind Farm Design and Analysis: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jonkman, Jason; Annoni, Jennifer; Hayman, Greg

    2017-01-01

    This paper presents the development of FAST.Farm, a new multiphysics tool applicable to engineering problems in research and industry involving wind farm performance and cost optimization, which is needed to address the current underperformance, failures, and expenses plaguing the wind industry. Achieving wind cost-of-energy targets - which requires improvements in wind farm performance and reliability, together with reduced uncertainty and expenditures - has so far been hindered by the complicated nature of the wind farm design problem, especially the sophisticated interaction between atmospheric phenomena and wake dynamics and array effects. FAST.Farm aims to balance the need for accurate modeling of the relevant physics for predicting power performance and loads while maintaining low computational cost to support a highly iterative and probabilistic design process and system-wide optimization. FAST.Farm makes use of FAST to model the aero-hydro-servo-elastics of distinct turbines in the wind farm, and it is based on some of the principles of the Dynamic Wake Meandering (DWM) model, but avoids many of the limitations of existing DWM implementations.

  20. Iterative algorithm-guided design of massive strain libraries, applied to itaconic acid production in yeast.

    PubMed

    Young, Eric M; Zhao, Zheng; Gielesen, Bianca E M; Wu, Liang; Benjamin Gordon, D; Roubos, Johannes A; Voigt, Christopher A

    2018-05-09

    Metabolic engineering requires multiple rounds of strain construction to evaluate alternative pathways and enzyme concentrations. Optimizing multigene pathways stepwise or by randomly selecting enzymes and expression levels is inefficient. Here, we apply methods from design of experiments (DOE) to guide the construction of strain libraries from which the maximum information can be extracted without sampling every possible combination. We use Saccharomyces cerevisiae as a host for a novel six-gene pathway to itaconic acid, selected by comparing alternative shunt pathways that bypass the mitochondrial TCA cycle. The pathway is distinctive for the use of acetylating acetaldehyde dehydrogenase to increase cytosolic acetyl-CoA pools, a bacterial enzyme to synthesize citrate in the cytosol, and an itaconic acid exporter. Precise control over the expression of each gene is enabled by a set of promoter-terminator pairs that span a 174-fold range. Two large combinatorial libraries (160 variants, 2.4 Mb and 32 variants, 0.6 Mb) are designed where the expression levels are selected by statistical methods (I-optimal response surface methodology, full factorial, or Plackett-Burman) with the intent of extracting different types of guiding information after the screen. This is applied to the design of a third library (24 variants, 0.5 Mb) intended to alleviate a bottleneck in cis-aconitate decarboxylase (CAD) expression. The top strain produces 815 mg/l itaconic acid, a 4-fold improvement over the initial strain achieved by iteratively balancing pathway expression. Including a methylated product in the total, the strain produces 1.3 g/l combined itaconic acids. Further, a regression analysis of the libraries reveals the optimal expression level of CAD as well as pairwise interdependencies between genes that result in increased titer and purity of itaconic acid. This work demonstrates adapting algorithmic design strategies to guide automated yeast strain construction and learn information after each iteration. Copyright © 2018. Published by Elsevier Inc.
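
    The design-of-experiments workflow (build a structured library over promoter expression levels, screen it, then regress titer on factors and pairwise interactions) can be sketched as follows. The sketch uses a two-level full factorial and a simulated response in place of real strain data; the gene labels, effect sizes, and noise level are invented, and the study itself also used I-optimal and Plackett-Burman designs over continuous promoter ranges.

```python
import itertools
import numpy as np

# Two-level full factorial design over four pathway genes: 2**4 = 16 "strains".
genes = ["ADA", "citrate_synthase", "CAD", "exporter"]
design = np.array(list(itertools.product([-1, 1], repeat=len(genes))))

# Simulated screen: titer dominated by CAD expression plus one interaction and noise.
rng = np.random.default_rng(7)
true_effects = np.array([0.2, 0.1, 0.8, 0.3])
titer = (1.0 + design @ true_effects
         + 0.25 * design[:, 2] * design[:, 3]
         + 0.05 * rng.standard_normal(len(design)))

# Model matrix: intercept, main effects, and all pairwise interactions.
pairs = list(itertools.combinations(range(len(genes)), 2))
X = np.column_stack([np.ones(len(design)), design] +
                    [design[:, i] * design[:, j] for i, j in pairs])
coef, *_ = np.linalg.lstsq(X, titer, rcond=None)

labels = ["intercept"] + genes + [f"{genes[i]}x{genes[j]}" for i, j in pairs]
for name, c in sorted(zip(labels, coef), key=lambda t: -abs(t[1])):
    print(f"{name:28s} {c:+.2f}")   # large coefficients flag expression bottlenecks
```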

  1. A free interactive matching program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J.-F. Ostiguy

    1999-04-16

    For physicists and engineers involved in the design and analysis of beamlines (transfer lines or insertions), the lattice function matching problem is central and can be time-consuming because it involves constrained nonlinear optimization. For such problems, convergence can be difficult to obtain in general without expert human intervention. Over the years, powerful codes have been developed to assist beamline designers. The canonical example is MAD (Methodical Accelerator Design), developed at CERN by Christophe Iselin. MAD, through a specialized command language, allows one to solve a wide variety of problems, including matching problems. Although in principle the MAD command interpreter can be run interactively, in practice the solution of a matching problem involves a sequence of independent trial runs. Unfortunately, but perhaps not surprisingly, there still exist relatively few tools exploiting the resources offered by modern environments to assist lattice designers with this routine and repetitive task. In this paper, we describe a fully interactive lattice matching program, written in C++ and assembled using freely available software components. An important feature of the code is that the evolution of the lattice functions during the nonlinear iterative process can be graphically monitored in real time; the user can dynamically interrupt the iterations at will to introduce new variables, freeze existing ones into their current state and/or modify constraints. The program runs under both UNIX and Windows NT.
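
    A miniature version of the matching problem such a program solves is sketched below: choose two thin-lens quadrupole strengths so a short transfer line meets two constraints (point-to-point imaging and a magnification of -1) via nonlinear least squares. The lattice, element lengths, targets, and starting guess are invented for the example; a real matcher works on Twiss functions with many more knobs, constraints, and interactive control.

```python
import numpy as np
from scipy.optimize import least_squares

def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(k):
    return np.array([[1.0, 0.0], [-k, 1.0]])

def transfer_matrix(k1, k2, L=1.0):
    # drift - quad - drift - quad - drift
    return drift(L) @ thin_quad(k2) @ drift(L) @ thin_quad(k1) @ drift(L)

def constraints(k):
    M = transfer_matrix(k[0], k[1])
    return [M[0, 1] - 0.0,    # point-to-point imaging (m12 = 0)
            M[0, 0] + 1.0]    # magnification of -1 (m11 = -1)

sol = least_squares(constraints, x0=[0.5, 0.5])
print(sol.x, constraints(sol.x))   # converges to k1 = k2 = 1 for these settings
```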

  2. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.

    2014-08-21

    In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during the development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies and the practical application of the developed diagnostics on ITER will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements, are discussed.

  3. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    NASA Astrophysics Data System (ADS)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.

    2014-08-01

    In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies, and the practical application of the developed diagnostics on ITER, will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasma. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements, are discussed.

  4. Experimentation in software engineering

    NASA Technical Reports Server (NTRS)

    Basili, V. R.; Selby, R. W.; Hutchens, D. H.

    1986-01-01

    Experimentation in software engineering supports the advancement of the field through an iterative learning process. In this paper, a framework for analyzing most of the experimental work performed in software engineering over the past several years is presented. A variety of experiments within the framework are described and their contributions to the software engineering discipline are discussed. Some useful recommendations for the application of the experimental process in software engineering are included.

  5. Critical Design Issues of Tokamak Cooling Water System of ITER's Fusion Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Seokho H; Berry, Jan

    U.S. ITER is responsible for the design, engineering, and procurement of the Tokamak Cooling Water System (TCWS). The TCWS transfers the heat generated in the Tokamak, 850 MW during nominal pulsed operation, to cooling water at up to 150 °C and 4.2 MPa water pressure. This water contains radionuclides because impurities (e.g., tritium) diffuse from in-vessel components and the vacuum vessel during water baking at 200-240 °C at up to 4.4 MPa, and because corrosion products become activated by neutron bombardment. The system is designated as safety important class (SIC) and will be fabricated to comply with the French Order concerning nuclear pressure equipment (December 2005) and the EU Pressure Equipment Directive, using ASME Section VIII, Div 2 design codes. The complexity of the TCWS design and fabrication presents unique challenges. Conceptual design of this one-of-a-kind cooling system has been completed, with several issues that need to be resolved to move to the next stage of design. Those issues include flow balancing among the hundreds of parallel branch pipelines that supply cooling water to the blankets, determination of the optimum flow velocity while minimizing the potential for cavitation damage, freeze protection for cooling water flowing through the cryostat (freezing) environment, requirements for high-energy piping design, and electromagnetic effects on piping and components. Although the TCWS consists of standard commercial components such as piping with valves and fittings, heat exchangers, and pumps, the complex requirements present interesting design challenges. This paper presents a brief description of the TCWS conceptual design and the critical design issues that need to be resolved.
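
    One of the issues listed above, flow balancing among parallel branch pipelines, can be illustrated with a minimal sketch (not the TCWS design calculation): assuming each branch obeys a quadratic loss law dp = K*Q^2 and all branches share a common header-to-header pressure drop, the flow split follows directly from the total-flow constraint. The loss coefficients and total flow below are hypothetical.

```python
# Minimal sketch: flow split among parallel branches sharing one pressure drop.
import numpy as np

K = np.array([4.0e5, 5.5e5, 7.0e5, 9.0e5])   # Pa/(m^3/s)^2, hypothetical branch loss coefficients
Q_total = 0.040                               # m^3/s total flow to these branches (hypothetical)

# With dp equal across branches, Q_i = sqrt(dp / K_i), so the common dp follows
# from the constraint sum_i Q_i = Q_total.
dp = (Q_total / np.sum(1.0 / np.sqrt(K)))**2
Q = np.sqrt(dp / K)

print(f"common pressure drop: {dp/1e3:.1f} kPa")
for i, q in enumerate(Q):
    print(f"branch {i}: {q*1000:.2f} l/s")
print("check total:", Q.sum())
```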

  6. Development of a Mobile Clinical Prediction Tool to Estimate Future Depression Severity and Guide Treatment in Primary Care: User-Centered Design.

    PubMed

    Wachtler, Caroline; Coe, Amy; Davidson, Sandra; Fletcher, Susan; Mendoza, Antonette; Sterling, Leon; Gunn, Jane

    2018-04-23

    Around the world, depression is both under- and overtreated. The diamond clinical prediction tool was developed to assist with appropriate treatment allocation by estimating the 3-month prognosis among people with current depressive symptoms. Delivering clinical prediction tools in a way that will enhance their uptake in routine clinical practice remains challenging; however, mobile apps show promise in this respect. To increase the likelihood that an app-delivered clinical prediction tool can be successfully incorporated into clinical practice, it is important to involve end users in the app design process. The aim of the study was to maximize patient engagement in an app designed to improve treatment allocation for depression. An iterative, user-centered design process was employed. Qualitative data were collected via 2 focus groups with a community sample (n=17) and 7 semistructured interviews with people with depressive symptoms. The results of the focus groups and interviews were used by the computer engineering team to modify subsequent prototypes of the app. Iterative development resulted in 3 prototypes and a final app. The areas requiring the most substantial changes following end-user input were related to the iconography used and the way that feedback was provided. In particular, communicating risk of future depressive symptoms proved difficult; these messages were consistently misinterpreted and negatively viewed and were ultimately removed. All participants felt positively about seeing their results summarized after completion of the clinical prediction tool, but there was a need for a personalized treatment recommendation made in conjunction with a consultation with a health professional. User-centered design led to valuable improvements in the content and design of an app designed to improve allocation of and engagement in depression treatment. Iterative design allowed us to develop a tool that helps users feel hope, engage in self-reflection, and feel motivated to engage in treatment. The tool is currently being evaluated in a randomized controlled trial. ©Caroline Wachtler, Amy Coe, Sandra Davidson, Susan Fletcher, Antonette Mendoza, Leon Sterling, Jane Gunn. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 23.04.2018.

  7. Characterization of a New Mach 9 Nozzle for the HEAT Hypersonic Wind Tunnel

    NASA Astrophysics Data System (ADS)

    Baccarella, D.; Passaro, A.; Caredda, P.; Cristofolini, A.; Neretti, G.; Granciu, V. M.; Schettino, A.; Battista, F.; D'Ambrosio, D.

    2009-01-01

    A new Mach 9 contoured nozzle for use with air was designed and realized at Alta SpA with the aim of producing a uniform core flow with a diameter of at least 80 mm. The design was carried out iteratively using engineering codes and CFD simulations by CIRA. The characterization activity was carried out by mapping the complete test section in terms of pitot pressure and total enthalpy and by measuring the pressure and heat flux distributions on the nozzle internal walls. The flow upstream of the convergent was characterized by means of total pressure measurements and spectroscopy. A numerical rebuilding of the test was performed by CIRA and PoliTO and compared with the experimental data. The paper briefly describes the design phase and presents all the characterization results.

  8. Collaborating with human factors when designing an electronic textbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ratner, J.A.; Zadoks, R.I.; Attaway, S.W.

    The development of on-line engineering textbooks presents new challenges to authors to effectively integrate text and tools in an electronic environment. By incorporating human factors principles of interface design and cognitive psychology early in the design process, a team at Sandia National Laboratories was able to make the end product more usable and shorten the prototyping and editing phases. A critical issue was the simultaneous development of paper and on-line versions of the textbook. In addition, interface consistency presented difficulties, with distinct goals and limitations for each medium. Many of these problems were resolved swiftly with human factors input, using templates, style guides, and iterative usability testing of both paper and on-line versions. Writing style continuity was also problematic, with numerous authors contributing to the text.

  9. Rocketdyne LOX bearing tester program

    NASA Technical Reports Server (NTRS)

    Keba, J. E.; Beatty, R. F.

    1988-01-01

    The cause, or causes, of the Space Shuttle Main Engine ball bearing wear were unknown; however, several mechanisms were suspected. Two testers were designed and built for operation in liquid oxygen to empirically gain insight into the problems and iterate solutions in a timely and cost-efficient manner, independent of engine testing. Schedules and test plans were developed that defined a test matrix consisting of parametric variations of loading, cooling or vapor margin, cage lubrication, material, and geometry studies. Initial test results indicated that the low pressure pump thrust bearing surface distress is a function of high axial load. Initial high pressure turbopump bearing tests reproduced the wear phenomenon observed in the turbopump and identified an inadequate vapor margin problem and a coolant flowrate sensitivity issue. These tests provided calibration data for analytical model predictions, giving high confidence in the positive impact of future turbopump design modifications for flight. Various modifications will be evaluated in these testers, since similar turbopump conditions can be produced and the benefit of each modification can be quantified in measured wear life comparisons.

  10. ACTIVIS: Visual Exploration of Industry-Scale Deep Neural Network Models.

    PubMed

    Kahng, Minsuk; Andrews, Pierre Y; Kalro, Aditya; Polo Chau, Duen Horng

    2017-08-30

    While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets that they used, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ACTIVIS, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture, and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance- and subset-level. ACTIVIS has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ACTIVIS may work with different models.

  11. Optimizing longwall mine layouts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minkel, M.J.

    1996-12-31

    Before spending the time to design an underground mine in detail, the mining engineer should be assured of the economic viability of the layout's location. This has historically been a trial-and-error, iterative process. Traditional underground mine planning usually bases the layout on the geological characteristics of a deposit, such as minimum seam height, quality, and the absence of faults. Whether one attempts to make a decision manually or uses traditional mine planning software, the process works something like this: First you build a geological model. Then you impose a "best guess" as to which geological layers will become part of the mined product, or will influence mining. Next you place your design where you believe is the best location to make a mine. Then you select equipment which you believe will cost-effectively mine the area. Finally, you schedule your equipment selection through the design over the mine life, run financial analyses, and see if the rate of return is acceptable. If the NPV is acceptable, the design is accepted. If the NPV is not acceptable, the engineer has to restart the cycle of redesigning the layout, rescheduling the equipment, and restudying the economics.
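
    The accept-or-redesign economics step described above can be sketched in a few lines (illustrative only, with made-up cash flows and hurdle rate): compute the net present value of each candidate layout's projected cash flows and flag whether it clears the hurdle.

```python
# Minimal sketch of the NPV accept/redesign loop; all numbers are hypothetical.
def npv(rate, cash_flows):
    """Discounted sum of yearly cash flows; cash_flows[0] is the year-0 outlay."""
    return sum(cf / (1.0 + rate)**t for t, cf in enumerate(cash_flows))

candidate_layouts = {
    "layout_A": [-120e6, 25e6, 30e6, 32e6, 33e6, 30e6, 28e6],
    "layout_B": [-150e6, 20e6, 26e6, 30e6, 31e6, 30e6, 27e6],
}
hurdle_rate = 0.12

for name, flows in candidate_layouts.items():
    value = npv(hurdle_rate, flows)
    verdict = "accept" if value > 0 else "redesign and re-evaluate"
    print(f"{name}: NPV = {value/1e6:.1f} M$  ->  {verdict}")
```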

  12. Fully Automated Detection of Cloud and Aerosol Layers in the CALIPSO Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Vaughan, Mark A.; Powell, Kathleen A.; Kuehn, Ralph E.; Young, Stuart A.; Winker, David M.; Hostetler, Chris A.; Hunt, William H.; Liu, Zhaoyan; McGill, Matthew J.; Getzewich, Brian J.

    2009-01-01

    Accurate knowledge of the vertical and horizontal extent of clouds and aerosols in the earth's atmosphere is critical in assessing the planet's radiation budget and for advancing human understanding of climate change issues. To retrieve this fundamental information from the elastic backscatter lidar data acquired during the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission, a selective, iterated boundary location (SIBYL) algorithm has been developed and deployed. SIBYL accomplishes its goals by integrating an adaptive context-sensitive profile scanner into an iterated multiresolution spatial averaging scheme. This paper provides an in-depth overview of the architecture and performance of the SIBYL algorithm. It begins with a brief review of the theory of target detection in noise-contaminated signals, and an enumeration of the practical constraints levied on the retrieval scheme by the design of the lidar hardware, the geometry of a space-based remote sensing platform, and the spatial variability of the measurement targets. Detailed descriptions are then provided for both the adaptive threshold algorithm used to detect features of interest within individual lidar profiles and the fully automated multiresolution averaging engine within which this profile scanner functions. The resulting fusion of profile scanner and averaging engine is specifically designed to optimize the trade-offs between the widely varying signal-to-noise ratio of the measurements and the disparate spatial resolutions of the detection targets. Throughout the paper, specific algorithm performance details are illustrated using examples drawn from the existing CALIPSO dataset. Overall performance is established by comparisons to existing layer height distributions obtained by other airborne and space-based lidars.
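
    The two ingredients named above, an adaptive detection threshold and multiresolution horizontal averaging, can be illustrated with a simplified sketch (not SIBYL itself): a weak layer buried in single-profile noise becomes detectable once enough profiles are averaged that a threshold set from clear-air noise statistics drops below the layer signal. All signal levels and bin ranges are synthetic.

```python
# Simplified sketch: threshold detection of a weak layer under horizontal averaging.
import numpy as np

rng = np.random.default_rng(1)
n_profiles, n_bins = 64, 200
noise_sigma = 1.0
layer = slice(80, 100)                       # altitude bins containing a weak layer

profiles = rng.normal(0.0, noise_sigma, (n_profiles, n_bins))
profiles[:, layer] += 0.5                    # weak feature, well below single-shot noise

for n_avg in (1, 4, 16, 64):
    mean_profile = profiles[:n_avg].mean(axis=0)
    noise_est = mean_profile[150:].std()     # bins assumed feature-free -> noise estimate
    threshold = 3.0 * noise_est              # adaptive ~3-sigma threshold
    detected = np.where(mean_profile > threshold)[0]
    in_layer = np.count_nonzero((detected >= 80) & (detected < 100))
    print(f"{n_avg:2d} profiles averaged: threshold = {threshold:.2f}, "
          f"layer bins detected = {in_layer}/20")
```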

  13. A high throughput mechanical screening device for cartilage tissue engineering.

    PubMed

    Mohanraj, Bhavana; Hou, Chieh; Meloni, Gregory R; Cosgrove, Brian D; Dodge, George R; Mauck, Robert L

    2014-06-27

    Articular cartilage enables efficient and near-frictionless load transmission, but suffers from poor inherent healing capacity. As such, cartilage tissue engineering strategies have focused on mimicking both compositional and mechanical properties of native tissue in order to provide effective repair materials for the treatment of damaged or degenerated joint surfaces. However, given the large number of design parameters available (e.g. cell sources, scaffold designs, and growth factors), it is difficult to conduct combinatorial experiments of engineered cartilage. This is particularly exacerbated when mechanical properties are a primary outcome, given the long time required for testing of individual samples. High throughput screening is utilized widely in the pharmaceutical industry to rapidly and cost-effectively assess the effects of thousands of compounds for therapeutic discovery. Here we adapted this approach to develop a high throughput mechanical screening (HTMS) system capable of measuring the mechanical properties of up to 48 materials simultaneously. The HTMS device was validated by testing various biomaterials and engineered cartilage constructs and by comparing the HTMS results to those derived from conventional single sample compression tests. Further evaluation showed that the HTMS system was capable of distinguishing and identifying 'hits', or factors that influence the degree of tissue maturation. Future iterations of this device will focus on reducing data variability, increasing force sensitivity and range, as well as scaling-up to even larger (96-well) formats. This HTMS device provides a novel tool for cartilage tissue engineering, freeing experimental design from the limitations of mechanical testing throughput. © 2013 Published by Elsevier Ltd.

  14. Design and optimization of a novel bio-loom to weave melt-spun absorbable polymers for bone tissue engineering.

    PubMed

    Gilmore, Jordon; Burg, Timothy; Groff, Richard E; Burg, Karen J L

    2017-08-01

    Bone graft procedures are currently among the most common surgical procedures performed worldwide, but due to the high risk of complication and lack of viable donor tissue, there exists a need to develop alternatives for bone defect healing. Tissue engineering, for example, combining biocompatible scaffolds with mesenchymal stem cells to achieve new bone growth, is a possible solution. Recent work has highlighted the potential for woven polymer meshes to serve as bone tissue engineering scaffolds, since scaffolds can be iteratively designed by adjusting weave settings, material types, and mesh parameters. However, there are a number of material and system challenges preventing the implementation of such a tissue engineering strategy. Fiber compliance, tensile strength, brittleness, cross-sectional geometry, and size present specific challenges for using traditional textile weaving methods. In the current work, two potential scaffold materials, melt-spun poly-l-lactide and poly-l-lactide-co-ε-caprolactone, were investigated. An automated bio-loom was engineered and built to weave these materials. The bio-loom was used to successfully demonstrate the weaving of these difficult-to-handle fiber types into various mesh configurations and material combinations. The dobby-loom design, adapted with an air jet weft placement system, warp tension control system, and automated collection spool, provides minimal damage to the polymer fibers while overcoming the physical constraints presented by the inherent material structure. © 2016 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 105B: 1342-1351, 2017.

  15. Designing Real-time Decision Support for Trauma Resuscitations

    PubMed Central

    Yadav, Kabir; Chamberlain, James M.; Lewis, Vicki R.; Abts, Natalie; Chawla, Shawn; Hernandez, Angie; Johnson, Justin; Tuveson, Genevieve; Burd, Randall S.

    2016-01-01

    Background Use of electronic clinical decision support (eCDS) has been recommended to improve implementation of clinical decision rules. Many eCDS tools, however, are designed and implemented without taking into account the context in which clinical work is performed. Implementation of the pediatric traumatic brain injury (TBI) clinical decision rule at one Level I pediatric emergency department includes an electronic questionnaire triggered when ordering a head computed tomography using computerized physician order entry (CPOE). Providers use this CPOE tool in less than 20% of trauma resuscitation cases. A human factors engineering approach could identify the implementation barriers that are limiting the use of this tool. Objectives The objective was to design a pediatric TBI eCDS tool for trauma resuscitation using a human factors approach. The hypothesis was that clinical experts will rate a usability-enhanced eCDS tool better than the existing CPOE tool for user interface design and suitability for clinical use. Methods This mixed-methods study followed usability evaluation principles. Pediatric emergency physicians were surveyed to identify barriers to using the existing eCDS tool. Using standard trauma resuscitation protocols, a hierarchical task analysis of pediatric TBI evaluation was developed. Five clinical experts, all board-certified pediatric emergency medicine faculty members, then iteratively modified the hierarchical task analysis until reaching consensus. The software team developed a prototype eCDS display using the hierarchical task analysis. Three human factors engineers provided feedback on the prototype through a heuristic evaluation, and the software team refined the eCDS tool using a rapid prototyping process. The eCDS tool then underwent iterative usability evaluations by the five clinical experts using video review of 50 trauma resuscitation cases. A final eCDS tool was created based on their feedback, with content analysis of the evaluations performed to ensure all concerns were identified and addressed. Results Among 26 EPs (76% response rate), the main barriers to using the existing tool were that the information displayed is redundant and does not fit clinical workflow. After the prototype eCDS tool was developed based on the trauma resuscitation hierarchical task analysis, the human factors engineers rated it to be better than the CPOE tool for nine of 10 standard user interface design heuristics on a three-point scale. The eCDS tool was also rated better for clinical use on the same scale, in 84% of 50 expert–video pairs, and was rated equivalent in the remainder. Clinical experts also rated barriers to use of the eCDS tool as being low. Conclusions An eCDS tool for diagnostic imaging designed using human factors engineering methods has improved perceived usability among pediatric emergency physicians. PMID:26300010

  16. NIH-IEEE 2015 Strategic Conference on Healthcare Innovations and Point-of-Care Technologies for Prec

    Cancer.gov

    NIH and the Institute of Electrical and Electronics Engineers, Engineering in Medicine and Biology Society (IEEE/EMBS) hosted the third iteration of the Healthcare Innovations and Point-of-Care Technologies Conference last week.

  17. Statistical Engineering in Air Traffic Management Research

    NASA Technical Reports Server (NTRS)

    Wilson, Sara R.

    2015-01-01

    NASA is working to develop an integrated set of advanced technologies to enable efficient arrival operations in high-density terminal airspace for the Next Generation Air Transportation System. This integrated arrival solution is being validated and verified in laboratories and transitioned to a field prototype for an operational demonstration at a major U.S. airport. Within NASA, this is a collaborative effort between Ames and Langley Research Centers involving a multi-year iterative experimentation process. Designing and analyzing a series of sequential batch computer simulations and human-in-the-loop experiments across multiple facilities and simulation environments involves a number of statistical challenges. Experiments conducted in separate laboratories typically have different limitations and constraints, and can take different approaches with respect to the fundamental principles of statistical design of experiments. This often makes it difficult to compare results from multiple experiments and incorporate findings into the next experiment in the series. A statistical engineering approach is being employed within this project to support risk-informed decision making and maximize the knowledge gained within the available resources. This presentation describes a statistical engineering case study from NASA, highlights statistical challenges, and discusses areas where existing statistical methodology is adapted and extended.

  18. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
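
    A minimal sketch of the general idea (not the authors' merit functions) is given below: a cheap quadratic surrogate is fit to the sampled points, and the next expensive evaluation minimizes a merit function that trades off the surrogate's predicted objective value against distance from previously sampled points, a simple proxy for improving the quality of the approximation. The objective, weight, and search interval are illustrative.

```python
# Minimal sketch of surrogate-assisted optimization with a merit function.
import numpy as np

def expensive_objective(x):
    return (x - 0.7)**2 + 0.3 * np.sin(8 * x)     # pretend each call is costly

rng = np.random.default_rng(0)
X = list(rng.uniform(0.0, 2.0, 3))                # initial samples
Y = [expensive_objective(x) for x in X]
grid = np.linspace(0.0, 2.0, 401)
w = 0.2                                           # weight on "sample far from existing points"

for it in range(8):
    coeffs = np.polyfit(X, Y, deg=2)              # cheap quadratic surrogate of the objective
    surrogate = np.polyval(coeffs, grid)
    dist = np.min(np.abs(grid[:, None] - np.array(X)[None, :]), axis=1)
    merit = surrogate - w * dist                  # favor low predicted value OR poorly sampled regions
    x_new = grid[np.argmin(merit)]
    X.append(x_new)
    Y.append(expensive_objective(x_new))

best = int(np.argmin(Y))
print(f"best sample after {len(X)} evaluations: x = {X[best]:.3f}, f = {Y[best]:.4f}")
```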

  19. Design the Cost Approach in Trade-Off's for Structural Components, Illustrated on the Baseline Selection of the Engine Thrust Frame of Ariane 5 ESC-B

    NASA Astrophysics Data System (ADS)

    Appolloni, L.; Juhls, A.; Rieck, U.

    2002-01-01

    Designing for value is one of the emerging methods for design optimization that entered the domain of aerospace engineering in the late 90's. Within designing for value, two main design philosophies exist: Design For Cost and Design To Cost. Design To Cost is the iterative redesign of a project until its content meets a given budget. Designing For Cost is the conscious use of engineering process technology to reduce life cycle cost while satisfying, and hopefully exceeding, customer demands. The key to understanding cost, and hence to reducing cost, is the ability to measure cost accurately and to allocate it appropriately to products. Only then can intelligent decisions be made. Hence the need for new methods such as "Design For Value" or "Design For Competitiveness", set up with a generally multidisciplinary approach to find an optimized technical solution driven by many parameters, depending on the mission scenario and the customer/market needs. Very often three, but not more than five, parametric drivers are sufficient. The more variables exist, the higher the risk of finding only a local sub-optimum rather than the global optimum, and the less robust the solution is against changes in the input parameters. When the main parameters for optimization have been identified, the system engineer has to communicate them to all design engineers, who must take these assessment variables into account during the entire design and decision process. The design process that led to the definition of the feasible structural concepts for the Engine Thrust Frame of the Ariane 5 Upper Cryogenic Stage ESC-B follows these design philosophies, combining a design-for-cost approach with a design-to-cost optimization loop. Ariane 5 is the first member of a family of heavy-lift launchers. It aims to evolve into a family of launchers that responds to the space transportation challenges of the 21st century. New upper stages, along with modifications to the main cryogenic stage and solid boosters, will increase performance and meet the demands of a changing market. A two-step approach was decided for future developments of the launcher upper stage in order to increase the payload lift capability of Ariane 5. The first step, ESC-A, is scheduled for first launch in 2002. In a later step, ESC-B shall grow to 12 tons to GTO, with multiple restart capability, i.e. a re-ignitable engine. The Ariane 5 ESC-B first flight is targeted for 2006. It will be loaded with 28 metric tons of liquid oxygen and liquid hydrogen and powered by a new expander cycle engine, "Vinci". The Vinci engine will be connected to the tanks of the ESC-B stage via the structure named by its designers the ETF, or Engine Thrust Frame. In order to develop a design concept for the ETF component, a trade-off was performed based on modern system engineering methodologies. This paper describes the basis of the system engineering approach in the design-to-cost process and illustrates this approach as it was applied during the trade-off for the baseline selection of the Engine Thrust Frame of Ariane 5 ESC-B.

  20. A Phenomenographic Investigation of the Ways Engineering Students Experience Innovation

    NASA Astrophysics Data System (ADS)

    Fila, Nicholas David

    Innovation has become an important phenomenon in engineering and engineering education. By developing novel, feasible, viable, and valued solutions to complex technical and human problems, engineers support the economic competitiveness of organizations, make a difference in the lives of users and other stakeholders, drive societal and scientific progress, and obtain key personal benefits. Innovation is also a complex phenomenon. It occurs across a variety of contexts and domains, encompasses numerous phases and activities, and requires unique competency profiles. Despite this complexity, many studies in engineering education focus on specific aspects (e.g., engineering students' abilities to generate original concepts during idea generation), and we still know little about the variety of ways engineering students approach and understand innovation. This study addresses that gap by asking: 1. What are the qualitatively different ways engineering students experience innovation during their engineering projects? 2. What are the structural relationships between the ways engineering students experience innovation? This study utilized phenomenography, a qualitative research method, to explore the above research questions. Thirty-three engineering students were recruited to ensure thorough coverage along four factors suggested by the literature to support differences related to innovation: engineering project experience, academic major, year in school, and gender. Each participant completed a 1-2 hour, semi-structured interview that focused on experiences with and conceptions of innovation. Whole transcripts were analyzed using an eight-stage, iterative, and comparative approach meant to identify a limited number of categories of description (composite ways of experiencing innovation comprised of the experiences of several participants), and the structural relationships between these categories. Phenomenographic analysis revealed eight categories of description that were structured in a semi-hierarchical, two-dimensional outcome space. The first four categories demonstrated a progression toward greater comprehensiveness in both process and focus dimensions. In the process dimension, subsequent categories added increasingly preliminary innovation phases: idea realization, idea generation, problem scoping, and problem finding. In the focus dimension, subsequent categories added key areas engineers considered during innovation: technical, human, and enterprise. The final four categories each incorporated all previous process phases and focus areas, but prioritized different focus areas in sophisticated ways and acknowledged a macro-iterative cycle, i.e., an understanding of how the processes within a single innovation project built upon and contributed to past and future innovation projects. These results demonstrate important differences between engineering students and suggest how they may come to experience innovation in increasingly comprehensive ways. A framework based on the results can be used by educators and researchers to support more robust educational offerings and nuanced research designs that reflect these differences.

  1. Coupled-Flow Simulation of HP-LP Turbines Has Resulted in Significant Fuel Savings

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    2001-01-01

    Our objective was to create a high-fidelity Navier-Stokes computer simulation of the flow through the turbines of a modern high-bypass-ratio turbofan engine. The simulation would have to capture the aerodynamic interactions between closely coupled high- and low-pressure turbines. A computer simulation of the flow in the GE90 turbofan engine's high-pressure (HP) and low-pressure (LP) turbines was created at GE Aircraft Engines under contract with the NASA Glenn Research Center. The three-dimensional steady-state computer simulation was performed using Glenn's average-passage approach named APNASA. The areas upstream and downstream of each blade row mutually interact with each other during engine operation. The embedded blade row operating conditions are modeled since the average-passage equations in APNASA actively include the effects of the adjacent blade rows. The turbine airfoils, platforms, and casing are actively cooled by compressor bleed air. Hot gas leaks around the tips of rotors through labyrinth seals. The flow exiting the high-work HP turbine is partially transonic and, therefore, has a strong shock system in the transition region. The simulation was done using 121 processors of a Silicon Graphics Origin 2000 (NAS O2K) cluster at the NASA Ames Research Center, with a parallel efficiency of 87 percent in 15 hr. The typical average-passage analysis mesh size per blade row was 280 by 45 by 55, or approximately 700,000 grid points. The total number of blade rows was 18 for the combined HP and LP turbine system, including the struts in the transition duct and the exit guide vane, which contains 12.6 million grid points. Design cycle turnaround time requirements ran typically from 24 to 48 hr of wall clock time. The number of iterations for convergence was 10,000 at 8.03×10^-5 sec/iteration/grid point (NAS O2K). Parallel processing by up to 40 processors is required to meet the design cycle time constraints. This is the first-ever flow simulation of an HP and LP turbine together. In addition, it includes the struts in the transition duct and exit guide vanes.
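
    A quick arithmetic check of the mesh figures quoted above (assuming the stated 280 by 45 by 55 mesh is representative of every blade row; actual per-row meshes presumably vary somewhat):

```python
# Sanity check of the quoted grid counts.
per_row = 280 * 45 * 55   # grid points in one average-passage blade-row mesh
total = 18 * per_row      # 18 blade rows, including struts and exit guide vane
print(per_row)            # 693,000  -- close to the ~700,000 quoted
print(total)              # 12,474,000 -- roughly the 12.6 million quoted
```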

  2. A Huygens immersed-finite-element particle-in-cell method for modeling plasma-surface interactions with moving interface

    NASA Astrophysics Data System (ADS)

    Cao, Huijun; Cao, Yong; Chu, Yuchuan; He, Xiaoming; Lin, Tao

    2018-06-01

    Surface evolution is an unavoidable issue in engineering plasma applications. In this article, an iterative method for modeling plasma-surface interactions with a moving interface is proposed and validated. In this method, the plasma dynamics is simulated by an immersed finite element particle-in-cell (IFE-PIC) method, and the surface evolution is modeled by the Huygens wavelet method, which is coupled with the iteration of the IFE-PIC method. Numerical experiments, including prototypical engineering applications such as the erosion of a Hall thruster channel wall, are presented to demonstrate the features of this Huygens IFE-PIC method for simulating dynamic plasma-surface interactions.
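
    The coupling loop described above can be sketched schematically (this is not the IFE-PIC code): a placeholder "plasma step" supplies an erosion rate along a 1-D wall, and the surface is then advanced by a discrete Huygens construction in which every surface node emits a wavelet of radius rate*dt and the new surface is the envelope on the material side. The flux shape, rates, and time step are invented.

```python
# Schematic sketch of an iterative plasma-step / Huygens surface-advance loop.
import numpy as np

x = np.linspace(0.0, 1.0, 201)           # wall coordinate
y = np.zeros_like(x)                      # wall surface height (material lies below y)

def plasma_step(x, y):
    """Stand-in for a PIC solve: a fixed, peaked ion flux -> local erosion rate."""
    return 1.0e-2 * np.exp(-((x - 0.6) / 0.15)**2)   # erosion rate, hypothetical units

def huygens_advance(x, y, rate, dt):
    """Advance the surface into the material as the envelope of nodal wavelets."""
    r = rate * dt
    y_new = y.copy()
    for j in range(len(x)):
        if r[j] <= 0.0:
            continue
        dx = x - x[j]
        reach = np.abs(dx) <= r[j]                     # nodes touched by this wavelet
        candidate = y[j] - np.sqrt(np.maximum(r[j]**2 - dx[reach]**2, 0.0))
        y_new[reach] = np.minimum(y_new[reach], candidate)
    return y_new

dt = 1.0
for step in range(30):                    # outer plasma / surface-evolution iteration
    rate = plasma_step(x, y)
    y = huygens_advance(x, y, rate, dt)

print(f"maximum erosion depth after 30 steps: {-y.min():.3f}")
```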

  3. The community FabLab platform: applications and implications in biomedical engineering.

    PubMed

    Stephenson, Makeda K; Dow, Douglas E

    2014-01-01

    Skill development in science, technology, engineering and math (STEM) education presents one of the most formidable challenges of modern society. The Community FabLab platform presents a viable solution. Each FabLab contains a suite of modern computer numerical control (CNC) equipment, electronics and computing hardware, and design, programming, computer aided design (CAD) and computer aided machining (CAM) software. FabLabs are community and educational resources and are open to the public. Development of STEM-based workforce skills such as digital fabrication and advanced manufacturing can be enhanced using this platform. Particularly notable is the potential of the FabLab platform in STEM education. The active learning environment engages and supports a diversity of learners, while the iterative learning that is supported by the FabLab rapid prototyping platform facilitates depth of understanding, creativity, innovation and mastery. The product- and project-based learning that occurs in FabLabs develops in the student a personal sense of accomplishment, self-awareness, and command of the material and technology. This helps build the interest and confidence necessary to excel in STEM and throughout life. Finally, the introduction and use of relevant technologies at every stage of the education process ensures technical familiarity and the broad knowledge base needed for work in STEM-based fields. Biomedical engineering education strives to cultivate broad technical adeptness, creativity, interdisciplinary thought, and an ability to form deep conceptual understanding of complex systems. The FabLab platform is well designed to enhance biomedical engineering education.

  4. Optimisation study of a vehicle bumper subsystem with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.

    2012-10-01

    This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. The automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and regarding the environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, this way facilitating the automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system level of failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of the reliability-based design optimisation used in a probabilistic context with statistically defined parameters (variabilities).

  5. ITER Construction—Plant System Integration

    NASA Astrophysics Data System (ADS)

    Tada, E.; Matsuda, S.

    2009-02-01

    This brief paper introduces how ITER will be built through international collaboration. The ITER Organization plays a central role in constructing ITER and leading it into operation. Since most of the ITER components are to be provided in-kind by the member countries, integrated project management must be scoped in advance of the real work. This includes design, procurement, system assembly, testing, licensing and commissioning of ITER.

  6. Development of the ITER magnetic diagnostic set and specification.

    PubMed

    Vayakis, G; Arshad, S; Delhom, D; Encheva, A; Giacomin, T; Jones, L; Patel, K M; Pérez-Lasala, M; Portales, M; Prieto, D; Sartori, F; Simrock, S; Snipes, J A; Udintsev, V S; Watts, C; Winter, A; Zabeo, L

    2012-10-01

    ITER magnetic diagnostics are now in their detailed design and R&D phase. They have passed their conceptual design reviews and a working diagnostic specification has been prepared aimed at the ITER project requirements. This paper highlights specific design progress, in particular, for the in-vessel coils, steady state sensors, saddle loops and divertor sensors. Key changes in the measurement specifications, and a working concept of software and electronics are also outlined.

  7. Front-end antenna system design for the ITER low-field-side reflectometer system using GENRAY ray tracing.

    PubMed

    Wang, G; Doyle, E J; Peebles, W A

    2016-11-01

    A monostatic antenna array arrangement has been designed for the microwave front-end of the ITER low-field-side reflectometer (LFSR) system. This paper presents details of the antenna coupling coefficient analyses performed using GENRAY, a 3-D ray tracing code, to evaluate the plasma height accommodation capability of such an antenna array design. Utilizing modeled data for the plasma equilibrium and profiles for the ITER baseline and half-field scenarios, a design study was performed for measurement locations varying from the plasma edge to inside the top of the pedestal. A front-end antenna configuration is recommended for the ITER LFSR system based on the results of this coupling analysis.

  8. In Praise of Numerical Computation

    NASA Astrophysics Data System (ADS)

    Yap, Chee K.

    Theoretical Computer Science has developed an almost exclusively discrete/algebraic persona. We have effectively shut ourselves off from half of the world of computing: a host of problems in Computational Science & Engineering (CS&E) are defined on the continuum, and, for them, the discrete viewpoint is inadequate. The computational techniques in such problems are well-known to numerical analysis and applied mathematics, but are rarely discussed in theoretical algorithms: iteration, subdivision and approximation. By various case studies, I will indicate how our discrete/algebraic view of computing has many shortcomings in CS&E. We want to embrace the continuous/analytic view, but in a new synthesis with the discrete/algebraic view. I will suggest a pathway, by way of an exact numerical model of computation, that allows us to incorporate iteration and approximation into our algorithms' design. Some recent results give a peek into what this view of algorithmic development might look like, and its distinctive form suggests the name "numerical computational geometry" for such activities.

  9. An iterative method for tri-level quadratic fractional programming problems using fuzzy goal programming approach

    NASA Astrophysics Data System (ADS)

    Kassa, Semu Mitiku; Tsegay, Teklay Hailay

    2017-08-01

    Tri-level optimization problems are optimization problems with three nested hierarchical structures, where in most cases conflicting objectives are set at each level of hierarchy. Such problems are common in management, engineering designs and in decision making situations in general, and are known to be strongly NP-hard. Existing solution methods lack universality in solving these types of problems. In this paper, we investigate a tri-level programming problem with quadratic fractional objective functions at each of the three levels. A solution algorithm has been proposed by applying fuzzy goal programming approach and by reformulating the fractional constraints to equivalent but non-fractional non-linear constraints. Based on the transformed formulation, an iterative procedure is developed that can yield a satisfactory solution to the tri-level problem. The numerical results on various illustrative examples demonstrated that the proposed algorithm is very much promising and it can also be used to solve larger-sized as well as n-level problems of similar structure.

  10. Incorporating prototyping and iteration into intervention development: a case study of a dining hall-based intervention.

    PubMed

    McClain, Arianna D; Hekler, Eric B; Gardner, Christopher D

    2013-01-01

    Previous research from the fields of computer science and engineering highlights the importance of an iterative design process (IDP) to create more creative and effective solutions. This study describes IDP as a new method for developing health behavior interventions and evaluates the effectiveness of a dining hall-based intervention developed using IDP on college students' eating behavior and values. Participants were 458 students (52.6% female, age = 19.6 ± 1.5 years [M ± SD]). The intervention was developed via an IDP parallel process. A cluster-randomized controlled study compared differences in eating behavior among students in 4 university dining halls (2 intervention, 2 control). The final intervention was a multicomponent, point-of-selection marketing campaign. Students in the intervention dining halls consumed significantly less junk food and high-fat meat and increased their perceived importance of eating a healthful diet relative to the control group. IDP may be valuable for the development of behavior change interventions.

  11. Aerodynamic optimization by simultaneously updating flow variables and design parameters

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1990-01-01

    The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
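
    A toy illustration of the simultaneous-update idea (not the paper's schemes): each sweep performs one relaxation update of the "flow" state and one gradient step on the design parameter, instead of fully converging the flow solution before every design update. The state equation, objective, and step size are invented.

```python
# Toy sketch: update the flow iterate and the design parameter in the same sweep.
import math

def state_update(u, p):
    return p + 0.2 * math.sin(u)                  # one relaxation sweep of a toy "flow" solver

def design_sensitivity(u):
    return 1.0 / (1.0 - 0.2 * math.cos(u))        # du*/dp implied by the converged state equation

u_target, lr = 1.5, 0.5
u, p = 0.0, 0.0
for sweep in range(100):
    u = state_update(u, p)                         # advance the flow iterate (not fully converged)
    grad = 2.0 * (u - u_target) * design_sensitivity(u)
    p -= lr * grad                                 # advance the design parameter in the same sweep
    if abs(u - u_target) < 1e-8 and abs(state_update(u, p) - u) < 1e-8:
        break

print(f"converged in {sweep + 1} sweeps: p = {p:.6f}, u = {u:.6f}")
```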

  12. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
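
    A toy illustration of the tuner-selection problem (not the paper's linear point design method): an underdetermined case with three unknown health parameters and two sensors, where each candidate two-parameter tuner subset is scored by the Monte Carlo mean-squared error it yields in the parameter of interest. The sensitivity matrix and noise levels are invented.

```python
# Toy sketch: score candidate Kalman filter tuner subsets by resulting MSE.
import itertools
import numpy as np

rng = np.random.default_rng(42)
H = np.array([[1.0, 0.8, 0.3],
              [0.2, 1.0, 0.9]])           # sensor sensitivities to the 3 health parameters
r = 0.05                                   # measurement noise variance
n_steps, n_runs = 200, 200

def run_filter(subset, theta_true):
    """Constant-parameter Kalman filter estimating only the chosen tuner subset."""
    Hs = H[:, subset]
    x = np.zeros(len(subset))
    P = np.eye(len(subset)) * 10.0
    R = np.eye(2) * r
    for _ in range(n_steps):
        y = H @ theta_true + rng.normal(0.0, np.sqrt(r), 2)   # data from the full 3-parameter truth
        S = Hs @ P @ Hs.T + R
        K = P @ Hs.T @ np.linalg.inv(S)
        x = x + K @ (y - Hs @ x)
        P = (np.eye(len(subset)) - K @ Hs) @ P
    return x

for subset in itertools.combinations(range(3), 2):
    if 0 not in subset:
        continue                           # parameter 0 is the parameter of interest
    errs = []
    for _ in range(n_runs):
        theta_true = rng.normal(0.0, 1.0, 3)
        est = run_filter(list(subset), theta_true)
        errs.append((est[list(subset).index(0)] - theta_true[0])**2)
    print(f"tuners {subset}: MSE in parameter 0 = {np.mean(errs):.3f}")
```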

  13. Directed combinatorial mutagenesis of Escherichia coli for complex phenotype engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Rongming; Liang, Liya; Garst, Andrew D.

    Strain engineering for industrial production requires a targeted improvement of multiple complex traits, which range from pathway flux to tolerance to mixed sugar utilization. Here, we report the use of an iterative CRISPR EnAbled Trackable genome Engineering (iCREATE) method to engineer rapid glucose and xylose co-consumption and tolerance to hydrolysate inhibitors in E. coli. Deep mutagenesis libraries were rationally designed, constructed, and screened to target ~40,000 mutations across 30 genes. These libraries included global and high-level regulators that regulate global gene expression, transcription factors that play important roles in genome-level transcription, enzymes that function in the sugar transport system, NAD(P)H metabolism, and the aldehyde reduction system. Specific mutants that conferred increased growth in mixed sugars and hydrolysate tolerance conditions were isolated, confirmed, and evaluated for changes in genome-wide expression levels. As a result, we tested the strain with positive combinatorial mutations for 3-hydroxypropionic acid (3HP) production under high furfural and high acetate hydrolysate fermentation, which demonstrated a 7- and 8-fold increase in 3HP productivity relative to the parent strain, respectively.

  14. Directed combinatorial mutagenesis of Escherichia coli for complex phenotype engineering

    DOE PAGES

    Liu, Rongming; Liang, Liya; Garst, Andrew D.; ...

    2018-03-29

    Strain engineering for industrial production requires a targeted improvement of multiple complex traits, which range from pathway flux to tolerance to mixed sugar utilization. Here, we report the use of an iterative CRISPR EnAbled Trackable genome Engineering (iCREATE) method to engineer rapid glucose and xylose co-consumption and tolerance to hydrolysate inhibitors in E. coli. Deep mutagenesis libraries were rationally designed, constructed, and screened to target ~40,000 mutations across 30 genes. These libraries included global and high-level regulators that regulate global gene expression, transcription factors that play important roles in genome-level transcription, enzymes that function in the sugar transport system, NAD(P)H metabolism, and the aldehyde reduction system. Specific mutants that conferred increased growth in mixed sugars and hydrolysate tolerance conditions were isolated, confirmed, and evaluated for changes in genome-wide expression levels. As a result, we tested the strain with positive combinatorial mutations for 3-hydroxypropionic acid (3HP) production under high furfural and high acetate hydrolysate fermentation, which demonstrated a 7- and 8-fold increase in 3HP productivity relative to the parent strain, respectively.

  15. Enhanced optical alignment of a digital micro mirror device through Bayesian adaptive exploration

    NASA Astrophysics Data System (ADS)

    Wynne, Kevin B.; Knuth, Kevin H.; Petruccelli, Jonathan

    2017-12-01

    As the use of Digital Micro Mirror Devices (DMDs) becomes more prevalent in optics research, the ability to precisely locate the Fourier "footprint" of an image beam at the Fourier plane becomes a pressing need. In this approach, Bayesian adaptive exploration techniques were employed to characterize the size and position of the beam on a DMD located at the Fourier plane. It couples a Bayesian inference engine with an inquiry engine to implement the search. The inquiry engine explores the DMD by engaging mirrors and recording light intensity values based on the maximization of the expected information gain. Using the data collected from this exploration, the Bayesian inference engine updates the posterior probability describing the beam's characteristics. The process is iterated until the beam is located to within the desired precision. This methodology not only locates the center and radius of the beam with remarkable precision but accomplishes the task in far less time than a brute force search. The employed approach has applications to system alignment for both Fourier processing and coded aperture design.
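
    A simplified one-dimensional sketch of the approach (not the authors' implementation): the unknown is the beam centre along a line of mirror positions, the posterior lives on a grid, the inquiry engine picks the mirror whose Monte Carlo estimate of expected posterior entropy is lowest (i.e., highest expected information gain), and the inference engine updates the posterior with the measured intensity. The beam model, noise level, and grids are invented.

```python
# Simplified 1-D sketch of Bayesian adaptive exploration for beam localization.
import numpy as np

rng = np.random.default_rng(3)
grid = np.linspace(0.0, 1.0, 201)         # candidate beam-centre positions
radius, sigma = 0.08, 0.15                # beam half-width and intensity noise (invented)
true_centre = 0.63

def expected_intensity(centre, mirror):
    return np.where(np.abs(mirror - centre) <= radius, 1.0, 0.0)

def likelihood(y, mirror):
    mu = expected_intensity(grid, mirror)
    return np.exp(-0.5 * ((y - mu) / sigma)**2)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

posterior = np.full(grid.size, 1.0 / grid.size)
mirrors = np.linspace(0.05, 0.95, 19)      # mirror positions we may engage

for step in range(6):
    # Inquiry engine: score each candidate mirror by Monte Carlo expected posterior entropy.
    scores = []
    for m in mirrors:
        h = 0.0
        for _ in range(30):
            c = rng.choice(grid, p=posterior)                      # hypothetical centre
            y = expected_intensity(c, m) + rng.normal(0.0, sigma)  # simulated outcome
            post = posterior * likelihood(y, m)
            post /= post.sum()
            h += entropy(post)
        scores.append(h / 30)
    m_best = mirrors[int(np.argmin(scores))]

    # Inference engine: measure at the chosen mirror and update the posterior.
    y_obs = expected_intensity(true_centre, m_best) + rng.normal(0.0, sigma)
    posterior = posterior * likelihood(y_obs, m_best)
    posterior /= posterior.sum()
    estimate = grid[int(np.argmax(posterior))]
    print(f"step {step}: engaged mirror at {m_best:.2f}, centre estimate {estimate:.3f}")
```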

  16. A Methodology for Improving Active Learning Engineering Courses with a Large Number of Students and Teachers through Feedback Gathering and Iterative Refinement

    ERIC Educational Resources Information Center

    Estévez-Ayres, Iria; Alario-Hoyos, Carlos; Pérez-Sanagustín, Mar; Pardo, Abelardo; Crespo-García, Raquel M.; Leony, Derick; Parada G., Hugo A.; Delgado-Kloos, Carlos

    2015-01-01

    In the last decade, engineering education has evolved in many ways to meet society demands. Universities offer more flexible curricula and put a lot of effort on the acquisition of professional engineering skills by the students. In many universities, the courses in the first years of different engineering degrees share program and objectives,…

  17. An iterative analytical technique for the design of interplanetary direct transfer trajectories including perturbations

    NASA Astrophysics Data System (ADS)

    Parvathi, S. P.; Ramanan, R. V.

    2018-06-01

    An iterative analytical trajectory design technique that includes perturbations in the departure phase of the interplanetary orbiter missions is proposed. The perturbations such as non-spherical gravity of Earth and the third body perturbations due to Sun and Moon are included in the analytical design process. In the design process, first the design is obtained using the iterative patched conic technique without including the perturbations and then modified to include the perturbations. The modification is based on, (i) backward analytical propagation of the state vector obtained from the iterative patched conic technique at the sphere of influence by including the perturbations, and (ii) quantification of deviations in the orbital elements at periapsis of the departure hyperbolic orbit. The orbital elements at the sphere of influence are changed to nullify the deviations at the periapsis. The analytical backward propagation is carried out using the linear approximation technique. The new analytical design technique, named as biased iterative patched conic technique, does not depend upon numerical integration and all computations are carried out using closed form expressions. The improved design is very close to the numerical design. The design analysis using the proposed technique provides a realistic insight into the mission aspects. Also, the proposed design is an excellent initial guess for numerical refinement and helps arrive at the four distinct design options for a given opportunity.
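
    The correction idea, adjusting the conditions at the sphere of influence until the perturbed backward propagation reproduces the desired periapsis conditions, can be sketched generically (with toy algebraic maps standing in for the actual trajectory equations, which are not reproduced here):

```python
# Generic sketch of nulling periapsis deviations by correcting SOI conditions.
import numpy as np

def unperturbed_map(x):
    # stand-in for the patched-conic relation between SOI and periapsis conditions
    return np.array([1.2 * x[0] + 0.1 * x[1], 0.2 * x[0] + 0.9 * x[1]])

def perturbed_map(x):
    # stand-in for backward propagation including non-spherical gravity and third bodies
    return unperturbed_map(x) + 0.03 * np.array([np.sin(x[1]), x[0]**2])

target = np.array([1.0, 0.5])             # desired periapsis conditions (illustrative)

# Step 1: unperturbed (patched-conic) design.
x = np.linalg.solve(np.array([[1.2, 0.1], [0.2, 0.9]]), target)

# Step 2: bias the SOI conditions until the perturbed propagation hits the target.
for it in range(20):
    dev = perturbed_map(x) - target
    if np.linalg.norm(dev) < 1e-10:
        break
    J = np.empty((2, 2))                  # finite-difference Jacobian of the perturbed map
    for j in range(2):
        dx = np.zeros(2); dx[j] = 1e-6
        J[:, j] = (perturbed_map(x + dx) - perturbed_map(x - dx)) / 2e-6
    x = x - np.linalg.solve(J, dev)

print(f"corrected SOI conditions: {x}, residual deviation: "
      f"{np.linalg.norm(perturbed_map(x) - target):.2e}")
```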

  18. Topology-optimized metasurfaces: impact of initial geometric layout.

    PubMed

    Yang, Jianji; Fan, Jonathan A

    2017-08-15

    Topology optimization is a powerful iterative inverse design technique in metasurface engineering and can transform an initial layout into a high-performance device. With this method, devices are optimized within a local design phase space, making the identification of suitable initial geometries essential. In this Letter, we examine the impact of initial geometric layout on the performance of large-angle (75 deg) topology-optimized metagrating deflectors. We find that when conventional metasurface designs based on dielectric nanoposts are used as initial layouts for topology optimization, the final devices have efficiencies around 65%. In contrast, when random initial layouts are used, the final devices have ultra-high efficiencies that can reach 94%. Our numerical experiments suggest that device topologies based on conventional metasurface designs may not be suitable to produce ultra-high-efficiency, large-angle metasurfaces. Rather, initial geometric layouts with non-trivial topologies and shapes are required.

  19. A New High-Speed Oil-Free Turbine Engine Rotordynamic Simulator Test Rig

    NASA Technical Reports Server (NTRS)

    Howard, Samuel A.

    2007-01-01

    A new test rig has been developed for simulating high-speed turbomachinery rotor systems using Oil-Free foil air bearing technology. Foil air bearings have been used in turbomachinery, primarily air cycle machines, for the past four decades to eliminate the need for oil lubrication. The goal of applying this bearing technology to other classes of turbomachinery has prompted the fabrication of this test rig. The facility gives bearing designers the capability to test potential bearing designs with shafts that simulate the rotating components of a target machine without the high cost of building "make-and-break" hardware. The data collected from this rig can be used to make design changes to the shaft and bearings in subsequent design iterations. This paper describes the new test rig and demonstrates its capabilities through the initial run with a simulated shaft system.

  20. Design Optimization Programmable Calculators versus Campus Computers.

    ERIC Educational Resources Information Center

    Savage, Michael

    1982-01-01

    A hypothetical design optimization problem and technical information on the three design parameters are presented. Although this nested iteration problem can be solved on a computer (flow diagram provided), this article suggests that several hand held calculators can be used to perform the same design iteration. (SK)

  1. The Applied Mathematics for Power Systems (AMPS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chertkov, Michael

    2012-07-24

    Increased deployment of new technologies, e.g., renewable generation and electric vehicles, is rapidly transforming electrical power networks by crossing previously distinct spatiotemporal scales and invalidating many traditional approaches for designing, analyzing, and operating power grids. This trend is expected to accelerate over the coming years, bringing the disruptive challenge of complexity, but also opportunities to deliver unprecedented efficiency and reliability. Our Applied Mathematics for Power Systems (AMPS) Center will discover, enable, and solve emerging mathematics challenges arising in power systems and, more generally, in complex engineered networks. We will develop foundational applied mathematics resulting in rigorous algorithms and simulation toolboxes for modern and future engineered networks. The AMPS Center deconstruction/reconstruction approach 'deconstructs' complex networks into sub-problems within non-separable spatiotemporal scales, a missing step in 20th century modeling of engineered networks. These sub-problems are addressed within the appropriate AMPS foundational pillar - complex systems, control theory, and optimization theory - and merged or 'reconstructed' at their boundaries into more general mathematical descriptions of complex engineered networks where important new questions are formulated and attacked. These two steps, iterated multiple times, will bridge the growing chasm between the legacy power grid and its future as a complex engineered network.

  2. The Design Implementation Framework: Iterative Design from the Lab to the Classroom

    ERIC Educational Resources Information Center

    Stone, Melissa L.; Kent, Kevin M.; Roscoe, Rod D.; Corley, Kathleen M.; Allen, Laura K.; McNamara, Danielle S.

    2017-01-01

    This chapter explores three broad principles of user-centered design methodologies: participatory design, iteration, and usability considerations. The authors highlight the importance of considering teachers as a prominent type of ITS end user, by describing the barriers teachers face as users and their role in educational technology design. To…

  3. Progress in the Design and Development of the ITER Low-Field Side Reflectometer (LFSR) System

    NASA Astrophysics Data System (ADS)

    Doyle, E. J.; Wang, G.; Peebles, W. A.; US LFSR Team

    2015-11-01

    The US has formed a team, comprised of personnel from PPPL, ORNL, GA and UCLA, to develop the LFSR system for ITER. The LFSR system will contribute to the measurement of a number of plasma parameters on ITER, including edge plasma electron density profiles, will monitor Edge Localized Modes (ELMs) and L-H transitions, and will provide physics measurements relating to high frequency instabilities, plasma flows, and other density transients. An overview of the status of design activities and component testing for the system will be presented. Since the 2011 conceptual design review, the number of microwave transmission lines (TLs) and antennas has been reduced from twelve (12) to seven (7) due to space constraints in the ITER Tokamak Port Plug. This change has required a reconfiguration and recalculation of the performance of the front-end antenna design, which now includes use of monostatic transmission lines and antennas. Work supported by US ITER/PPPL Subcontracts S013252-C and S012340, and PO 4500051400 from GA to UCLA.

  4. Concepts for the magnetic design of the MITICA neutral beam test facility ion accelerator.

    PubMed

    Chitarin, G; Agostinetti, P; Marconato, N; Marcuzzi, D; Sartori, E; Serianni, G; Sonato, P

    2012-02-01

    The megavolt ITER injector and concept advancement (MITICA) neutral injector test facility will consist of an RF-driven negative ion source and an electrostatic accelerator designed to produce a negative ion beam with a specific energy of up to 1 MeV. The beam is then neutralised in order to obtain a focused 17 MW neutral beam. The magnetic configuration inside the accelerator is of crucial importance for the achievement of a good beam efficiency, with the early deflection of the co-extracted and stripped electrons, and also of the required beam optics quality, with the correction of undesired ion beamlet deflections. Several alternative magnetic design concepts have been considered, comparing in detail the magnetic and beam optics simulation results and evidencing the advantages and drawbacks of each solution from both the physics and engineering points of view.

  5. Conceptual Design and Analysis of Cold Mass Support of the CS3U Feeder for the ITER

    NASA Astrophysics Data System (ADS)

    Zhu, Yinfeng; Song, Yuntao; Zhang, Yuanbin; Wang, Zhongwei

    2013-06-01

    In the International Thermonuclear Experimental Reactor (ITER) project, the feeders are one of the most important and critical systems. To convey the power supply and the coolant for the central solenoid (CS) magnet, 6 sets of CS feeders are employed, which consist mainly of an in-cryostat feeder (ICF), a cryostat feed-through (CFT), an S-bend box (SBB), and a coil terminal box (CTB). To compensate for the displacements of the internal components of the CS feeders during operation, sliding cold mass supports consisting of a sled plate, a cylindrical support, a thermal shield, and an external ring are developed. To check the strength of the developed cold mass supports of the CS3U feeder, electromagnetic analysis of the two superconducting busbars is performed using the CATIA V5 and ANSYS codes based on parametric technology. Furthermore, a thermal-structural coupling analysis is performed based on the obtained results; except for local stress concentrations, the maximum stress intensity is lower than the allowable stress of the selected material. It is found that the conceptual design of the cold mass support can satisfy the required functions under the worst case of normal working conditions. All these activities provide a firm technical basis for the engineering design and development of the cold mass supports.

  6. Accuracy Quantification of the Loci-CHEM Code for Chamber Wall Heat Transfer in a GO2/GH2 Single Element Injector Model Problem

    NASA Technical Reports Server (NTRS)

    West, Jeff; Westra, Doug; Lin, Jeff; Tucker, Kevin

    2006-01-01

    A robust rocket engine combustor design and development process must include tools which can accurately predict the multi-dimensional thermal environments imposed on solid surfaces by the hot combustion products. Currently, empirical methods used in the design process are typically one dimensional and do not adequately account for the heat flux rise rate in the near-injector region of the chamber. Computational Fluid Dynamics holds promise to meet the design tool requirement, but requires accuracy quantification, or validation, before it can be confidently applied in the design process. This effort presents the beginning of such a validation process for the Loci-CHEM CFD code. The model problem examined here is a gaseous oxygen (GO2)/gaseous hydrogen (GH2) shear coaxial single element injector operating at a chamber pressure of 5.42 MPa. The GO2/GH2 propellant combination in this geometry represents one of the simplest rocket model problems and is thus foundational to subsequent validation efforts for more complex injectors. Multiple steady state solutions have been produced with Loci-CHEM employing different hybrid grids and two-equation turbulence models. Iterative convergence for each solution is demonstrated via mass conservation, flow variable monitoring at discrete flow field locations as a function of solution iteration and overall residual performance. A baseline hybrid grid was used and then locally refined to demonstrate grid convergence. Solutions were obtained with three variations of the k-omega turbulence model.

  20. Accuracy Quantification of the Loci-CHEM Code for Chamber Wall Heat Fluxes in a GO2/GH2 Single Element Injector Model Problem

    NASA Technical Reports Server (NTRS)

    West, Jeff; Westra, Doug; Lin, Jeff; Tucker, Kevin

    2006-01-01

    A robust rocket engine combustor design and development process must include tools which can accurately predict the multi-dimensional thermal environments imposed on solid surfaces by the hot combustion products. Currently, empirical methods used in the design process are typically one dimensional and do not adequately account for the heat flux rise rate in the near-injector region of the chamber. Computational Fluid Dynamics holds promise to meet the design tool requirement, but requires accuracy quantification, or validation, before it can be confidently applied in the design process. This effort presents the beginning of such a validation process for the Loci-CHEM CFD code. The model problem examined here is a gaseous oxygen (GO2)/gaseous hydrogen (GH2) shear coaxial single element injector operating at a chamber pressure of 5.42 MPa. The GO2/GH2 propellant combination in this geometry represents one of the simplest rocket model problems and is thus foundational to subsequent validation efforts for more complex injectors. Multiple steady state solutions have been produced with Loci-CHEM employing different hybrid grids and two-equation turbulence models. Iterative convergence for each solution is demonstrated via mass conservation, flow variable monitoring at discrete flow field locations as a function of solution iteration and overall residual performance. A baseline hybrid grid was used and then locally refined to demonstrate grid convergence. Solutions were also obtained with three variations of the k-omega turbulence model.

  8. IDC Re-Engineering Phase 2 Glossary Version 1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, Christopher J.; Harris, James M.

    2017-01-01

    This document contains the glossary of terms used for the IDC Re-Engineering Phase 2 project. This version was created for Iteration E3. The IDC applies automatic processing methods in order to produce, archive, and distribute standard IDC products on behalf of all States Parties.

  9. Improving Access to Care for Warfighters: Virtual Worlds Technology to Enhance Primary Care Training in Post-Traumatic Stress and Motivational Interviewing

    DTIC Science & Technology

    2017-10-01

    chronic mental and physical health problems. Therefore, the project aims to: (1) iteratively design a new web-based PTS and Motivational Interviewing...

  10. Developing stochastic model of thrust and flight dynamics for small UAVs

    NASA Astrophysics Data System (ADS)

    Tjhai, Chandra

    This thesis presents a stochastic thrust model and aerodynamic model for small propeller-driven UAVs whose power plant is a small electric motor. First, a model which gives the thrust generated by a small propeller-driven electric motor as a function of throttle setting and commanded engine RPM is developed. A perturbation of this model is then used to relate the uncertainty in the throttle and engine RPM commanded to the error in the predicted thrust. Such a stochastic model is indispensable in the design of state estimation and control systems for UAVs where the performance requirements of the systems are specified in stochastic terms. It is shown that thrust prediction models for small UAVs are not simple, explicit functions relating throttle input and RPM command to thrust generated. Rather, they are non-linear, iterative procedures which depend on a geometric description of the propeller and a mathematical model of the motor. A detailed derivation of the iterative procedure is presented and the impact of errors which arise from inaccurate propeller and motor descriptions is discussed. Validation results from a series of wind tunnel tests are presented. The results show a favorable statistical agreement between the thrust uncertainty predicted by the model and the errors measured in the wind tunnel. The uncertainty model of aircraft aerodynamic coefficients developed based on wind tunnel experiments will be discussed at the end of this thesis.
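
    The "non-linear, iterative procedure" character of such thrust models can be sketched with a minimal example: an equilibrium RPM is found where the torque available from a simple DC-motor approximation balances the torque absorbed by the propeller, and thrust then follows from a propeller thrust coefficient. All constants (propeller coefficients, motor parameters, battery voltage) are invented for illustration and do not come from the thesis.

      import numpy as np

      RHO = 1.225            # air density, kg/m^3
      D = 0.25               # propeller diameter, m
      CT, CQ = 0.10, 0.005   # notional thrust and torque coefficients (rev/s basis)
      KV = 900.0             # motor speed constant, rpm per volt
      R_MOTOR = 0.12         # winding resistance, ohm

      def motor_torque(rpm, throttle, v_batt=11.1):
          # simple DC-motor approximation: Q = Kt * i, with i = (V - back-EMF) / R
          kt = 60.0 / (2.0 * np.pi * KV)                 # N*m per ampere
          current = max((throttle * v_batt - rpm / KV) / R_MOTOR, 0.0)
          return kt * current

      def prop_torque(rpm):
          n = rpm / 60.0                                 # revolutions per second
          return CQ * RHO * n**2 * D**5

      def equilibrium_thrust(throttle, rpm_lo=0.0, rpm_hi=20000.0, iters=60):
          # bisection on the torque balance: motor torque falls, propeller torque rises with RPM
          for _ in range(iters):
              rpm = 0.5 * (rpm_lo + rpm_hi)
              if motor_torque(rpm, throttle) > prop_torque(rpm):
                  rpm_lo = rpm
              else:
                  rpm_hi = rpm
          n = rpm / 60.0
          return CT * RHO * n**2 * D**4, rpm

      thrust, rpm = equilibrium_thrust(throttle=0.7)
      print(f"predicted thrust {thrust:.2f} N at {rpm:.0f} rpm")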

  11. On the safety of ITER accelerators.

    PubMed

    Li, Ge

    2013-01-01

    Three 1 MV/40A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. They will generate -1 MV 1 h long-pulse ion beams to be neutralised for plasma heating. Due to frequently occurring vacuum sparking in the accelerators, the snubbers are used to limit the fault arc current to improve ITER safety. However, recent analyses of its reference design have raised concerns. General nonlinear transformer theory is developed for the snubber to unify the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER.

  12. On the safety of ITER accelerators

    PubMed Central

    Li, Ge

    2013-01-01

    Three 1 MV/40A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. They will generate −1 MV 1 h long-pulse ion beams to be neutralised for plasma heating. Due to frequently occurring vacuum sparking in the accelerators, the snubbers are used to limit the fault arc current to improve ITER safety. However, recent analyses of its reference design have raised concerns. General nonlinear transformer theory is developed for the snubber to unify the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER. PMID:24008267

  13. Next Generation Wind Turbine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheraghi, S. Hossein; Madden, Frank

    The goal of this collaborative effort between Western New England University's College of Engineering and FloDesign Wind Turbine (FDWT) Corporation was to work on a novel aerodynamic concept that could potentially lead to the next generation of wind turbines. Analytical studies and early scale model tests of FDWT's Mixer/Ejector Wind Turbine (MEWT) concept, which exploits jet-age advanced fluid dynamics, indicate that the concept has the potential to significantly reduce the cost of electricity over conventional Horizontal Axis Wind Turbines while reducing land usage. This project involved the design, fabrication, and wind tunnel testing of components of MEWT to provide the research and engineering data necessary to validate the design iterations and optimize system performance. Based on these tests, a scale model prototype called Briza was designed, fabricated, installed and tested on a portable tower to investigate and improve the design system in real world conditions. The results of these scale prototype efforts were very promising and have contributed significantly to FDWT's ongoing development of a product scale wind turbine for deployment in multiple locations around the U.S. This research was mutually beneficial to Western New England University, FDWT, and the DOE by utilizing over 30 student interns and a number of faculty in all efforts. It brought real-world wind turbine experience into the classroom to further enhance the Green Engineering Program at WNEU. It also provided on-the-job training to many students, improving their future employment opportunities, while also providing valuable information to further advance FDWT's mixer-ejector wind turbine technology, creating opportunities for future project innovation and job creation.

  14. Gaussian beam and physical optics iteration technique for wideband beam waveguide feed design

    NASA Technical Reports Server (NTRS)

    Veruttipong, W.; Chen, J. C.; Bathker, D. A.

    1991-01-01

    The Gaussian beam technique has become increasingly popular for wideband beam waveguide (BWG) design. However, it is observed that the Gaussian solution is less accurate for smaller mirrors (approximately less than 30 lambda in diameter). Therefore, a high-performance wideband BWG design cannot be achieved by using the Gaussian beam technique alone. This article demonstrates a new design approach by iterating Gaussian beam and BWG parameters simultaneously at various frequencies to obtain a wideband BWG. The result is further improved by comparing it with physical optics results and repeating the iteration.
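
    A compact way to picture the iteration described above is the loop below: a mirror focal length is adjusted until the Gaussian-beam output waist, computed with standard ABCD complex-beam-parameter propagation, lands at a target plane, with an optional correction term standing in for the physical optics comparison. The geometry, wavelength and the damped update rule are assumptions chosen only to make the sketch self-contained; they are not the BWG parameters of the article.

      import numpy as np

      def propagate_q(q, d):
          # free-space ABCD matrix [[1, d], [0, 1]] acting on the complex beam parameter
          return q + d

      def mirror_q(q, f):
          # thin focusing mirror ABCD matrix [[1, 0], [-1/f, 1]]
          return q / (1.0 - q / f)

      def output_waist_distance(f, d_in, w_in, lam):
          q0 = 1j * np.pi * w_in**2 / lam          # beam waist at the input plane
          q = mirror_q(propagate_q(q0, d_in), f)
          return -q.real                           # distance from mirror to the new waist

      def design_focal_length(d_in, d_out_target, w_in, lam, po_correction=0.0):
          f = d_in / 2.0                           # crude starting guess
          for _ in range(100):
              err = output_waist_distance(f, d_in, w_in, lam) + po_correction - d_out_target
              if abs(err) < 1e-6:
                  break
              f -= 0.1 * err                       # damped update toward the target waist plane
          return f

      print(design_focal_length(d_in=2.0, d_out_target=2.5, w_in=0.05, lam=0.01))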

  15. Design, fabrication and control of origami robots

    NASA Astrophysics Data System (ADS)

    Rus, Daniela; Tolley, Michael T.

    2018-06-01

    Origami robots are created using folding processes, which provide a simple approach to fabricating a wide range of robot morphologies. Inspired by biological systems, engineers have started to explore origami folding in combination with smart material actuators to enable intrinsic actuation as a means to decouple design from fabrication complexity. The built-in crease structure of origami bodies has the potential to yield compliance and exhibit many soft body properties. Conventional fabrication of robots is generally a bottom-up assembly process with multiple low-level steps for creating subsystems that include manual operations and often multiple iterations. By contrast, natural systems achieve elegant designs and complex functionalities using top-down parallel transformation approaches such as folding. Folding in nature creates a wide spectrum of complex morpho-functional structures such as proteins and intestines and enables the development of structures such as flowers, leaves and insect wings. Inspired by nature, engineers have started to explore folding powered by embedded smart material actuators to create origami robots. The design and fabrication of origami robots exploits top-down, parallel transformation approaches to achieve elegant designs and complex functionalities. In this Review, we first introduce the concept of origami robotics and then highlight advances in design principles, fabrication methods, actuation, smart materials and control algorithms. Applications of origami robots for a variety of devices are investigated, and future directions of the field are discussed, examining both challenges and opportunities.

  16. Low-Engine-Friction Technology for Advanced Natural-Gas Reciprocating Engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Victor Wong; Tian Tian; G. Smedley

    This program aims at improving the efficiency of advanced natural-gas reciprocating engines (ANGRE) by reducing piston and piston ring assembly friction without major adverse effects on engine performance, such as increased oil consumption and wear. An iterative process of simulation, experimentation and analysis has been followed towards achieving the goal of demonstrating a complete optimized low-friction engine system. In this program, a detailed set of piston and piston-ring dynamic and friction models have been adapted and applied that illustrate the fundamental relationships among mechanical, surface/material and lubricant design parameters and friction losses. Demonstration of low-friction ring-pack designs in the Waukesha VGF 18GL engine confirmed ring-pack friction reduction of 30-40%, which translates to total engine FMEP (friction mean effective pressure) reduction of 7-10% from the baseline configuration without significantly increasing oil consumption or blow-by flow. The study on surface textures, including roughness characteristics, cross hatch patterns, dimples and grooves, has shown that even relatively small-scale changes can have a large effect on ring/liner friction, in some cases reducing FMEP by as much as 30% from a smooth surface case. The measured FMEP reductions were in good agreement with the model predictions. The combined analysis of lubricant and surface design indicates that low-viscosity lubricants can be very effective in reducing friction, subject to component wear for extremely thin oils, which can be mitigated with further lubricant formulation and/or engineered surfaces. Hence a combined approach of lubricant design and appropriate wear reduction offers improved potential for minimum engine friction loss. Testing of low-friction lubricants showed that total engine FMEP reduced by up to approximately 16.5% from the commercial reference oil without significantly increasing oil consumption or blow-by flow. Piston friction studies indicate that a flatter piston with a more flexible skirt, together with optimizing the waviness and film thickness on the piston skirt, offer significant friction reduction. Combined with low-friction ring-pack, material and lubricant parameters, a total power cylinder friction reduction of 30-50% is expected, translating to an engine efficiency increase of two percentage points from its current baseline towards the goal of 50% ARES engine efficiency. The design strategies developed in this study have promising potential for application in all modern reciprocating engines as they represent simple, low-cost methods to extract significant fuel savings. The current program has possible spinoffs and applications in other industries as well, including transportation, CHP, and diesel power generation. The progress made in this program has wide engine efficiency implications, and potential deployment of low-friction engine components or lubricants in the near term is quite possible.

  17. Development and Evaluation of an Intuitive Operations Planning Process

    DTIC Science & Technology

    2006-03-01

    designed to be iterative and also prescribes the way in which iterations should occur. On the other hand, participants’ perceived level of trust and...

  18. Iterative optimization method for design of quantitative magnetization transfer imaging experiments.

    PubMed

    Levesque, Ives R; Sled, John G; Pike, G Bruce

    2011-09-01

    Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.
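
    The iterative reduction idea can be sketched generically: starting from a dense candidate set of measurement points, the point whose removal costs the least D-optimality (determinant of the Fisher information) is discarded on each pass until the measurement budget is met. The two-parameter exponential signal model below is a stand-in for the quantitative magnetization transfer model, so the sketch only illustrates the reduction loop, not the published design.

      import numpy as np

      def jacobian(points, a=1.0, k=0.5):
          # toy model s(x; a, k) = a * exp(-k * x); columns are ds/da and ds/dk
          x = np.asarray(points, dtype=float)
          return np.column_stack([np.exp(-k * x), -a * x * np.exp(-k * x)])

      def d_optimality(points):
          J = jacobian(points)
          return np.linalg.det(J.T @ J)

      def reduce_design(candidates, n_keep):
          design = list(candidates)
          while len(design) > n_keep:
              # drop the point whose removal degrades det(F) the least
              losses = [d_optimality(design[:i] + design[i + 1:]) for i in range(len(design))]
              design.pop(int(np.argmax(losses)))
          return design

      dense = list(np.linspace(0.1, 10.0, 40))
      print(reduce_design(dense, n_keep=5))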

  19. Human factors engineering and design validation for the redesigned follitropin alfa pen injection device.

    PubMed

    Mahony, Mary C; Patterson, Patricia; Hayward, Brooke; North, Robert; Green, Dawne

    2015-05-01

    To demonstrate, using human factors engineering (HFE), that a redesigned, pre-filled, ready-to-use, pre-assembled follitropin alfa pen can be used to administer prescribed follitropin alfa doses safely and accurately. A failure modes and effects analysis identified hazards and harms potentially caused by use errors; risk-control measures were implemented to ensure acceptable device use risk management. Participants were women with infertility, their significant others, and fertility nurse (FN) professionals. Preliminary testing included 'Instructions for Use' (IFU) and pre-validation studies. Validation studies used simulated injections in a representative use environment; participants received prior training on pen use. User performance in preliminary testing led to IFU revisions and a change to the outer needle cap design to mitigate needle stick potential. In the first validation study (49 users, 343 simulated injections), in the FN group, one observed critical use error resulted in a device design modification and another in an IFU change. A second validation study tested the mitigation strategies; previously reported use errors were not repeated. Through an iterative process involving a series of studies, modifications were made to the pen design and IFU. Simulated-use testing demonstrated that the redesigned pen can be used to administer follitropin alfa effectively and safely.

  20. Investigation of REST-Class Hypersonic Inlet Designs

    NASA Technical Reports Server (NTRS)

    Gollan, Rowan; Ferlemann, Paul G.

    2011-01-01

    Rectangular-to-elliptical shape-transition (REST) inlets are of interest for use on scramjet engines because they are efficient and integrate well with the forebody of a planar vehicle. The classic design technique by Smart for these inlets produces an efficient inlet but the complex three-dimensional viscous effects are only approximately included. Certain undesirable viscous features often occur in these inlets. In the present work, a design toolset has been developed which allows for rapid design of REST-class inlet geometries and the subsequent Navier-Stokes analysis of the inlet performance. This gives the designer feedback on the complex viscous effects at each design iteration. This new tool is applied to design an inlet for on-design operation at Mach 8. The tool allows for rapid investigation of design features that was previously not possible. The outcome is that the inlet shape can be modified to affect aspects of the flow field in a positive way. In one particular example, the boundary layer build-up on the bodyside of the inlet was reduced by 20% of the thickness associated with the classically designed inlet shape.

  1. Development of a Twin-spool Turbofan Engine Simulation Using the Toolbox for Modeling and Analysis of Thermodynamic Systems (T-MATS)

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia M.; Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.

    2014-01-01

    The Toolbox for Modeling and Analysis of Thermodynamic Systems (T-MATS) is a tool that has been developed to allow a user to build custom models of systems governed by thermodynamic principles using a template to model each basic process. Validation of this tool in an engine model application was performed through reconstruction of the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) (v2) using the building blocks from the T-MATS (v1) library. In order to match the two engine models, it was necessary to address differences in several assumptions made in the two modeling approaches. After these modifications were made, validation of the engine model continued by integrating both a steady-state and dynamic iterative solver with the engine plant and comparing results from steady-state and transient simulation of the T-MATS and C-MAPSS models. The results show that the T-MATS engine model was accurate within 3 percent of the C-MAPSS model, with inaccuracy attributed to the increased dimension of the iterative solver solution space required by the engine model constructed using the T-MATS library. This demonstrates that, given an understanding of the modeling assumptions made in T-MATS and a baseline model, the T-MATS tool provides a viable option for constructing a computational model of a twin-spool turbofan engine that may be used in simulation studies.
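
    The role of the iterative solver mentioned above can be illustrated with a generic damped Newton iteration that drives a set of engine balance residuals to zero using a finite-difference Jacobian. T-MATS implements its solvers as Simulink blocks; the stand-in residual equations and solver settings below are assumptions made only to show the structure of such a steady-state balance.

      import numpy as np

      def residuals(x):
          # stand-in balance equations (e.g. flow continuity and shaft power match)
          w, n = x
          return np.array([w**2 + n - 3.0,
                           w - n**2 + 1.0])

      def fd_jacobian(f, x, eps=1e-6):
          f0 = f(x)
          J = np.zeros((len(f0), len(x)))
          for j in range(len(x)):
              xp = x.copy()
              xp[j] += eps
              J[:, j] = (f(xp) - f0) / eps
          return J

      def newton_solve(f, x0, tol=1e-10, max_iter=50, relax=0.8):
          x = np.array(x0, dtype=float)
          for _ in range(max_iter):
              r = f(x)
              if np.linalg.norm(r) < tol:
                  break
              x -= relax * np.linalg.solve(fd_jacobian(f, x), r)
          return x

      print(newton_solve(residuals, [1.0, 1.0]))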

  2. Integrated identification, modeling and control with applications

    NASA Astrophysics Data System (ADS)

    Shi, Guojun

    This thesis deals with the integration of system design, identification, modeling and control. In particular, six interdisciplinary engineering problems are addressed and investigated. Theoretical results are established and applied to structural vibration reduction and engine control problems. First, the data-based LQG control problem is formulated and solved. It is shown that a state space model is not necessary to solve this problem; rather, a finite sequence from the impulse response is the only model data required to synthesize an optimal controller. The new theory avoids unnecessary reliance on a model, required in the conventional design procedure. The infinite horizon model predictive control problem is addressed for multivariable systems. The basic properties of the receding horizon implementation strategy are investigated and the complete framework for solving the problem is established. The new theory allows the accommodation of hard input constraints and time delays. The developed control algorithms guarantee the closed loop stability. A closed loop identification and infinite horizon model predictive control design procedure is established for engine speed regulation. The developed algorithms are tested on the Cummins Engine Simulator and desired results are obtained. A finite signal-to-noise ratio model is considered for noise signals. An information quality index is introduced which measures the essential information precision required for stabilization. The problems of minimum variance control and covariance control are formulated and investigated. Convergent algorithms are developed for solving the problems of interest. The problem of the integrated passive and active control design is addressed in order to improve the overall system performance. A design algorithm is developed, which simultaneously finds: (i) the optimal values of the stiffness and damping ratios for the structure, and (ii) an optimal output variance constrained stabilizing controller such that the active control energy is minimized. A weighted q-Markov COVER method is introduced for identification with measurement noise. The result is used to develop an iterative closed loop identification/control design algorithm. The effectiveness of the algorithm is illustrated by experimental results.

  3. Analytical Formulation for Sizing and Estimating the Dimensions and Weight of Wind Turbine Hub and Drivetrain Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Y.; Parsons, T.; King, R.

    This report summarizes the theory, verification, and validation of a new sizing tool for wind turbine drivetrain components, the Drivetrain Systems Engineering (DriveSE) tool. DriveSE calculates the dimensions and mass properties of the hub, main shaft, main bearing(s), gearbox, bedplate, transformer if up-tower, and yaw system. The level of fidelity for each component varies depending on whether semiempirical parametric or physics-based models are used. The physics-based models have internal iteration schemes based on system constraints and design criteria. Every model is validated against available industry data or finite-element analysis. The verification and validation results show that the models reasonably capture primary drivers for the sizing and design of major drivetrain components.
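
    The internal iteration schemes mentioned above typically grow a component dimension until a design criterion is met. The sketch below shows that pattern for a notional solid main shaft sized against a bending-stress allowable; the load case, allowable and safety factor are invented for illustration and are not DriveSE's actual models.

      import math

      def bending_stress(moment_nm, diameter_m):
          # solid circular shaft: sigma = 32 * M / (pi * d^3)
          return 32.0 * moment_nm / (math.pi * diameter_m**3)

      def size_shaft(moment_nm, allowable_pa, safety_factor=1.5,
                     d_init=0.2, growth=1.02, max_iter=200):
          d = d_init
          for _ in range(max_iter):
              if bending_stress(moment_nm, d) * safety_factor <= allowable_pa:
                  return d
              d *= growth                # grow the diameter and re-check the criterion
          raise RuntimeError("shaft sizing did not converge within the allowed iterations")

      print(f"required diameter: {size_shaft(moment_nm=2.0e6, allowable_pa=200e6):.3f} m")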

  4. Development and Testing of a Methane/Oxygen Catalytic Microtube Ignition System for Rocket Propulsion

    NASA Technical Reports Server (NTRS)

    Deans, Matthew

    2012-01-01

    This study sought to develop a catalytic ignition advanced torch system with a unique catalyst microtube design that could serve as a low energy alternative or redundant system for the ignition of methane and oxygen rockets. Development and testing of iterations of hardware was carried out to create a system that could operate at altitude and produce a torch. A unique design was created that initiated ignition via the catalyst and then propagated into external staged ignition. This system was able to meet the goals of operating across a range of atmospheric and altitude conditions with power inputs on the order of 20 to 30 watts with chamber pressures and mass flow rates typical of comparable ignition systems for a 100 lbf engine.

  5. Development and Testing of a Methane/Oxygen Catalytic Microtube Ignition System for Rocket Propulsion

    NASA Technical Reports Server (NTRS)

    Deans, Matthew C.; Schneider, Steven J.

    2012-01-01

    This study sought to develop a catalytic ignition advanced torch system with a unique catalyst microtube design that could serve as a low energy alternative or redundant system for the ignition of methane and oxygen rockets. Development and testing of iterations of hardware was carried out to create a system that could operate at altitude and produce a torch. A unique design was created that initiated ignition via the catalyst and then propagated into external staged ignition. This system was able to meet the goals of operating across a range of atmospheric and altitude conditions with power inputs on the order of 20 to 30 watts with chamber pressures and mass flow rates typical of comparable ignition systems for a 100 lbf engine.

  6. Supporting interoperability of collaborative networks through engineering of a service-based Mediation Information System (MISE 2.0)

    NASA Astrophysics Data System (ADS)

    Benaben, Frederick; Mu, Wenxin; Boissel-Dallier, Nicolas; Barthe-Delanoe, Anne-Marie; Zribi, Sarah; Pingaud, Herve

    2015-08-01

    The Mediation Information System Engineering project is currently finishing its second iteration (MISE 2.0). The main objective of this scientific project is to provide any emerging collaborative situation with methods and tools to deploy a Mediation Information System (MIS). MISE 2.0 aims at defining and designing a service-based platform, dedicated to initiating and supporting the interoperability of collaborative situations among potential partners. This MISE 2.0 platform implements a model-driven engineering approach to the design of a service-oriented MIS dedicated to supporting the collaborative situation. This approach is structured in three layers, each providing their own key innovative points: (i) the gathering of individual and collaborative knowledge to provide appropriate collaborative business behaviour (key point: knowledge management, including semantics, exploitation and capitalisation), (ii) deployment of a mediation information system able to computerise the previously deduced collaborative processes (key point: the automatic generation of collaborative workflows, including connection with existing devices or services), and (iii) the management of the agility of the obtained collaborative network of organisations (key point: supervision of collaborative situations and relevant exploitation of the gathered data). MISE covers business issues (through BPM), technical issues (through an SOA) and agility issues of collaborative situations (through EDA).

  7. An Object Model for a Rocket Engine Numerical Simulator

    NASA Technical Reports Server (NTRS)

    Mitra, D.; Bhalla, P. N.; Pratap, V.; Reddy, P.

    1998-01-01

    Rocket Engine Numerical Simulator (RENS) is a package of software which numerically simulates the behavior of a rocket engine. Different parameters of the components of an engine are the inputs to these programs. Depending on these given parameters, the programs output the behaviors of those components. These behavioral values are then used to guide the design of, or to diagnose, a model of a rocket engine "built" by a composition of these programs simulating different components of the engine system. In order to use this software package effectively one needs to have a flexible model of a rocket engine. These programs simulating different components then should be plugged into this modular representation. Our project is to develop an object-based model of such an engine system. We are following an iterative and incremental approach in developing the model, as is the standard practice in the area of object-oriented design and analysis of software. This process involves three stages: object modeling to represent the components and sub-components of a rocket engine, dynamic modeling to capture the temporal and behavioral aspects of the system, and functional modeling to represent the transformational aspects. This article reports on the first phase of our activity under a grant (RENS) from the NASA Lewis Research Center. We have utilized Rumbaugh's object modeling technique and the tool UML for this purpose. The classes of a rocket engine propulsion system are developed and some of them are presented in this report. The next step, developing a dynamic model for RENS, is also touched upon here. In this paper we will also discuss the advantages of using object-based modeling for developing this type of an integrated simulator over other tools like an expert systems shell or a procedural language, e.g., FORTRAN. Attempts have been made in the past to use such techniques.

  8. ITER Central Solenoid Module Fabrication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, John

    The fabrication of the modules for the ITER Central Solenoid (CS) has started in a dedicated production facility located in Poway, California, USA. The necessary tools have been designed, built, installed, and tested in the facility to enable the start of production. The current schedule has first module fabrication completed in 2017, followed by testing and subsequent shipment to ITER. The Central Solenoid is a key component of the ITER tokamak providing the inductive voltage to initiate and sustain the plasma current and to position and shape the plasma. The design of the CS has been a collaborative effort between the US ITER Project Office (US ITER), the international ITER Organization (IO) and General Atomics (GA). GA's responsibility includes: completing the fabrication design, developing and qualifying the fabrication processes and tools, and then completing the fabrication of the seven 110 tonne CS modules. The modules will be shipped separately to the ITER site, and then stacked and aligned in the Assembly Hall prior to insertion in the core of the ITER tokamak. A dedicated facility in Poway, California, USA has been established by GA to complete the fabrication of the seven modules. Infrastructure improvements included thick reinforced concrete floors, a diesel generator for backup power, along with cranes for moving the tooling within the facility. The fabrication process for a single module requires approximately 22 months followed by five months of testing, which includes preliminary electrical testing followed by high current (48.5 kA) tests at 4.7K. The production of the seven modules is completed in a parallel fashion through ten process stations. The process stations have been designed and built with most stations having completed testing and qualification for carrying out the required fabrication processes. The final qualification step for each process station is achieved by the successful production of a prototype coil. Fabrication of the first ITER module is in progress. The seven modules will be individually shipped to Cadarache, France upon their completion. This paper describes the processes and status of the fabrication of the CS Modules for ITER.

  9. Development of NASA's Sample Cartridge Assembly: Design, Thermal Analysis, and Testing

    NASA Technical Reports Server (NTRS)

    O'Connor, Brian; Hernandez, Deborah; Duffy, James

    2015-01-01

    NASA's Sample Cartridge Assembly (SCA) project is responsible for designing and validating a payload that contains a materials research sample in a sealed environment. The SCA will be heated in the European Space Agency's (ESA) Low Gradient Furnace (LGF) that is housed inside the Material Science Research Rack (MSRR) located in the International Space Station (ISS). Sintered metals and crystal growth experiments in microgravity are examples of some of the types of materials research that may be performed with a SCA. The project's approach has been to use thermal models to guide the SCA through several design iterations. Various layouts of the SCA components were explored to meet the science and engineering requirements, and testing has been done to help prove the design. This paper will give an overview of the SCA design. It will show how thermal analysis is used to support the project. Some testing that has been completed will also be discussed, including changes that were made to the thermal profile used during brazing.

  10. Using Failure Mode and Effects Analysis to design a comfortable automotive driver seat.

    PubMed

    Kolich, Mike

    2014-07-01

    Given enough time and use, all designs will fail. There are no fail-free designs. This is especially true when it comes to automotive seating comfort where the characteristics and preferences of individual customers are many and varied. To address this problem, individuals charged with automotive seating comfort development have, traditionally, relied on iterative and, as a result, expensive build-test cycles. Cost pressures being placed on today's vehicle manufacturers have necessitated the search for more efficient alternatives. This contribution aims to fill this need by proposing the application of an analytical technique common to engineering circles (but new to seating comfort development), namely Design Failure Mode and Effects Analysis (DFMEA). An example is offered to describe how development teams can use this systematic and disciplined approach to highlight potential seating comfort failure modes, reduce their risk, and bring capable designs to life. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
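
    The DFMEA bookkeeping referred to above usually ranks failure modes by a risk priority number, RPN = severity x occurrence x detection, so that design effort goes to the highest-risk items first. The entries in the sketch below are invented seat-comfort examples, not data from the paper.

      from dataclasses import dataclass

      @dataclass
      class FailureMode:
          item: str
          mode: str
          severity: int     # 1 (no effect) .. 10 (hazardous)
          occurrence: int   # 1 (remote) .. 10 (very high)
          detection: int    # 1 (certain detection) .. 10 (undetectable)

          @property
          def rpn(self) -> int:
              return self.severity * self.occurrence * self.detection

      modes = [
          FailureMode("seat cushion", "foam too stiff for small-stature occupants", 5, 4, 6),
          FailureMode("lumbar support", "adjuster range misses tall-occupant preference", 6, 3, 5),
          FailureMode("seat heater", "uneven surface temperature distribution", 4, 2, 7),
      ]

      for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
          print(f"RPN {fm.rpn:4d}  {fm.item}: {fm.mode}")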

  11. Biotechnology and genetic engineering in the new drug development. Part III. Biocatalysis, metabolic engineering and molecular modelling.

    PubMed

    Stryjewska, Agnieszka; Kiepura, Katarzyna; Librowski, Tadeusz; Lochyński, Stanisław

    2013-01-01

    Industrial biotechnology has been defined as the use and application of biotechnology for the sustainable processing and production of chemicals, materials and fuels. It makes use of biocatalysts such as microbial communities, whole-cell microorganisms or purified enzymes. In the review these processes are described. Drug design is an iterative process which begins when a chemist identifies a compound that displays an interesting biological profile and ends when both the activity profile and the chemical synthesis of the new chemical entity are optimized. Traditional approaches to drug discovery rely on a stepwise synthesis and screening program for large numbers of compounds to optimize activity profiles. Over the past ten to twenty years, scientists have used computer models of new chemical entities to help define activity profiles, geometries and reactivities. This article introduces inter alia the concepts of molecular modelling and contains references for further reading.

  12. Reducing neural network training time with parallel processing

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1995-01-01

    Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer: to determine the number of nodes on the hidden layer of the smaller neural networks; to choose the initial training weights; and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.
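
    The decomposition strategy described above can be sketched as follows: instead of one large network predicting every output of the expensive analysis, each output is assigned to its own small one-hidden-layer network and the networks are trained in parallel processes. The toy data, network size and training settings are assumptions for illustration only and do not reproduce the paper's guidelines.

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def train_small_net(args):
          # plain batch gradient descent on a one-hidden-layer tanh network, one output each
          X, y, hidden, epochs, lr = args
          rng = np.random.default_rng(0)
          W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
          W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
          for _ in range(epochs):
              h = np.tanh(X @ W1 + b1)
              err = (h @ W2 + b2) - y[:, None]
              gW2 = h.T @ err / len(X);  gb2 = err.mean(0)
              dh = (err @ W2.T) * (1 - h**2)
              gW1 = X.T @ dh / len(X);   gb1 = dh.mean(0)
              W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
          return W1, b1, W2, b2

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          X = rng.uniform(-1, 1, (200, 3))               # design variables
          Y = np.column_stack([X.sum(1), X[:, 0] * X[:, 1], np.sin(X[:, 2])])
          jobs = [(X, Y[:, k], 8, 2000, 0.1) for k in range(Y.shape[1])]
          with ProcessPoolExecutor() as pool:            # one small network per output
              nets = list(pool.map(train_small_net, jobs))
          print(f"trained {len(nets)} small networks in parallel")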

  13. Predicting scientific oral presentation scores in a high school photonics science, technology, engineering and mathematics (STEM) program

    NASA Astrophysics Data System (ADS)

    Gilchrist, Pamela O.; Carpenter, Eric D.; Gray-Battle, Asia

    2014-07-01

    A hybrid teacher professional development and student science, technology, mathematics and engineering pipeline enrichment program was operated by the reporting research group for the past 3 years. Overall, the program has reached 69 students from 13 counties in North Carolina and 57 teachers from 30 counties spread over a total of five states. Quantitative analysis of oral presentations given by participants at a program event is provided. Scores from multiple raters were averaged and used as a criterion in several regression analyses. Overall it was revealed that student grade point averages, most advanced science course taken, extra quality points earned in their most advanced science course taken, and posttest scores on a pilot research design survey were significant predictors of student oral presentation scores. Rationale for findings, opportunities for future research, and implications for the iterative development of the program are discussed.

  14. Thermo-mechanical analysis of ITER first mirrors and its use for the ITER equatorial visible/infrared wide angle viewing system optical design.

    PubMed

    Joanny, M; Salasca, S; Dapena, M; Cantone, B; Travère, J M; Thellier, C; Fermé, J J; Marot, L; Buravand, O; Perrollaz, G; Zeile, C

    2012-10-01

    ITER first mirrors (FMs), as the first components of most ITER optical diagnostics, will be exposed to high plasma radiation flux and neutron load. To reduce the FM heating and optical surface deformation induced during ITER operation, the use of relevant materials and a cooling system is foreseen. The calculations carried out for different materials and FM designs and geometries (100 mm and 200 mm) show that the use of CuCrZr and TZM, and a complex integrated cooling system, can efficiently limit the FM heating and reduce their optical surface deformation under plasma radiation flux and neutron load. These investigations were used to evaluate, for the ITER equatorial port visible/infrared wide angle viewing system, the impact of the FM properties change during operation on the instrument main optical performances. The results obtained are presented and discussed.

  15. OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Gray, Justin S.

    2012-01-01

    The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.
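
    The adaptive sampling and surrogate modeling methods referred to above follow a common loop: fit a cheap surrogate to the expensive analyses run so far, pick the next expensive evaluation from the surrogate, and repeat. The sketch below shows that loop with a radial-basis surrogate and a one-variable stand-in for the aircraft/engine analysis; it is a generic illustration and does not use the OpenMDAO API or the efficient global optimization formulation of the paper.

      import numpy as np

      def expensive(x):
          # stand-in for the costly aircraft/engine analysis
          return (x - 0.3) ** 2 + 0.1 * np.sin(12 * x)

      def fit_rbf(xs, ys, eps=5.0):
          A = np.exp(-(eps * (xs[:, None] - xs[None, :])) ** 2)
          return np.linalg.solve(A + 1e-8 * np.eye(len(xs)), ys)

      def rbf_predict(x_query, xs, w, eps=5.0):
          return np.exp(-(eps * (x_query[:, None] - xs[None, :])) ** 2) @ w

      xs = np.array([0.0, 0.5, 1.0])                     # initial sample plan
      ys = expensive(xs)
      grid = np.linspace(0.0, 1.0, 401)
      for _ in range(10):                                # adaptive refinement iterations
          w = fit_rbf(xs, ys)
          x_new = grid[np.argmin(rbf_predict(grid, xs, w))]
          if np.min(np.abs(xs - x_new)) < 1e-6:
              break                                      # surrogate minimum already sampled
          xs = np.append(xs, x_new)
          ys = np.append(ys, expensive(x_new))
      print(f"best sample: x = {xs[np.argmin(ys)]:.3f}, f = {ys.min():.4f}")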

  16. Performance Simulation & Engineering Analysis/Design and Verification of a Shock Mitigation System for a Rover Landing on Mars

    NASA Astrophysics Data System (ADS)

    Ullio, Roberto; Gily, Alessandro; Jones, Howard; Geelen, Kelly; Larranaga, Jonan

    2014-06-01

    In the frame of the ESA Mars Robotic Exploration Preparation (MREP) programme and within its Technology Development Plan [1], the activity "E913-007MM Shock Mitigation Operating Only at Touchdown by use of minimalist/dispensable Hardware" (SMOOTH) was conducted under the framework of Rover technologies and to support the ESA MREP Mars Precision Lander (MPL) Phase A system study, with the objectives to: (i) study the behaviour of the Sample Fetching Rover (SFR) landing on Mars on its wheels; (ii) investigate and implement into the design of the SFR Locomotion Sub-System (LSS) an impact energy absorption system (SMOOTH); and (iii) verify by simulation the performance of SMOOTH. The main purpose of this paper is to present the obtained numerical simulation results and to explain how these results have been utilized first to iterate on the design of the SMOOTH concept and then to validate its performance.

  17. Performance assessment of the antenna setup for the ITER plasma position reflectometry in-vessel systems.

    PubMed

    Varela, P; Belo, J H; Quental, P B

    2016-11-01

    The design of the in-vessel antennas for the ITER plasma position reflectometry diagnostic is very challenging due to the need to cope both with the space restrictions inside the vacuum vessel and with the high mechanical and thermal loads during ITER operation. Here, we present the work carried out to assess and optimise the design of the antenna. We show that the blanket modules surrounding the antenna strongly modify its characteristics and need to be considered from the early phases of the design. We also show that it is possible to optimise the antenna performance, within the design restrictions.

  18. Dual genetic selection of synthetic riboswitches in Escherichia coli.

    PubMed

    Nomura, Yoko; Yokobayashi, Yohei

    2014-01-01

    This chapter describes a genetic selection strategy to engineer synthetic riboswitches that can chemically regulate gene expression in Escherichia coli. Riboswitch libraries are constructed by randomizing the nucleotides that potentially comprise an expression platform and fused to the hybrid selection/screening marker tetA-gfpuv. Iterative ON and OFF selections are performed under appropriate conditions that favor the survival or the growth of the cells harboring the desired riboswitches. After the selection, rapid screening of individual riboswitch clones is performed by measuring GFPuv fluorescence without subcloning. This optimized dual genetic selection strategy can be used to rapidly develop synthetic riboswitches without detailed computational design or structural knowledge.

  19. The development of the Final Approach Spacing Tool (FAST): A cooperative controller-engineer design approach

    NASA Technical Reports Server (NTRS)

    Lee, Katharine K.; Davis, Thomas J.

    1995-01-01

    Historically, the development of advanced automation for air traffic control in the United States has excluded the input of the air traffic controller until the end of the development process. In contrast, the development of the Final Approach Spacing Tool (FAST), for the terminal area controller, has incorporated the end-user in early, iterative testing. This paper describes a cooperative effort between the controller and the developer to create a tool which incorporates the complexity of the air traffic controller's job. This approach to software development has enhanced the usability of FAST and has helped smooth the introduction of FAST into the operational environment.

  20. Low-Cost, Net-Shape Ceramic Radial Turbine Program

    DTIC Science & Technology

    1985-05-01

    ...processing iterations. Program management and materials characterization were conducted at Garrett Turbine Engine Company (GTEC), test bar and rotor...automotive gas turbine engine rotor development efforts at ACC. This is the final technical report of the Low-Cost, Net-Shape Ceramic Radial Turbine program.

  1. Novel aspects of plasma control in ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphreys, D.; Jackson, G.; Walker, M.

    2015-02-15

    ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.

  2. Novel aspects of plasma control in ITER

    DOE PAGES

    Humphreys, David; Ambrosino, G.; de Vries, Peter; ...

    2015-02-12

    ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g. current profile regulation, tearing mode (TM) suppression), control mathematics (e.g. algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g. methods for management of highly-subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Finally, issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.

  3. Prospects for Advanced Tokamak Operation of ITER

    NASA Astrophysics Data System (ADS)

    Neilson, George H.

    1996-11-01

    Previous studies have identified steady-state (or "advanced") modes for ITER, based on reverse-shear profiles and significant bootstrap current. A typical example has 12 MA of plasma current, 1,500 MW of fusion power, and 100 MW of heating and current-drive power. The implementation of these and other steady-state operating scenarios in the ITER device is examined in order to identify key design modifications that can enhance the prospects for successfully achieving advanced tokamak operating modes in ITER compatible with a single null divertor design. In particular, we examine plasma configurations that can be achieved by the ITER poloidal field system with either a monolithic central solenoid (as in the ITER Interim Design), or an alternate "hybrid" central solenoid design which provides for greater flexibility in the plasma shape. The increased control capability and expanded operating space provided by the hybrid central solenoid allows operation at high triangularity (beneficial for improving divertor performance through control of edge-localized modes and for increasing beta limits), and will make it much easier for ITER operators to establish an optimum startup trajectory leading to a high-performance, steady-state scenario. Vertical position control is examined because plasmas made accessible by the hybrid central solenoid can be more elongated and/or less well coupled to the conducting structure. Control of vertical displacements using the external PF coils remains feasible over much of the expanded operating space. Further work is required to define the full spectrum of axisymmetric plasma disturbances requiring active control. In addition to active axisymmetric control, advanced tokamak modes in ITER may require active control of kink modes on the resistive time scale of the conducting structure. This might be accomplished in ITER through the use of active control coils external to the vacuum vessel which are actuated by magnetic sensors near the first wall. The enhanced shaping and positioning flexibility provides a range of options for reducing the ripple-induced losses of fast alpha particles--a major limitation on ITER steady-state modes. An alternate approach that we are pursuing in parallel is the inclusion of ferromagnetic inserts to reduce the toroidal field ripple within the plasma chamber. The inclusion of modest design changes such as the hybrid central solenoid, active control coils for kink modes, and ferromagnetic inserts for TF ripple reduction can greatly increase the flexibility to accommodate advanced tokamak operation in ITER. Increased flexibility is important because the optimum operating scenario for ITER cannot be predicted with certainty. While low-inductance, reverse shear modes appear attractive for steady-state operation, high-inductance, high-beta modes are also viable candidates, and it is important that ITER have the flexibility to explore both these, and other, operating regimes.

  4. Engine Power Turbine and Propulsion Pod Arrangement Study

    NASA Technical Reports Server (NTRS)

    Robuck, Mark; Zhang, Yiyi

    2014-01-01

    A study has been conducted for NASA Glenn Research Center under contract NNC10BA05B, Task NNC11TA80T, to identify beneficial arrangements of the turboshaft engine, transmissions and related systems within the propulsion pod nacelle of NASA's Large Civil Tilt-Rotor 2nd iteration (LCTR2) vehicle. Propulsion pod layouts were used to investigate potential advantages, disadvantages, as well as constraints of various arrangements assuming front or aft shafted engines. Results from previous NASA LCTR2 propulsion system studies and tasks performed by Boeing under NASA contracts are used as the basis for this study. This configuration consists of two Fixed Geometry Variable Speed Power Turbine Engines and related drive and rotor systems (per nacelle) arranged in tilting nacelles near the wing tip. Entry-into-service (EIS) 2035 technology is assumed for both the engine and drive systems. The variable speed rotor system changes from 100 percent speed for hover to 54 percent speed for cruise by means of a two-speed gearbox concept developed under previous NASA contracts. Propulsion and drive system configurations that resulted in minimum vehicle gross weight were identified in previous work and used here. Results reported in this study illustrate that a forward shafted engine has a slight weight benefit over an aft shafted engine for the LCTR2 vehicle. Although the aft shafted engines provide a more controlled and centered CG (between hover and cruise), the length of the long rotor shaft and complicated engine exhaust arrangement outweighed the potential benefits. A Multi-Disciplinary Analysis and Optimization (MDAO) approach for transmission sizing was also explored for this study. This tool offers quick analysis of gear loads, bearing lives, efficiencies, etc., through use of commercially available RomaxDESIGNER software. The goal was to create quick methods to explore various concept models. The output results from RomaxDESIGNER have been successfully linked to Boeing spreadsheets that generate gear tooth geometry in the Catia 3D environment. Another initial goal was to link information from RomaxDESIGNER (such as hp, rpm, gear ratio) to populate Boeing's parametric weight spreadsheet and create an automated method to estimate drive system weight. This was only partially achieved due to the variety of weight models, number of manual inputs, and qualitative assessments required. A simplified weight spreadsheet was used with data inputs from RomaxDESIGNER along with manual inputs to perform rough weight calculations.

  5. Interaction design challenges and solutions for ALMA operations monitoring and control

    NASA Astrophysics Data System (ADS)

    Pietriga, Emmanuel; Cubaud, Pierre; Schwarz, Joseph; Primet, Romain; Schilling, Marcus; Barkats, Denis; Barrios, Emilio; Vila Vilaro, Baltasar

    2012-09-01

    The ALMA radio-telescope, currently under construction in northern Chile, is a very advanced instrument that presents numerous challenges. From a software perspective, one critical issue is the design of graphical user interfaces for operations monitoring and control that scale to the complexity of the system and to the massive amounts of data users are faced with. Early experience operating the telescope with only a few antennas has shown that conventional user interface technologies are not adequate in this context. They consume too much screen real-estate, require many unnecessary interactions to access relevant information, and fail to provide operators and astronomers with a clear mental map of the instrument. They increase extraneous cognitive load, impeding tasks that call for quick diagnosis and action. To address this challenge, the ALMA software division adopted a user-centered design approach. For the last two years, astronomers, operators, software engineers and human-computer interaction researchers have been involved in participatory design workshops, with the aim of designing better user interfaces based on state-of-the-art visualization techniques. This paper describes the process that led to the development of those interface components and to a proposal for the science and operations console setup: brainstorming sessions, rapid prototyping, joint implementation work involving software engineers and human-computer interaction researchers, feedback collection from a broader range of users, further iterations and testing.

  6. Two-dimensional over-all neutronics analysis of the ITER device

    NASA Astrophysics Data System (ADS)

    Zimin, S.; Takatsu, Hideyuki; Mori, Seiji; Seki, Yasushi; Satoh, Satoshi; Tada, Eisuke; Maki, Koichi

    1993-07-01

    The present work attempts to carry out a comprehensive neutronics analysis of the International Thermonuclear Experimental Reactor (ITER) developed during the Conceptual Design Activities (CDA). The two-dimensional cylindrical over-all calculational models of the ITER CDA device, including the first wall, blanket, shield, vacuum vessel, magnets, cryostat and support structures, were developed for this purpose with the help of the DOGII code. The two-dimensional DOT 3.5 code with the FUSION-40 nuclear data library was employed for transport calculations of neutron and gamma ray fluxes, tritium breeding ratio (TBR), and nuclear heating in reactor components. The induced activity calculational code CINAC was employed for the calculations of exposure dose rate after reactor shutdown around the ITER CDA device. The two-dimensional over-all calculational model includes the design specifics such as the pebble bed Li2O/Be layered blanket, the thin double wall vacuum vessel, the concrete cryostat integrated with the over-all ITER design, the top maintenance shield plug, the additional ring biological shield placed under the top cryostat lid around the above-mentioned top maintenance shield plug, etc. All the above-mentioned design specifics were included in the employed calculational models. Some alternative design options, such as the water-rich shielding blanket instead of a lithium-bearing one, and the additional biological shield plug at the top zone between the poloidal field (PF) coil No. 5 and the maintenance shield plug, were calculated as well. Much effort has been focused on analyses of the obtained results. These analyses aimed to provide recommendations for improving the ITER CDA design.

  7. An overview of ITER diagnostics (invited)

    NASA Astrophysics Data System (ADS)

    Young, Kenneth M.; Costley, A. E.; ITER-JCT Home Team; ITER Diagnostics Expert Group

    1997-01-01

    The requirements for plasma measurements for operating and controlling the ITER device have now been determined. Initial criteria for the measurement quality have been set, and the diagnostics that might be expected to achieve these criteria have been chosen. The design of the first set of diagnostics to achieve these goals is now well under way. The design effort is concentrating on the components that interact most strongly with the other ITER systems, particularly the vacuum vessel, blankets, divertor modules, cryostat, and shield wall. The relevant details of the ITER device and facility design and specific examples of diagnostic design to provide the necessary measurements are described. These designs have to take account of the issues associated with very high 14 MeV neutron fluxes and fluences, nuclear heating, high heat loads, and high mechanical forces that can arise during disruptions. The design work is supported by an extensive research and development program, which to date has concentrated on the effects these levels of radiation might cause on diagnostic components. A brief outline of the organization of the diagnostic development program is given.

  8. A Fully Non-metallic Gas Turbine Engine Enabled by Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Grady, Joseph E.

    2014-01-01

    The Non-Metallic Gas Turbine Engine project, funded by the NASA Aeronautics Research Institute (NARI), represents the first comprehensive evaluation of emerging materials and manufacturing technologies that will enable fully nonmetallic gas turbine engines. This will be achieved by assessing the feasibility of using additive manufacturing technologies for fabricating polymer matrix composite (PMC) and ceramic matrix composite (CMC) gas turbine engine components. The benefits of the proposed effort include: 50 percent weight reduction compared to metallic parts, reduced manufacturing costs due to less machining and no tooling requirements, reduced part count due to net shape single component fabrication, and rapid design change and production iterations. Two high payoff metallic components have been identified for replacement with PMCs and will be fabricated using fused deposition modeling (FDM) with high temperature capable polymer filaments. The first component is an acoustic panel treatment with a honeycomb structure with an integrated back sheet and perforated front sheet. The second component is a compressor inlet guide vane. The CMC effort, which is starting at a lower technology readiness level, will use a binder jet process to fabricate silicon carbide test coupons and demonstration articles. The polymer and ceramic additive manufacturing efforts will advance from monolithic materials toward silicon carbide and carbon fiber reinforced composites for improved properties. Microstructural analysis and mechanical testing will be conducted on the PMC and CMC materials. System studies will assess the benefits of a fully nonmetallic gas turbine engine in terms of fuel burn, emissions, reduction of part count, and cost. The proposed effort will be focused on a small 7000 lbf gas turbine engine. However, the concepts are equally applicable to large gas turbine engines. The proposed effort includes a multidisciplinary, multiorganization NASA-industry team that includes experts in ceramic materials and CMCs, polymers and PMCs, structural engineering, additive manufacturing, engine design and analysis, and system analysis.

  9. Experiments on water detritiation and cryogenic distillation at TLK; Impact on ITER fuel cycle subsystems interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cristescu, I.; Cristescu, I. R.; Doerr, L.

    2008-07-15

    The ITER Isotope Separation System (ISS) and Water Detritiation System (WDS) should be integrated in order to reduce potential chronic tritium emissions from the ISS. This is achieved by routing the top (protium) product from the ISS to a feed point near the bottom end of the WDS Liquid Phase Catalytic Exchange (LPCE) column. This provides an additional barrier against ISS emissions and should mitigate the memory effects due to process parameter fluctuations in the ISS. To support the research activities needed to characterize the performances of various components for WDS and ISS processes under various working conditions and configurations as needed for ITER design, an experimental facility called TRENTA, representative of the ITER WDS and ISS protium separation column, has been commissioned and is in operation at TLK. The experimental program on the TRENTA facility is conducted to provide the necessary design data related to the relevant ITER operating modes. The operation availability and performances of the ISS-WDS have an impact on ITER fuel cycle subsystems, with consequences on the design integration. The preliminary experimental data on the TRENTA facility are presented. (authors)

  10. Improvements in surface singularity analysis and design methods. [applicable to airfoils

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.

    1979-01-01

    The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.
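
    The inverse step described above can be summarized as a residual-driven loop: analyze the current geometry, compare the computed pressure distribution with the prescribed one, and correct the geometry until the mismatch is small. The sketch below illustrates that loop with an assumed smooth stand-in for the panel-method analysis (the analyze function is hypothetical, not the method of the paper); convergence in a handful of cycles mirrors the behavior reported above.

    ```python
    import numpy as np

    def analyze(geometry):
        """Stand-in for the surface-singularity analysis (assumed smooth map)."""
        return np.tanh(geometry) + 0.1 * geometry

    def inverse_design(p_target, geometry0, relax=0.8, tol=1e-8, max_cycles=50):
        """Iterate the geometry until the computed pressures match the target."""
        geometry = geometry0.copy()
        for cycle in range(1, max_cycles + 1):
            residual = p_target - analyze(geometry)
            if np.max(np.abs(residual)) < tol:
                return geometry, cycle
            geometry += relax * residual   # relax geometry toward the target pressures
        return geometry, max_cycles

    if __name__ == "__main__":
        target_geom = np.linspace(-1.0, 1.0, 9)
        p_target = analyze(target_geom)            # a realizable target distribution
        geom, cycles = inverse_design(p_target, np.zeros(9))
        print(f"converged in {cycles} cycles, max geometry error "
              f"{np.max(np.abs(geom - target_geom)):.2e}")
    ```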

  11. Refractive and relativistic effects on ITER low field side reflectometer design.

    PubMed

    Wang, G; Rhodes, T L; Peebles, W A; Harvey, R W; Budny, R V

    2010-10-01

    The ITER low field side reflectometer faces some unique design challenges, among which are included the effect of relativistic electron temperatures and refraction of probing waves. This paper utilizes GENRAY, a 3D ray tracing code, to investigate these effects. Using a simulated ITER operating scenario, characteristics of the reflected millimeter waves after return to the launch plane are quantified as a function of a range of design parameters, including antenna height, antenna diameter, and antenna radial position. Results for edge/SOL measurement with both O- and X-mode polarizations using proposed antennas are reported.

  12. The role of simulation in the design of a neural network chip

    NASA Technical Reports Server (NTRS)

    Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.

    1993-01-01

    An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.
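
    The iteration described above can be pictured as a small optimization loop in which the circuit simulation plays the role of the objective evaluation. In the sketch below the SPICE run is replaced by an assumed linear map from gate width-to-length ratios to effective weights (simulated_weights is hypothetical), and the ratios are adjusted by finite-difference gradient descent toward target weights.

    ```python
    import numpy as np

    # Hypothetical stand-in for the SPICE evaluation: the effective synaptic weight
    # realized by a transistor is assumed proportional to its gate width-to-length
    # ratio plus a small offset from parasitics.
    def simulated_weights(wl_ratios, gain=0.9, offset=0.02):
        return gain * wl_ratios + offset

    def tune_layout(target_weights, wl0, lr=0.5, iterations=200, h=1e-4):
        """Gradient-descent adjustment of W/L ratios toward target weights."""
        wl = wl0.copy()
        for _ in range(iterations):
            error = simulated_weights(wl) - target_weights
            # Finite-difference gradient of the squared error w.r.t. each ratio.
            grad = np.zeros_like(wl)
            for i in range(len(wl)):
                bumped = wl.copy()
                bumped[i] += h
                grad[i] = (np.sum((simulated_weights(bumped) - target_weights) ** 2)
                           - np.sum(error ** 2)) / h
            wl = np.clip(wl - lr * grad, 0.1, None)   # ratios stay physically positive
        return wl

    if __name__ == "__main__":
        targets = np.array([0.5, 1.0, 1.5])
        ratios = tune_layout(targets, wl0=np.ones(3))
        print("tuned W/L ratios :", np.round(ratios, 3))
        print("realized weights :", np.round(simulated_weights(ratios), 3))
    ```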

  13. FPGA architecture and implementation of sparse matrix vector multiplication for the finite element method

    NASA Astrophysics Data System (ADS)

    Elkurdi, Yousef; Fernández, David; Souleimanov, Evgueni; Giannacopoulos, Dennis; Gross, Warren J.

    2008-04-01

    The Finite Element Method (FEM) is a computationally intensive scientific and engineering analysis tool that has diverse applications ranging from structural engineering to electromagnetic simulation. The trends in floating-point performance are moving in favor of Field-Programmable Gate Arrays (FPGAs); hence, interest in exploiting this technology has grown in the scientific community. We present an architecture and implementation of an FPGA-based sparse matrix-vector multiplier (SMVM) for use in the iterative solution of large, sparse systems of equations arising from FEM applications. FEM matrices display specific sparsity patterns that can be exploited to improve the efficiency of hardware designs. Our architecture exploits FEM matrix sparsity structure to achieve a balance between performance and hardware resource requirements by relying on external SDRAM for data storage while utilizing the FPGA's computational resources in a stream-through systolic approach. The architecture is based on a pipelined linear array of processing elements (PEs) coupled with a hardware-oriented matrix striping algorithm and a partitioning scheme which enables it to process arbitrarily large matrices without changing the number of PEs in the architecture. Therefore, this architecture is only limited by the amount of external RAM available to the FPGA. The implemented SMVM-pipeline prototype contains 8 PEs and is clocked at 110 MHz, obtaining a peak performance of 1.76 GFLOPS. For 8 GB/s of memory bandwidth typical of recent FPGA systems, this architecture can achieve 1.5 GFLOPS sustained performance. Using multiple instances of the pipeline, linear scaling of the peak and sustained performance can be achieved. Our stream-through architecture provides the added advantage of enabling an iterative implementation of the SMVM computation required by iterative solution techniques such as the conjugate gradient method, avoiding initialization time due to data loading and setup inside the FPGA internal memory.
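
    The kernel being accelerated is an ordinary compressed-sparse-row (CSR) matrix-vector product, which the FPGA pipeline evaluates in a streaming fashion. A plain software reference of the same computation is sketched below for clarity; the data layout is standard CSR and the small test matrix is an assumed example.

    ```python
    import numpy as np

    def csr_spmv(values, col_idx, row_ptr, x):
        """Sparse matrix-vector product y = A @ x with A stored in CSR form.

        This is the kernel the FPGA pipeline streams; here it is written as a
        plain software reference for clarity.
        """
        y = np.zeros(len(row_ptr) - 1)
        for row in range(len(y)):
            for k in range(row_ptr[row], row_ptr[row + 1]):
                y[row] += values[k] * x[col_idx[k]]
        return y

    if __name__ == "__main__":
        # 3x3 FEM-like sparse matrix [[4,1,0],[1,4,1],[0,1,4]] in CSR form.
        values  = np.array([4.0, 1.0, 1.0, 4.0, 1.0, 1.0, 4.0])
        col_idx = np.array([0, 1, 0, 1, 2, 1, 2])
        row_ptr = np.array([0, 2, 5, 7])
        print(csr_spmv(values, col_idx, row_ptr, np.array([1.0, 2.0, 3.0])))
    ```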

  14. Biomimetic modelling.

    PubMed Central

    Vincent, Julian F V

    2003-01-01

    Biomimetics is seen as a path from biology to engineering. The only path from engineering to biology in current use is the application of engineering concepts and models to biological systems. However, there is another pathway: the verification of biological mechanisms by manufacture, leading to an iterative process between biology and engineering in which the new understanding that the engineering implementation of a biological system can bring is fed back into biology, allowing a more complete and certain understanding and the possibility of further revelations for application in engineering. This is a pathway as yet unformalized, and one that offers the possibility that engineers can also be scientists. PMID:14561351

  15. A New Capability for Nuclear Thermal Propulsion Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amiri, Benjamin W.; Nuclear and Radiological Engineering Department, University of Florida, Gainesville, FL 32611; Kapernick, Richard J.

    2007-01-30

    This paper describes a new capability for Nuclear Thermal Propulsion (NTP) design that has been developed, and presents the results of some analyses performed with this design tool. The purpose of the tool is to design to specified mission and material limits, while maximizing system thrust to weight. The head end of the design tool utilizes the ROCket Engine Transient Simulation (ROCETS) code to generate a system design and system design requirements as inputs to the core analysis. ROCETS is a modular system level code which has been used extensively in the liquid rocket engine industry for many years. The core design tool performs high-fidelity reactor core nuclear and thermal-hydraulic design analysis. At the heart of this process are two codes TMSS-NTP and NTPgen, which together greatly automate the analysis, providing the capability to rapidly produce designs that meet all specified requirements while minimizing mass. A PERL based command script, called CORE DESIGNER controls the execution of these two codes, and checks for convergence throughout the process. TMSS-NTP is executed first, to produce a suite of core designs that meet the specified reactor core mechanical, thermal-hydraulic and structural requirements. The suite of designs consists of a set of core layouts and, for each core layout specific designs that span a range of core fuel volumes. NTPgen generates MCNPX models for each of the core designs from TMSS-NTP. Iterative analyses are performed in NTPgen until a reactor design (fuel volume) is identified for each core layout that meets cold and hot operation reactivity requirements and that is zoned to meet a radial core power distribution requirement.

  16. Biophysically Inspired Rational Design of Structured Chimeric Substrates for DNAzyme Cascade Engineering

    PubMed Central

    Lakin, Matthew R.; Brown, Carl W.; Horwitz, Eli K.; Fanning, M. Leigh; West, Hannah E.; Stefanovic, Darko; Graves, Steven W.

    2014-01-01

    The development of large-scale molecular computational networks is a promising approach to implementing logical decision making at the nanoscale, analogous to cellular signaling and regulatory cascades. DNA strands with catalytic activity (DNAzymes) are one means of systematically constructing molecular computation networks with inherent signal amplification. Linking multiple DNAzymes into a computational circuit requires the design of substrate molecules that allow a signal to be passed from one DNAzyme to another through programmed biochemical interactions. In this paper, we chronicle an iterative design process guided by biophysical and kinetic constraints on the desired reaction pathways and use the resulting substrate design to implement heterogeneous DNAzyme signaling cascades. A key aspect of our design process is the use of secondary structure in the substrate molecule to sequester a downstream effector sequence prior to cleavage by an upstream DNAzyme. Our goal was to develop a concrete substrate molecule design to achieve efficient signal propagation with maximal activation and minimal leakage. We have previously employed the resulting design to develop high-performance DNAzyme-based signaling systems with applications in pathogen detection and autonomous theranostics. PMID:25347066

  17. Life Support Systems for Lunar Landers

    NASA Technical Reports Server (NTRS)

    Anderson, Molly

    2008-01-01

    Engineers designing life support systems for NASA's next Lunar Landers face unique challenges. As with any vehicle that enables human spaceflight, the needs of the crew drive most of the lander requirements. The lander is also a key element of the architecture NASA will implement in the Constellation program. Many requirements, constraints, or optimization goals will be driven by interfaces with other projects, like the Crew Exploration Vehicle, the Lunar Surface Systems, and the Extravehicular Activity project. Other challenges in the life support system will be driven by the unique location of the vehicle in the environments encountered throughout the mission. This paper examines several topics that may be major design drivers for the lunar lander life support system. There are several functional requirements for the lander that may be different from previous vehicles or programs and recent experience. Some of the requirements or design drivers will change depending on the overall Lander configuration. While the configuration for a lander design is not fixed, designers can examine how these issues would impact their design and be prepared for the quick design iterations required to optimize a spacecraft.

  18. Update on Integrated Optical Design Analyzer

    NASA Technical Reports Server (NTRS)

    Moore, James D., Jr.; Troy, Ed

    2003-01-01

    Updated information on the Integrated Optical Design Analyzer (IODA) computer program has become available. IODA was described in Software for Multidisciplinary Concurrent Optical Design (MFS-31452), NASA Tech Briefs, Vol. 25, No. 10 (October 2001), page 8a. To recapitulate: IODA facilitates multidisciplinary concurrent engineering of highly precise optical instruments. The architecture of IODA was developed by reviewing design processes and software in an effort to automate design procedures. IODA significantly reduces design iteration cycle time and eliminates many potential sources of error. IODA integrates the modeling efforts of a team of experts in different disciplines (e.g., optics, structural analysis, and heat transfer) working at different locations and provides seamless fusion of data among thermal, structural, and optical models used to design an instrument. IODA is compatible with data files generated by the NASTRAN structural-analysis program and the Code V (Registered Trademark) optical-analysis program, and can be used to couple analyses performed by these two programs. IODA supports multiple-load-case analysis for quickly accomplishing trade studies. IODA can also model the transient response of an instrument under the influence of dynamic loads and disturbances.

  19. Simultaneous non-contiguous deletions using large synthetic DNA and site-specific recombinases

    PubMed Central

    Krishnakumar, Radha; Grose, Carissa; Haft, Daniel H.; Zaveri, Jayshree; Alperovich, Nina; Gibson, Daniel G.; Merryman, Chuck; Glass, John I.

    2014-01-01

    Toward achieving rapid and large scale genome modification directly in a target organism, we have developed a new genome engineering strategy that uses a combination of bioinformatics aided design, large synthetic DNA and site-specific recombinases. Using Cre recombinase we swapped a target 126-kb segment of the Escherichia coli genome with a 72-kb synthetic DNA cassette, thereby effectively eliminating over 54 kb of genomic DNA from three non-contiguous regions in a single recombination event. We observed complete replacement of the native sequence with the modified synthetic sequence through the action of the Cre recombinase and no competition from homologous recombination. Because of the versatility and high-efficiency of the Cre-lox system, this method can be used in any organism where this system is functional as well as adapted to use with other highly precise genome engineering systems. Compared to present-day iterative approaches in genome engineering, we anticipate this method will greatly speed up the creation of reduced, modularized and optimized genomes through the integration of deletion analyses data, transcriptomics, synthetic biology and site-specific recombination. PMID:24914053

  20. Pseudo-time methods for constrained optimization problems governed by PDE

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1995-01-01

    In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constrained PDE is solved once (sometimes only approximately) at each optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at a cost of solving the analysis problem just a few times, independent of the number of design parameters. The method can be applied using single grid iterations as well as with multigrid solvers.
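
    A minimal sketch of the idea on an assumed toy problem rather than a PDE: minimize 1/2||u - u_target||^2 subject to a linear "state equation" A u = d with A symmetric positive definite. Each pseudo-time step relaxes the state and costate residuals once and simultaneously marches the design variables down the design-equation residual, instead of solving the state and costate equations to convergence at every design update. The matrix, target, step size, and step count below are arbitrary illustrative choices.

    ```python
    import numpy as np

    # Toy illustration of pseudo-time design optimization: march the design d
    # while only relaxing (not fully solving) the state u and costate lam.
    A = np.diag([2.0, 3.0])            # assumed SPD "state operator"
    u_target = np.array([1.0, -1.0])   # assumed target state

    u, lam, d = np.zeros(2), np.zeros(2), np.zeros(2)
    dt = 0.05
    for _ in range(4000):
        r_state   = A @ u - d                   # state-equation residual
        r_costate = A @ lam + (u - u_target)    # costate-equation residual
        grad_d    = -lam                        # design-equation residual dL/dd
        u   -= dt * r_state                     # relax the state
        lam -= dt * r_costate                   # relax the costate
        d   -= dt * grad_d                      # march on the design hypersurface

    print("design d        :", np.round(d, 4))   # expect A @ u_target = [2, -3]
    print("state u         :", np.round(u, 4))   # expect u_target
    print("design residual :", np.round(-lam, 6))
    ```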

  1. Achievements in the development of the Water Cooled Solid Breeder Test Blanket Module of Japan to the milestones for installation in ITER

    NASA Astrophysics Data System (ADS)

    Tsuru, Daigo; Tanigawa, Hisashi; Hirose, Takanori; Mohri, Kensuke; Seki, Yohji; Enoeda, Mikio; Ezato, Koichiro; Suzuki, Satoshi; Nishi, Hiroshi; Akiba, Masato

    2009-06-01

    As the primary candidate of ITER Test Blanket Module (TBM) to be tested under the leadership of Japan, a water cooled solid breeder (WCSB) TBM is being developed. This paper shows the recent achievements towards the milestones of ITER TBMs prior to the installation, which consist of design integration in ITER, module qualification and safety assessment. With respect to the design integration, targeting the detailed design final report in 2012, structure designs of the WCSB TBM and the interfacing components (common frame and backside shielding) that are placed in a test port of ITER and the layout of the cooling system are presented. As for the module qualification, a real-scale first wall mock-up fabricated by using the hot isostatic pressing method by structural material of reduced activation martensitic ferritic steel, F82H, and flow and irradiation test of the mock-up are presented. As for safety milestones, the contents of the preliminary safety report in 2008 consisting of source term identification, failure mode and effect analysis (FMEA) and identification of postulated initiating events (PIEs) and safety analyses are presented.

  2. Development of a Twin-Spool Turbofan Engine Simulation Using the Toolbox for the Modeling and Analysis of Thermodynamic Systems (T-MATS)

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia M.; Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.

    2014-01-01

    The Toolbox for the Modeling and Analysis of Thermodynamic Systems (T-MATS) is a tool that has been developed to allow a user to build custom models of systems governed by thermodynamic principles using a template to model each basic process. Validation of this tool in an engine model application was performed through reconstruction of the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) (v2) using the building blocks from the T-MATS (v1) library. In order to match the two engine models, it was necessary to address differences in several assumptions made in the two modeling approaches. After these modifications were made, validation of the engine model continued by integrating both a steady-state and dynamic iterative solver with the engine plant and comparing results from steady-state and transient simulation of the T-MATS and C-MAPSS models. The results show that the T-MATS engine model was accurate within 3% of the C-MAPSS model, with inaccuracy attributed to the increased dimension of the iterative solver solution space required by the engine model constructed using the T-MATS library. This demonstrates that, given an understanding of the modeling assumptions made in T-MATS and a baseline model, the T-MATS tool provides a viable option for constructing a computational model of a twin-spool turbofan engine that may be used in simulation studies.
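
    The iterative solvers referred to above drive a set of component matching residuals (for example, flow continuity and shaft power balance) to zero at each operating point. A generic sketch of such a solver is given below: an undamped Newton iteration with a finite-difference Jacobian applied to an assumed toy residual function. It is not the T-MATS or C-MAPSS implementation.

    ```python
    import numpy as np

    def newton_solve(residual, x0, tol=1e-9, max_iter=50, fd_step=1e-6):
        """Undamped Newton iteration with a finite-difference Jacobian.

        `residual` maps the solver states (e.g., shaft speeds, pressures) to the
        matching errors an engine model must drive to zero.
        """
        x = np.asarray(x0, dtype=float)
        for k in range(max_iter):
            r = residual(x)
            if np.linalg.norm(r, ord=np.inf) < tol:
                return x, k
            J = np.zeros((len(r), len(x)))
            for j in range(len(x)):          # build the Jacobian column by column
                xp = x.copy()
                xp[j] += fd_step
                J[:, j] = (residual(xp) - r) / fd_step
            x = x - np.linalg.solve(J, r)    # Newton update
        raise RuntimeError("solver did not converge")

    if __name__ == "__main__":
        # Toy residual standing in for the engine matching equations.
        balance = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] - x[1] + 1.0])
        x, iters = newton_solve(balance, np.array([1.0, 1.0]))
        print("solution:", np.round(x, 6), "in", iters, "iterations")
    ```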

  3. NDARC NASA Design and Analysis of Rotorcraft - Input, Appendix 4

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2016-01-01

    The NDARC code performs design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance analysis, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. The principal tasks (sizing, mission analysis, flight performance analysis) are shown in the figure as boxes with heavy borders. Heavy arrows show control of subordinate tasks. The aircraft description consists of all the information, input and derived, that defines the aircraft. The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. This information can be the result of the sizing task; can come entirely from input, for a fixed model; or can come from the sizing task in a previous case or previous job. The aircraft description information is available to all tasks and all solutions. The sizing task determines the dimensions, power, and weight of a rotorcraft that can perform a specified set of design conditions and missions. The aircraft size is characterized by parameters such as design gross weight, weight empty, rotor radius, and engine power available. The relations between dimensions, power, and weight generally require an iterative solution. From the design flight conditions and missions, the task can determine the total engine power or the rotor radius (or both power and radius can be fixed), as well as the design gross weight, maximum takeoff weight, drive system torque limit, and fuel tank capacity. For each propulsion group, the engine power or the rotor radius can be sized. Missions are defined for the sizing task, and for the mission performance analysis. A mission consists of a number of mission segments, for which time, distance, and fuel burn are evaluated. For the sizing task, certain missions are designated to be used for design gross weight calculations; for transmission sizing; and for fuel tank sizing. The mission parameters include mission takeoff gross weight and useful load. For specified takeoff fuel weight with adjustable segments, the mission time or distance is adjusted so the fuel required for the mission equals the takeoff fuel weight. The mission iteration is on fuel weight or energy. Flight conditions are specified for the sizing task, and for the flight performance analysis. For the sizing task, certain flight conditions are designated to be used for design gross weight calculations; for transmission sizing; for maximum takeoff weight calculations; and for anti-torque or auxiliary thrust rotor sizing. The flight condition parameters include gross weight and useful load. For flight conditions and mission takeoff, the gross weight can be maximized, such that the power required equals the power available. A flight state is defined for each mission segment and each flight condition. The aircraft performance can be analyzed for the specified state, or a maximum effort performance can be identified. The maximum effort is specified in terms of a quantity such as best endurance or best range, and a variable such as speed, rate of climb, or altitude.
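
    The sizing task's iterative character can be illustrated with a deliberately simplified fixed-point loop in which both empty weight and mission fuel depend on the current gross-weight estimate. The payload, empty-weight fraction, and fuel relation below are assumed placeholder models, not NDARC relations.

    ```python
    # Minimal sketch of a gross-weight sizing iteration:
    # gross weight = payload + empty weight + mission fuel,
    # where empty weight and fuel both depend on the current gross-weight guess.

    def size_rotorcraft(payload=2500.0, empty_fraction=0.55, tol=1e-6, max_iter=200):
        gross_weight = payload / (1.0 - empty_fraction)    # crude first guess
        for iteration in range(1, max_iter + 1):
            empty_weight = empty_fraction * gross_weight   # assumed weight model
            mission_fuel = 0.02 * gross_weight ** 1.1      # assumed mission fuel model
            new_gw = payload + empty_weight + mission_fuel
            if abs(new_gw - gross_weight) < tol:
                return new_gw, iteration
            gross_weight = new_gw
        raise RuntimeError("sizing iteration did not converge")

    if __name__ == "__main__":
        gw, n = size_rotorcraft()
        print(f"design gross weight ~ {gw:.1f} kg after {n} iterations")
    ```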

  4. An Implementation Methodology and Software Tool for an Entropy Based Engineering Model for Evolving Systems

    DTIC Science & Technology

    2003-06-01

    Report excerpt (fragments only): the text discusses data-access technologies such as relational databases (RDBMS) and Structured Query Language (SQL), and macros written in Visual Basic for Applications (VBA); Figure 20, "Iteration two class diagram," shows a Tech OASIS export script, import filter, and MS Excel/VBA data-processing components.

  5. Semiannual Report, April 1, 1989 through September 30, 1989 (Institute for Computer Applications in Science and Engineering)

    DTIC Science & Technology

    1990-02-01

    ... noise. (Tobias B. Orloff) Work began on developing a high-quality rendering algorithm based on the radiosity method. The algorithm is similar to previous progressive radiosity algorithms except for the following improvements: 1. At each iteration, vertex radiosities are computed using a modified scan-line approach, thus eliminating the quadratic cost associated with a ray-tracing computation of vertex radiosities. 2. At each iteration the scene is...

  6. Iterative algorithms for large sparse linear systems on parallel computers

    NASA Technical Reports Server (NTRS)

    Adams, L. M.

    1982-01-01

    Algorithms are developed for assembling in parallel the sparse systems of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed and results of this model for the algorithms are given.
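
    The simplest of the stationary iterations mentioned above is the Jacobi method, in which every unknown is updated independently from the previous iterate, which is precisely what maps well onto array architectures. A reference sketch on a small, diagonally dominant system is given below; the test matrix is an assumed example.

    ```python
    import numpy as np

    def jacobi(A, b, tol=1e-10, max_iter=10_000):
        """Jacobi iteration: every unknown is updated independently, which is
        what makes the method attractive on parallel/array architectures."""
        D = np.diag(A)
        R = A - np.diagflat(D)
        x = np.zeros_like(b)
        for k in range(max_iter):
            x_new = (b - R @ x) / D          # all components computable in parallel
            if np.linalg.norm(x_new - x, ord=np.inf) < tol:
                return x_new, k + 1
            x = x_new
        return x, max_iter

    if __name__ == "__main__":
        # Diagonally dominant test system (e.g., a 1-D finite-difference stencil).
        A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
        b = np.array([1.0, 2.0, 3.0])
        x, iters = jacobi(A, b)
        print(x, "in", iters, "iterations; residual", np.linalg.norm(A @ x - b))
    ```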

  7. Beyond ITER: neutral beams for a demonstration fusion reactor (DEMO) (invited).

    PubMed

    McAdams, R

    2014-02-01

    In the development of magnetically confined fusion as an economically sustainable power source, International Tokamak Experimental Reactor (ITER) is currently under construction. Beyond ITER is the demonstration fusion reactor (DEMO) programme in which the physics and engineering aspects of a future fusion power plant will be demonstrated. DEMO will produce net electrical power. The DEMO programme will be outlined and the role of neutral beams for heating and current drive will be described. In particular, the importance of the efficiency of neutral beam systems in terms of injected neutral beam power compared to wallplug power will be discussed. Options for improving this efficiency including advanced neutralisers and energy recovery are discussed.

  8. Solar Electric Propulsion Vehicle Design Study for Cargo Transfer to Earth-moon L1

    NASA Technical Reports Server (NTRS)

    Sarver-Verhey, Timothy R.; Kerslake, Thomas W.; Rawlin, Vincent K.; Falck, Robert D.; Dudzinski, Leonard J.; Oleson, Steven R.

    2002-01-01

    A design study for a cargo transfer vehicle using solar electric propulsion was performed for NASA's Revolutionary Aerospace Systems Concepts program. Targeted for 2016, the solar electric propulsion (SEP) transfer vehicle is required to deliver a propellant supply module with a mass of approximately 36 metric tons from Low Earth Orbit to the first Earth-Moon libration point (LL1) within 270 days. Following an examination of propulsion and power technology options, a SEP transfer vehicle design was selected that incorporated large-area (approx. 2700 sq m) thin film solar arrays and a clustered engine configuration of eight 50 kW gridded ion thrusters mounted on an articulated boom. Refinement of the SEP vehicle design was performed iteratively to properly estimate the required xenon propellant load for the out-bound orbit transfer. The SEP vehicle performance, including the xenon propellant estimation, was verified via the SNAP trajectory code. Further efforts are underway to extend this system model to other orbit transfer missions.
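
    The propellant-load refinement described above is iterative because tank and structure mass scale with the xenon load, which in turn depends on the total mass being pushed through the transfer. The sketch below shows that fixed-point loop using the rocket equation; the delta-v, specific impulse, masses, and tankage fraction are assumed illustrative numbers, not values from the study.

    ```python
    import math

    # Illustrative xenon-load iteration for an SEP transfer: propellant follows
    # from the rocket equation, but tankage mass grows with the propellant load,
    # so dry mass and propellant must be iterated together.  All numbers below
    # are assumptions, not results of the NASA study.

    def size_xenon_load(delta_v=7000.0, isp=3300.0, payload=36000.0,
                        bus_dry=8000.0, tank_fraction=0.10, tol=1e-3):
        g0 = 9.80665
        v_e = isp * g0
        m_prop = 0.0
        while True:
            dry_mass = payload + bus_dry + tank_fraction * m_prop
            new_prop = dry_mass * (math.exp(delta_v / v_e) - 1.0)
            if abs(new_prop - m_prop) < tol:
                return new_prop
            m_prop = new_prop

    if __name__ == "__main__":
        print(f"estimated xenon load ~ {size_xenon_load():.0f} kg")
    ```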

  9. Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.

    PubMed

    Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S

    2017-01-01

    Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-bodies missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and also the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small body mission design process that previously required iteration among several different design processes.

  10. Theoretical models for duct acoustic propagation and radiation

    NASA Technical Reports Server (NTRS)

    Eversman, Walter

    1991-01-01

    The development of computational methods in acoustics has led to the introduction of analysis and design procedures which model the turbofan inlet as a coupled system, simultaneously modeling propagation and radiation in the presence of realistic internal and external flows. Such models are generally large, require substantial computer speed and capacity, and can be expected to be used in the final design stages, with the simpler models being used in the early design iterations. Emphasis is given to practical modeling methods that have been applied to the acoustical design problem in turbofan engines. The mathematical model is established and the simplest case of propagation in a duct with hard walls is solved to introduce concepts and terminologies. An extensive overview is given of methods for the calculation of attenuation in uniform ducts with uniform flow and with shear flow. Subsequent sections deal with numerical techniques which provide an integrated representation of duct propagation and near- and far-field radiation for realistic geometries and flight conditions.

  11. Evaluating and redesigning teaching learning sequences at the introductory physics level

    NASA Astrophysics Data System (ADS)

    Guisasola, Jenaro; Zuza, Kristina; Ametller, Jaume; Gutierrez-Berraondo, José

    2017-12-01

    In this paper we put forward a proposal for the design and evaluation of teaching and learning sequences in upper secondary school and university. We will connect our proposal with relevant contributions on the design of teaching sequences, ground it on the design-based research methodology, and discuss how teaching and learning sequences designed according to our proposal relate to learning progressions. An iterative methodology for evaluating and redesigning the teaching and learning sequence (TLS) is presented. The proposed assessment strategy focuses on three aspects: (a) evaluation of the activities of the TLS, (b) evaluation of learning achieved by students in relation to the intended objectives, and (c) a document for gathering the difficulties found when implementing the TLS to serve as a guide to teachers. Discussion of this guide with external teachers provides feedback used for the TLS redesign. The context of our implementation and evaluation is an innovative calculus-based physics course for first-year engineering and science degree students at the University of the Basque Country.

  12. Aerothermodynamic testing requirements for future space transportation systems

    NASA Technical Reports Server (NTRS)

    Paulson, John W., Jr.; Miller, Charles G., III

    1995-01-01

    Aerothermodynamics, encompassing aerodynamics, aeroheating, and fluid dynamic and physical processes, is the genesis for the design and development of advanced space transportation vehicles. It provides crucial information to other disciplines involved in the development process such as structures, materials, propulsion, and avionics. Sources of aerothermodynamic information include ground-based facilities, computational fluid dynamic (CFD) and engineering computer codes, and flight experiments. Utilization of this triad is required to provide the optimum requirements while reducing undue design conservatism, risk, and cost. This paper discusses the role of ground-based facilities in the design of future space transportation system concepts. Testing methodology is addressed, including the iterative approach often required for the assessment and optimization of configurations from an aerothermodynamic perspective. The influence of vehicle shape and the transition from parametric studies for optimization to benchmark studies for final design and establishment of the flight data book is discussed. Future aerothermodynamic testing requirements including the need for new facilities are also presented.

  13. Not All Wizards Are from Oz: Iterative Design of Intelligent Learning Environments by Communication Capacity Tapering

    ERIC Educational Resources Information Center

    Mavrikis, Manolis; Gutierrez-Santos, Sergio

    2010-01-01

    This paper presents a methodology for the design of intelligent learning environments. We recognise that in the educational technology field, theory development and system-design should be integrated and rely on an iterative process that addresses: (a) the difficulty to elicit precise, concise, and operationalized knowledge from "experts" and (b)…

  14. From Amorphous to Defined: Balancing the Risks of Spiral Development

    DTIC Science & Technology

    2007-04-30

    Simulation output (Javelin calibration model): time histories of work packages started and active in iteration 1 for the Requirements, Technology, Design, Manufacturing, and Use phases, plotted against time in weeks (roughly weeks 630 through 900).

  15. Integrated modeling of plasma ramp-up in DIII-D ITER-like and high bootstrap current scenario discharges

    NASA Astrophysics Data System (ADS)

    Wu, M. Q.; Pan, C. K.; Chan, V. S.; Li, G. Q.; Garofalo, A. M.; Jian, X.; Liu, L.; Ren, Q. L.; Chen, J. L.; Gao, X.; Gong, X. Z.; Ding, S. Y.; Qian, J. P.; Cfetr Physics Team

    2018-04-01

    Time-dependent integrated modeling of DIII-D ITER-like and high bootstrap current plasma ramp-up discharges has been performed with the equilibrium code EFIT, and the transport codes TGYRO and ONETWO. Electron and ion temperature profiles are simulated by TGYRO with the TGLF (SAT0 or VX model) turbulent and NEO neoclassical transport models. The VX model is a new empirical extension of the TGLF turbulent model [Jian et al., Nucl. Fusion 58, 016011 (2018)], which captures the physics of multi-scale interaction between low-k and high-k turbulence from nonlinear gyro-kinetic simulation. This model is demonstrated to accurately model low Ip discharges from the EAST tokamak. Time evolution of the plasma current density profile is simulated by ONETWO with the experimental current ramp-up rate. The general trend of the predicted evolution of the current density profile is consistent with that obtained from the equilibrium reconstruction with Motional Stark effect constraints. The predicted evolution of βN , li , and βP also agrees well with the experiments. For the ITER-like cases, the predicted electron and ion temperature profiles using TGLF_Sat0 agree closely with the experimental measured profiles, and are demonstrably better than other proposed transport models. For the high bootstrap current case, the predicted electron and ion temperature profiles perform better in the VX model. It is found that the SAT0 model works well at high IP (>0.76 MA) while the VX model covers a wider range of plasma current ( IP > 0.6 MA). The results reported in this paper suggest that the developed integrated modeling could be a candidate for ITER and CFETR ramp-up engineering design modeling.

  16. Progress on ion cyclotron range of frequencies heating physics and technology in support of the International Tokamak Experimental Reactor

    NASA Astrophysics Data System (ADS)

    Wilson, J. R.; Bonoli, P. T.

    2015-02-01

    Ion cyclotron range of frequency (ICRF) heating is foreseen as an integral component of the initial ITER operation. The status of ICRF preparations for ITER and supporting research were updated in the 2007 [Gormezano et al., Nucl. Fusion 47, S285 (2007)] report on the ITER physics basis. In this report, we summarize progress made toward the successful application of ICRF power on ITER since that time. Significant advances have been made in support of the technical design by development of new techniques for arc protection, new algorithms for tuning and matching, carrying out experimental tests of more ITER like antennas and demonstration on mockups that the design assumptions are correct. In addition, new applications of the ICRF system, beyond just bulk heating, have been proposed and explored.

  17. Sub-aperture switching based ptychographic iterative engine (sasPIE) method for quantitative imaging

    NASA Astrophysics Data System (ADS)

    Sun, Aihui; Kong, Yan; Jiang, Zhilong; Yu, Wei; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-03-01

    Though ptychographic iterative engine (PIE) has been widely adopted in the quantitative micro-imaging with various illuminations as visible light, X-ray and electron beam, the mechanical inaccuracy in the raster scanning of the sample relative to the illumination always degrades the reconstruction quality seriously and makes the resolution reached much lower than that determined by the numerical aperture of the optical system. To overcome this disadvantage, the sub-aperture switching based PIE method is proposed: the mechanical scanning in the common PIE is replaced by the sub-aperture switching, and the reconstruction error related to the positioning inaccuracy is completely avoided. The proposed technique remarkably improves the reconstruction quality, reduces the complexity of the experimental setup and fundamentally accelerates the data acquisition and reconstruction.
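
    sasPIE retains the core object-update step of the PIE/ePIE family; only the mechanical scanning of the illumination is replaced by sub-aperture switching. The sketch below shows the standard update at a single probe position, with the measured Fourier modulus enforced and the object patch corrected by the change in the exit wave. Array sizes, the synthetic data, and the small regularization constant are assumptions, and the sub-aperture bookkeeping specific to sasPIE is not shown.

    ```python
    import numpy as np

    def pie_update(obj_patch, probe, measured_amplitude, alpha=1.0):
        """One PIE/ePIE-style update of an object patch at a single probe position.

        obj_patch, probe: complex 2-D arrays of equal shape
        measured_amplitude: recorded |FFT of the exit wave|
        """
        exit_wave = obj_patch * probe
        # Enforce the measured Fourier modulus while keeping the computed phase.
        fw = np.fft.fft2(exit_wave)
        fw_corrected = measured_amplitude * np.exp(1j * np.angle(fw))
        exit_corrected = np.fft.ifft2(fw_corrected)
        # Object update driven by the change in the exit wave.
        update = np.conj(probe) / (np.max(np.abs(probe)) ** 2 + 1e-12)
        return obj_patch + alpha * update * (exit_corrected - exit_wave)

    if __name__ == "__main__":
        # Synthetic single-position demonstration with an assumed Gaussian probe.
        rng = np.random.default_rng(0)
        true_obj = np.exp(1j * rng.uniform(0.0, 1.0, (32, 32)))
        r2 = (np.arange(32) - 16) ** 2
        probe = np.exp(-r2[:, None] / 50.0 - r2[None, :] / 50.0)
        measured = np.abs(np.fft.fft2(true_obj * probe))
        guess = np.ones((32, 32), dtype=complex)
        for _ in range(50):
            guess = pie_update(guess, probe, measured)
        residual = np.linalg.norm(np.abs(np.fft.fft2(guess * probe)) - measured)
        print("Fourier-modulus residual after 50 updates:", residual)
    ```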

  18. Enhancing the Usability of an Optical Reader System to Support Point-of-Care Rapid Diagnostic Testing: An Iterative Design Approach.

    PubMed

    Hohenstein, Jess; O'Dell, Dakota; Murnane, Elizabeth L; Lu, Zhengda; Erickson, David; Gay, Geri

    2017-11-21

    In today's health care environment, increasing costs and inadequate medical resources have created a worldwide need for more affordable diagnostic tools that are also portable, fast, and easy to use. To address this issue, numerous research and commercial efforts have focused on developing rapid diagnostic technologies; however, the efficacy of existing systems has been hindered by usability problems or high production costs, making them infeasible for deployment in at-home, point-of-care (POC), or resource-limited settings. The aim of this study was to create a low-cost optical reader system that integrates with any smart device and accepts any type of rapid diagnostic test strip to provide fast and accurate data collection, sample analysis, and diagnostic result reporting. An iterative design methodology was employed by a multidisciplinary research team to engineer three versions of a portable diagnostic testing device that were evaluated for usability and overall user receptivity. Repeated design critiques and usability studies identified a number of system requirements and considerations (eg, software compatibility, biomatter contamination, and physical footprint) that we worked to incrementally incorporate into successive system variants. Our final design phase culminated in the development of Tidbit, a reader that is compatible with any Wi-Fi-enabled device and test strip format. The Tidbit includes various features that support intuitive operation, including a straightforward test strip insertion point, external indicator lights, concealed electronic components, and an asymmetric shape, which inherently signals correct device orientation. Usability testing of the Tidbit indicates high usability for potential user communities. This study presents the design process, specification, and user reception of the Tidbit, an inexpensive, easy-to-use, portable optical reader for fast, accurate quantification of rapid diagnostic test results. Usability testing suggests that the reader is usable among and can benefit a wide group of potential users, including in POC contexts. Generally, the methodology of this study demonstrates the importance of testing these types of systems with potential users and exemplifies how iterative design processes can be employed by multidisciplinary research teams to produce compelling technological solutions. ©Jess Hohenstein, Dakota O'Dell, Elizabeth L Murnane, Zhengda Lu, David Erickson, Geri Gay. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 21.11.2017.

  19. Conceptual Design of the ITER ECE Diagnostic - An Update

    NASA Astrophysics Data System (ADS)

    Austin, M. E.; Pandya, H. K. B.; Beno, J.; Bryant, A. D.; Danani, S.; Ellis, R. F.; Feder, R.; Hubbard, A. E.; Kumar, S.; Ouroua, A.; Phillips, P. E.; Rowan, W. L.

    2012-09-01

    The ITER ECE diagnostic has recently been through a conceptual design review for the entire system including front end optics, transmission line, and back-end instruments. The basic design of two viewing lines, each with a single ellipsoidal mirror focussing into the plasma near the midplane of the typical operating scenarios is agreed upon. The location and design of the hot calibration source and the design of the shutter that directs its radiation to the transmission line are issues that need further investigation. In light of recent measurements and discussion, the design of the broadband transmission line is being revisited and new options contemplated. For the instruments, current systems for millimeter wave radiometers and broad-band spectrometers will be adequate for ITER, but the option for employing new state-of-the-art techniques will be left open.

  20. Design and construction of multigenic constructs for plant biotechnology using the GoldenBraid cloning strategy.

    PubMed

    Sarrion-Perdigones, Alejandro; Palaci, Jorge; Granell, Antonio; Orzaez, Diego

    2014-01-01

    GoldenBraid (GB) is an iterative and standardized DNA assembly system specially designed for Multigene Engineering in Plant Synthetic Biology. GB is based on restriction-ligation reactions using type IIS restriction enzymes. GB comprises a collection of standard DNA pieces named "GB parts" and a set of destination plasmids (pDGBs) that incorporate the multipartite assembly of standardized DNA parts. GB reactions are extremely efficient: two transcriptional units (TUs) can be assembled from several basic GB parts into one T-DNA in less than 24 h. Moreover, larger assemblies comprising 4-5 TUs are routinely built in less than 2 working weeks. Here we provide a detailed view of the GB methodology. As a practical example, a Bimolecular Fluorescence Complementation construct comprising four TUs in a 12 kb DNA fragment is presented.

  1. Design variables for mechanical properties of bone tissue scaffolds.

    PubMed

    Howk, Daniel; Chu, Tien-Min G

    2006-01-01

    The reconstruction of a segmental defect in long bone is a clinical challenge. Multiple surgeries are typically required to restore the structure and function of the affected defect site. To repair such defects, a biodegradable bone tissue engineering scaffold can be used. This scaffold acts as a carrier of proteins and growth factors, while also supporting the load that the bone would normally sustain, until the natural bone can regenerate in its place. Work was done to optimize an existing solid free-form scaffold design. The goal of the optimization was to increase the porosity of the scaffold while maintaining the strength of a previously-tested prototype design. With this in mind, eight new designs were created. These designs were drawn using CAD software, and the theoretical ultimate compressive strength of each design was then obtained through finite element analysis. Each scaffold design was constructed by casting a thermal-curable poly(propylene fumarate)/tricalcium phosphate (PPF/TCP) suspension into wax molds fabricated on an inkjet-printing rapid prototyping machine. The constructs were then experimentally tested by applying a uniaxial compressive load. The theoretical and experimental values of ultimate compressive strength and specific strength of each design were compared. Theoretically, the best scaffold design produced from this work improved upon the current design by increasing the porosity by 46% and also increasing the ultimate compressive strength by 27%. The experimental data were found to match the theoretical strength in four designs but to deviate from it in five designs. The reasons for the deviations and their relation to the rapid prototyping manufacturing technique were discussed. The results of this work show that it is possible to increase the porosity and strength of a bone tissue engineering scaffold through simple iterations in architectural design.
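
    The two figures of merit traded in the study, porosity and specific strength, follow from simple volume and density bookkeeping once an FEA or test strength is available. The sketch below computes both for a candidate design; the strut volumes, material density, and strength value are assumed placeholders, not the paper's data.

    ```python
    # Quick calculation of porosity and specific strength for a candidate scaffold.
    # The volumes, material density, and strength below are assumed placeholders.

    def porosity(solid_volume_mm3, envelope_volume_mm3):
        return 1.0 - solid_volume_mm3 / envelope_volume_mm3

    def specific_strength(ultimate_strength_mpa, solid_volume_mm3,
                          envelope_volume_mm3, material_density_g_cm3=1.6):
        # Apparent density of the porous construct (g/cm^3).
        apparent_density = material_density_g_cm3 * solid_volume_mm3 / envelope_volume_mm3
        return ultimate_strength_mpa / apparent_density

    if __name__ == "__main__":
        v_solid, v_envelope = 180.0, 500.0          # mm^3 (assumed)
        print(f"porosity          = {porosity(v_solid, v_envelope):.2f}")
        print(f"specific strength = "
              f"{specific_strength(12.0, v_solid, v_envelope):.2f} MPa/(g/cm^3)")
    ```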

  2. Research Capabilities for Oil-Free Turbomachinery Expanded by New Rotordynamic Simulator Facility

    NASA Technical Reports Server (NTRS)

    Howard, Samuel A.

    2004-01-01

    A new test rig has been developed for simulating high-speed turbomachinery shafting using Oil-Free foil air bearing technology. Foil air journal bearings are self-acting hydrodynamic bearings with a flexible inner sleeve surface, using air as the lubricant. These bearings have been used in turbomachinery, primarily air cycle machines, for the past four decades to eliminate the need for oil lubrication. More recently, interest has been growing in applying foil bearings to aircraft gas turbine engines. They offer potential improvements in efficiency and power density, decreased maintenance costs, and other secondary benefits. The goal of applying foil air bearings to aircraft gas turbine engines prompted the fabrication of this test rig. The facility enables bearing designers to test potential bearing designs with shafts that simulate the rotating components of a target engine without the high cost of building actual flight hardware. The data collected from this rig can be used to make changes to the shaft and bearings in subsequent design iterations. The rest of this article describes the new test rig and demonstrates some of its capabilities with an initial simulated shaft system. The test rig has two support structures, each housing a foil air journal bearing. The structures are designed to accept any size foil journal bearing smaller than 63 mm (2.5 in.) in diameter. The bearing support structures are mounted to a 91- by 152-cm (3- by 5-ft) table and can be separated by as much as 122 cm (4 ft) and as little as 20 cm (8 in.) to accommodate a wide range of shaft sizes. In the initial configuration, a 9.5-cm (3.75-in.) impulse air turbine drives the test shaft. The impulse turbine, as well as virtually any number of "dummy" compressor and turbine disks, can be mounted on the shaft inboard or outboard of the bearings. This flexibility allows researchers to simulate various engine shaft configurations. The bearing support structures include a unique bearing mounting fixture that rotates to accommodate a laser-based alignment system. This can measure the misalignment of the bearing centers in each of 2 translational degrees of freedom and 2 rotational degrees of freedom. In the initial configuration, with roughly a 30.5-cm- (12-in.-) long shaft, two simulated aerocomponent disks, and two 5.08-cm (2-in.) foil journal bearings, the rig can operate at 65,000 rpm at room temperature. The test facility can measure shaft displacements in both the vertical and horizontal directions at each bearing location. Horizontal and vertical structural vibrations are monitored using accelerometers mounted on the bearing support structures. This information is used to determine system rotordynamic response, including critical speeds, mode shapes, orbit size and shape, and potentially the onset of instabilities. Bearing torque can be monitored as well to predict the power loss in the foil bearings. All of this information is fed back and forth between NASA and the foil bearing designers in an iterative fashion to converge on a final bearing and shaft design for a given engine application. In addition to its application development capabilities, the test rig offers several unique capabilities for basic bearing research. Using the laser alignment system mentioned earlier, the facility will be used to map foil air journal bearing performance. A known misalignment of increasing severity will be induced to determine the sensitivity of foil bearings to misalignment.
Other future plans include oil-free integral starter generator testing and development, and dynamic load testing of foil journal bearings.

  3. Conceptual Design of the ITER Plasma Control System

    NASA Astrophysics Data System (ADS)

    Snipes, J. A.

    2013-10-01

    The conceptual design of the ITER Plasma Control System (PCS) has been approved, and the preliminary design of the first-plasma PCS has begun. This effort is a collaboration of many plasma control experts from existing devices, who will design and test plasma control techniques applicable to ITER on existing machines. The conceptual design considered all phases of plasma operation, ranging from non-active H/He plasmas through high fusion gain inductive DT plasmas to fully non-inductive steady-state operation, to ensure that the PCS control functionality and architecture can satisfy the demands of the ITER Research Plan. The PCS will control plasma equilibrium and density, plasma heat exhaust, a range of MHD instabilities (including disruption mitigation), and the non-inductive current profile required to maintain stable steady-state scenarios. The PCS architecture requires sophisticated shared actuator management and event handling systems to prioritize control goals, algorithms, and actuators according to dynamic control needs, and to monitor plasma and plant system events to trigger automatic changes in the control algorithms or operational scenario, depending on real-time operating limits and conditions.

  4. Description of the prototype diagnostic residual gas analyzer for ITER.

    PubMed

    Younkin, T R; Biewer, T M; Klepper, C C; Marcus, C

    2014-11-01

    The diagnostic residual gas analyzer (DRGA) system to be used during ITER tokamak operation is being designed at Oak Ridge National Laboratory to measure fuel ratios (deuterium and tritium), fusion ash (helium), and impurities in the plasma. The eventual purpose of this instrument is machine protection, basic control, and physics on ITER. Prototyping is ongoing to optimize the hardware setup and measurement capabilities. The DRGA prototype comprises a vacuum system and measurement technologies that will overlap to meet ITER measurement requirements. The three technologies included in this diagnostic are a quadrupole mass spectrometer, an ion trap mass spectrometer, and an optical Penning gauge, which are designed to document relative and absolute gas concentrations.

  5. Multidisciplinary optimization of an HSCT wing using a response surface methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giunta, A.A.; Grossman, B.; Mason, W.H.

    1994-12-31

    Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.
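
    As an illustrative sketch only (not taken from the record above), the following Python fragment shows the general idea of a response surface: a smooth polynomial is fitted by least squares to noisy analysis results so that gradient-based optimization can work on the smooth approximation rather than the noisy model. The function noisy_analysis and all numbers are hypothetical.

      # Illustrative sketch: fit a quadratic response surface to noisy analysis
      # results, then optimize on the smooth approximation.
      import numpy as np

      def noisy_analysis(x):
          # Hypothetical expensive analysis: smooth trend plus low-amplitude,
          # high-frequency numerical noise from incomplete iterative convergence.
          return (x - 1.3) ** 2 + 2.0 + 1e-3 * np.sin(500.0 * x)

      # Sample the design space (1-D here for clarity) and fit a quadratic surface.
      x_samples = np.linspace(0.0, 3.0, 25)
      y_samples = np.array([noisy_analysis(x) for x in x_samples])
      coeffs = np.polyfit(x_samples, y_samples, deg=2)   # least-squares fit

      # The fitted surface is smooth, so its minimizer is found analytically.
      a, b, _ = coeffs
      x_opt = -b / (2.0 * a)
      print(f"approximate optimum near x = {x_opt:.3f}")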

  6. RF Pulse Design using Nonlinear Gradient Magnetic Fields

    PubMed Central

    Kopanoglu, Emre; Constable, R. Todd

    2014-01-01

    Purpose: An iterative k-space trajectory and radio-frequency (RF) pulse design method is proposed for Excitation using Nonlinear Gradient Magnetic fields (ENiGMa). Theory and Methods: The spatial encoding functions (SEFs) generated by nonlinear gradient fields (NLGFs) are linearly dependent in Cartesian coordinates. Left uncorrected, this may lead to flip-angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a Matching-Pursuit algorithm, and the RF pulse is designed using a Conjugate-Gradient algorithm. Three variants of the proposed approach are given: the full algorithm, a computationally cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. Results: The method is compared to other iterative (Matching-Pursuit and Conjugate-Gradient) and non-iterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity significantly. Conclusion: An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest. PMID:25203286
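
    The following Python sketch illustrates, in a greatly simplified form, the kind of greedy selection the record describes: a Matching-Pursuit-style loop picks spatial encoding functions (dictionary columns) one at a time, and a least-squares solve stands in for the Conjugate-Gradient RF weight computation. The dictionary, target profile and sizes are hypothetical, not the authors' data or code.

      # Illustrative sketch: greedy matching-pursuit selection of encoding
      # functions followed by a least-squares weight solve.
      import numpy as np

      rng = np.random.default_rng(0)
      n_vox, n_sef = 256, 64
      D = rng.standard_normal((n_vox, n_sef)) + 1j * rng.standard_normal((n_vox, n_sef))
      target = np.ones(n_vox, dtype=complex)          # desired flip-angle profile

      used = np.zeros(n_sef, dtype=bool)
      selected = []
      residual = target.copy()
      for _ in range(8):                              # pick 8 encoding functions
          scores = np.abs(D.conj().T @ residual)      # correlation with residual
          scores[used] = -np.inf                      # do not reselect
          k = int(np.argmax(scores))
          used[k] = True
          selected.append(k)
          A = D[:, selected]
          w, *_ = np.linalg.lstsq(A, target, rcond=None)   # CG would be used here
          residual = target - A @ w

      print("selected SEF indices:", selected)
      print("residual norm:", np.linalg.norm(residual))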

  7. Electromagnetic Analysis For The Design Of ITER Diagnostic Port Plugs During Plasma Disruptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Y

    2014-03-03

    ITER diagnostic port plugs perform many functions, including structural support of diagnostic systems under high electromagnetic loads while allowing for diagnostic access to the plasma. The design of diagnostic equatorial port plugs (EPPs) is largely driven by electromagnetic loads and the associated response of the EPP structure during plasma disruptions and VDEs. This paper summarizes results of transient electromagnetic analysis using Opera 3d in support of the design activities for ITER diagnostic EPPs. A complete distribution of disruption loads on the Diagnostic First Walls (DFWs), Diagnostic Shield Modules (DSMs) and the EPP structure, as well as the impact on system design integration due to electrical contact among various EPP structural components, are discussed.

  8. Loads specification and embedded plate definition for the ITER cryoline system

    NASA Astrophysics Data System (ADS)

    Badgujar, S.; Benkheira, L.; Chalifour, M.; Forgeas, A.; Shah, N.; Vaghela, H.; Sarkar, B.

    2015-12-01

    ITER cryolines (CLs) are a complex network of vacuum-insulated multi- and single-process pipe lines, distributed in three different areas at the ITER site. The CLs will be subject to different operating loads during the machine lifetime, considered as nominal, occasional or exceptional. The major loads forming the design basis (inertial, pressure, temperature, assembly, magnetic, snow, wind and enforced relative displacement) are put together in a loads specification. Based on the defined load combinations, a conceptual estimation of reaction loads has been carried out for the lines located inside the Tokamak building. Adequate numbers of embedded plates (EPs) per line have been defined and integrated in the building design. The finalization of building EPs to support the lines, before the detailed design, is one of the major design challenges, as the usual design logic may be altered. At the ITER project level, it was important to finalize EPs to allow adequate design and timely availability of the Tokamak building. The paper describes the single loads and load combinations considered in the loads specification, and the approach for conceptual load estimation and selection of EPs for the Toroidal Field (TF) cryoline as an example, by grouping the load combinations into two main load categories: pressure and seismic.

  9. Integrating Low-Cost Rapid Usability Testing into Agile System Development of Healthcare IT: A Methodological Perspective.

    PubMed

    Kushniruk, Andre W; Borycki, Elizabeth M

    2015-01-01

    The development of more usable and effective healthcare information systems has become a critical issue. In the software industry, methodologies such as agile and iterative development processes have emerged to produce more effective and usable systems. These approaches focus on user needs and promote iterative and flexible development practices. Evaluation and testing of iterative agile development cycles is considered an important part of the agile methodology and of iterative processes for system design and re-design. However, the issue of how to effectively integrate usability testing methods into rapid and flexible agile design cycles has yet to be fully explored. In this paper we describe our application of an approach known as low-cost rapid usability testing as it has been applied within agile system development in healthcare. The advantages of the integrative approach are described, along with current methodological considerations.

  10. Physics and technology in the ion-cyclotron range of frequency on Tore Supra and TITAN test facility: implication for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litaudon, X; Bernard, J. M.; Colas, L.

    2013-01-01

    To support the design of an ITER ion-cyclotron range of frequency heating (ICRH) system and to mitigate risks of operation in ITER, CEA has initiated an ambitious Research & Development program accompanied by experiments on Tore Supra or test-bed facility together with a significant modelling effort. The paper summarizes the recent results in the following areas: Comprehensive characterization (experiments and modelling) of a new Faraday screen concept tested on the Tore Supra antenna. A new model is developed for calculating the ICRH sheath rectification at the antenna vicinity. The model is applied to calculate the local heat flux on Tore Supra and ITER ICRH antennas. Full-wave modelling of ITER ICRH heating and current drive scenarios with the EVE code. With 20 MW of power, a current of 400 kA could be driven on axis in the DT scenario. Comparison between DT and DT(3He) scenario is given for heating and current drive efficiencies. First operation of CW test-bed facility, TITAN, designed for ITER ICRH components testing and could host up to a quarter of an ITER antenna. R&D of high permittivity materials to improve load of test facilities to better simulate ITER plasma antenna loading conditions.

  11. Seismic Design of ITER Component Cooling Water System-1 Piping

    NASA Astrophysics Data System (ADS)

    Singh, Aditya P.; Jadhav, Mahesh; Sharma, Lalit K.; Gupta, Dinesh K.; Patel, Nirav; Ranjan, Rakesh; Gohil, Guman; Patel, Hiren; Dangi, Jinendra; Kumar, Mohit; Kumar, A. G. A.

    2017-04-01

    The successful performance of the ITER machine depends very much upon the effective removal of heat from the in-vessel components and other auxiliary systems during Tokamak operation. This objective will be accomplished by the design of an effective Cooling Water System (CWS). The optimized piping layout design is an important element in CWS design and is one of the major design challenges owing to the factors of large thermal expansion and seismic accelerations, considering safety, accessibility and maintainability aspects. An important sub-system of the ITER CWS, Component Cooling Water System-1 (CCWS-1), has very large pipe diameters, up to DN1600, with many intersections to fulfill the process flow requirements of clients for heat removal. A pipe intersection is the weakest link in the layout due to its high stress intensification factor. CCWS-1 piping up to the secondary confinement isolation valves, as well as in-between these isolation valves, needs to survive a Seismic Level-2 (SL-2) earthquake during the Tokamak operation period to ensure structural stability of the system in the Safe Shutdown Earthquake (SSE) event. This paper presents the design, qualification and optimization of the layout of the ITER CCWS-1 loop to withstand the SSE event combined with sustained and thermal loads, as per the load combinations defined by ITER and the allowable limits as per ASME B31.3. This paper also highlights the Modal and Response Spectrum Analyses performed to determine the natural frequencies and system behavior during the seismic event.
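
    As a minimal illustration of the modal analysis step mentioned above (not the actual piping model), the sketch below extracts natural frequencies from the generalized eigenvalue problem K phi = omega^2 M phi for a hypothetical lumped-mass system; the matrices are placeholders.

      # Illustrative sketch: natural frequencies from a lumped-mass model, the
      # first step of a modal / response-spectrum check.
      import numpy as np
      from scipy.linalg import eigh

      # Hypothetical 3-DOF stiffness (N/m) and mass (kg) matrices.
      K = np.array([[ 2.0, -1.0,  0.0],
                    [-1.0,  2.0, -1.0],
                    [ 0.0, -1.0,  1.0]]) * 1e7
      M = np.diag([500.0, 500.0, 250.0])

      eigvals, eigvecs = eigh(K, M)              # ascending omega^2
      freqs_hz = np.sqrt(eigvals) / (2.0 * np.pi)
      print("natural frequencies [Hz]:", np.round(freqs_hz, 2))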

  12. Finding the Optimal Guidance for Enhancing Anchored Instruction

    ERIC Educational Resources Information Center

    Zydney, Janet Mannheimer; Bathke, Arne; Hasselbring, Ted S.

    2014-01-01

    This study investigated the effect of different methods of guidance with anchored instruction on students' mathematical problem-solving performance. The purpose of this research was to iteratively design a learning environment to find the optimal level of guidance. Two iterations of the software were compared. The first iteration used explicit…

  13. Status of the Negative Ion Based Heating and Diagnostic Neutral Beams for ITER

    NASA Astrophysics Data System (ADS)

    Schunke, B.; Bora, D.; Hemsworth, R.; Tanga, A.

    2009-03-01

    The current baseline of ITER foresees 2 Heating Neutral Beam (HNB's) systems based on negative ion technology, each accelerating to 1 MeV 40 A of D- and capable of delivering 16.5 MW of D0 to the ITER plasma, with a 3rd HNB injector foreseen as an upgrade option [1]. In addition a dedicated Diagnostic Neutral Beam (DNB) accelerating 60 A of H- to 100 keV will inject ≈15 A equivalent of H0 for charge exchange recombination spectroscopy and other diagnostics. Recently the RF driven negative ion source developed by IPP Garching has replaced the filamented ion source as the reference ITER design. The RF source developed at IPP, which is approximately a quarter scale of the source needed for ITER, is expected to have reduced caesium consumption compared to the filamented arc driven ion source. The RF driven source has demonstrated adequate accelerated D- and H- current densities as well as long-pulse operation [2, 3]. It is foreseen that the HNB's and the DNB will use the same negative ion source. Experiments with a half ITER-size ion source are on-going at IPP and the operation of a full-scale ion source will be demonstrated, at full power and pulse length, in the dedicated Ion Source Test Bed (ISTF), which will be part of the Neutral Beam Test Facility (NBTF), in Padua, Italy. This facility will carry out the necessary R&D for the HNB's for ITER and demonstrate operation of the full-scale HNB beamline. An overview of the current status of the neutral beam (NB) systems and the chosen configuration will be given and the ongoing integration effort into the ITER plant will be highlighted. It will be demonstrated how installation and maintenance logistics have influenced the design, notably the top access scheme facilitating access for maintenance and installation. The impact of the ITER Design Review and recent design change requests (DCRs) will be briefly discussed, including start-up and commissioning issues. The low current hydrogen phase now envisaged for start-up imposed specific requirements for operating the HNB's at full beam power. It has been decided to address the shinethrough issue by installing wall armour protection, which increases the operational space in all scenarios. Other NB related issues identified by the Design Review process will be discussed and the possible changes to the ITER baseline indicated.

  14. Status of the Negative Ion Based Heating and Diagnostic Neutral Beams for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunke, B.; Bora, D.; Hemsworth, R.

    2009-03-12

    The current baseline of ITER foresees 2 Heating Neutral Beam (HNB's) systems based on negative ion technology, each accelerating to 1 MeV 40 A of D- and capable of delivering 16.5 MW of D0 to the ITER plasma, with a 3rd HNB injector foreseen as an upgrade option. In addition a dedicated Diagnostic Neutral Beam (DNB) accelerating 60 A of H- to 100 keV will inject ≈15 A equivalent of H0 for charge exchange recombination spectroscopy and other diagnostics. Recently the RF driven negative ion source developed by IPP Garching has replaced the filamented ion source as the reference ITER design. The RF source developed at IPP, which is approximately a quarter scale of the source needed for ITER, is expected to have reduced caesium consumption compared to the filamented arc driven ion source. The RF driven source has demonstrated adequate accelerated D- and H- current densities as well as long-pulse operation. It is foreseen that the HNB's and the DNB will use the same negative ion source. Experiments with a half ITER-size ion source are on-going at IPP and the operation of a full-scale ion source will be demonstrated, at full power and pulse length, in the dedicated Ion Source Test Bed (ISTF), which will be part of the Neutral Beam Test Facility (NBTF), in Padua, Italy. This facility will carry out the necessary R&D for the HNB's for ITER and demonstrate operation of the full-scale HNB beamline. An overview of the current status of the neutral beam (NB) systems and the chosen configuration will be given and the ongoing integration effort into the ITER plant will be highlighted. It will be demonstrated how installation and maintenance logistics have influenced the design, notably the top access scheme facilitating access for maintenance and installation. The impact of the ITER Design Review and recent design change requests (DCRs) will be briefly discussed, including start-up and commissioning issues. The low current hydrogen phase now envisaged for start-up imposed specific requirements for operating the HNB's at full beam power. It has been decided to address the shinethrough issue by installing wall armour protection, which increases the operational space in all scenarios. Other NB related issues identified by the Design Review process will be discussed and the possible changes to the ITER baseline indicated.

  15. Panel cutting method: new approach to generate panels on a hull in Rankine source potential approximation

    NASA Astrophysics Data System (ADS)

    Choi, Hee-Jong; Chun, Ho-Hwan; Park, Il-Ryong; Kim, Jin

    2011-12-01

    In the present study, a new hull panel generation algorithm, namely the panel cutting method, was developed to predict flow phenomena around a ship using the Rankine source potential based panel method, where an iterative method was used to satisfy the nonlinear free surface condition and the trim and sinkage of the ship were taken into account. Numerical computations were performed to investigate the validity of the proposed hull panel generation algorithm for the Series 60 (CB=0.60) hull and the KRISO container ship (KCS), a container ship designed by the Maritime and Ocean Engineering Research Institute (MOERI). The computational results were validated by comparison with existing experimental data.

  16. A heat transfer model for a hot helium airship

    NASA Astrophysics Data System (ADS)

    Rapert, R. M.

    1987-06-01

    Basic empirical and analytic heat transfer equations are applied to a double-envelope airship concept which uses heated helium in the inner envelope to augment and control gross lift. The convective and conductive terms lead to a linear system of five equations for the concept airship, with the nonlinear radiation terms included through an iterative solution process. Graphed results from FORTRAN program solutions are presented for the variables of interest. These indicate that a simple use of airship engine exhaust heat gives more than a 30 percent increase in gross airship lift. An increase of possibly more than 100 percent can be achieved if a 'stream injection' heating system, with its associated design problems, is used.
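
    A minimal sketch of the iterative treatment of the nonlinear radiation terms (hypothetical numbers, not the report's five-equation model): the radiation term is linearized about the previous temperature estimate and the resulting linear balance is re-solved until the temperature converges.

      # Illustrative sketch: fixed-point iteration on a single-node energy
      # balance with convection plus a linearized radiation term.
      SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W/m^2K^4
      Q_IN = 5.0e4            # hypothetical exhaust heat input to the envelope, W
      H_CONV, AREA = 6.0, 400.0   # assumed convection coefficient and area
      T_AMB = 288.0
      EPS = 0.8

      T = 350.0               # initial guess for envelope temperature, K
      for _ in range(50):
          # Radiation coefficient linearized about the previous iterate.
          h_rad = EPS * SIGMA * (T**2 + T_AMB**2) * (T + T_AMB)
          T_new = T_AMB + Q_IN / ((H_CONV + h_rad) * AREA)
          if abs(T_new - T) < 1e-6:
              break
          T = T_new
      print(f"converged envelope temperature: {T_new:.1f} K")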

  17. Cruise performance and range prediction reconsidered

    NASA Astrophysics Data System (ADS)

    Torenbeek, Egbert

    1997-05-01

    A unified analytical treatment of the cruise performance of subsonic transport aircraft is derived, valid for gas turbine powerplant installations: turboprop, turbojet and turbofan powered aircraft. Unlike the classical treatment, the present article deals with compressibility effects on the aerodynamic characteristics. Analytical criteria are derived for the optimum cruise lift coefficient and Mach number, with and without constraints on the altitude and engine rating. A simple alternative to the Bréguet range equation is presented which applies to several practical cruising flight techniques: flight at constant altitude and Mach number, and stepped cruise/climb. A practical non-iterative procedure for computing mission and reserve fuel loads in the preliminary design stage is proposed.
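
    For reference (a standard result, not quoted from the article), the classical Bréguet range equation for jet aircraft, to which the article offers an alternative, can be written as

      R \;=\; \frac{V}{c_T}\,\frac{L}{D}\,\ln\!\left(\frac{W_{\text{initial}}}{W_{\text{final}}}\right)

    where V is the cruise speed, c_T the thrust-specific fuel consumption, L/D the lift-to-drag ratio, and W_initial and W_final the aircraft weights at the start and end of cruise; it assumes cruise at constant speed and lift coefficient.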

  18. Radioisotope Stirling Engine Powered Airship for Low Altitude Operation on Venus

    NASA Technical Reports Server (NTRS)

    Colozza, Anthony J.

    2012-01-01

    The feasibility of a Stirling engine powered airship for the near surface exploration of Venus was evaluated. The heat source for the Stirling engine was limited to 10 general purpose heat source (GPHS) blocks. The baseline airship utilized hydrogen as the lifting gas and the electronics and payload were enclosed in a cooled insulated pressure vessel to maintain the internal temperature at 320 K and 1 Bar pressure. The propulsion system consisted of an electric motor driving a propeller. An analysis was set up to size the airship that could operate near the Venus surface based on the available thermal power. The atmospheric conditions on Venus were modeled and used in the analysis. The analysis was an iterative process between sizing the airship to carry a specified payload and the power required to operate the electronics, payload and cooling system as well as provide power to the propulsion system to overcome the drag on the airship. A baseline configuration was determined that could meet the power requirements and operate near the Venus surface. From this baseline design additional trades were made to see how other factors affected the design such as the internal temperature of the payload chamber and the flight altitude. In addition other lifting methods were evaluated such as an evacuated chamber, heated atmospheric gas and augmented heated lifting gas. However none of these methods proved viable.
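
    The sizing iteration described above can be illustrated with a deliberately simplified Python sketch (all constants are assumptions, not values from the study): the required envelope volume for buoyancy is recomputed from the total mass, which itself grows with the envelope area and lifting-gas mass, until the loop converges.

      # Illustrative sketch: converge airship volume against the mass it must lift.
      RHO_ATM = 65.0       # near-surface Venus atmosphere density, kg/m^3 (approx.)
      RHO_H2 = 3.0         # hydrogen density at ~92 bar, 735 K, kg/m^3 (assumed)
      AREAL_MASS = 0.8     # envelope areal density, kg/m^2 (assumed)
      M_FIXED = 300.0      # payload + pressure vessel + propulsion mass, kg (assumed)

      volume = 10.0        # initial guess, m^3
      for _ in range(100):
          surface = 4.84 * volume ** (2.0 / 3.0)        # sphere-like envelope area
          m_total = M_FIXED + AREAL_MASS * surface + RHO_H2 * volume
          volume_new = m_total / (RHO_ATM - RHO_H2)     # buoyancy = weight
          if abs(volume_new - volume) < 1e-6:
              break
          volume = volume_new
      print(f"converged envelope volume: {volume_new:.2f} m^3")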

  19. Least Squares Computations in Science and Engineering

    DTIC Science & Technology

    1994-02-01

    iterative least squares deblurring procedure. Because of the ill-posed characteristics of the deconvolution problem, in the presence of noise, direct...optimization methods. Generally, the problems are accompanied by constraints, such as bound constraints, and the observations are corrupted by noise. The...engineering. This effort has involved interaction with researchers in closed-loop active noise (vibration) control at Phillips Air Force Laboratory

  20. Integrated Collaborative Model in Research and Education with Emphasis on Small Satellite Technology

    DTIC Science & Technology

    1996-01-01

    feedback; the number of iterations in a complete iteration is referred to as loop depth or iteration depth, g(i). A data packet or packet is data...loop depth, g(i)) is either a finite (constant or variable) or an infinite value. 1) Finite loop depth, variable number of iterations Some problems...design time. The time needed for the first packet to leave and a new initial data to be introduced to the iteration is min(R * (g(k) * (N+I) + k-1

  1. Solution algorithms for nonlinear transient heat conduction analysis employing element-by-element iterative strategies

    NASA Technical Reports Server (NTRS)

    Winget, J. M.; Hughes, T. J. R.

    1985-01-01

    The particular problems investigated in the present study arise from nonlinear transient heat conduction. One of two types of nonlinearities considered is related to a material temperature dependence which is frequently needed to accurately model behavior over the range of temperature of engineering interest. The second nonlinearity is introduced by radiation boundary conditions. The finite element equations arising from the solution of nonlinear transient heat conduction problems are formulated. The finite element matrix equations are temporally discretized, and a nonlinear iterative solution algorithm is proposed. Algorithms for solving the linear problem are discussed, taking into account the form of the matrix equations, Gaussian elimination, cost, and iterative techniques. Attention is also given to approximate factorization, implementational aspects, and numerical results.
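
    A minimal sketch of such a nonlinear iterative solution (assumptions only, not the paper's element-by-element scheme): backward-Euler time stepping of a 1-D rod with a temperature-dependent conductivity and a radiating end node, with Picard iteration inside each time step. The material values are placeholders.

      # Illustrative sketch: implicit time stepping with Picard iteration for
      # temperature-dependent conduction and a radiation boundary condition.
      import numpy as np

      SIGMA, EPS = 5.670e-8, 0.9
      N, DX, DT = 11, 0.01, 50.0         # nodes, spacing (m), time step (s)
      RHO_C = 4.0e6                      # volumetric heat capacity, J/m^3K (assumed)
      T_ENV = 300.0

      def conductivity(T):
          return 10.0 + 0.02 * (T - 300.0)     # hypothetical k(T), W/mK

      T = np.full(N, 800.0)              # initial temperature field, K
      for step in range(20):
          T_old = T.copy()
          for _ in range(30):            # Picard iterations within the time step
              k = conductivity(T)
              A = np.zeros((N, N))
              b = RHO_C * DX / DT * T_old
              for i in range(N):
                  A[i, i] += RHO_C * DX / DT
                  if i > 0:
                      kw = 0.5 * (k[i] + k[i - 1]) / DX
                      A[i, i] += kw
                      A[i, i - 1] -= kw
                  if i < N - 1:
                      ke = 0.5 * (k[i] + k[i + 1]) / DX
                      A[i, i] += ke
                      A[i, i + 1] -= ke
              # Radiation boundary at the last node, linearized about current T.
              h_rad = EPS * SIGMA * (T[-1]**2 + T_ENV**2) * (T[-1] + T_ENV)
              A[-1, -1] += h_rad
              b[-1] += h_rad * T_ENV
              T_new = np.linalg.solve(A, b)
              converged = np.max(np.abs(T_new - T)) < 1e-6
              T = T_new
              if converged:
                  break
      print("end temperatures (K):", np.round(T, 1))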

  2. An improved Michelson interferometer: smoothing out the rough spots for a more effective teaching tool

    NASA Astrophysics Data System (ADS)

    Eastman, Clarke K.

    2017-08-01

    The Michelson interferometer is a classic tool for demonstrating the wave nature of light, and it is a cornerstone of the optics curriculum. But many students' experiences with this device are higher in frustration than they are in learning. That situation motivated an effort to make aligning the tool less a test of visual acuity and patience, and more of an introduction to optics phenomena and optical engineering. Key improvements included an added beam-splitter to accommodate multiple observers, a modified telescope to quickly and reliably obtain parallel mirrors, and a series of increasing spectral-width light sources to obtain equal path lengths. This greatly improved students' chances of success, as defined by achieving "white light fringes". When presenting these new features to the students, high importance is placed on understanding why alignment was so difficult with the original design, and why the changes made alignment easier. By exposing the rationale behind the improvements, students can observe the process of problem-solving in an optical engineering scenario. Equally important is the demonstration that solutions can be devised or adapted based on the parts at hand, and that implementations only achieve a highly "polished" state after several design iterations.

  3. An iterative synthetic approach to engineer a high-performing PhoB-specific reporter.

    PubMed

    Stoudenmire, Julie L; Essock-Burns, Tara; Weathers, Erena N; Solaimanpour, Sina; Mrázek, Jan; Stabb, Eric V

    2018-05-11

    Transcriptional reporters are common tools for analyzing either the transcription of a gene of interest or the activity of a specific transcriptional regulator. Unfortunately, the latter application has the shortcoming that native promoters did not evolve as optimal readouts for the activity of a particular regulator. We sought to synthesize an optimized transcriptional reporter for assessing PhoB activity, aiming for maximal "on" expression when PhoB is active, minimal background in the "off" state, and no control elements for other regulators. We designed specific sequences for promoter elements with appropriately spaced PhoB-binding sites, and the bases were randomized at nineteen additional intervening nucleotide positions for which we did not predict sequence-specific effects. Eighty-three such constructs were screened in Vibrio fischeri, enabling us to identify bases at particular randomized positions that significantly correlated with high "on" or low "off" expression. A second round of promoter design rationally constrained thirteen additional positions, leading to a reporter with high PhoB-dependent expression, essentially no background, and no other known regulatory elements. As expressed reporters, we used both stable and destabilized GFP, the latter with a half-life of eighty-one minutes in V. fischeri. In culture, PhoB induced the reporter when phosphate was depleted below 10 μM. During symbiotic colonization of its host squid Euprymna scolopes, the reporter indicated heterogeneous phosphate availability in different light-organ microenvironments. Finally, testing this construct in other Proteobacteria demonstrated its broader utility. The results illustrate how a limited ability to predict synthetic promoter-reporter performance can be overcome through iterative screening and re-engineering. IMPORTANCE: Transcriptional reporters can be powerful tools for assessing when a particular regulator is active; however, native promoters may not be ideal for this purpose. Optimal reporters should be specific to the regulator being examined and should maximize the difference between "on" and "off" states; however, these properties are distinct from the selective pressures driving the evolution of natural promoters. Synthetic promoters offer a promising alternative, but our understanding often does not enable fully predictive promoter design, and the large number of alternative sequence possibilities can be intractable. In a synthetic promoter region with over thirty-four billion sequence variants, we identified bases correlated with favorable performance by screening only eighty-three candidates, allowing us to rationally constrain our design. We thereby generated an optimized reporter that is induced by PhoB and used it to explore the low-phosphate response of V. fischeri. This promoter-design strategy will facilitate the engineering of other regulator-specific reporters. Copyright © 2018 American Society for Microbiology.
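
    A toy Python sketch of the screening-correlation idea (entirely hypothetical data, not the study's measurements): for each randomized promoter position, variants are grouped by the base present there and ranked by mean "on" expression, mimicking how correlated positions could be constrained in a second design round.

      # Illustrative sketch: rank bases at each randomized position by the mean
      # "on" expression of the screened variants carrying that base.
      from collections import defaultdict
      import statistics

      variants = [                # (sequence at randomized positions, on-state expression)
          ("ATGC", 950.0), ("ATGA", 910.0), ("CTGC", 310.0),
          ("ATCC", 880.0), ("GTGC", 450.0), ("CTCA", 290.0),
      ]

      for pos in range(4):
          by_base = defaultdict(list)
          for seq, expr in variants:
              by_base[seq[pos]].append(expr)
          ranking = sorted(by_base.items(),
                           key=lambda kv: statistics.mean(kv[1]), reverse=True)
          best_base, values = ranking[0]
          print(f"position {pos}: favor '{best_base}' "
                f"(mean on-expression {statistics.mean(values):.0f})")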

  4. Design of ITER divertor VUV spectrometer and prototype test at KSTAR tokamak

    NASA Astrophysics Data System (ADS)

    Seon, Changrae; Hong, Joohwan; Song, Inwoo; Jang, Juhyeok; Lee, Hyeonyong; An, Younghwa; Kim, Bosung; Jeon, Taemin; Park, Jaesun; Choe, Wonho; Lee, Hyeongon; Pak, Sunil; Cheon, MunSeong; Choi, Jihyeon; Kim, Hyeonseok; Biel, Wolfgang; Bernascolle, Philippe; Barnsley, Robin; O'Mullane, Martin

    2017-12-01

    Design and development of the ITER divertor VUV spectrometer have been under way since 1998, and the system is planned to be installed in 2027. Currently, the ITER divertor VUV spectrometer is in the detailed design phase. It is optimized for monitoring chord-integrated VUV signals from divertor plasmas, chosen to contain representative line emission from tungsten, the divertor material, and other impurities. Impurity emission from the overall divertor plasma is collimated through the relay optics onto the entrance slit of a VUV spectrometer with a working wavelength range of 14.6-32 nm. To validate the design of the ITER divertor VUV spectrometer, two sets of VUV spectrometers have been developed and tested at the KSTAR tokamak. One spectrometer set, without the field mirror, employs a survey spectrometer covering wavelengths from 14.6 nm to 32 nm, and it provides the same optical specification as the spectrometer part of the ITER divertor VUV spectrometer system. The other spectrometer, with a wavelength range of 5-25 nm, consists of a commercial spectrometer with a concave grating and relay mirrors with the same geometry as the relay mirrors of the ITER divertor VUV spectrometer. From tests of these prototypes, the alignment method using backward laser illumination could be verified. Furthermore, to validate the feasibility of tungsten emission measurement, tungsten powder was injected into KSTAR plasmas, and a preliminary result could be obtained successfully with regard to the evaluation of photon throughput. Contribution to the Topical Issue "Atomic and Molecular Data and their Applications", edited by Gordon W.F. Drake, Jung-Sik Yoon, Daiji Kato, Grzegorz Karwasz.

  5. Ultra-fast quantitative imaging using ptychographic iterative engine based digital micro-mirror device

    NASA Astrophysics Data System (ADS)

    Sun, Aihui; Tian, Xiaolin; Kong, Yan; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-01-01

    As a lensfree imaging technique, the ptychographic iterative engine (PIE) method can provide both quantitative sample amplitude and phase distributions while avoiding aberration. However, it requires field-of-view (FoV) scanning, often relying on mechanical translation, which not only slows down the measurement but also introduces mechanical errors that decrease both the resolution and the accuracy of the retrieved information. In order to achieve highly accurate quantitative imaging at high speed, a digital micromirror device (DMD) is adopted in PIE, with large-FoV scanning controlled by on/off state coding of the DMD. Measurements were implemented using biological samples as well as a USAF resolution target, proving high resolution in quantitative imaging using the proposed system. Considering its fast and accurate imaging capability, it is believed that the DMD-based PIE technique provides a potential solution for medical observation and measurement.
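
    The core PIE-style update can be sketched as follows in Python (a generic single-position phase-retrieval loop with placeholder arrays, not the authors' DMD implementation, which uses many overlapping scan positions): the measured diffraction modulus is enforced in the far field and the object estimate is corrected by the probe-weighted difference of exit waves.

      # Illustrative sketch: one-position PIE/ePIE-style object update with a
      # far-field modulus constraint.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 64
      grid = (np.arange(n) - n / 2) ** 2
      probe = np.exp(-grid[:, None] / 200.0 - grid[None, :] / 200.0).astype(complex)

      true_obj = rng.random((n, n)) * np.exp(1j * rng.random((n, n)))
      measured_amp = np.abs(np.fft.fft2(probe * true_obj))   # recorded diffraction modulus

      obj_est = np.ones((n, n), dtype=complex)               # initial object guess
      for _ in range(100):                                   # repeated update sweeps
          psi = probe * obj_est                              # exit-wave estimate
          Psi = np.fft.fft2(psi)
          Psi = measured_amp * np.exp(1j * np.angle(Psi))    # enforce measured modulus
          psi_new = np.fft.ifft2(Psi)
          obj_est += np.conj(probe) / np.max(np.abs(probe) ** 2) * (psi_new - psi)

      residual = np.linalg.norm(np.abs(np.fft.fft2(probe * obj_est)) - measured_amp)
      print("diffraction-space residual:", residual)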

  6. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.

    In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), is a code designed using neural network technology. The artificial intelligence approach of the neural network does not solve mathematical equations. By using the knowledge stored in the synaptic weights of a properly trained neural network, the code is able to unfold the neutron spectrum and to simultaneously calculate 15 dosimetric quantities, needing as input data only the count rates measured with a Bonner spheres system. Similarities of the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed with a graphical interface under the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. Differences between these codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum in order to initiate the iterative procedure; in NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meters using the fluence-dose conversion coefficients. The NSDann code uses artificial neural networks for solving the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, in the neural network approach it is possible to reduce the count rates used to unfold the neutron spectrum. To evaluate these codes, a computer tool called Neutron Spectrometry and dosimetry computer tool was designed. The results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.

  7. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-07-01

    In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), is a code designed using neural network technology. The artificial intelligence approach of the neural network does not solve mathematical equations. By using the knowledge stored in the synaptic weights of a properly trained neural network, the code is able to unfold the neutron spectrum and to simultaneously calculate 15 dosimetric quantities, needing as input data only the count rates measured with a Bonner spheres system. Similarities of the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed with a graphical interface under the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. Differences between these codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum in order to initiate the iterative procedure; in NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meters using the fluence-dose conversion coefficients. The NSDann code uses artificial neural networks for solving the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, in the neural network approach it is possible to reduce the count rates used to unfold the neutron spectrum. To evaluate these codes, a computer tool called Neutron Spectrometry and dosimetry computer tool was designed. The results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.
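
    For illustration only, the Python sketch below applies a generic multiplicative iterative unfolding update (MLEM-like; the SPUNIT algorithm used by NSDUAZ differs in its details) to a synthetic Bonner-sphere problem with a hypothetical response matrix.

      # Illustrative sketch: adjust a guess spectrum so the response matrix
      # reproduces the measured count rates.
      import numpy as np

      rng = np.random.default_rng(2)
      n_spheres, n_bins = 7, 60
      R = rng.random((n_spheres, n_bins))          # hypothetical response matrix
      true_spectrum = rng.random(n_bins)
      counts = R @ true_spectrum                   # synthetic "measured" rates

      phi = np.ones(n_bins)                        # initial guess spectrum
      for _ in range(200):
          predicted = R @ phi                      # rates implied by current guess
          # Multiplicative correction weighted by each sphere's sensitivity.
          correction = (R.T @ (counts / predicted)) / R.sum(axis=0)
          phi *= correction

      rel_residual = np.linalg.norm(R @ phi - counts) / np.linalg.norm(counts)
      print("relative residual:", rel_residual)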

  8. Avionics for a Small Robotic Inspection Spacecraft

    NASA Technical Reports Server (NTRS)

    Abbott, Larry; Shuler, Robert L., Jr.

    2005-01-01

    A report describes the tentative design of the avionics of the Mini-AERCam -- a proposed 7.5-in. (approximately 19-cm)-diameter spacecraft that would contain three digital video cameras to be used in visual inspection of the exterior of a larger spacecraft (a space shuttle or the International Space Station). The Mini-AERCam would maneuver by use of its own miniature thrusters under radio control by astronauts inside the larger spacecraft. The design of the Mini-AERCam avionics is subject to a number of constraints, most of which can be summarized as severely competing requirements to maximize radiation hardness and maneuvering, image-acquisition, and data-communication capabilities while minimizing cost, size, and power consumption. The report discusses the design constraints, the engineering approach to satisfying the constraints, and the resulting iterations of the design. The report places special emphasis on the design of a flight computer that would (1) acquire position and orientation data from a Global Positioning System receiver and a microelectromechanical gyroscope, respectively; (2) perform all flight-control (including thruster-control) computations in real time; and (3) control video, tracking, power, and illumination systems.

  9. Perl Modules for Constructing Iterators

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2009-01-01

    The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
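
    A rough Python analogue of the same iterator pattern (not the Perl modules' API): a description of a series is handed to a constructor, and values are produced only when requested.

      # Illustrative sketch: a date-series iterator in the same spirit.
      from datetime import date, timedelta

      class DateIterator:
          """Yields dates from `start`, stepping by `step_days`, until `end`."""
          def __init__(self, start, end, step_days=1):
              self.current, self.end = start, end
              self.step = timedelta(days=step_days)

          def __iter__(self):
              return self

          def __next__(self):
              if self.current > self.end:
                  raise StopIteration
              value, self.current = self.current, self.current + self.step
              return value

      for d in DateIterator(date(2009, 1, 1), date(2009, 1, 10), step_days=3):
          print(d.isoformat())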

  10. Electromagnetic Analysis of ITER Diagnostic Equatorial Port Plugs During Plasma Disruptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Y. Zhai, R. Feder, A. Brooks, M. Ulrickson, C.S. Pitcher and G.D. Loesser

    2012-08-27

    ITER diagnostic port plugs perform many functions including structural support of diagnostic systems under high electromagnetic loads while allowing for diagnostic access to the plasma. The design of diagnostic equatorial port plugs (EPPs) is largely driven by electromagnetic loads and the associated responses of the EPP structure during plasma disruptions and VDEs. This paper summarizes results of transient electromagnetic analysis using Opera 3d in support of the design activities for ITER diagnostic EPPs. A complete distribution of disruption loads on the Diagnostic First Walls (DFWs), Diagnostic Shield Modules (DSMs) and the EPP structure, as well as the impact on system design integration due to electrical contact among various EPP structural components, are discussed.

  11. Observer-based distributed adaptive iterative learning control for linear multi-agent systems

    NASA Astrophysics Data System (ADS)

    Li, Jinsha; Liu, Sanyang; Li, Junmin

    2017-10-01

    This paper investigates the consensus problem for linear multi-agent systems from the viewpoint of two-dimensional systems when the state information of each agent is not available. An observer-based, fully distributed adaptive iterative learning protocol is designed in this paper. A local observer is designed for each agent, and it is shown that, without using any global information about the communication graph, all agents achieve consensus perfectly for any undirected connected communication graph as the number of iterations tends to infinity. A Lyapunov-like energy function is employed to facilitate the learning protocol design and property analysis. Finally, a simulation example is given to illustrate the theoretical analysis.
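
    As a much-reduced illustration of iterative learning control (a basic P-type update on a scalar plant; the observer-based distributed protocol in the record is considerably more elaborate), the sketch below repeats the same finite-horizon task and refines the input from the previous iteration's tracking error. All gains and the plant are hypothetical.

      # Illustrative sketch: P-type ILC, u_{k+1}(t) = u_k(t) + L * e_k(t).
      import numpy as np

      a, b = 0.9, 0.5                     # scalar plant x[t+1] = a x[t] + b u[t]
      T = 20
      ref = np.sin(np.linspace(0.0, np.pi, T))
      u = np.zeros(T)
      L = 1.2                             # learning gain, chosen so |1 - L*b| < 1

      for k in range(100):                # iterations (repeated runs of the task)
          x, y = 0.0, np.zeros(T)
          for t in range(T):
              x = a * x + b * u[t]
              y[t] = x
          e = ref - y
          u = u + L * e                   # learning update from this run's error
      print("final max tracking error:", np.max(np.abs(ref - y)))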

  12. Interdisciplinary Development of an Improved Emergency Department Procedural Work Surface Through Iterative Design and Use Testing in Simulated and Clinical Environments.

    PubMed

    Zhang, Xiao C; Bermudez, Ana M; Reddy, Pranav M; Sarpatwari, Ravi R; Chheng, Darin B; Mezoian, Taylor J; Schwartz, Victoria R; Simmons, Quinneil J; Jay, Gregory D; Kobayashi, Leo

    2017-03-01

    A stable and readily accessible work surface for bedside medical procedures represents a valuable tool for acute care providers. In emergency department (ED) settings, the design and implementation of traditional Mayo stands and related surface devices often limit their availability, portability, and usability, which can lead to suboptimal clinical practice conditions that may affect the safe and effective performance of medical procedures and delivery of patient care. We designed and built a novel, open-source, portable, bedside procedural surface through an iterative development process with use testing in simulated and live clinical environments. The procedural surface development project was conducted between October 2014 and June 2016 at an academic referral hospital and its affiliated simulation facility. An interdisciplinary team of emergency physicians, mechanical engineers, medical students, and design students sought to construct a prototype bedside procedural surface out of off-the-shelf hardware during a collaborative university course on health care design. After determination of end-user needs and core design requirements, multiple prototypes were fabricated and iteratively modified, with early variants featuring undermattress stabilizing supports or ratcheting clamp mechanisms. Versions 1 through 4 underwent 2 hands-on usability-testing simulation sessions; version 5 was presented at a design critique held jointly by a panel of clinical and industrial design faculty for expert feedback. Responding to select feedback elements over several surface versions, investigators arrived at a near-final prototype design for fabrication and use testing in a live clinical setting. This experimental procedural surface (version 8) was constructed and then deployed for controlled usability testing against the standard Mayo stands in use at the study site ED. Clinical providers working in the ED who opted to participate in the study were provided with the prototype surface and just-in-time training on its use when performing bedside procedures. Subjects completed the validated 10-point System Usability Scale postshift for the surface that they had used. The study protocol was approved by the institutional review board. Multiple prototypes and recursive design revisions resulted in a fully functional, portable, and durable bedside procedural surface that featured a stainless steel tray and intuitive hook-and-lock mechanisms for attachment to ED stretcher bed rails. Forty-two control and 40 experimental group subjects participated and completed questionnaires. The median System Usability Scale score (out of 100; higher scores associated with better usability) was 72.5 (interquartile range [IQR] 51.3 to 86.3) for the Mayo stand; the experimental surface was scored at 93.8 (IQR 84.4 to 97.5 for a difference in medians of 17.5 (95% confidence interval 10 to 27.5). Subjects reported several usability challenges with the Mayo stand; the experimental surface was reviewed as easy to use, simple, and functional. In accordance with experimental live environment deployment, questionnaire responses, and end-user suggestions, the project team finalized the design specification for the experimental procedural surface for open dissemination. An iterative, interdisciplinary approach was used to generate, evaluate, revise, and finalize the design specification for a new procedural surface that met all core end-user requirements. 
The final surface design was evaluated favorably on a validated usability tool against Mayo stands when use tested in simulated and live clinical settings. Copyright © 2016 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
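
    For reference, the 0-100 System Usability Scale score quoted above is computed from ten 1-5 item responses with the standard scoring rule (odd items contribute the response minus one, even items five minus the response, and the sum is multiplied by 2.5); the sketch below uses a hypothetical respondent, not study data.

      # Illustrative sketch: standard SUS scoring for one respondent.
      def sus_score(responses):
          """responses: ten integers, 1-5, in questionnaire order (items 1-10)."""
          assert len(responses) == 10
          total = 0
          for i, r in enumerate(responses, start=1):
              total += (r - 1) if i % 2 == 1 else (5 - r)   # odd vs even items
          return total * 2.5

      print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))   # hypothetical respondent -> 85.0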

  13. Development of a Multilevel Optimization Approach to the Design of Modern Engineering Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Barthelemy, J. F. M.

    1983-01-01

    A general algorithm is proposed which carries out the design process iteratively, starting at the top of the hierarchy and proceeding downward. Each subproblem is optimized separately for fixed controls from higher level subproblems. An optimum sensitivity analysis is then performed which determines the sensitivity of the subproblem design to changes in higher level subproblem controls. The resulting sensitivity derivatives are used to construct constraints which force the controlling subproblems into choosing their own designs so as to improve the lower-level subproblem designs while satisfying their own constraints. The applicability of the proposed algorithm is demonstrated by devising a four-level hierarchy to perform the simultaneous aerodynamic and structural design of a high-performance sailplane wing for maximum cross-country speed. Finally, the concepts discussed are applied to the two-level minimum weight structural design of the sailplane wing. The numerical experiments show that discontinuities in the sensitivity derivatives may delay convergence, but that the algorithm is robust enough to overcome these discontinuities and produce low-weight feasible designs, regardless of whether the optimization is started from the feasible space or the infeasible one.
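
    A toy two-level version of this idea (not the sailplane application) can be sketched as follows: the lower level is solved in closed form for a fixed upper-level control, its optimum sensitivity derivative is returned, and the upper level uses that derivative in its own gradient step. The objective functions are hypothetical.

      # Illustrative sketch: two-level optimization with an optimum
      # sensitivity derivative passed up from the lower level.
      def lower_level_optimum(x):
          # minimize (y - x)^2 + 0.1*y^2 over y: closed-form optimum and its
          # sensitivity dy*/dx with respect to the upper-level control x.
          y_star = x / 1.1
          dy_dx = 1.0 / 1.1
          return y_star, dy_dx

      x = 0.0                      # upper-level design variable
      for _ in range(200):
          y_star, dy_dx = lower_level_optimum(x)
          # Upper-level objective F(x) = (x - 3)^2 + y*(x)^2; the chain rule
          # uses the optimum sensitivity derivative from the lower level.
          grad = 2.0 * (x - 3.0) + 2.0 * y_star * dy_dx
          x -= 0.1 * grad
      print(f"upper-level optimum x = {x:.3f}, lower-level y* = {y_star:.3f}")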

  14. Why and how Mastering an Incremental and Iterative Software Development Process

    NASA Astrophysics Data System (ADS)

    Dubuc, François; Guichoux, Bernard; Cormery, Patrick; Mescam, Jean Christophe

    2004-06-01

    One of the key issues regularly mentioned in the current software crisis of the space domain is related to the software development process that must be performed while the system definition is not yet frozen. This is especially true for complex systems like launchers or space vehicles. Several more or less mature solutions are under study by EADS SPACE Transportation and are going to be presented in this paper. The basic principle is to develop the software through an iterative and incremental process instead of the classical waterfall approach, with the following advantages:
    - It permits systematic management and incorporation of requirements changes over the development cycle with a minimal cost. As far as possible the most dimensioning requirements are analyzed and developed in priority for validating very early the architecture concept without the details.
    - A software prototype is very quickly available. It improves the communication between system and software teams, as it enables to check very early and efficiently the common understanding of the system requirements.
    - It allows the software team to complete a whole development cycle very early, and thus to become quickly familiar with the software development environment (methodology, technology, tools...). This is particularly important when the team is new, or when the environment has changed since the previous development. Anyhow, it improves a lot the learning curve of the software team.
    These advantages seem very attractive, but mastering efficiently an iterative development process is not so easy and induces a lot of difficulties such as:
    - How to freeze one configuration of the system definition as a development baseline, while most of the system requirements are completely and naturally unstable?
    - How to distinguish stable/unstable and dimensioning/standard requirements?
    - How to plan the development of each increment?
    - How to link classical waterfall development milestones with an iterative approach: when should the classical reviews be performed: Software Specification Review? Preliminary Design Review? Critical Design Review? Code Review? Etc...
    Several solutions envisaged or already deployed by EADS SPACE Transportation will be presented, both from a methodological and technological point of view:
    - How the MELANIE EADS ST internal methodology improves the concurrent engineering activities between GNC, software and simulation teams in a very iterative and reactive way.
    - How the CMM approach can help by better formalizing Requirements Management and Planning processes.
    - How the Automatic Code Generation with "certified" tools (SCADE) can still dramatically shorten the development cycle.
    Then the presentation will conclude by showing an evaluation of the cost and planning reduction based on a pilot application by comparing figures on two similar projects: one with the classical waterfall process, the other one with an iterative and incremental approach.

  15. Development of the ITER ICH Transmission Line and Matching System

    NASA Astrophysics Data System (ADS)

    Rasmussen, D. A.; Goulding, R. H.; Pesavento, P. V.; Peters, B.; Swain, D. W.; Fredd, E. H.; Hosea, J.; Greenough, N.

    2011-10-01

    The ITER Ion Cyclotron Heating (ICH) system is designed to couple 20 MW of power for ion and electron heating. Prototype components for the ITER ICH transmission line and matching system are being designed and tested. The ICH transmission lines are pressurized 300 mm diameter coaxial lines with a water-cooled aluminum outer conductor and a gas-cooled and water-cooled copper inner conductor. Each ICH transmission line is designed to handle 40-55 MHz power at up to 6 MW/line. A total of 8 lines split to 16 antenna inputs on two ICH antennas. Industrial suppliers have designed coaxial transmission line and matching components, and prototypes will be manufactured. The prototype components will be qualified on a test stand operating at the full power and pulse length needed for ITER. The matching system must accommodate dynamic changes in the plasma loading due to ELMs and the L- to H-mode transition. Passive ELM tolerance will be achieved using hybrid couplers and loads, which can absorb the transient reflected power. The system is also designed to compensate for the mutual inductances of the antenna current straps to limit the peak voltages on the antenna array elements.

  16. Conjecture Mapping to Optimize the Educational Design Research Process

    ERIC Educational Resources Information Center

    Wozniak, Helen

    2015-01-01

    While educational design research promotes closer links between practice and theory, reporting its outcomes from iterations across multiple contexts is often constrained by the volumes of data generated, and the context bound nature of the research outcomes. Reports tend to focus on a single iteration of implementation without further research to…

  17. Design optimization of first wall and breeder unit module size for the Indian HCCB blanket module

    NASA Astrophysics Data System (ADS)

    Deepak, SHARMA; Paritosh, CHAUDHURI

    2018-04-01

    The Indian test blanket module (TBM) program in ITER is one of the major steps in the Indian fusion reactor program for carrying out R&D in critical areas such as the design of tritium breeding blankets relevant to future Indian fusion devices (ITER relevant and DEMO). The Indian Lead-Lithium Cooled Ceramic Breeder (LLCB) blanket concept is one of the Indian DEMO-relevant TBMs to be tested in ITER as part of the TBM program. The Helium-Cooled Ceramic Breeder (HCCB) is an alternative blanket concept that uses lithium titanate (Li2TiO3) as the ceramic breeder (CB) material in the form of packed pebble beds and beryllium as the neutron multiplier. Specifically, attention is given to the optimization of the first wall coolant channel design and the size of the breeder unit module, considering coolant pressure and thermal loads, for the proposed Indian HCCB blanket based on ITER-relevant TBM and loading conditions. These analyses will help in proceeding further with the design of blankets for loads relevant to future fusion devices.

  18. In-vessel tritium retention and removal in ITER

    NASA Astrophysics Data System (ADS)

    Federici, G.; Anderl, R. A.; Andrew, P.; Brooks, J. N.; Causey, R. A.; Coad, J. P.; Cowgill, D.; Doerner, R. P.; Haasz, A. A.; Janeschitz, G.; Jacob, W.; Longhurst, G. R.; Nygren, R.; Peacock, A.; Pick, M. A.; Philipps, V.; Roth, J.; Skinner, C. H.; Wampler, W. R.

    Tritium retention inside the vacuum vessel has emerged as a potentially serious constraint in the operation of the International Thermonuclear Experimental Reactor (ITER). In this paper we review recent tokamak and laboratory data on hydrogen, deuterium and tritium retention for materials and conditions which are of direct relevance to the design of ITER. These data, together with significant advances in understanding the underlying physics, provide the basis for modelling predictions of the tritium inventory in ITER. We present the derivation, and discuss the results, of current predictions both in terms of implantation and codeposition rates, and critically discuss their uncertainties and sensitivity to important design and operation parameters such as the plasma edge conditions, the surface temperature, the presence of mixed-materials, etc. These analyses are consistent with recent tokamak findings and show that codeposition of tritium occurs on the divertor surfaces primarily with carbon eroded from a limited area of the divertor near the strike zones. This issue remains an area of serious concern for ITER. The calculated codeposition rates for ITER are relatively high and the in-vessel tritium inventory limit could be reached, under worst assumptions, in approximately a week of continuous operation. We discuss the implications of these estimates on the design, operation and safety of ITER and present a strategy for resolving the issues. We conclude that as long as carbon is used in ITER - and more generically in any other next-step experimental fusion facility fuelled with tritium - the efficient control and removal of the codeposited tritium is essential. There is a critical need to develop and test in situ cleaning techniques and procedures that are beyond the current experience of present-day tokamaks. We review some of the principal methods that are being investigated and tested, in conjunction with the R&D work still required to extrapolate their applicability to ITER. Finally, unresolved issues are identified and recommendations are made on potential R&D avenues for their resolution.

  19. Modified Backtracking Search Optimization Algorithm Inspired by Simulated Annealing for Constrained Engineering Optimization Problems

    PubMed Central

    Wang, Hailong; Sun, Yuqiu; Su, Qinghua; Xia, Xuewen

    2018-01-01

    The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity, but its local exploitation capability is relatively poor, which limits the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome this deficiency. In BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion of simulated annealing. The redesigned F can be adaptively decreased as the number of iterations increases without introducing extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms when solving thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrate that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed. PMID:29666635
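
    As a rough illustration of the idea, the Python sketch below shows one plausible Metropolis-style amplitude control factor that shrinks as an annealing temperature decays with the iteration counter; the cooling schedule and the use of the fitness change are assumptions for illustration, not the authors' published formula.

      import math

      def amplitude_factor(delta_fitness, temperature):
          # Metropolis-style amplitude control factor F (illustrative sketch).
          # delta_fitness: objective change between trial and current solution.
          # temperature:   annealing temperature, lowered as iterations proceed.
          if delta_fitness <= 0:
              return 1.0                     # improvement: keep a large step
          # Worse trial: shrink the step amplitude with the Metropolis probability.
          return math.exp(-delta_fitness / max(temperature, 1e-12))

      def temperature(iteration, max_iter, t0=1.0):
          # Linear cooling tied to the iteration counter, so F decays over the run
          # without adding new tuning parameters (assumed schedule).
          return t0 * (1.0 - iteration / float(max_iter))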

  20. Making Common Sense of Vaccines: An Example of Discussing the Recombinant Attenuated Salmonella Vaccine with the Public.

    PubMed

    Dankel, Dorothy J; Roland, Kenneth L; Fisher, Michael; Brenneman, Karen; Delgado, Ana; Santander, Javier; Baek, Chang-Ho; Clark-Curtiss, Josephine; Strand, Roger; Curtiss, Roy

    2014-01-01

    Researchers have iterated that the future of synthetic biology and biotechnology lies in novel consumer applications of crossing biology with engineering. However, if the new biology's future is to be sustainable, early and serious efforts must be made towards social sustainability. Therefore, the crux of new applications of synthetic biology and biotechnology is public understanding and acceptance. The RASVaccine is a novel recombinant design not found in nature that re-engineers a common bacterium (Salmonella) to produce a strong immune response in humans. Synthesis of the RASVaccine has the potential to improve public health as an inexpensive, non-injectable product. But how can scientists move forward to create a dialogue that builds a 'common sense' of this new technology and thereby promotes social sustainability? This paper delves into public issues raised around these novel technologies and uses the RASVaccine as an example of meeting the public with a common sense of its possibilities and limitations.

  1. Advanced Stirling Duplex Materials Assessment for Potential Venus Mission Heater Head Application

    NASA Technical Reports Server (NTRS)

    Ritzert, Frank; Nathal, Michael V.; Salem, Jonathan; Jacobson, Nathan; Nesbitt, James

    2011-01-01

    This report addresses materials selection for components in a proposed Venus lander system. The lander would use active refrigeration to allow space science instrumentation to survive the extreme environment that exists on the surface of Venus. The refrigeration system would be powered by a Stirling engine-based system and is termed the Advanced Stirling Duplex (ASD) concept. Stirling engine power conversion, in its simplest definition, converts heat from radioactive decay into electricity. Detailed design decisions will require iterations between component geometries, materials selection, system output, and tolerable risk. This study reviews potential component requirements against known materials performance. A lower-risk, evolutionary advance in heater head materials could be offered by nickel-base superalloy single crystals, with an expected capability of approximately 1100 °C. However, the high-temperature requirements of the Venus mission may force the selection of ceramics or refractory metals, which are more developmental in nature and may not have a well-developed database or a mature supporting technology base such as fabrication and joining methods.

  2. Status of the ITER Electron Cyclotron Heating and Current Drive System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darbos, Caroline; Albajar, Ferran; Bonicelli, Tullio

    2015-10-07

    The electron cyclotron (EC) heating and current drive (H&CD) system developed for ITER consists of 12 sets of high-voltage power supplies feeding 24 gyrotrons connected through 24 transmission lines (TL) to five launchers, four located in upper ports and one at the equatorial level. Nearly all procurements are in-kind, following the general ITER philosophy, and will come from Europe, India, Japan, Russia and the USA. The full system is designed to couple 20 MW of the 24 MW of generated power to the plasma, at a frequency of 170 GHz, for various physics applications such as plasma start-up, central H&CD and magnetohydrodynamic (MHD) activity control. The design takes present-day technology and extends it toward high-power continuous operation, which represents a large step forward compared to the present state of the art. The ITER EC system will be a stepping stone to future EC systems for DEMO and beyond. The development of the EC system is facing significant challenges, which include not only an advanced microwave system but also compliance with stringent requirements associated with nuclear safety, as ITER became the first fusion device licensed as a basic nuclear installation on 9 November 2012. Finally, since the conceptual design of the EC system was established in 2007, the EC system has progressed to a preliminary design stage in 2012 and is now moving toward a final design.

  3. Fast Acting Eddy Current Driven Valve for Massive Gas Injection on ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyttle, Mark S; Baylor, Larry R; Carmichael, Justin R

    2015-01-01

    Tokamak plasma disruptions present a significant challenge to ITER as they can result in intense heat flux, large forces from halo and eddy currents, and potential first-wall damage from the generation of multi-MeV runaway electrons. Massive gas injection (MGI) of high-Z material using fast acting valves is being explored on existing tokamaks and is planned for ITER as a method to distribute the thermal load of the plasma evenly to prevent melting, to control the rate of the current decay to minimize mechanical loads, and to suppress the generation of runaway electrons. A fast acting valve and accompanying power supply have been designed and first test articles produced to meet the requirements for a disruption mitigation system on ITER. The test valve incorporates a flyer plate actuator similar to designs deployed on TEXTOR, ASDEX Upgrade, and JET [1-3], of a size useful for ITER, with special considerations to mitigate the high mechanical forces developed during actuation due to high background magnetic fields. The valve includes a tip design and all-metal valve stem sealing for compatibility with tritium and high neutron and gamma fluxes.

  4. Changing the Way We Build Games: A Design-Based Research Study Examining the Implementation of Homemade PowerPoint Games in the Classroom

    ERIC Educational Resources Information Center

    Siko, Jason Paul

    2012-01-01

    This design-based research study examined the effects of a game design project on student test performance, with refinements made to the implementation after each of the three iterations of the study. The changes to the implementation over the three iterations were based on the literature for the three justifications for the use of homemade…

  5. Dragons, Ladybugs, and Softballs: Girls' STEM Engagement with Human-Centered Robotics

    NASA Astrophysics Data System (ADS)

    Gomoll, Andrea; Hmelo-Silver, Cindy E.; Šabanović, Selma; Francisco, Matthew

    2016-12-01

    Early experiences in science, technology, engineering, and math (STEM) are important for getting youth interested in STEM fields, particularly for girls. Here, we explore how an after-school robotics club can provide informal STEM experiences that inspire students to engage with STEM in the future. Human-centered robotics, with its emphasis on the social aspects of science and technology, may be especially important for bringing girls into the STEM pipeline. Using a problem-based approach, we designed two robotics challenges. We focus here on the more extended second challenge, in which participants were asked to imagine and build a telepresence robot that would allow others to explore their space from a distance. This research follows four girls as they engage with human-centered telepresence robotics design. We constructed case studies of these target participants to explore their different forms of engagement and phases of interest development—considering facets of behavioral, social, cognitive, and conceptual-to-consequential engagement as well as stages of interest ranging from triggered interest to well-developed individual interest. The results demonstrated that opportunities to personalize their robots and feedback from peers and facilitators were important motivators. We found both explicit and vicarious engagement and varied interest phases in our group of four focus participants. This first iteration of our project demonstrated that human-centered robotics is a promising approach to getting girls interested and engaged in STEM practices. As we design future iterations of our robotics club environment, we must consider how to harness multiple forms of leadership and engagement without marginalizing students with different working preferences.

  6. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  7. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE PAGES

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake; ...

    2017-03-24

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  8. Handheld emissions detector (HED): overview and development

    NASA Astrophysics Data System (ADS)

    Valentino, George J.; Schimmel, David

    2009-05-01

    Nova Engineering, Cincinnati OH, a division of L-3 Communications (L-3 Nova), under the sponsorship of Program Manager Soldier Warrior (PM-SWAR), Fort Belvoir, VA, has developed a Soldier portable, light-weight, hand-held, geolocation sensor and processing system called the Handheld Emissions Detector (HED). The HED is a broadband custom receiver and processor that allows the user to easily sense, direction find, and locate a broad range of emitters in the user's surrounding area. Now in its second design iteration, the HED incorporates a set of COTS components that are complemented with L-3 Nova custom RF, power, digital, and mechanical components, plus custom embedded and application software. The HED user interfaces are designed to provide complex information in a readily-understandable form, thereby providing actionable results for operators. This paper provides, where possible, the top-level characteristics of the HED as well as the rationale behind its design philosophy along with its applications in both DOD and Commercial markets.

  9. A transatlantic perspective on 20 emerging issues in biological engineering.

    PubMed

    Wintle, Bonnie C; Boehm, Christian R; Rhodes, Catherine; Molloy, Jennifer C; Millett, Piers; Adam, Laura; Breitling, Rainer; Carlson, Rob; Casagrande, Rocco; Dando, Malcolm; Doubleday, Robert; Drexler, Eric; Edwards, Brett; Ellis, Tom; Evans, Nicholas G; Hammond, Richard; Haseloff, Jim; Kahl, Linda; Kuiken, Todd; Lichman, Benjamin R; Matthewman, Colette A; Napier, Johnathan A; ÓhÉigeartaigh, Seán S; Patron, Nicola J; Perello, Edward; Shapira, Philip; Tait, Joyce; Takano, Eriko; Sutherland, William J

    2017-11-14

    Advances in biological engineering are likely to have substantial impacts on global society. To explore these potential impacts we ran a horizon scanning exercise to capture a range of perspectives on the opportunities and risks presented by biological engineering. We first identified 70 potential issues, and then used an iterative process to prioritise 20 issues that we considered to be emerging, to have potential global impact, and to be relatively unknown outside the field of biological engineering. The issues identified may be of interest to researchers, businesses and policy makers in sectors such as health, energy, agriculture and the environment.

  10. Summary of Results from the Risk Management Program for the Mars Microrover Flight Experiment

    NASA Technical Reports Server (NTRS)

    Shishko, Robert; Matijevic, Jacob R.

    2000-01-01

    On 4 July 1997, the Mars Pathfinder landed on the surface of Mars carrying the first planetary rover, known as the Sojourner. Formally known as the Microrover Flight Experiment (MFEX), the Sojourner was a low-cost, high-risk technology demonstration in which new risk management techniques were tried. This paper summarizes the activities and results of the effort to conduct a low-cost, yet meaningful risk management program for the MFEX. The specific activities focused on cost, performance, schedule, and operations risks. Just as the systems engineering process was iterative and produced successive refinements of requirements, designs, etc., so was the risk management process. Qualitative risk assessments were performed first to gain insights for refining the microrover design and operations concept. These then evolved into more quantitative analyses. Risk management lessons from the manager's perspective are presented for other low-cost, high-risk space missions.

  11. Research on Influencing Factors and Generalized Power of Synthetic Artificial Seismic Wave

    NASA Astrophysics Data System (ADS)

    Jiang, Yanpei

    2018-05-01

    In this paper, based on the trigonometric series method, the author adopts different envelope functions and the acceleration design spectrum in the Seismic Code for Urban Bridge Design to simulate seismic acceleration time histories that meet engineering accuracy requirements by modifying and iterating an initial wave. Spectral analysis is carried out to find the distribution of the energy of the seismic time history over frequency and to determine the main factors that affect the acceleration amplitude spectrum and the energy spectral density. The generalized power formula of the seismic time history is derived from the discrete energy integral formula, and the changing characteristics of the generalized power under different envelope functions are studied. Examples are analyzed to illustrate that the generalized power can measure the seismic performance of bridges.
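
    A minimal Python sketch of the trigonometric-series idea is given below. For brevity it iterates toward a target Fourier amplitude spectrum rather than the design response spectrum of the code, and the envelope handling, iteration count and the simple generalized-power estimate are assumptions for illustration only.

      import numpy as np

      def artificial_accelerogram(target_amp, dt, envelope, n_iter=10, seed=0):
          # target_amp: target one-sided Fourier amplitude spectrum (length N//2 + 1);
          #             in practice it would be derived from the design spectrum.
          # envelope:   callable t -> intensity envelope in [0, 1].
          rng = np.random.default_rng(seed)
          n = 2 * (len(target_amp) - 1)
          t = np.arange(n) * dt

          # Initial wave: superposed harmonics with random phases under the envelope.
          phases = rng.uniform(0.0, 2.0 * np.pi, size=len(target_amp))
          acc = envelope(t) * np.fft.irfft(target_amp * np.exp(1j * phases), n=n)

          for _ in range(n_iter):
              spec = np.fft.rfft(acc)
              amp = np.abs(spec)
              # Correct the amplitude spectrum toward the target, keeping the phases.
              ratio = np.divide(target_amp, amp, out=np.ones_like(amp), where=amp > 0)
              acc = envelope(t) * np.fft.irfft(spec * ratio, n=n)
          return t, acc

      def generalized_power(acc, dt):
          # Assumed form: discrete energy integral divided by the record duration.
          return np.sum(acc**2) * dt / (len(acc) * dt)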

  12. XSECT: A computer code for generating fuselage cross sections - user's manual

    NASA Technical Reports Server (NTRS)

    Ames, K. R.

    1982-01-01

    A computer code, XSECT, has been developed to generate fuselage cross sections from a given area distribution and wing definition. The cross sections are generated to match the wing definition while conforming to the area requirement. An iterative procedure is used to generate each cross section. Fuselage area balancing may be included in this procedure if desired. The code is intended as an aid for engineers who must first design a wing under certain aerodynamic constraints and then design a fuselage for the wing such that the constraints remain satisfied. This report contains the information necessary for accessing and executing the code, which is written in FORTRAN to execute on the Cyber 170 series computers (NOS operating system) and produces graphical output for a Tektronix 4014 CRT. The LRC graphics software is used in combination with the interface between this software and the PLOT 10 software.

  13. Creating single-copy genetic circuits

    PubMed Central

    Lee, Jeong Wook; Gyorgy, Andras; Cameron, D. Ewen; Pyenson, Nora; Choi, Kyeong Rok; Way, Jeffrey C.; Silver, Pamela A.; Del Vecchio, Domitilla; Collins, James J.

    2017-01-01

    Synthetic biology is increasingly used to develop sophisticated living devices for basic and applied research. Many of these genetic devices are engineered using multi-copy plasmids, but as the field progresses from proof-of-principle demonstrations to practical applications, it is important to develop single-copy synthetic modules that minimize consumption of cellular resources and can be stably maintained as genomic integrants. Here we use empirical design, mathematical modeling and iterative construction and testing to build single-copy, bistable toggle switches with improved performance and reduced metabolic load that can be stably integrated into the host genome. Deterministic and stochastic models led us to focus on basal transcription to optimize circuit performance and helped to explain the resulting circuit robustness across a large range of component expression levels. The design parameters developed here provide important guidance for future efforts to convert functional multi-copy gene circuits into optimized single-copy circuits for practical, real-world use. PMID:27425413
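
    For orientation, the Python sketch below integrates a classic two-repressor toggle-switch model with a basal transcription term added; the equations are the textbook deterministic form and the parameter values are hypothetical, not the circuits or fitted parameters reported in the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      def toggle_switch(t, y, a1=10.0, a2=10.0, beta=2.0, gamma=2.0, k0=0.1):
          # Two mutually repressing genes; k0 is the basal transcription rate.
          # All parameter values here are illustrative defaults only.
          u, v = y
          du = k0 + a1 / (1.0 + v**beta) - u
          dv = k0 + a2 / (1.0 + u**gamma) - v
          return [du, dv]

      # Two initial conditions relax to the two stable states, illustrating bistability.
      for y0 in ([5.0, 0.1], [0.1, 5.0]):
          sol = solve_ivp(toggle_switch, (0.0, 50.0), y0)
          print(y0, "->", sol.y[:, -1].round(2))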

  14. An Analysis of Ripple and Error Fields Induced by a Blanket in the CFETR

    NASA Astrophysics Data System (ADS)

    Yu, Guanying; Liu, Xufeng; Liu, Songlin

    2016-10-01

    The Chinese Fusion Engineering Tokamak Reactor (CFETR) is an important intermediate device between ITER and DEMO. The Water Cooled Ceramic Breeder (WCCB) blanket, whose structural material is mainly Reduced Activation Ferritic/Martensitic (RAFM) steel, is one of the candidate conceptual blanket designs. The ripple and error fields induced by the RAFM steel in the WCCB blanket are evaluated by static magnetic analysis with the ANSYS code. A significant additional magnetic field is produced by the blanket and leads to an increased ripple field. The maximum ripple along the separatrix line reaches 0.53%, which exceeds the acceptable design value of 0.5%. In addition, when one blanket module is taken out for heating purposes, the resulting error field is calculated to be well outside the requirement. Supported by the National Natural Science Foundation of China (No. 11175207) and the National Magnetic Confinement Fusion Program of China (No. 2013GB108004)

  15. Thermal Analysis of Small Re-Entry Probe

    NASA Technical Reports Server (NTRS)

    Agrawal, Parul; Prabhu, Dinesh K.; Chen, Y. K.

    2012-01-01

    The Small Probe Reentry Investigation for TPS Engineering (SPRITE) concept was developed at NASA Ames Research Center to facilitate arc-jet testing of a fully instrumented prototype probe at flight scale. Besides demonstrating the feasibility of testing a flight-scale model and the capability of an on-board data acquisition system, another objective of this project was to investigate the capability of simulation tools to predict the thermal environments of the probe/test article and its interior. This paper focuses on finite-element thermal analyses of the SPRITE probe during the arc-jet tests. Several iterations were performed during the early design phase to provide critical design parameters and guidelines for testing. The thermal effects of ablation and pyrolysis were incorporated into the final higher-fidelity modeling approach by coupling the finite-element analyses with a two-dimensional thermal protection materials response code. Model predictions show good agreement with thermocouple data obtained during the arc-jet test.

  16. Final Report on ITER Task Agreement 81-08

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard L. Moore

    As part of an ITER Implementing Task Agreement (ITA) between the ITER US Participant Team (PT) and the ITER International Team (IT), the INL Fusion Safety Program was tasked to provide the ITER IT with upgrades to the fusion version of the MELCOR 1.8.5 code, including a beryllium dust oxidation model. The purpose of this model is to allow the ITER IT to investigate hydrogen production from beryllium dust layers on hot surfaces inside the ITER vacuum vessel (VV) during in-vessel loss-of-cooling accidents (LOCAs). Also included in the ITER ITA was a task to construct a RELAP5/ATHENA model of the ITER divertor cooling loop to model the draining of the loop during a large ex-vessel pipe break followed by an in-vessel divertor break, and to compare the results to a similar MELCOR model developed by the ITER IT. This report, which is the final report for this agreement, documents the completion of the work scope under this ITER TA, designated as TA 81-08.

  17. Crater Morphology of Engineered and Natural Impactors into Planetary Ice

    NASA Astrophysics Data System (ADS)

    Danner, M.; Winglee, R.; Koch, J.

    2017-12-01

    Crater morphology of engineered impactors, such as those proposed for the Europa Kinetic Ice Penetrator (EKIP) mission, varies drastically from that of natural impactors (i.e. asteroids, meteoroids). Previous work on natural impact craters in ice was conducted with the intent to bound the thickness of Europa's ice crust; this work focuses on the depth, size, and compressional effects caused by various impactor designs, and the possible effects on the Europan surface. The present work details results from nine projectiles that were dropped on the Taku Glacier, AK, from an altitude of 775 meters above the surface: three rocks to simulate natural impactors, and six iterations of engineered steel and aluminum penetrator projectiles. Density measurements were taken at various locations within the craters, as well as through a cross section of the crater. Due to altitude restrictions, projectiles remained below terminal velocity. The natural/rock impact craters displayed typical cratering characteristics such as shallow, half-meter scale depth and orthogonal compressional forcing. The engineered projectiles produced impact craters with depths averaging two meters, with crater widths matching the impactor diameters. Compressional waves from the engineered impactors propagated downwards, parallel to the direction of impact. Engineered impactors create significantly less lateral fracturing than natural impactors. Due to the EKIP landing mechanism, sampling of pristine ice closer to the lander is possible than previously thought with classical impact theory. Future work is planned to penetrate older, multiyear ice with higher velocity impacts.

  18. Studies on Flat Sandwich-type Self-Powered Detectors for Flux Measurements in ITER Test Blanket Modules

    NASA Astrophysics Data System (ADS)

    Raj, Prasoon; Angelone, Maurizio; Döring, Toralf; Eberhardt, Klaus; Fischer, Ulrich; Klix, Axel; Schwengner, Ronald

    2018-01-01

    Neutron and gamma flux measurements at designated positions in the test blanket modules (TBM) of ITER will be important tasks during ITER's campaigns. As part of the ongoing task on the development of nuclear instrumentation for application in European ITER TBMs, experimental investigations of self-powered detectors (SPD) are undertaken. This paper reports the findings of neutron and photon irradiation tests performed with a test SPD in a flat, sandwich-like geometry. Whereas both neutrons and gammas can be detected with appropriate optimization of the geometries, materials and sizes of the components, the present sandwich-like design is more sensitive to gammas than to 14 MeV neutrons. The range of SPD current signals achievable under TBM conditions is predicted based on the SPD sensitivities measured in this work.

  19. Adapting Rational Unified Process (RUP) approach in designing a secure e-Tendering model

    NASA Astrophysics Data System (ADS)

    Mohd, Haslina; Robie, Muhammad Afdhal Muhammad; Baharom, Fauziah; Darus, Norida Muhd; Saip, Mohamed Ali; Yasin, Azman

    2016-08-01

    e-Tendering is the electronic processing of tender documents via the internet; it allows tenderers to publish, communicate, access, receive and submit all tender-related information and documentation online. This study aims to design an e-Tendering system using the Rational Unified Process (RUP) approach. RUP provides a disciplined approach to assigning tasks and responsibilities within the software development process. RUP has four phases that help researchers adjust to the requirements of projects with different scope, problems and size. RUP is characterized as a use-case driven, architecture-centered, iterative and incremental process model. However, the scope of this study covers only the Inception and Elaboration phases as steps to develop the model, and performs only three of the nine workflows (business modeling, requirements, and analysis and design). RUP has a strong focus on documents, and the activities in the Inception and Elaboration phases mainly concern the creation of diagrams and the writing of textual descriptions. The UML notation and the Star UML software are used to support the design of e-Tendering. The e-Tendering design based on the RUP approach can contribute to e-Tendering developers and researchers in the e-Tendering domain. In addition, this study shows that RUP is one of the best system development methodologies and can be used as a research methodology in the Software Engineering domain for the secure design of an observed application. This methodology has been tested in various studies in certain domains, such as simulation-based decision support, security requirement engineering, business modeling and secure system requirements. In conclusion, these studies show that RUP is a sound research methodology that can be adapted in any Software Engineering (SE) research domain that requires artifacts such as use case models, misuse case models, activity diagrams and initial class diagrams to be generated from a list of requirements identified earlier by the SE researchers.

  20. Design of a -1 MV dc UHV power supply for ITER NBI

    NASA Astrophysics Data System (ADS)

    Watanabe, K.; Yamamoto, M.; Takemoto, J.; Yamashita, Y.; Dairaku, M.; Kashiwagi, M.; Taniguchi, M.; Tobari, H.; Umeda, N.; Sakamoto, K.; Inoue, T.

    2009-05-01

    Procurement of the dc -1 MV power supply system for the ITER neutral beam injector (NBI) is shared by Japan and the EU. The Japan Atomic Energy Agency, as the Japan Domestic Agency (JADA) for ITER, contributes to the procurement of dc -1 MV ultra-high voltage (UHV) components such as a dc -1 MV generator, a transmission line and a -1 MV insulating transformer for the ITER NBI power supply. An inverter frequency of 150 Hz for the -1 MV power supply and the major circuit parameters have been proposed and adopted for the ITER NBI. The dc UHV insulation has been carefully designed, since dc long-pulse insulation is quite different from conventional ac insulation or dc short-pulse systems. A multi-layer insulation structure of the transformer for long pulses up to 3600 s has been designed with electric field simulation. Based on the simulation, the overall dimensions of the dc UHV components have been finalized. A surge energy suppression system is also essential to protect the accelerator from electric breakdowns. JADA contributes an effective surge suppression system composed of core snubbers and resistors. The input energy into the accelerator from the power supply can be reduced to about 20 J, which satisfies the design criterion of 50 J in total in the case of breakdown at -1 MV.

  1. Aerodynamic optimization by simultaneously updating flow variables and design parameters with application to advanced propeller designs

    NASA Technical Reports Server (NTRS)

    Rizk, Magdi H.

    1988-01-01

    A scheme is developed for solving constrained optimization problems in which the objective function and the constraint function are dependent on the solution of the nonlinear flow equations. The scheme updates the design parameter iterative solutions and the flow variable iterative solutions simultaneously. It is applied to an advanced propeller design problem with the Euler equations used as the flow governing equations. The scheme's accuracy, efficiency and sensitivity to the computational parameters are tested.

  2. Iterative design of one- and two-dimensional FIR digital filters. [Finite duration Impulse Response

    NASA Technical Reports Server (NTRS)

    Suk, M.; Choi, K.; Algazi, V. R.

    1976-01-01

    The paper describes a new iterative technique for designing FIR (finite duration impulse response) digital filters using a frequency-weighted least squares approximation. The technique is as easy to implement (via FFT) and as effective in two dimensions as in one dimension, and there are virtually no limitations on the class of filter frequency spectra approximated. An adaptive adjustment of the frequency weight to achieve other types of design approximation, such as Chebyshev-type designs, is discussed.
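
    To make the flavour of such an iterative frequency-weighted least-squares design concrete, the Python sketch below applies a simple Lawson-style reweighting to a linear-phase (odd-length) FIR filter. It uses a dense least-squares solve rather than the FFT-based implementation of the paper, and the grid size, weight update and iteration count are assumptions for illustration.

      import numpy as np

      def iterative_fir_design(desired, numtaps, n_iter=20, n_grid=512):
          # desired: callable mapping normalized frequency in [0, 1] (1 = Nyquist)
          # to the desired zero-phase amplitude response; numtaps must be odd.
          w = np.linspace(0.0, 1.0, n_grid)               # frequency grid
          d = np.array([desired(x) for x in w])           # target amplitude
          weight = np.ones_like(w)                        # initial uniform weight

          # Zero-phase cosine basis for a symmetric, odd-length impulse response.
          m = (numtaps - 1) // 2
          basis = np.cos(np.pi * np.outer(w, np.arange(m + 1)))

          for _ in range(n_iter):
              # Weighted least-squares fit of the amplitude response.
              a = np.sqrt(weight)[:, None] * basis
              b = np.sqrt(weight) * d
              c, *_ = np.linalg.lstsq(a, b, rcond=None)
              err = np.abs(basis @ c - d)
              # Lawson update: emphasize frequencies with large error, which pushes
              # the least-squares fit toward a Chebyshev-like equiripple solution.
              weight = weight * (err + 1e-12)
              weight = weight / weight.sum()

          # Expand the cosine coefficients into the symmetric impulse response.
          return np.concatenate([c[:0:-1] / 2.0, [c[0]], c[1:] / 2.0])

    For example, iterative_fir_design(lambda f: 1.0 if f < 0.3 else 0.0, numtaps=31) returns a 31-tap low-pass prototype whose approximation error becomes more evenly distributed across the grid as the weight is adapted.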

  3. Utility of coupling nonlinear optimization methods with numerical modeling software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, M.J.

    1996-08-05

    Results of using GLO (Global Local Optimizer), a general purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and the nonlinear optimization software modules GLOBAL & LOCAL. GLO is designed for controlling, and coupling easily to, any scientific software application. GLO runs the optimization module and the scientific software application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application. GLO runs the application with the new parameter values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing them to the desired result. GLO continues to run the scientific application over and over until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model is presented (Taylor cylinder impact test).
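
    As a sketch of how such an optimizer-to-simulation loop is typically wired together, the Python fragment below drives a generic external code through its input and output files. The file names, the sim_app executable, the template format and the use of scipy's Nelder-Mead are hypothetical placeholders, not GLO's actual interface.

      import subprocess
      import numpy as np
      from scipy.optimize import minimize

      # Hypothetical file names and executable; GLO's real interfaces differ.
      def objective(params, target):
          # GLO-PUT role: insert the trial parameter values into the input file.
          with open("input.tmpl") as f:
              template = f.read()
          with open("input.dat", "w") as f:
              f.write(template.format(*params))

          # Run the scientific application with the new input deck.
          subprocess.run(["./sim_app", "input.dat"], check=True)

          # GLO-GET role: extract the simulated result and compare to the target.
          result = np.loadtxt("output.dat")
          return float(np.sum((result - target) ** 2))

      target = np.loadtxt("experiment.dat")     # e.g. a measured Taylor-cylinder profile
      x0 = np.array([1.0, 0.5, 0.1])            # initial material-model parameters
      best = minimize(objective, x0, args=(target,), method="Nelder-Mead")
      print(best.x)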

  4. GoldenBraid: An Iterative Cloning System for Standardized Assembly of Reusable Genetic Modules

    PubMed Central

    Sarrion-Perdigones, Alejandro; Falconi, Erica Elvira; Zandalinas, Sara I.; Juárez, Paloma; Fernández-del-Carmen, Asun; Granell, Antonio; Orzaez, Diego

    2011-01-01

    Synthetic Biology requires efficient and versatile DNA assembly systems to facilitate the building of new genetic modules/pathways from basic DNA parts in a standardized way. Here we present GoldenBraid (GB), a standardized assembly system based on type IIS restriction enzymes that allows the indefinite growth of reusable gene modules made of standardized DNA pieces. The GB system consists of a set of four destination plasmids (pDGBs) designed to incorporate multipartite assemblies made of standard DNA parts and to combine them binarily to build increasingly complex multigene constructs. The relative position of type IIS restriction sites inside pDGB vectors introduces a double loop (“braid”) topology in the cloning strategy that allows the indefinite growth of composite parts through the succession of iterative assembling steps, while the overall simplicity of the system is maintained. We propose the use of GoldenBraid as an assembly standard for Plant Synthetic Biology. For this purpose we have GB-adapted a set of binary plasmids for A. tumefaciens-mediated plant transformation. Fast GB-engineering of several multigene T-DNAs, including two alternative modules made of five reusable devices each, and comprising a total of 19 basic parts are also described. PMID:21750718

  5. GoldenBraid: an iterative cloning system for standardized assembly of reusable genetic modules.

    PubMed

    Sarrion-Perdigones, Alejandro; Falconi, Erica Elvira; Zandalinas, Sara I; Juárez, Paloma; Fernández-del-Carmen, Asun; Granell, Antonio; Orzaez, Diego

    2011-01-01

    Synthetic Biology requires efficient and versatile DNA assembly systems to facilitate the building of new genetic modules/pathways from basic DNA parts in a standardized way. Here we present GoldenBraid (GB), a standardized assembly system based on type IIS restriction enzymes that allows the indefinite growth of reusable gene modules made of standardized DNA pieces. The GB system consists of a set of four destination plasmids (pDGBs) designed to incorporate multipartite assemblies made of standard DNA parts and to combine them binarily to build increasingly complex multigene constructs. The relative position of type IIS restriction sites inside pDGB vectors introduces a double loop ("braid") topology in the cloning strategy that allows the indefinite growth of composite parts through the succession of iterative assembling steps, while the overall simplicity of the system is maintained. We propose the use of GoldenBraid as an assembly standard for Plant Synthetic Biology. For this purpose we have GB-adapted a set of binary plasmids for A. tumefaciens-mediated plant transformation. Fast GB-engineering of several multigene T-DNAs, including two alternative modules made of five reusable devices each, and comprising a total of 19 basic parts are also described.

  6. The ITER bolometer diagnostic: Status and plans

    NASA Astrophysics Data System (ADS)

    Meister, H.; Giannone, L.; Horton, L. D.; Raupp, G.; Zeidner, W.; Grunda, G.; Kalvin, S.; Fischer, U.; Serikov, A.; Stickel, S.; Reichle, R.

    2008-10-01

    A consortium consisting of four EURATOM Associations has been set up to develop the project plan for the full development of the ITER bolometer diagnostic and to continue urgent R&D activities. An overview of the current status is given, including detector development, line-of-sight optimization and performance analysis, as well as the design of the diagnostic components and their integration in ITER. This is complemented by a presentation of the plans for future activities required to successfully implement the bolometer diagnostic, ranging from detector development through diagnostic design and prototype testing to RH tools for calibration.

  7. SUMMARY REPORT-FY2006 ITER WORK ACCOMPLISHED

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martovetsky, N N

    2006-04-11

    Six parties (EU, Japan, Russia, US, Korea, China) will build ITER. The US proposed to deliver at least 4 out of 7 modules of the Central Solenoid. Phillip Michael (MIT) and I were tasked by DoE to assist ITER in development of the ITER CS and other magnet systems. We work to help Magnets and Structure division headed by Neil Mitchell. During this visit I worked on the selected items of the CS design and carried out other small tasks, like PF temperature margin assessment.

  8. Iterative LQG Controller Design Through Closed-Loop Identification

    NASA Technical Reports Server (NTRS)

    Hsiao, Min-Hung; Huang, Jen-Kuang; Cox, David E.

    1996-01-01

    This paper presents an iterative Linear Quadratic Gaussian (LQG) controller design approach for a linear stochastic system with an uncertain open-loop model and unknown noise statistics. This approach consists of closed-loop identification and controller redesign cycles. In each cycle, the closed-loop identification method is used to identify an open-loop model and a steady-state Kalman filter gain from closed-loop input/output test data obtained by using a feedback LQG controller designed in the previous cycle. The identified open-loop model is then used to redesign the state feedback. The state feedback and the identified Kalman filter gain are used to form an updated LQG controller for the next cycle. This iterative process continues until the updated controller converges. The proposed controller design is demonstrated by numerical simulations and experiments on a highly unstable large-gap magnetic suspension system.
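
    A compact Python sketch of the redesign step is given below. The identification routine is only indicated in comments, and the weighting matrices Q and R are design choices assumed for illustration, not values from the paper.

      import numpy as np
      from scipy.linalg import solve_discrete_are

      def redesign_state_feedback(A, B, Q, R):
          # LQR redesign from the identified discrete-time model (A, B), with
          # assumed weights Q, R: K = (R + B'PB)^{-1} B'PA, P from the Riccati equation.
          P = solve_discrete_are(A, B, Q, R)
          return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

      # Illustrative cycle: identify -> redesign -> close the loop -> repeat until the
      # gains stop changing. identify_closed_loop and run_experiment stand in for the
      # closed-loop identification and the test run; they are not implemented here.
      #
      # for cycle in range(max_cycles):
      #     A, B, C, L = identify_closed_loop(u_data, y_data)   # model + Kalman gain
      #     K = redesign_state_feedback(A, B, Q, R)
      #     u_data, y_data = run_experiment(K, L)               # updated LQG controller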

  9. Learning to Teach Elementary Science Through Iterative Cycles of Enactment in Culturally and Linguistically Diverse Contexts

    NASA Astrophysics Data System (ADS)

    Bottoms, SueAnn I.; Ciechanowski, Kathryn M.; Hartman, Brian

    2015-12-01

    Iterative cycles of enactment embedded in culturally and linguistically diverse contexts provide rich opportunities for preservice teachers (PSTs) to enact core practices of science. This study is situated in the larger Families Involved in Sociocultural Teaching and Science, Technology, Engineering and Mathematics (FIESTAS) project, which weaves together cycles of enactment, core practices in science education and culturally relevant pedagogies. The theoretical foundation draws upon situated learning theory and communities of practice. Using video analysis by PSTs and course artifacts, the authors studied how the iterative process of these cycles guided PSTs' development as teachers of elementary science. Findings demonstrate how PSTs were drawing on resources to inform practice, purposefully noticing their practice, renegotiating their roles in teaching, and reconsidering "professional blindness" through cultural practice.

  10. Iterative Design and Classroom Evaluation of Automated Formative Feedback for Improving Peer Feedback Localization

    ERIC Educational Resources Information Center

    Nguyen, Huy; Xiong, Wenting; Litman, Diane

    2017-01-01

    A peer-review system that automatically evaluates and provides formative feedback on free-text feedback comments of students was iteratively designed and evaluated in college and high-school classrooms. Classroom assignments required students to write paper drafts and submit them to a peer-review system. When student peers later submitted feedback…

  11. Using an Iterative Mixed-Methods Research Design to Investigate Schools Facing Exceptionally Challenging Circumstances within Trinidad and Tobago

    ERIC Educational Resources Information Center

    De Lisle, Jerome; Seunarinesingh, Krishna; Mohammed, Rhoda; Lee-Piggott, Rinnelle

    2017-01-01

    In this study, methodology and theory were linked to explicate the nature of education practice within schools facing exceptionally challenging circumstances (SFECC) in Trinidad and Tobago. The research design was an iterative quan>QUAL-quan>qual multi-method research programme, consisting of 3 independent projects linked together by overall…

  12. Collaborative damage mapping for emergency response: the role of Cognitive Systems Engineering

    NASA Astrophysics Data System (ADS)

    Kerle, N.; Hoffman, R. R.

    2013-01-01

    Remote sensing is increasingly used to assess disaster damage, traditionally by professional image analysts. A recent alternative is crowdsourcing by volunteers experienced in remote sensing, using internet-based mapping portals. We identify a range of problems in current approaches, including how volunteers can best be instructed for the task, ensuring that instructions are accurately understood and translate into valid results, or how the mapping scheme must be adapted for different map user needs. The volunteers, the mapping organizers, and the map users all perform complex cognitive tasks, yet little is known about the actual information needs of the users. We also identify problematic assumptions about the capabilities of the volunteers, principally related to the ability to perform the mapping, and to understand mapping instructions unambiguously. We propose that any robust scheme for collaborative damage mapping must rely on Cognitive Systems Engineering and its principal method, Cognitive Task Analysis (CTA), to understand the information and decision requirements of the map and image users, and how the volunteers can be optimally instructed and their mapping contributions merged into suitable map products. We recommend an iterative approach involving map users, remote sensing specialists, cognitive systems engineers and instructional designers, as well as experimental psychologists.

  13. Mechatronics as a technological basis for an innovative learning environment in engineering

    NASA Astrophysics Data System (ADS)

    Garner, Gavin Thomas

    Mechatronic systems that couple mechanical and electrical systems with the help of computer control are forcing a paradigm shift in the design, manufacture, and implementation of mechanical devices. The inherently interdisciplinary nature of these systems generates exciting new opportunities for developing a hands-on, inventive, and creativity-focused educational program while still embracing rigorous scientific fundamentals. The technologies associated with mechatronics are continually evolving (e.g., integrated circuit chips, miniature and new types of sensors, and state-of-the-art actuators). As a result, a mechatronics curriculum must prepare students to adapt along with these rapidly changing technologies, and perhaps even advance these technologies themselves. Such is the inspiring and uncharted new world that is presented for student exploration and experimentation in the University of Virginia's Mechatronics Laboratory. The underlying goal of this research has been to develop a framework for teaching mechatronics that helps students master fundamental concepts and build essential technical and analytical skills. To this end, two courses involving over fifty hours' worth of technologically-innovative and educationally-effective laboratory experiments have been developed along with open-ended projects in response to the unique and new challenges associated with teaching mechatronics. These experiments synthesize an unprecedentedly vast array of skills from many different disciplines and enable students to haptically absorb the fundamental concepts involved in designing mechatronic systems. They have been optimized through several iterations to become highly efficient. Perspectives on the development of these courses and on the field of mechatronics in general are included. Furthermore, this dissertation demonstrates the integration of new technologies within a learning environment specifically designed to teach mechatronics to mechanical engineers. For mechanical engineering in particular, mechatronics poses considerable challenges, and necessitates a fundamental evolution in the understanding of the relationship between the various engineering disciplines. Consequently, this dissertation helps to define the role that mechatronics must play in mechanical engineering and presents unique laboratory experiments, creative projects, and modeling and simulation exercises as effective tools for teaching mechatronics to the modern mechanical engineering student.

  14. Final Report on ITER Task Agreement 81-10

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brad J. Merrill

    An International Thermonuclear Experimental Reactor (ITER) Implementing Task Agreement (ITA) on Magnet Safety was established between the ITER International Organization (IO) and the Idaho National Laboratory (INL) Fusion Safety Program (FSP) during calendar year 2004. The objectives of this ITA were to add new capabilities to the MAGARC code and to use this updated version of MAGARC to analyze unmitigated superconductor quench events for both poloidal field (PF) and toroidal field (TF) coils of the ITER design. This report documents the completion of the work scope for this ITA. Based on the results obtained for this ITA, an unmitigated quench event in a larger ITER PF coil does not appear to be as severe an accident as in an ITER TF coil.

  15. A power autonomous monopedal robot

    NASA Astrophysics Data System (ADS)

    Krupp, Benjamin T.; Pratt, Jerry E.

    2006-05-01

    We present the design and initial results of a power-autonomous planar monopedal robot. The robot is a gasoline powered, two degree of freedom robot that runs in a circle, constrained by a boom. The robot uses hydraulic Series Elastic Actuators, force-controllable actuators which provide high force fidelity, moderate bandwidth, and low impedance. The actuators are mounted in the body of the robot, with cable drives transmitting power to the hip and knee joints of the leg. A two-stroke, gasoline engine drives a constant displacement pump which pressurizes an accumulator. Absolute position and spring deflection of each of the Series Elastic Actuators are measured using linear encoders. The spring deflection is translated into force output and compared to desired force in a closed loop force-control algorithm implemented in software. The output signal of each force controller drives high performance servo valves which control flow to each of the pistons of the actuators. In designing the robot, we used a simulation-based iterative design approach. Preliminary estimates of the robot's physical parameters were based on past experience and used to create a physically realistic simulation model of the robot. Next, a control algorithm was implemented in simulation to produce planar hopping. Using the joint power requirements and range of motions from simulation, we worked backward specifying pulley diameter, piston diameter and stroke, hydraulic pressure and flow, servo valve flow and bandwidth, gear pump flow, and engine power requirements. Components that meet or exceed these specifications were chosen and integrated into the robot design. Using CAD software, we calculated the physical parameters of the robot design, replaced the original estimates with the CAD estimates, and produced new joint power requirements. We iterated on this process, resulting in a design which was prototyped and tested. The Monopod currently runs at approximately 1.2 m/s with the weight of all the power generating components, but powered from an off-board pump. On a test stand, the eventual on-board power system generates enough pressure and flow to meet the requirements of these runs and we are currently integrating the power system into the real robot. When operated from an off-board system without carrying the weight of the power generating components, the robot currently runs at approximately 2.25 m/s. Ongoing work is focused on integrating the power system into the robot, improving the control algorithm, and investigating methods for improving efficiency.
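
    The inner force loop described here can be pictured with the short Python sketch below; the spring stiffness, PI gains, sample time and valve command range are hypothetical values chosen for illustration, not the robot's.

      SPRING_K = 30_000.0   # N/m, assumed stiffness of the series elastic element
      KP, KI = 0.002, 0.01  # assumed PI gains mapping force error [N] to valve command
      DT = 0.001            # assumed control period [s]

      class SeaForceController:
          def __init__(self):
              self.integral = 0.0

          def update(self, desired_force, spring_deflection):
              # The measured deflection of the series spring gives the actual force.
              measured_force = SPRING_K * spring_deflection
              error = desired_force - measured_force
              self.integral += error * DT
              # Command sent to the servo valve, clipped to its input range.
              command = KP * error + KI * self.integral
              return max(-1.0, min(1.0, command))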

  16. Experimental Evidence on Iterated Reasoning in Games

    PubMed Central

    Grehl, Sascha; Tutić, Andreas

    2015-01-01

    We present experimental evidence on two forms of iterated reasoning in games, i.e. backward induction and interactive knowledge. Besides reliable estimates of the cognitive skills of the subjects, our design allows us to disentangle two possible explanations for the observed limits in performed iterated reasoning: Restrictions in subjects’ cognitive abilities and their beliefs concerning the rationality of co-players. In comparison to previous literature, our estimates regarding subjects’ skills in iterated reasoning are quite pessimistic. Also, we find that beliefs concerning the rationality of co-players are completely irrelevant in explaining the observed limited amount of iterated reasoning in the dirty faces game. In addition, it is demonstrated that skills in backward induction are a solid predictor for skills in iterated knowledge, which points to some generalized ability of the subjects in iterated reasoning. PMID:26312486

  17. Reverse engineering of integrated circuits

    DOEpatents

    Chisholm, Gregory H.; Eckmann, Steven T.; Lain, Christopher M.; Veroff, Robert L.

    2003-01-01

    Software and a method therein to analyze circuits. The software comprises several tools, each of which perform particular functions in the Reverse Engineering process. The analyst, through a standard interface, directs each tool to the portion of the task to which it is most well suited, rendering previously intractable problems solvable. The tools are generally used iteratively to produce a successively more abstract picture of a circuit, about which incomplete a priori knowledge exists.

  18. Self-assembled nanocages based on the coiled coil bundle motif

    NASA Astrophysics Data System (ADS)

    Sinha, Nairiti; Villegas, Jose; Saven, Jeffery; Kiick, Kristi; Pochan, Darrin

    Computational design of coiled coil peptide bundles that undergo solution-phase self-assembly presents a diverse toolbox for engineering new materials with tunable and pre-determined nanostructures that can have various end applications, such as drug delivery, biomineralization and electronics. Self-assembled cages are especially advantageous because the cage geometry provides three distinct functional sites: the interior, the exterior and the solvent-cage interface. In this poster, the synthesis and characterization of a peptide cage based on computationally designed homotetrameric coiled coil bundles as building blocks are discussed. Techniques such as Transmission Electron Microscopy (TEM), Small-Angle Neutron Scattering (SANS) and Analytical Ultracentrifugation (AUC) are employed to characterize the size, shape and molecular weight of the self-assembled peptide cages under different pH and temperature conditions. Self-assembly pathways such as dialysis and thermal quenching are shown to have a significant impact on the final structure of these peptides in solution. Comparison of the results with the target cage design can be used to iteratively improve the peptide design and provide greater understanding of its interactions and folding.

  19. Space-based solar power conversion and delivery systems study. Volume 2: Engineering analysis

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The technical and economic feasibility of Satellite Solar Power Systems was studied with emphasis on the analysis and definition of an integrated strawman configuration concept, from which credible cost data could be estimated. Specifically, system concepts for each of the major subprogram areas were formulated, analyzed, and iterated to the degree necessary for establishing an overall, workable baseline system design. Cost data were estimated for the baseline and used to conduct economic analyses. The baseline concept selected was a 5-GW crystal silicon truss-type photovoltaic configuration, which represented the most mature concept available. The overall results and major findings, and the results of technical analyses performed during the final phase of the study efforts are reported.

  20. ITER activities and fusion technology

    NASA Astrophysics Data System (ADS)

    Seki, M.

    2007-10-01

    At the 21st IAEA Fusion Energy Conference, 68 and 67 papers were presented in the categories of ITER activities and fusion technology, respectively. ITER performance prediction, results of technology R&D and the construction preparation provide good confidence in ITER realization. The superconducting tokamak EAST achieved its first plasma just before the conference. The construction of other new experimental machines has also shown steady progress. Future reactor studies stress the importance of downsizing and a steady-state approach. Reactor technology in the field of the blanket, including the ITER TBM programme, and materials for the demonstration power plant showed sound progress in both R&D and design activities.

  1. ITER Disruption Mitigation System Design

    NASA Astrophysics Data System (ADS)

    Rasmussen, David; Lyttle, M. S.; Baylor, L. R.; Carmichael, J. R.; Caughman, J. B. O.; Combs, S. K.; Ericson, N. M.; Bull-Ezell, N. D.; Fehling, D. T.; Fisher, P. W.; Foust, C. R.; Ha, T.; Meitner, S. J.; Nycz, A.; Shoulders, J. M.; Smith, S. F.; Warmack, R. J.; Coburn, J. D.; Gebhart, T. E.; Fisher, J. T.; Reed, J. R.; Younkin, T. R.

    2015-11-01

    The disruption mitigation system for ITER is under design and will require injection of up to 10 kPa-m3 of deuterium, helium, neon, or argon material for thermal mitigation and up to 100 kPa-m3 of material for suppression of runaway electrons. A hybrid unit compatible with the ITER nuclear, thermal and magnetic field environment is being developed. The unit incorporates a fast gas valve for massive gas injection (MGI) and a shattered pellet injector (SPI) to inject a massive spray of small particles, and can be operated as an SPI with a frozen pellet or an MGI without a pellet. Three ITER upper port locations will have three SPI/MGI units with a common delivery tube. One equatorial port location has space for sixteen similar SPI/MGI units. Supported by US DOE under DE-AC05-00OR22725.

  2. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present-day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute-force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extend the method of maximizing entropy and develop a method of maximizing joint entropy so that it can be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows information-based collaboration between two robotic units toward the same goal in an automated fashion.
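    A minimal Python sketch of the experiment-selection idea described above (choosing the experiment whose distribution of expected results has maximum entropy) is given below. It is an illustration only, not the thesis code: the candidate experiments, model hypotheses, posterior weights and outcome probabilities are all hypothetical.

        import numpy as np

        def predictive_entropy(outcome_probs_per_model, posterior_weights):
            """Entropy of the model-averaged (predictive) outcome distribution."""
            predictive = posterior_weights @ outcome_probs_per_model  # shape: (n_outcomes,)
            predictive = predictive / predictive.sum()
            nz = predictive > 0
            return -np.sum(predictive[nz] * np.log(predictive[nz]))

        def select_experiment(candidate_likelihoods, posterior_weights):
            """Pick the candidate whose expected results are most uncertain.

            candidate_likelihoods has shape (n_experiments, n_models, n_outcomes)
            and holds P(outcome | model, experiment) for each candidate.
            """
            scores = [predictive_entropy(L, posterior_weights) for L in candidate_likelihoods]
            return int(np.argmax(scores)), scores

        # Toy usage: two models, three candidate experiments, binary outcomes.
        posterior = np.array([0.6, 0.4])
        likelihoods = np.array([
            [[0.9, 0.1], [0.8, 0.2]],  # models mostly agree -> low predictive entropy
            [[0.9, 0.1], [0.1, 0.9]],  # models disagree -> higher predictive entropy
            [[0.5, 0.5], [0.5, 0.5]],  # maximally uncertain outcomes
        ])
        best, scores = select_experiment(likelihoods, posterior)
        print(best, scores)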

  3. Designing Colorectal Cancer Screening Decision Support: A Cognitive Engineering Enterprise.

    PubMed

    Militello, Laura G; Saleem, Jason J; Borders, Morgan R; Sushereba, Christen E; Haverkamp, Donald; Wolf, Steven P; Doebbeling, Bradley N

    2016-03-01

    Adoption of clinical decision support has been limited. Important barriers include an emphasis on algorithmic approaches to decision support that do not align well with clinical work flow and human decision strategies, and the expense and challenge of developing, implementing, and refining decision support features in existing electronic health records (EHRs). We applied decision-centered design to create a modular software application to support physicians in managing and tracking colorectal cancer screening. Using decision-centered design facilitates a thorough understanding of cognitive support requirements from an end user perspective as a foundation for design. In this project, we used an iterative design process, including ethnographic observation and cognitive task analysis, to move from an initial design concept to a working modular software application called the Screening & Surveillance App. The beta version is tailored to work with the Veterans Health Administration's EHR Computerized Patient Record System (CPRS). Primary care providers using the beta version Screening & Surveillance App more accurately answered questions about patients and found relevant information more quickly compared to those using CPRS alone. Primary care providers also reported reduced mental effort and rated the Screening & Surveillance App positively for usability.

  4. Designing Colorectal Cancer Screening Decision Support: A Cognitive Engineering Enterprise

    PubMed Central

    Militello, Laura G.; Saleem, Jason J.; Borders, Morgan R.; Sushereba, Christen E.; Haverkamp, Donald; Wolf, Steven P.; Doebbeling, Bradley N.

    2016-01-01

    Adoption of clinical decision support has been limited. Important barriers include an emphasis on algorithmic approaches to decision support that do not align well with clinical work flow and human decision strategies, and the expense and challenge of developing, implementing, and refining decision support features in existing electronic health records (EHRs). We applied decision-centered design to create a modular software application to support physicians in managing and tracking colorectal cancer screening. Using decision-centered design facilitates a thorough understanding of cognitive support requirements from an end user perspective as a foundation for design. In this project, we used an iterative design process, including ethnographic observation and cognitive task analysis, to move from an initial design concept to a working modular software application called the Screening & Surveillance App. The beta version is tailored to work with the Veterans Health Administration’s EHR Computerized Patient Record System (CPRS). Primary care providers using the beta version Screening & Surveillance App more accurately answered questions about patients and found relevant information more quickly compared to those using CPRS alone. Primary care providers also reported reduced mental effort and rated the Screening & Surveillance App positively for usability. PMID:26973441

  5. Computer Program for Analysis, Design and Optimization of Propulsion, Dynamics, and Kinematics of Multistage Rockets

    NASA Astrophysics Data System (ADS)

    Lali, Mehdi

    2009-03-01

    A comprehensive computer program is designed in MATLAB to analyze, design and optimize the propulsion, dynamics, thermodynamics, and kinematics of any serial multi-staging rocket for a set of given data. The program is quite user-friendly. It comprises two main sections: "analysis and design" and "optimization." Each section has a GUI (Graphical User Interface) in which the rocket's data are entered by the user and by which the program is run. The first section analyzes the performance of a rocket previously devised by the user. Numerous plots and subplots are provided to display the performance of the rocket. The second section of the program finds the "optimum trajectory" via billions of iterations and computations, which are carried out by sophisticated algorithms using numerical methods and incremental integrations. Innovative techniques are applied to calculate the optimal engine parameters and to design the "optimal pitch program." The program is stand-alone in the sense that it calculates almost every design parameter related to rocket propulsion and dynamics. It is meant to be used for actual launch operations as well as for educational and research purposes.
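    The incremental-integration idea behind such a tool can be illustrated with a far simpler sketch. The Python fragment below is not the MATLAB program described above; all stage parameters are hypothetical. It integrates a drag-free vertical ascent through two stages with a fixed time step and reports the burnout state.

        G0 = 9.81  # m/s^2

        def fly_stage(state, thrust, isp, prop_mass, dry_mass, dt=0.1):
            """Integrate one stage (no drag) until its propellant is exhausted."""
            h, v, m = state
            mdot = thrust / (isp * G0)
            burn_time = prop_mass / mdot
            t = 0.0
            while t < burn_time:
                a = thrust / m - G0          # thrust acceleration minus gravity
                h += v * dt
                v += a * dt
                m -= mdot * dt
                t += dt
            return h, v, m - dry_mass        # jettison the spent stage's dry mass

        # Two hypothetical stages: (thrust [N], Isp [s], propellant [kg], dry mass [kg]).
        stages = [(1.2e6, 290.0, 60000.0, 6000.0),
                  (2.0e5, 340.0, 15000.0, 2000.0)]
        payload = 1000.0

        state = (0.0, 0.0, payload + sum(p + d for _, _, p, d in stages))
        for thrust, isp, prop, dry in stages:
            state = fly_stage(state, thrust, isp, prop, dry)
        print("burnout altitude ~%.1f km, velocity ~%.0f m/s" % (state[0] / 1000.0, state[1]))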

  6. Development of High Fidelity, Fuel-Like Thermal Simulators for Non-Nuclear Testing

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, S. M.; Farmer, J.; Dixon, D.; Kapernick, R.; Dickens, R.; Adams, M.

    2007-01-01

    Non-nuclear testing can be a valuable tool in the development of a space nuclear power or propulsion system. In a non-nuclear test bed, electric heaters are used to simulate the heat from nuclear fuel. Work at the NASA Marshall Space Flight Center seeks to develop high fidelity thermal simulators that not only match the static power profile that would be observed in an operating, fueled nuclear reactor, but also match the dynamic fuel pin performance during feasible transients. Comparison between the fuel pins and thermal simulators is made at the fuel clad surface, which corresponds to the sheath surface in the thermal simulator. Static and dynamic fuel pin performance was determined using SINDA-FLUINT analysis, and the performance of conceptual thermal simulator designs was compared to the expected nuclear performance. Through a series of iterative analyses, a conceptual high fidelity design will be developed, followed by engineering design, fabrication, and testing to validate the overall design process. Although the resulting thermal simulator will be designed for a specific reactor concept, establishing this rigorous design process will assist in streamlining thermal simulator development for other reactor concepts.

  7. Architectural Specialization for Inter-Iteration Loop Dependence Patterns

    DTIC Science & Technology

    2015-10-01

    Presentation slides by Christopher Batten (Computer Systems Laboratory); no abstract was extracted for this record. The recovered text consists only of figure labels on technology trends in computer architecture (transistor count, clock frequency, and typical power for processors such as the MIPS R2K, DEC Alpha 21264 and Intel P4) and on design performance versus energy efficiency (tasks per joule) for simple, high-performance, and embedded architectures.

  8. VIMOS Instrument Control Software Design: an Object Oriented Approach

    NASA Astrophysics Data System (ADS)

    Brau-Nogué, Sylvie; Lucuix, Christian

    2002-12-01

    The Franco-Italian VIMOS instrument is a VIsible imaging Multi-Object Spectrograph with outstanding multiplex capabilities, allowing spectra of more than 800 objects to be taken simultaneously, or an integral field spectroscopy mode over a 54x54 arcsec area. VIMOS is being installed at the Nasmyth focus of the third Unit Telescope of the European Southern Observatory Very Large Telescope (VLT) at Mount Paranal in Chile. This paper will describe the analysis, the design and the implementation of the VIMOS Instrument Control System, using UML notation. Our control group followed an object-oriented software process while keeping in mind the ESO VLT standard control concepts. At the ESO VLT a complete software library is available. Rather than applying a waterfall lifecycle, the ICS project used iterative development, a lifecycle consisting of several iterations. Each iteration consisted of capturing and evaluating the requirements, visual modeling for analysis and design, implementation, test, and deployment. Depending on the project phase, iterations focused more or less on specific activities. The result is an object model (the design model), including use-case realizations. An implementation view and a deployment view complement this product. An extract of the VIMOS ICS UML model will be presented and some implementation, integration and test issues will be discussed.

  9. Layout compliance for triple patterning lithography: an iterative approach

    NASA Astrophysics Data System (ADS)

    Yu, Bei; Garreton, Gilda; Pan, David Z.

    2014-10-01

    As the semiconductor process further scales down, the industry encounters many lithography-related issues. In the 14nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. As one of the most challenging problems in TPL, layout decomposition has recently received more attention from both industry and academia. Ideally the decomposer should point out locations in the layout that are not triple-patterning decomposable and therefore require manual intervention by designers. A traditional decomposition flow is an iterative process, where each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational times, and design closure issues therefore continue to linger in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts, but also provides suggestions to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer provides a quick solution for a selected portion of the layout. We believe this framework is efficient in terms of performance and is designer friendly.
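    At its core, triple-patterning decomposition amounts to 3-colouring a conflict graph whose nodes are layout features and whose edges join features that are too close to share a mask. The short Python sketch below illustrates that formulation only; it uses naive backtracking on a toy graph and is not the incremental algorithm described in the paper.

        def three_colour(nodes, conflicts):
            """Return a node -> mask assignment (0/1/2), or None if not decomposable."""
            assignment = {}

            def ok(node, colour):
                return all(assignment.get(nbr) != colour
                           for nbr in conflicts.get(node, ()))

            def backtrack(i):
                if i == len(nodes):
                    return True
                node = nodes[i]
                for colour in range(3):
                    if ok(node, colour):
                        assignment[node] = colour
                        if backtrack(i + 1):
                            return True
                        del assignment[node]
                return False

            return dict(assignment) if backtrack(0) else None

        # Toy conflict graph: four mutually conflicting features (K4) cannot be
        # 3-coloured, so a decomposer would flag them for manual modification.
        conflicts = {'a': ['b', 'c', 'd'], 'b': ['a', 'c', 'd'],
                     'c': ['a', 'b', 'd'], 'd': ['a', 'b', 'c']}
        print(three_colour(list(conflicts), conflicts))  # -> None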

  10. Design of object-oriented distributed simulation classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D. (Principal Investigator)

    1995-01-01

    Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for 'Numerical Propulsion Simulation System'. NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is the need for communication among the parallel executing processors which in turn implies need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon MIT 'Actor' model of a concurrent object and uses 'connectors' to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. Its application to realistic configurations has not been carried out.
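    The connector idea described above can be sketched compactly. The Python fragment below is only a schematic stand-in for the NPSS design, which is built on the MIT "Actor" model and a distributed runtime: components expose named ports, and a Connector object wires an output port of one component to an input port of another at run time, without any change to component code.

        class Component:
            """A simulation component with named input ports."""
            def __init__(self, name):
                self.name = name
                self.inputs = {}

            def receive(self, port, value):
                self.inputs[port] = value

        class Connector:
            """Dynamically created link from (source, out_port) to (target, in_port)."""
            def __init__(self, source, out_port, target, in_port):
                self.source, self.out_port = source, out_port
                self.target, self.in_port = target, in_port

            def transfer(self, value):
                # In a distributed run this would be a message sent between machines.
                self.target.receive(self.in_port, value)

        fan = Component("fan")
        compressor = Component("compressor")
        link = Connector(fan, "exit_flow", compressor, "inlet_flow")
        link.transfer({"mass_flow": 120.0, "pressure": 1.6e5})  # illustrative payload
        print(compressor.inputs)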

  11. Design of Object-Oriented Distributed Simulation Classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1995-01-01

    Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for "Numerical Propulsion Simulation System". NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is the need for communication among the parallel executing processors which in turn implies need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon MIT "Actor" model of a concurrent object and uses "connectors" to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. Its application to realistic configurations has not been carried out.

  12. Conceptual design of data acquisition and control system for two Rf driver based negative ion source for fusion R&D

    NASA Astrophysics Data System (ADS)

    Soni, Jigensh; Yadav, R. K.; Patel, A.; Gahlaut, A.; Mistry, H.; Parmar, K. G.; Mahesh, V.; Parmar, D.; Prajapati, B.; Singh, M. J.; Bandyopadhyay, M.; Bansal, G.; Pandya, K.; Chakraborty, A.

    2013-02-01

    Twin Source, an inductively coupled, two-RF-driver-based 180 kW, 1 MHz negative ion source experimental setup, has been initiated at IPR, Gandhinagar, under the Indian program, with the objective of understanding the physics and technology of multi-driver coupling. Twin Source [1] (TS) also provides an intermediate platform between the operational ROBIN [2] [5] and the eight-RF-driver-based Indian test facility INTF [3]. The Twin Source experiment requires a central system to provide control, data acquisition and a communication interface, referred to as TS-CODAC, for which a software architecture similar to the ITER CODAC Core System has been chosen for implementation. The Core System is a software suite for ITER plant system manufacturers to use as a template for the development of their interface with CODAC. The ITER approach, in terms of technology, has been adopted for TS-CODAC so as to develop the expertise needed for developing and operating a control system based on the ITER guidelines, since a similar configuration needs to be implemented for the INTF. This cost-effective approach provides an opportunity to evaluate and learn ITER CODAC technology, documentation, information technology and control system processes on an operational machine. The conceptual design of the TS-CODAC system has been completed. For complete control of the system, approximately 200 control signals and 152 acquisition signals are needed. In TS-CODAC, the required control-loop time is in the range of 5-10 ms; therefore a PLC (Siemens S7-400) has been chosen for the control system, as suggested in the ITER slow controller catalog. For data acquisition, the maximum sampling interval required is 100 microseconds, and therefore a National Instruments (NI) PXIe system and NI 6259 digitizer cards have been selected, as suggested in the ITER fast controller catalog. This paper presents the conceptual design of the TS-CODAC system based on the ITER CODAC Core software and the applicable plant system integration processes.

  13. A transatlantic perspective on 20 emerging issues in biological engineering

    PubMed Central

    Rhodes, Catherine; Molloy, Jennifer C; Millett, Piers; Adam, Laura; Breitling, Rainer; Carlson, Rob; Casagrande, Rocco; Dando, Malcolm; Doubleday, Robert; Drexler, Eric; Edwards, Brett; Ellis, Tom; Evans, Nicholas G; Hammond, Richard; Haseloff, Jim; Kahl, Linda; Kuiken, Todd; Lichman, Benjamin R; Matthewman, Colette A; Napier, Johnathan A; ÓhÉigeartaigh, Seán S; Patron, Nicola J; Perello, Edward; Shapira, Philip; Tait, Joyce; Takano, Eriko; Sutherland, William J

    2017-01-01

    Advances in biological engineering are likely to have substantial impacts on global society. To explore these potential impacts we ran a horizon scanning exercise to capture a range of perspectives on the opportunities and risks presented by biological engineering. We first identified 70 potential issues, and then used an iterative process to prioritise 20 issues that we considered to be emerging, to have potential global impact, and to be relatively unknown outside the field of biological engineering. The issues identified may be of interest to researchers, businesses and policy makers in sectors such as health, energy, agriculture and the environment. PMID:29132504

  14. Gaussian-Beam/Physical-Optics Design Of Beam Waveguide

    NASA Technical Reports Server (NTRS)

    Veruttipong, Watt; Chen, Jacqueline C.; Bathker, Dan A.

    1993-01-01

    In this iterative method of designing a wideband beam-waveguide feed for a paraboloidal-reflector antenna, a Gaussian-beam approximation is alternated with a more nearly exact physical-optics analysis of diffraction. The beam waveguide includes curved and straight reflectors guiding radiation from the feed horn to the subreflector. For the iterative design calculations, the curved mirrors are mathematically modeled as thin lenses. Each distance Li is the combined length of two straight-line segments intersecting at one of the flat mirrors. The method is useful for designing beam-waveguide reflectors or mirrors required to have diameters less than approximately 30 wavelengths at one or more intended operating frequencies.
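    The Gaussian-beam half of such an iteration can be sketched with standard ABCD-matrix propagation of the complex beam parameter q through drifts and thin lenses (the curved mirrors). The Python fragment below is a generic illustration under assumed values; the wavelength, waist, focal length and distances are not taken from the design described above.

        import numpy as np

        def free_space(d):
            return np.array([[1.0, d], [0.0, 1.0]])

        def thin_lens(f):
            return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

        def propagate(q, elements):
            """Apply ABCD elements in order: q' = (A q + B) / (C q + D)."""
            for M in elements:
                (A, B), (C, D) = M
                q = (A * q + B) / (C * q + D)
            return q

        def beam_radius(q, wavelength):
            return np.sqrt(-wavelength / (np.pi * np.imag(1.0 / q)))

        wavelength = 0.0375                     # metres (8 GHz), assumed
        w0 = 0.30                               # assumed 30 cm waist at the feed horn
        q0 = 1j * np.pi * w0 ** 2 / wavelength  # q at the waist (R = infinity)

        # Hypothetical beam waveguide: drift, curved mirror as thin lens, drift.
        path = [free_space(2.0), thin_lens(1.5), free_space(2.5)]
        q_out = propagate(q0, path)
        print("beam radius at output: %.3f m" % beam_radius(q_out, wavelength))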

  15. Probabilistic distance-based quantizer design for distributed estimation

    NASA Astrophysics Data System (ADS)

    Kim, Yoon Hak

    2016-12-01

    We consider the iterative design of independently operating local quantizers at nodes that must cooperate, without interaction, to achieve application objectives in distributed estimation systems. As a new cost function we suggest a probabilistic distance between the posterior distribution and its quantized counterpart, expressed as the Kullback-Leibler (KL) divergence. We first show that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the logarithm of the quantized posterior distribution on average, which can be further simplified computationally in our iterative design. We propose an iterative design algorithm that seeks to maximize this simplified version of the quantized posterior distribution, show that the algorithm converges to a global optimum due to the convexity of the cost function, and show that it generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be simplified efficiently for practical use in power-constrained nodes. Finally, extensive experiments demonstrate a clear improvement in estimation performance compared with typical designs and previously published design techniques.
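    The overall structure of such a cyclic generalized Lloyd design is shown in the Python sketch below. For brevity it uses the familiar squared-error cost on a scalar source; the paper replaces that cost with the KL-based (log quantized-posterior) criterion, which is not reproduced here.

        import numpy as np

        def lloyd_quantizer(samples, n_levels, n_iters=50, seed=0):
            """Cyclic two-step (partition / update) quantizer design."""
            rng = np.random.default_rng(seed)
            codebook = rng.choice(samples, size=n_levels, replace=False)
            for _ in range(n_iters):
                # Partition step: assign each sample to its nearest reproduction level.
                idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
                # Update step: move each level to minimise the (surrogate) cost.
                for k in range(n_levels):
                    if np.any(idx == k):
                        codebook[k] = samples[idx == k].mean()
            return np.sort(codebook)

        samples = np.random.default_rng(1).normal(size=5000)  # stand-in sensor readings
        print(lloyd_quantizer(samples, n_levels=4))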

  16. Too Little Too Soon? Modeling the Risks of Spiral Development

    DTIC Science & Technology

    2007-04-30

    No abstract was extracted for this record; the recovered text is residue from a system-dynamics simulation plot showing, over time in weeks, the work packages started and active in the Requirements, Technology, Design, and Manufacturing phases of the first iteration of a "JavelinCalibration" model.

  17. Application of a repetitive process setting to design of monotonically convergent iterative learning control

    NASA Astrophysics Data System (ADS)

    Boski, Marcin; Paszke, Wojciech

    2015-11-01

    This paper deals with the problem of designing an iterative learning control (ILC) algorithm for discrete linear systems using repetitive process stability theory. The resulting design produces a stabilizing output feedback controller in the time domain and a feedforward controller that guarantees monotonic convergence in the trial-to-trial domain. The results are also extended to limited-frequency-range design specifications. The new design procedure is formulated in terms of linear matrix inequality (LMI) representations, which guarantee the prescribed performance of the ILC scheme. A simulation example is given to illustrate the theoretical developments.
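    Such designs ultimately reduce to checking the feasibility of a set of LMIs with a semidefinite-programming solver. The cvxpy sketch below is only a generic stand-in under assumed data, not the paper's conditions: it certifies a plain discrete-time Lyapunov inequality A^T P A - P < 0 with P > 0 for an illustrative system matrix.

        import numpy as np
        import cvxpy as cp

        A = np.array([[0.5, 0.2],
                      [0.0, 0.8]])   # hypothetical (stable) iteration matrix
        n = A.shape[0]

        P = cp.Variable((n, n), symmetric=True)
        eps = 1e-6
        constraints = [P >> eps * np.eye(n),
                       A.T @ P @ A - P << -eps * np.eye(n)]

        problem = cp.Problem(cp.Minimize(0), constraints)
        problem.solve()
        print(problem.status)        # 'optimal' means a certificate P exists
        print(P.value)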

  18. Sensemaking in a Value Based Context for Large Scale Complex Engineered Systems

    NASA Astrophysics Data System (ADS)

    Sikkandar Basha, Nazareen

    The design and development of Large-Scale Complex Engineered Systems (LSCES) requires the involvement of multiple teams at numerous levels of the organization and interactions with large numbers of people and interdisciplinary departments. Traditionally, requirements-driven Systems Engineering (SE) is used in the design and development of these LSCES, with requirements capturing the preferences of the stakeholder. Due to the complexity of the system, multiple levels of interaction are required to elicit the requirements of the system within the organization. Since LSCES involve people and interactions between teams and interdisciplinary departments, they are socio-technical in nature. The elicitation of requirements in most large-scale system projects is subject to creep in time and cost due to the uncertainty and ambiguity of requirements during design and development. In an organizational structure, cost and time overruns can occur at any level and iterate back and forth, further increasing cost and time. Past research has shown that rigorous approaches such as value-based design can be used to control such creep, but before these approaches can be applied, the decision maker should have a proper understanding of requirements creep and of the state of the system when the creep occurs. Sensemaking is used to understand the state of the system when creep occurs and to provide guidance to the decision maker. This research proposes the use of the Cynefin framework, a sensemaking framework, in the design and development of LSCES. It can aid in understanding the system and in decision making, minimizing the value gap due to requirements creep by eliminating the ambiguity which arises during design and development. A sample hierarchical organization is used to demonstrate the state of the system at the occurrence of requirements creep, in terms of cost and time, using the Cynefin framework. These trials are repeated for different requirements and at different sub-system levels. The results show that the Cynefin framework can be used to improve the value of the system and can be used for predictive analysis. Decision makers can use these findings, together with rigorous approaches, to improve the design of Large-Scale Complex Engineered Systems.

  19. Volumetric imaging of fast biological dynamics in deep tissue via wavefront engineering

    NASA Astrophysics Data System (ADS)

    Kong, Lingjie; Tang, Jianyong; Cui, Meng

    2016-03-01

    To reveal fast biological dynamics in deep tissue, we combine two wavefront engineering methods developed in our laboratory, namely optical phase-locked ultrasound lens (OPLUL) based volumetric imaging and the iterative multiphoton adaptive compensation technique (IMPACT). OPLUL is used to generate an oscillating defocusing wavefront for fast axial scanning, and IMPACT is used to compensate wavefront distortions for deep-tissue imaging. We show promising applications of this combination in neuroscience and immunology.

  20. Clinical Decision Support Systems (CDSS) for preventive management of COPD patients.

    PubMed

    Velickovski, Filip; Ceccaroni, Luigi; Roca, Josep; Burgos, Felip; Galdiz, Juan B; Marina, Nuria; Lluch-Ariet, Magí

    2014-11-28

    The use of information and communication technologies to manage chronic diseases allows the application of integrated care pathways, and the optimization and standardization of care processes. Decision support tools can assist in the adherence to best-practice medicine in critical decision points during the execution of a care pathway. The objectives are to design, develop, and assess a clinical decision support system (CDSS) offering a suite of services for the early detection and assessment of chronic obstructive pulmonary disease (COPD), which can be easily integrated into a healthcare providers' work-flow. The software architecture model for the CDSS, interoperable clinical-knowledge representation, and inference engine were designed and implemented to form a base CDSS framework. The CDSS functionalities were iteratively developed through requirement-adjustment/development/validation cycles using enterprise-grade software-engineering methodologies and technologies. Within each cycle, clinical-knowledge acquisition was performed by a health-informatics engineer and a clinical-expert team. A suite of decision-support web services for (i) COPD early detection and diagnosis, (ii) spirometry quality-control support, (iii) patient stratification, was deployed in a secured environment on-line. The CDSS diagnostic performance was assessed using a validation set of 323 cases with 90% specificity, and 96% sensitivity. Web services were integrated in existing health information system platforms. Specialized decision support can be offered as a complementary service to existing policies of integrated care for chronic-disease management. The CDSS was able to issue recommendations that have a high degree of accuracy to support COPD case-finding. Integration into healthcare providers' work-flow can be achieved seamlessly through the use of a modular design and service-oriented architecture that connect to existing health information systems.

  1. Clinical Decision Support Systems (CDSS) for preventive management of COPD patients

    PubMed Central

    2014-01-01

    Background The use of information and communication technologies to manage chronic diseases allows the application of integrated care pathways, and the optimization and standardization of care processes. Decision support tools can assist in the adherence to best-practice medicine in critical decision points during the execution of a care pathway. Objectives The objectives are to design, develop, and assess a clinical decision support system (CDSS) offering a suite of services for the early detection and assessment of chronic obstructive pulmonary disease (COPD), which can be easily integrated into a healthcare providers' work-flow. Methods The software architecture model for the CDSS, interoperable clinical-knowledge representation, and inference engine were designed and implemented to form a base CDSS framework. The CDSS functionalities were iteratively developed through requirement-adjustment/development/validation cycles using enterprise-grade software-engineering methodologies and technologies. Within each cycle, clinical-knowledge acquisition was performed by a health-informatics engineer and a clinical-expert team. Results A suite of decision-support web services for (i) COPD early detection and diagnosis, (ii) spirometry quality-control support, (iii) patient stratification, was deployed in a secured environment on-line. The CDSS diagnostic performance was assessed using a validation set of 323 cases with 90% specificity, and 96% sensitivity. Web services were integrated in existing health information system platforms. Conclusions Specialized decision support can be offered as a complementary service to existing policies of integrated care for chronic-disease management. The CDSS was able to issue recommendations that have a high degree of accuracy to support COPD case-finding. Integration into healthcare providers' work-flow can be achieved seamlessly through the use of a modular design and service-oriented architecture that connect to existing health information systems. PMID:25471545

  2. Synthetic river valleys: Creating prescribed topography for form-process inquiry and river rehabilitation design

    NASA Astrophysics Data System (ADS)

    Brown, R. A.; Pasternack, G. B.; Wallender, W. W.

    2014-06-01

    The synthesis of artificial landforms is complementary to geomorphic analysis because it affords a reflection on both the characteristics and intrinsic formative processes of real world conditions. Moreover, the applied terminus of geomorphic theory is commonly manifested in the engineering and rehabilitation of riverine landforms where the goal is to create specific processes associated with specific morphology. To date, the synthesis of river topography has been explored outside of geomorphology through artistic renderings, computer science applications, and river rehabilitation design; while within geomorphology it has been explored using morphodynamic modeling, such as one-dimensional simulation of river reach profiles, two-dimensional simulation of river networks, and three-dimensional simulation of subreach scale river morphology. To date, no approach allows geomorphologists, engineers, or river rehabilitation practitioners to create landforms of prescribed conditions. In this paper a method for creating topography of synthetic river valleys is introduced that utilizes a theoretical framework that draws from fluvial geomorphology, computer science, and geometric modeling. Such a method would be valuable to geomorphologists in understanding form-process linkages as well as to engineers and river rehabilitation practitioners in developing design surfaces that can be rapidly iterated. The method introduced herein relies on the discretization of river valley topography into geometric elements associated with overlapping and orthogonal two-dimensional planes such as the planform, profile, and cross section that are represented by mathematical functions, termed geometric element equations. Topographic surfaces can be parameterized independently or dependently using a geomorphic covariance structure between the spatial series of geometric element equations. To illustrate the approach and overall model flexibility examples are provided that are associated with mountain, lowland, and hybrid synthetic river valleys. To conclude, recommended advances such as multithread channels are discussed along with potential applications.
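    A toy version of composing a surface from independent geometric element equations is sketched below. This is an illustration only, not the authors' model: it superimposes a sinuous planform, a sloping longitudinal profile, and a parabolic channel cross-section on a regular grid, with all parameter values chosen arbitrarily.

        import numpy as np

        def synthetic_valley(nx=200, ny=81, length=1000.0, width=200.0,
                             slope=0.002, amplitude=30.0, wavelength=250.0,
                             depth=5.0, half_width=40.0):
            x = np.linspace(0.0, length, nx)             # downstream distance (m)
            y = np.linspace(-width / 2, width / 2, ny)   # cross-valley distance (m)
            X, Y = np.meshgrid(x, y, indexing="ij")

            centerline = amplitude * np.sin(2 * np.pi * X / wavelength)  # planform
            profile = -slope * X                                          # longitudinal profile
            offset = Y - centerline                                       # distance from channel
            channel = depth * np.clip((offset / half_width) ** 2 - 1.0, -1.0, 0.0)

            return X, Y, profile + channel               # composite valley-floor elevation

        X, Y, Z = synthetic_valley()
        print(Z.shape, float(Z.min()), float(Z.max()))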

  3. On the Implementation of Iterative Detection in Real-World MIMO Wireless Systems

    DTIC Science & Technology

    2003-12-01

    Multiple-input, multiple-output (MIMO) systems permit a remarkable exploitation of the spectrum compared with traditional single-antenna systems... known pilot symbol vectors cause a negligible loss of efficiency compared with the hypothetical case of perfect channel knowledge... useful design guidelines for iterative systems. It does not provide any fundamental understanding as to how the design of the detector can improve the

  4. Sub-Scale Testing and Development of the J-2X Fuel Turbopump Inducer

    NASA Technical Reports Server (NTRS)

    Sargent, Scott R.; Becht, David G.

    2011-01-01

    In the early stages of the J-2X upper stage engine program, various inducer configurations proposed for use in the fuel turbopump (FTP) were tested in water. The primary objectives of this test effort were twofold: first, to obtain a more comprehensive data set than that which existed in the Pratt & Whitney Rocketdyne (PWR) historical archives from the original J-2S program, and second, to supplement that data set with information regarding the cavitation-induced vibrations for both the historical J-2S configuration and those tested for the J-2X program. The J-2X FTP inducer, which actually consists of an inducer stage mechanically attached to a kicker stage, underwent four primary design iterations utilizing sub-scale test articles manufactured and tested in PWR's Engineering Development Laboratory (EDL). The kicker remained unchanged throughout the test series. The four inducer configurations tested retained many of the basic design features of the J-2S inducer, but also included variations in leading-edge blade thickness and blade angle distribution, primarily aimed at improving suction performance at higher flow coefficients. From these data sets, the effects of the tested design variables on hydrodynamic performance and cavitation instabilities were discerned. The impact on inducer efficiency was also assessed in a limited comparison.

  5. Aircraft applications of fault detection and isolation techniques

    NASA Astrophysics Data System (ADS)

    Marcos Esteban, Andres

    In this thesis the problems of fault detection & isolation and fault tolerant systems are studied from the perspective of LTI frequency-domain, model-based techniques. Emphasis is placed on the applicability of these LTI techniques to nonlinear models, especially to aerospace systems. Two applications of H∞ LTI fault diagnosis are given using an open-loop (no controller) design approach: one for the longitudinal motion of a Boeing 747-100/200 aircraft, the other for a turbofan jet engine. An algorithm formalizing a robust identification approach based on model validation ideas is also given and applied to the previous jet engine. A general linear fractional transformation formulation is given in terms of the Youla and Dual Youla parameterizations for the integrated (control and diagnosis filter) approach. This formulation provides better insight into the trade-off between the control and the diagnosis objectives. It also provides the basic groundwork towards the development of nested schemes for the integrated approach. These nested structures allow iterative improvements on the control/filter Youla parameters based on successive identification of the system uncertainty (as given by the Dual Youla parameter). The thesis concludes with an application of H∞ LTI techniques to the integrated design for the longitudinal motion of the previous Boeing 747-100/200 model.

  6. Status of the ITER Electron Cyclotron Heating and Current Drive System

    NASA Astrophysics Data System (ADS)

    Darbos, Caroline; Albajar, Ferran; Bonicelli, Tullio; Carannante, Giuseppe; Cavinato, Mario; Cismondi, Fabio; Denisov, Grigory; Farina, Daniela; Gagliardi, Mario; Gandini, Franco; Gassmann, Thibault; Goodman, Timothy; Hanson, Gregory; Henderson, Mark A.; Kajiwara, Ken; McElhaney, Karen; Nousiainen, Risto; Oda, Yasuhisa; Omori, Toshimichi; Oustinov, Alexander; Parmar, Darshankumar; Popov, Vladimir L.; Purohit, Dharmesh; Rao, Shambhu Laxmikanth; Rasmussen, David; Rathod, Vipal; Ronden, Dennis M. S.; Saibene, Gabriella; Sakamoto, Keishi; Sartori, Filippo; Scherer, Theo; Singh, Narinder Pal; Strauß, Dirk; Takahashi, Koji

    2016-01-01

    The electron cyclotron (EC) heating and current drive (H&CD) system developed for the ITER is made of 12 sets of high-voltage power supplies feeding 24 gyrotrons connected through 24 transmission lines (TL), to five launchers, four located in upper ports and one at the equatorial level. Nearly all procurements are in-kind, following general ITER philosophy, and will come from Europe, India, Japan, Russia and the USA. The full system is designed to couple to the plasma 20 MW among the 24 MW generated power, at the frequency of 170 GHz, for various physics applications such as plasma start-up, central H&CD and magnetohydrodynamic (MHD) activity control. The design takes present day technology and extends toward high-power continuous operation, which represents a large step forward as compared to the present state of the art. The ITER EC system will be a stepping stone to future EC systems for DEMO and beyond.

  7. CORSICA modelling of ITER hybrid operation scenarios

    NASA Astrophysics Data System (ADS)

    Kim, S. H.; Bulmer, R. H.; Campbell, D. J.; Casper, T. A.; LoDestro, L. L.; Meyer, W. H.; Pearlstein, L. D.; Snipes, J. A.

    2016-12-01

    The hybrid operating mode observed in several tokamaks is characterized by further enhancement over the high plasma confinement (H-mode) associated with reduced magneto-hydro-dynamic (MHD) instabilities linked to a stationary flat safety factor (q) profile in the core region. The proposed ITER hybrid operation is currently aiming at operating for a long burn duration (>1000 s) with a moderate fusion power multiplication factor, Q, of at least 5. This paper presents candidate ITER hybrid operation scenarios developed using a free-boundary transport modelling code, CORSICA, taking all relevant physics and engineering constraints into account. The ITER hybrid operation scenarios have been developed by tailoring the 15 MA baseline ITER inductive H-mode scenario. Accessible operation conditions for ITER hybrid operation and the achievable range of plasma parameters have been investigated considering uncertainties on the plasma confinement and transport. ITER operation capability for avoiding the poloidal field coil current, field and force limits has been examined by applying different current ramp rates, flat-top plasma currents and densities, and pre-magnetization of the poloidal field coils. Various combinations of heating and current drive (H&CD) schemes have been applied to study several physics issues, such as the plasma current density profile tailoring, enhancement of the plasma energy confinement and fusion power generation. A parameterized edge pedestal model based on EPED1 added to the CORSICA code has been applied to hybrid operation scenarios. Finally, fully self-consistent free-boundary transport simulations have been performed to provide information on the poloidal field coil voltage demands and to study the controllability with the ITER controllers. Extended from Proc. 24th Int. Conf. on Fusion Energy (San Diego, 2012) IT/P1-13.

  8. Low-temperature tensile strength of the ITER-TF model coil insulation system after reactor irradiation

    NASA Astrophysics Data System (ADS)

    Bittner-Rohrhofer, K.; Humer, K.; Weber, H. W.

    The windings of the superconducting magnet coils for the ITER-FEAT fusion device are affected by high mechanical stresses at cryogenic temperatures and by a radiation environment, which impose certain constraints especially on the insulating materials. A glass fiber reinforced plastic (GFRP) laminate, which consists of Kapton/R-glass-fiber reinforcement tapes, vacuum-impregnated in a DGEBA epoxy system, was used for the European toroidal field model coil turn insulation of ITER. In order to assess its mechanical properties under the actual operating conditions of ITER-FEAT, cryogenic (77 K) static tensile tests and tension-tension fatigue measurements were done before and after irradiation to a fast neutron fluence of 1×10^22 m^-2 (E > 0.1 MeV), i.e. the ITER-FEAT design fluence level. We find that the mechanical strength and the fracture behavior of this GFRP are strongly influenced by the winding direction of the tape and by the radiation-induced delamination process. In addition, the composite swells by 3%, forming bubbles inside the laminate, and loses weight (1.4%) at the design fluence.

  9. A Principled Approach to the Specification of System Architectures for Space Missions

    NASA Technical Reports Server (NTRS)

    McKelvin, Mark L. Jr.; Castillo, Robert; Bonanne, Kevin; Bonnici, Michael; Cox, Brian; Gibson, Corrina; Leon, Juan P.; Gomez-Mustafa, Jose; Jimenez, Alejandro; Madni, Azad

    2015-01-01

    Modern space systems are increasing in complexity and scale at an unprecedented pace. Consequently, innovative methods, processes, and tools are needed to cope with the increasing complexity of architecting these systems. A key systems challenge in practice is the ability to scale processes, methods, and tools used to architect complex space systems. Traditionally, the process for specifying space system architectures has largely relied on capturing the system architecture in informal descriptions that are often embedded within loosely coupled design documents and domain expertise. Such informal descriptions often lead to misunderstandings between design teams, ambiguous specifications, difficulty in maintaining consistency as the architecture evolves throughout the system development life cycle, and costly design iterations. Therefore, traditional methods are becoming increasingly inefficient to cope with ever-increasing system complexity. We apply the principles of component-based design and platform-based design to the development of the system architecture for a practical space system to demonstrate feasibility of our approach using SysML. Our results show that we are able to apply a systematic design method to manage system complexity, thus enabling effective data management, semantic coherence and traceability across different levels of abstraction in the design chain. Just as important, our approach enables interoperability among heterogeneous tools in a concurrent engineering model based design environment.

  10. Deairing Techniques for Double-Ended Centrifugal Total Artificial Heart Implantation.

    PubMed

    Karimov, Jamshid H; Horvath, David J; Byram, Nicole; Sunagawa, Gengo; Grady, Patrick; Sinkewich, Martin; Moazami, Nader; Sale, Shiva; Golding, Leonard A R; Fukamachi, Kiyotaka

    2017-06-01

    The unique device architecture of the Cleveland Clinic continuous-flow total artificial heart (CFTAH) requires dedicated and specific air-removal techniques during device implantation in vivo. These procedures comprise special surgical techniques and intraoperative manipulations, as well as engineering design changes and optimizations to the device itself. The current study evaluated the optimal air-removal techniques during the Cleveland Clinic double-ended centrifugal CFTAH in vivo implants (n = 17). Techniques and pump design iterations consisted of developing a priming method for the device and the use of built-in deairing ports in the early cases (n = 5). In the remaining cases (n = 12), deairing ports were not used. Dedicated air-removal ports were not considered an essential design requirement, and such ports may represent an additional risk for pump thrombosis. Careful passive deairing was found to be an effective measure with a centrifugal pump of this design. In this report, the techniques and design changes that were made during this CFTAH development program to enable effective residual air removal and prevention of air embolism during in vivo device implantation are explained. © 2016 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  11. Development of a mirror-based endoscope for divertor spectroscopy on JET with the new ITER-like wall (invited).

    PubMed

    Huber, A; Brezinsek, S; Mertens, Ph; Schweer, B; Sergienko, G; Terra, A; Arnoux, G; Balshaw, N; Clever, M; Edlingdon, T; Egner, S; Farthing, J; Hartl, M; Horton, L; Kampf, D; Klammer, J; Lambertz, H T; Matthews, G F; Morlock, C; Murari, A; Reindl, M; Riccardo, V; Samm, U; Sanders, S; Stamp, M; Williams, J; Zastrow, K D; Zauner, C

    2012-10-01

    A new endoscope with optimised divertor view has been developed in order to survey and monitor the emission of specific impurities such as tungsten and the remaining carbon as well as beryllium in the tungsten divertor of JET after the implementation of the ITER-like wall in 2011. The endoscope is a prototype for testing an ITER relevant design concept based on reflective optics only. It may be subject to high neutron fluxes as expected in ITER. The operating wavelength range, from 390 nm to 2500 nm, allows the measurements of the emission of all expected impurities (W I, Be II, C I, C II, C III) with high optical transmittance (≥ 30% in the designed wavelength range) as well as high spatial resolution that is ≤ 2 mm at the object plane and ≤ 3 mm for the full depth of field (± 0.7 m). The new optical design includes options for in situ calibration of the endoscope transmittance during the experimental campaign, which allows the continuous tracing of possible transmittance degradation with time due to impurity deposition and erosion by fast neutral particles. In parallel to the new optical design, a new type of possibly ITER relevant shutter system based on pneumatic techniques has been developed and integrated into the endoscope head. The endoscope is equipped with four digital CCD cameras, each combined with two filter wheels for narrow band interference and neutral density filters. Additionally, two protection cameras in the λ > 0.95 μm range have been integrated in the optical design for the real time wall protection during the plasma operation of JET.

  12. Numerical analysis of modified Central Solenoid insert design

    DOE PAGES

    Khodak, Andrei; Martovetsky, Nicolai; Smirnov, Aleksandre; ...

    2015-06-21

    The United States ITER Project Office (USIPO) is responsible for fabrication of the Central Solenoid (CS) for the ITER project. The ITER machine is currently under construction by seven parties in Cadarache, France. The CS Insert (CSI) project should provide a verification of the conductor performance in relevant conditions of temperature, field, currents and mechanical strain. The USIPO designed the CSI, which will be tested at the Central Solenoid Model Coil (CSMC) Test Facility at JAEA, Naka. To validate the modified design we performed three-dimensional numerical simulations using a coupled solver for simultaneous structural, thermal and electromagnetic analysis. Thermal and electromagnetic simulations supported the structural calculations by providing the necessary loads and strains. According to the current analysis, the design of the modified coil satisfies the ITER magnet structural design criteria for the following conditions: (1) room temperature, no current; (2) temperature 4 K, no current; (3) temperature 4 K, current 60 kA direct charge; and (4) temperature 4 K, current 60 kA reverse charge. A fatigue life assessment analysis is performed for the alternating conditions of temperature 4 K, no current, and temperature 4 K, current 45 kA direct charge. Results of the fatigue analysis show that parts of the coil assembly can be qualified for up to 1 million cycles. Distributions of the Current Sharing Temperature (TCS) in the superconductor were obtained from the numerical results using a parameterization of the critical surface in a form similar to that proposed for ITER. Lastly, special APDL scripts were developed for ANSYS allowing a one-dimensional representation of TCS along the cable, as well as three-dimensional fields of TCS in the superconductor material. Published by Elsevier B.V.

  13. Development of a mirror-based endoscope for divertor spectroscopy on JET with the new ITER-like wall (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, A.; Brezinsek, S.; Mertens, Ph.

    2012-10-15

    A new endoscope with optimised divertor view has been developed in order to survey and monitor the emission of specific impurities such as tungsten and the remaining carbon as well as beryllium in the tungsten divertor of JET after the implementation of the ITER-like wall in 2011. The endoscope is a prototype for testing an ITER relevant design concept based on reflective optics only. It may be subject to high neutron fluxes as expected in ITER. The operating wavelength range, from 390 nm to 2500 nm, allows the measurements of the emission of all expected impurities (W I, Be II, C I, C II, C III) with high optical transmittance (≥30% in the designed wavelength range) as well as high spatial resolution that is ≤2 mm at the object plane and ≤3 mm for the full depth of field (±0.7 m). The new optical design includes options for in situ calibration of the endoscope transmittance during the experimental campaign, which allows the continuous tracing of possible transmittance degradation with time due to impurity deposition and erosion by fast neutral particles. In parallel to the new optical design, a new type of possibly ITER relevant shutter system based on pneumatic techniques has been developed and integrated into the endoscope head. The endoscope is equipped with four digital CCD cameras, each combined with two filter wheels for narrow band interference and neutral density filters. Additionally, two protection cameras in the λ > 0.95 μm range have been integrated in the optical design for the real time wall protection during the plasma operation of JET.

  14. Modern Design of Resonant Edge-Slot Array Antennas

    NASA Technical Reports Server (NTRS)

    Gosselin, R. B.

    2006-01-01

    Resonant edge-slot (slotted-waveguide) array antennas can now be designed very accurately following a modern computational approach like that followed for some other microwave components. This modern approach makes it possible to design superior antennas at lower cost than was previously possible. Heretofore, the physical and engineering knowledge of resonant edge-slot array antennas had remained immature since they were introduced during World War II. This is because despite their mechanical simplicity, high reliability, and potential for operation with high efficiency, the electromagnetic behavior of resonant edge-slot antennas is very complex. Because engineering design formulas and curves for such antennas are not available in the open literature, designers have been forced to implement iterative processes of fabricating and testing multiple prototypes to derive design databases, each unique for a specific combination of operating frequency and set of waveguide tube dimensions. The expensive, time-consuming nature of these processes has inhibited the use of resonant edge-slot antennas. The present modern approach reduces costs by making it unnecessary to build and test multiple prototypes. As an additional benefit, this approach affords a capability to design an array of slots having different dimensions to taper the antenna illumination to reduce the amplitudes of unwanted side lobes. The heart of the modern approach is the use of the latest commercially available microwave-design software, which implements finite-element models of electromagnetic fields in and around waveguides, antenna elements, and similar components. Instead of building and testing prototypes, one builds a database and constructs design curves from the results of computational simulations for sets of design parameters. The figure shows a resonant edge-slot antenna designed following this approach. Intended for use as part of a radiometer operating at a frequency of 10.7 GHz, this antenna was fabricated from dimensions defined exclusively by results of computational simulations. The final design was found to be well optimized and to yield performance exceeding that initially required.

  15. Status of the 1 MeV Accelerator Design for ITER NBI

    NASA Astrophysics Data System (ADS)

    Kuriyama, M.; Boilson, D.; Hemsworth, R.; Svensson, L.; Graceffa, J.; Schunke, B.; Decamps, H.; Tanaka, M.; Bonicelli, T.; Masiello, A.; Bigi, M.; Chitarin, G.; Luchetta, A.; Marcuzzi, D.; Pasqualotto, R.; Pomaro, N.; Serianni, G.; Sonato, P.; Toigo, V.; Zaccaria, P.; Kraus, W.; Franzen, P.; Heinemann, B.; Inoue, T.; Watanabe, K.; Kashiwagi, M.; Taniguchi, M.; Tobari, H.; De Esch, H.

    2011-09-01

    The beam source of the ITER neutral beam heating/current drive system is required to accelerate a 40 A D- negative ion beam to 1 MeV for 3600 s. To realize this beam source, design and R&D work is being carried out in many institutions under the coordination of the ITER Organization. Development of solutions to the key ion source issues, including source plasma uniformity and the suppression of co-extracted electrons in D beam operation, in particular after beam durations of more than a few hundred seconds, is progressing mainly at IPP with the BATMAN, MANITU and RADI facilities. In the near future, ELISE, which will test a half-size version of the ITER ion source, will start operation in 2011; SPIDER, which will demonstrate negative ion production and extraction with the same size and structure as the ITER ion source, will then start operation in 2014 as part of the NBTF. Development of the accelerator is progressing mainly at JAEA with the MeV test facility, and computer simulation of the beam optics is also being developed at JAEA, CEA and RFX. The full ITER heating and current drive beam performance will be demonstrated in MITICA, which will start operation in 2016 as part of the NBTF.

  16. Iterative Neighbour-Information Gathering for Ranking Nodes in Complex Networks

    NASA Astrophysics Data System (ADS)

    Xu, Shuang; Wang, Pei; Lü, Jinhu

    2017-01-01

    Designing node influence ranking algorithms can provide insights into network dynamics, functions and structures. Increasing evidence reveals that a node's spreading ability largely depends on its neighbours. We introduce an iterative neighbour-information gathering (Ing) process with three parameters: a transformation matrix, a priori information and an iteration time. The Ing process iteratively combines a priori information from neighbours via the transformation matrix and assigns an Ing score to each node to evaluate its influence. The algorithm is appropriate for any type of network and includes some traditional centralities as special cases, such as degree, semi-local and LeaderRank. The Ing process converges in strongly connected networks, with a speed that depends on the two largest eigenvalues of the transformation matrix. Interestingly, eigenvector centrality corresponds to a limiting case of the algorithm. Comparisons with eight well-known centralities in simulations of the susceptible-infected-removed (SIR) model on real-world networks reveal that the Ing can offer more accurate rankings, even without a priori information. We also observe that an optimal iteration time always exists that best characterizes node influence. The proposed algorithms bridge the gaps among some existing measures and may have potential applications in infectious disease control and the design of optimal information spreading strategies.
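
    The abstract does not spell out the update rule, so the following is only a minimal sketch of one plausible reading of the Ing process, assuming the transformation matrix is the plain adjacency matrix and the a priori information is the degree vector; all names and parameter choices are illustrative.

      import numpy as np

      def ing_scores(adjacency, prior=None, iterations=3):
          """Toy iterative neighbour-information gathering score.

          adjacency  : (n, n) adjacency matrix of the network
          prior      : (n,) a priori node information (defaults to degree)
          iterations : the 'iteration time' parameter
          """
          A = np.asarray(adjacency, dtype=float)
          x = A.sum(axis=1) if prior is None else np.asarray(prior, dtype=float)
          for _ in range(iterations):
              x = A @ x                       # gather neighbours' current scores
              x = x / np.linalg.norm(x)       # normalise to keep the scores bounded
          return x

      # Small example: a 4-node path graph; with many iterations the ranking
      # approaches eigenvector centrality, the limiting case noted above.
      A = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]])
      print(ing_scores(A, iterations=2))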

  17. Radiofrequency pulse design using nonlinear gradient magnetic fields.

    PubMed

    Kopanoglu, Emre; Constable, R Todd

    2015-09-01

    An iterative k-space trajectory and radiofrequency (RF) pulse design method is proposed for excitation using nonlinear gradient magnetic fields. The spatial encoding functions (SEFs) generated by nonlinear gradient fields are linearly dependent in Cartesian coordinates. Left uncorrected, this may lead to flip angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a matching pursuit algorithm, and the RF pulse is designed using a conjugate gradient algorithm. Three variants of the proposed approach are given: the full algorithm, a computationally cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. The method is compared with other iterative (matching pursuit and conjugate gradient) and noniterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity. An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest. © 2014 Wiley Periodicals, Inc.
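
    The sketch below is a heavily simplified, hypothetical version of the greedy SEF selection plus conjugate-gradient weight fit described above; it is not the authors' algorithm, and the normal-equations formulation, the fixed number of selections and all names are assumptions made for illustration.

      import numpy as np
      from scipy.sparse.linalg import cg

      def greedy_pulse_design(sefs, target, n_select=10):
          """Pick spatial encoding functions (SEFs) by matching pursuit and fit
          their weights with a conjugate-gradient solve on the normal equations.

          sefs   : (n_voxels, n_candidates) complex array of candidate SEFs
          target : (n_voxels,) desired excitation profile
          """
          selected, residual, w = [], target.copy(), None
          for _ in range(n_select):
              scores = np.abs(sefs.conj().T @ residual)       # correlation with residual
              scores[selected] = -np.inf                      # never reselect a SEF
              selected.append(int(np.argmax(scores)))
              A = sefs[:, selected]
              w, _ = cg(A.conj().T @ A, A.conj().T @ target)  # weight (RF) update
              residual = target - A @ w
          return selected, w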

  18. Long-pulse stability limits of the ITER baseline scenario

    DOE PAGES

    Jackson, G. L.; Luce, T. C.; Solomon, W. M.; ...

    2015-01-14

    DIII-D has made significant progress in developing the techniques required to operate ITER, and in understanding their impact on performance when integrated into operational scenarios at ITER relevant parameters. Long-duration plasmas, stable to m/n = 2/1 tearing modes (TMs), with an ITER-similar shape and I_p/aB_T, have been demonstrated in DIII-D and evolve to stationary conditions. The operating region most likely to reach stable conditions has normalized pressure β_N ≈ 1.9–2.1 (compared to the ITER baseline design of 1.6–1.8) and a Greenwald normalized density fraction f_GW ≈ 0.42–0.70 (the ITER design is f_GW ≈ 0.8). The evolution of the current profile, using the internal inductance (l_i) as an indicator, is found to produce a smaller fraction of stable pulses when l_i is increased above ≈1.1 at the beginning of the β_N flattop. Stable discharges with co-neutral beam injection (NBI) are generally accompanied by a benign n=2 MHD mode; however, if this mode exceeds ≈10 G, the onset of an m/n=2/1 tearing mode occurs with a loss of confinement. In addition, stable operation with low applied external torque, at or below the value extrapolated for ITER, has also been demonstrated. With electron cyclotron (EC) injection, the operating region of stable discharges has been further extended to ITER equivalent levels of torque, and to ELM-free discharges at higher torque with the addition of an n=3 magnetic perturbation from the DIII-D internal coil set. The characterization of the ITER baseline scenario evolution for long pulse duration, its extension to more ITER relevant values of torque and electron heating, and the suppression of ELMs have significantly advanced the physics basis of this scenario, although significant effort remains in the simultaneous integration of all these requirements.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henning, C.

    This report contains papers on the following topics: conceptual design; radiation damage of ITER magnet systems; insulation system of the magnets; critical current density and strain sensitivity; toroidal field coil structural analysis; stress analysis for the ITER central solenoid; and volt-second capabilities and PF magnet configurations.

  20. Integrated tokamak modeling: when physics informs engineering and research planning

    NASA Astrophysics Data System (ADS)

    Poli, Francesca

    2017-10-01

    Simulations that integrate virtually all the relevant engineering and physics aspects of a real tokamak experiment are a powerful tool for experimental interpretation, model validation and planning for both present and future devices. This tutorial guides the audience through the building blocks of an ``integrated'' tokamak simulation, such as magnetic flux diffusion, thermal, momentum and particle transport, external heating and current drive sources, and wall particle sources and sinks. Emphasis is given to the connection and interplay between external actuators and plasma response, and between the slow time scales of current diffusion and the fast time scales of transport, and to how reduced and high-fidelity models can contribute to simulating a whole device. To illustrate the potential and limitations of integrated tokamak modeling for discharge prediction, a helium plasma scenario for the ITER pre-nuclear phase is taken as an example. This scenario presents challenges because it requires core-edge integration and advanced models for the interaction between waves and fast ions, which are subject to a limited experimental database for validation and guidance. Starting from a scenario obtained by re-scaling parameters from the demonstration inductive ``ITER baseline'', it is shown how self-consistent simulations that encompass both core and edge plasma regions, as well as high-fidelity heating and current drive source models, are needed to set constraints on the density, magnetic field and heating scheme. This tutorial aims at demonstrating how integrated modeling, when used with an adequate level of criticism, can not only support the design of operational scenarios, but also help to assess the limitations and gaps in the available models, thus indicating where improved modeling tools are required and how present experiments can help their validation and inform research planning. Work supported by DOE under DE-AC02-09CH1146.
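
    As a minimal illustration of the two-time-scale structure mentioned above (fast transport nested inside slow current diffusion), the toy sketch below uses constant placeholder diffusivities, a uniform heat source and periodic boundaries; it is not an integrated modeling code and none of the values are ITER parameters.

      import numpy as np

      def advance(T, j, dt_slow=1e-2, dt_fast=1e-4, chi=1.0, eta=0.01, source=1.0, dx=0.02):
          """One slow step of current diffusion wrapping many fast transport sub-steps."""
          for _ in range(int(round(dt_slow / dt_fast))):          # fast thermal transport
              lap_T = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
              T = T + dt_fast * (chi * lap_T + source)
          lap_j = (np.roll(j, 1) - 2 * j + np.roll(j, -1)) / dx**2
          j = j + dt_slow * eta * lap_j                            # slow current diffusion
          return T, j

      T = np.zeros(50)
      j = np.exp(-np.linspace(-2.0, 2.0, 50) ** 2)                 # peaked current profile
      for _ in range(100):
          T, j = advance(T, j)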

  1. Status of the ITER Cryodistribution

    NASA Astrophysics Data System (ADS)

    Chang, H.-S.; Vaghela, H.; Patel, P.; Rizzato, A.; Cursan, M.; Henry, D.; Forgeas, A.; Grillot, D.; Sarkar, B.; Muralidhara, S.; Das, J.; Shukla, V.; Adler, E.

    2017-12-01

    Since the conceptual design of the ITER Cryodistribution, many modifications have been applied, driven both by system optimization and by improved knowledge of the clients' requirements. Process optimizations in the Cryoplant resulted in component simplifications, whereas increased heat loads in some of the superconducting magnet systems required a more complicated process configuration; in addition, standardization of the component arrangement made it possible to remove one cold box. Another cold box, planned for redundancy, has been removed following a modification of the Tokamak in-Cryostat piping layout. In this paper we summarize the present design status and component configuration of the ITER Cryodistribution, with all implemented changes aimed at process optimization and simplification as well as operational reliability, stability and flexibility.

  2. Summary of ECE presentations at EC-18

    DOE PAGES

    Taylor, G.

    2015-03-12

    There were nine ECE and one EBE presentation at EC-18. Four of the presentations were on various aspects of ECE on ITER. The ITER ECE diagnostic has entered an important detailed preliminary design phase and faces several design challenges in the next 2-3 years. Most of the other ECE presentations at the workshop were focused on applications of ECE diagnostics to plasma measurements, rather than improvements in technology, although it was apparent that heterodyne receiver technology continues to improve. CECE, ECE imaging and EBE imaging are increasingly providing valuable insights into plasma behavior that is important to understand if future burning plasma devices, such as ITER, FNSF and DEMO, are to be successful.

  3. Conceptual design of ACB-CP for ITER cryogenic system

    NASA Astrophysics Data System (ADS)

    Jiang, Yongcheng; Xiong, Lianyou; Peng, Nan; Tang, Jiancheng; Liu, Liqiang; Zhang, Liang

    2012-06-01

    ACB-CP (Auxiliary Cold Box for Cryopumps) is used to supply the cryopump system with the necessary cryogen in the ITER (International Thermonuclear Experimental Reactor) cryogenic distribution system. The conceptual design of the ACB-CP comprises thermo-hydraulic analysis, 3D structural design and strength checking. Through the thermo-hydraulic analysis, the main specifications of the process valves, pressure safety valves, pipes and heat exchangers can be determined. During the 3D structural design, the vacuum requirements, insulation requirements, assembly constraints and maintenance requirements have been considered in arranging the pipes, valves and other components. Strength checks have been performed to verify that the 3D design meets the strength requirements for the ACB-CP.

  4. Design Study of Propulsion and Drive Systems for the Large Civil TiltRotor (LCTR2) Rotorcraft

    NASA Technical Reports Server (NTRS)

    Robuck, Mark; Wilkerson, Joseph; Zhang, Yiyi; Snyder, Christopher A.; Vonderwell, Daniel

    2013-01-01

    Boeing, Rolls-Royce, and NASA have worked together to complete a parametric sizing study for NASA's Large Civil Tilt Rotor (LCTR2) concept, 2nd iteration. Vehicle gross weight and fuel usage were evaluated as propulsion and drive system characteristics were varied to maximize the benefit of reduced rotor tip speed during cruise conditions. The study examined different combinations of engine and gearbox variability to achieve rotor cruise tip speed reductions down to 54% of the hover tip speed. Previous NASA studies identified that a 54% rotor speed reduction in cruise minimizes vehicle gross weight and fuel burn. The LCTR2 was the study baseline for initial sizing. This study included rotor tip speed ratios (cruise to hover) of 100%, 77% and 54% at different combinations of engine RPM and gearbox speed reductions, which were analyzed to achieve the lightest overall vehicle gross weight (GW) at the chosen rotor tip speed ratio. Different engine and gearbox technology levels were applied, ranging from commercial off-the-shelf (COTS) engines and gearbox technology to entry-into-service (EIS) dates of 2025 and 2035, to assess the benefits of advanced technology on vehicle gross weight and fuel burn. Interim results were previously reported [1]. This technical paper extends that work and summarizes the final study results, including additional engine and drive system study accomplishments. New vehicle sizing data are presented for engine performance at a single operating speed with a multispeed drive system. Modeling details for LCTR2 vehicle sizing and the subject engine and drive sub-systems are presented as well. This study was conducted in support of NASA's Fundamental Aeronautics Program, Subsonic Rotary Wing Project.

  5. Inspiring the Next Generation of Engineers and Scientists

    NASA Astrophysics Data System (ADS)

    Tambara, Kevin

    2013-04-01

    Students are usually not excited about abstract concepts, and teachers struggle to inject "pizzazz" into many of their lessons. K-12 teachers need opportunities and the associated pedagogical training to bring meaningful and authentic learning to their students. The professional educator community needs to develop a learning environment which connects desired content knowledge with the science and engineering practices that students need to become successful future technology leaders. Furthermore, this environment must foster student exploration and discovery by encouraging them to use their natural creativity together with newly acquired technical skills to complete assigned projects. These practices are explicitly listed in the US "Next Generation Science Standards" document that is due for final publication in the very near future. Education in America must unleash students' desire to create and make with their hands, applying their intellect and growing academic knowledge. In this submission I will share various student projects that I have created and implemented for middle and high school. For each project, students were required to learn and implement engineering best practices while designing, building, and testing prototype models according to pre-assigned teacher specifications. As in all real-world engineering projects, students were required to analyze test data, re-design their models accordingly, and iterate the design process several times to meet specifications. Another key component of successful projects is collaboration between student team members. All my students come to realize that nothing of major significance is ever accomplished alone, that is, without the support of a team. I will highlight several projects that illustrate key engineering practices as well as lessons learned, for both student and teacher. Projects presented will include: magnetically levitated vehicle (maglev) races, solar-powered and mousetrap-powered cars and boats, Popsicle stick catapults and bridges, egg drop "lunar landers", egg-passenger car crashes, cardboard boat races (with human passengers), and working roller coasters made with only paper and tape. Each project requires minimal, low-cost materials commonly found at home or in local stores. I will share the most common student misperceptions about inquiry and problem-solving I have observed while working alongside my students during these projects.

  6. Eugene--a domain specific language for specifying and constraining synthetic biological parts, devices, and systems.

    PubMed

    Bilitchenko, Lesia; Liu, Adam; Cheung, Sherine; Weeding, Emma; Xia, Bing; Leguia, Mariana; Anderson, J Christopher; Densmore, Douglas

    2011-04-29

    Synthetic biological systems are currently created by an ad hoc, iterative process of specification, design, and assembly. These systems would greatly benefit from a more formalized and rigorous specification of the desired system components as well as constraints on their composition. Therefore, the creation of robust and efficient design flows and tools is imperative. We present a human-readable language (Eugene) that allows for the specification of synthetic biological designs based on biological parts and provides a very expressive constraint system to drive the automatic creation of composite Parts (Devices) from a collection of individual Parts. We illustrate Eugene's capabilities in three different areas: Device specification, design space exploration, and assembly and simulation integration. These results highlight Eugene's ability to create combinatorial design spaces and prune these spaces for simulation or physical assembly. Eugene creates functional designs quickly and cost-effectively. Eugene is intended for forward engineering of DNA-based devices and, through its data types and execution semantics, reflects the desired abstraction hierarchy in synthetic biology. Eugene provides a powerful constraint system which can be used to drive the creation of new devices at runtime. It accomplishes all of this while being part of a larger tool chain which includes support for design, simulation, and physical device assembly.
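
    The snippet below is not Eugene syntax; it is only a Python sketch of the underlying idea of enumerating a combinatorial design space of Devices from Parts and pruning it with composition constraints, using made-up part names and rules.

      from itertools import permutations

      # Hypothetical part library: name -> part type.
      parts = {
          "pLac":  "promoter",
          "RBS1":  "rbs",
          "gfp":   "cds",
          "term1": "terminator",
      }

      def valid(device):
          kinds = [parts[p] for p in device]
          return (kinds[0] == "promoter"          # must start with a promoter
                  and kinds[-1] == "terminator"   # must end with a terminator
                  and "cds" in kinds)             # must contain a coding sequence

      # Enumerate the combinatorial design space, then prune it with the constraints.
      devices = [d for d in permutations(parts, 4) if valid(d)]
      print(devices)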

  7. ITER structural design criteria and their extension to advanced reactor blankets

    NASA Astrophysics Data System (ADS)

    Majumdar, S.; Kalinin, G.

    2000-12-01

    Applications of the recent ITER structural design criteria (ISDC) are illustrated by two components. First, the low-temperature-design rules are applied to copper alloys that are particularly prone to irradiation embrittlement at relatively low fluences at certain temperatures. Allowable stresses are derived and the impact of the embrittlement on allowable surface heat flux of a simple first-wall/limiter design is demonstrated. Next, the high-temperature-design rules of ISDC are applied to evaporation of lithium and vapor extraction (EVOLVE), a blanket design concept currently being investigated under the US Advanced Power Extraction (APEX) program. A single tungsten first-wall tube is considered for thermal and stress analyses by finite-element method.

  8. A shifted hyperbolic augmented Lagrangian-based artificial fish two-swarm algorithm with guaranteed convergence for constrained global optimization

    NASA Astrophysics Data System (ADS)

    Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.

    2016-12-01

    This article presents a shifted hyperbolic penalty function and proposes an augmented Lagrangian-based algorithm for non-convex constrained global optimization problems. Convergence to an ε-global minimizer is proved. At each iteration k, the algorithm requires the ε_k-global minimization of a bound constrained optimization subproblem, where ε_k → ε. The subproblems are solved by a stochastic population-based metaheuristic that relies on the artificial fish swarm paradigm and a two-swarm strategy. To enhance the speed of convergence, the algorithm invokes the Nelder-Mead local search with a dynamically defined probability. Numerical experiments with benchmark functions and engineering design problems are presented. The results show that the proposed shifted hyperbolic augmented Lagrangian compares favorably with other deterministic and stochastic penalty-based methods.
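
    For orientation only, the sketch below shows a generic augmented-Lagrangian outer loop for inequality-constrained minimization; the paper's shifted hyperbolic penalty and artificial fish two-swarm subsolver are replaced here by a classical quadratic augmented Lagrangian and a crude random search inside the bounds, so every name and constant is illustrative.

      import numpy as np

      def augmented_lagrangian(f, g, x0, bounds, mu=1.0, iters=20, inner_samples=2000):
          """Sketch of min f(x) s.t. g(x) <= 0 via an augmented Lagrangian outer loop."""
          rng = np.random.default_rng(0)
          lam = np.zeros(len(g(x0)))                    # multiplier estimates
          lo, hi = np.asarray(bounds, dtype=float).T
          x = np.asarray(x0, dtype=float)
          for _ in range(iters):
              def L(x):                                 # classical quadratic form
                  gv = g(x)
                  return f(x) + np.sum(np.maximum(0.0, lam + mu * gv) ** 2 - lam ** 2) / (2 * mu)
              cand = rng.uniform(lo, hi, size=(inner_samples, len(x)))
              x = min(cand, key=L)                      # crude stand-in for the metaheuristic
              lam = np.maximum(0.0, lam + mu * g(x))    # multiplier update
              mu *= 2.0                                 # tighten the penalty
          return x

      # Example: minimize (x-1)^2 + (y-1)^2 subject to x + y <= 1 (optimum near (0.5, 0.5)).
      f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2
      g = lambda x: np.array([x[0] + x[1] - 1.0])
      print(augmented_lagrangian(f, g, [0.0, 0.0], [(-2.0, 2.0), (-2.0, 2.0)]))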

  9. Streamlining genomes: toward the generation of simplified and stabilized microbial systems.

    PubMed

    Leprince, Audrey; van Passel, Mark W J; dos Santos, Vitor A P Martins

    2012-10-01

    At the junction between systems and synthetic biology, genome streamlining provides a solid foundation both for increased understanding of cellular circuitry and for the tailoring of microbial chassis towards innovative biotechnological applications. Iterative genomic deletions (targeted and random) help to generate simplified, stabilized and predictable genomes, whereas multiplex genome engineering reveals a broad functional genetic diversity. The decrease in oligo and gene synthesis costs promises effective combinatorial tools for the generation of chassis based on streamlined and tractable genomes. Here we review recent progress in streamlining genomes through recombineering techniques, aiming to generate insights into cellular mechanisms and responses and to support the design and assembly of streamlined genome chassis together with new cellular modules for diverse biotechnological applications. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Progress in Development of the ITER Plasma Control System Simulation Platform

    NASA Astrophysics Data System (ADS)

    Walker, Michael; Humphreys, David; Sammuli, Brian; Ambrosino, Giuseppe; de Tommasi, Gianmaria; Mattei, Massimiliano; Raupp, Gerhard; Treutterer, Wolfgang; Winter, Axel

    2017-10-01

    We report on progress made and expected uses of the Plasma Control System Simulation Platform (PCSSP), the primary test environment for development of the ITER Plasma Control System (PCS). PCSSP will be used for verification and validation of the ITER PCS Final Design for First Plasma, to be completed in 2020. We discuss the objectives of PCSSP, its overall structure, selected features, application to existing devices, and expected evolution over the lifetime of the ITER PCS. We describe an archiving solution for simulation results, methods for incorporating physics models of the plasma and physical plant (tokamak, actuator, and diagnostic systems) into PCSSP, and defining characteristics of models suitable for a plasma control development environment such as PCSSP. Applications of PCSSP simulation models including resistive plasma equilibrium evolution are demonstrated. PCSSP development supported by ITER Organization under ITER/CTS/6000000037. Resistive evolution code developed under General Atomics' Internal funding. The views and opinions expressed herein do not necessarily reflect those of the ITER Organization.

  11. Cooperative solutions coupling a geometry engine and adaptive solver codes

    NASA Technical Reports Server (NTRS)

    Dickens, Thomas P.

    1995-01-01

    Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.
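
    A minimal sketch of the solve/adapt/re-project idea described above follows, with an exact circle standing in for the geometry engine (AGPS itself is not used here); newly created refinement points are snapped to the true geometry rather than interpolated from the coarse grid.

      import numpy as np

      def geometry_project(p, radius=1.0):
          """Stand-in geometry query: project a point onto a circle of given radius."""
          return radius * p / np.linalg.norm(p)

      def refine(points, flagged):
          """Insert midpoints on flagged segments, snapping them to the geometry."""
          out = []
          for i, p in enumerate(points):
              out.append(p)
              if i in flagged and i + 1 < len(points):
                  mid = 0.5 * (points[i] + points[i + 1])
                  out.append(geometry_project(mid))   # query the geometry, not the grid
          return np.array(out)

      # One refinement pass on a crude 4-point discretization of a quarter circle.
      theta = np.linspace(0.0, np.pi / 2, 4)
      grid = np.column_stack([np.cos(theta), np.sin(theta)])
      grid = refine(grid, flagged={0, 1, 2})
      print(grid)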

  12. Analysis of the flow field generated near an aircraft engine operating in reverse thrust. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Ledwith, W. A., Jr.

    1972-01-01

    A computer solution is developed to the exhaust gas reingestion problem for aircraft operating in the reverse thrust mode on a crosswind-free runway. The computer program determines the location of the inlet flow pattern, whether the exhaust efflux lies within the inlet flow pattern or not, and if so, the approximate time before the reversed flow reaches the engine inlet. The program is written so that the user is free to select discrete runway speeds or to study the entire aircraft deceleration process for both the far field and cross-ingestion problems. While developed with STOL applications in mind, the solution is equally applicable to conventional designs. The inlet and reversed jet flow fields involved in the problem are assumed to be noninteracting. The nacelle model used in determining the inlet flow field is generated using an iterative solution to the Neumann problem from potential flow theory, while the reversed jet flow field is adapted using an empirical correlation from the literature. Sample results obtained using the program are included.

  13. Earth-to-Orbit Laser Launch Simulation for a Lightcraft Technology Demonstrator

    NASA Astrophysics Data System (ADS)

    Richard, J. C.; Morales, C.; Smith, W. L.; Myrabo, L. N.

    2006-05-01

    Optimized laser launch trajectories have been developed for a 1.4 m diameter, 120 kg (empty mass) Lightcraft Technology Demonstrator (LTD). The lightcraft's combined-cycle airbreathing/rocket engine is designed for single-stage-to-orbit flights with a mass ratio of 2, propelled by a 100 MW class ground-based laser built on a 3 km mountain peak. Once in orbit, the vehicle becomes an autonomous micro-satellite. Two types of trajectories were simulated with the SORT (Simulation and Optimization of Rocket Trajectories) software package: a) direct GBL boost to orbit, and b) GBL boost aided by a laser relay satellite. Several new subroutines were constructed for SORT to input engine performance (as a function of Mach number and altitude), vehicle aerodynamics, guidance algorithms, and mass history. A new guidance/steering option required the lightcraft to always point at the GBL or laser relay satellite. SORT iterates on trajectory parameters to optimize vehicle performance, achieve a desired criterion, or constrain the solution to avoid some specific limit. The predicted laser-boost performance for the LTD is undoubtedly revolutionary, and SORT simulations have helped to define this new frontier.

  14. Iterative inversion of deformation vector fields with feedback control.

    PubMed

    Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei

    2018-05-14

    Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both accuracy and efficiency of iterative algorithms for DVF inversion, and advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, to be used as feedback into the next iterate. The control is designed adaptively to the input DVF with the objective to enlarge the convergence area and expedite convergence. Three particular settings of feedback control are introduced: constant value over the domain throughout the iteration; alternating values between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair, and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations are shown to attain a larger convergence region at a faster pace compared to previous nonadaptive DVF inversion iteration algorithms. By our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control. Spatially variant control renders smaller residuals and errors by at least an order of magnitude, compared to other schemes, in no more than 10 steps. Inversion results also show remarkable quantitative agreement with analysis-based predictions. Our analysis captures properties of DVF data associated with clinical CT images, and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of nonsmall NTDCs. The adaptive iterations or the spectral measures, or both, may potentially be incorporated into deformable image registration methods. © 2018 American Association of Physicists in Medicine.
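
    The 1-D toy sketch below illustrates the general fixed-point-with-feedback idea described above (it is not the authors' implementation); a constant feedback factor is used for brevity, whereas the study argues for adaptive, spatially variant control.

      import numpy as np

      def invert_dvf_1d(u, x, steps=20, mu=0.8):
          """Iteratively invert a 1-D displacement field u sampled on grid x.

          mu = 1 recovers the plain fixed-point iteration v <- -u(x + v);
          here mu is a constant feedback factor applied to the IC residual.
          """
          v = np.zeros_like(u)                      # initial guess for the inverse DVF
          for _ in range(steps):
              u_warp = np.interp(x + v, x, u)       # forward DVF at the displaced points
              ic_residual = v + u_warp              # inverse-consistency (IC) residual
              v = v - mu * ic_residual              # feedback-controlled update
          return v

      # Example: invert a smooth sinusoidal displacement field on [0, 1].
      x = np.linspace(0.0, 1.0, 200)
      u = 0.05 * np.sin(2 * np.pi * x)
      v = invert_dvf_1d(u, x)
      print(np.max(np.abs(v + np.interp(x + v, x, u))))   # IC residual, should be ~0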

  15. Development Testing and Subsequent Failure Investigation of a Spring Strut Mechanism

    NASA Technical Reports Server (NTRS)

    Dervan, Jared; Robertson, Brandon; Staab, Lucas; Culberson, Michael

    2014-01-01

    Commodities are transferred between the Multi-Purpose Crew Vehicle (MPCV) crew module (CM) and service module (SM) via an external umbilical that is driven apart with spring-loaded struts after the structural connection is severed. The spring struts must operate correctly for the modules to separate safely. There was no vibration testing of strut development units scoped in the MPCV Program Plan; therefore, any design problems discovered as a result of vibration testing would not have been found until the component qualification. The NASA Engineering and Safety Center (NESC) and Lockheed Martin (LM) performed random vibration testing on a single spring strut development unit to assess its ability to withstand qualification level random vibration environments. Failure of the strut while exposed to random vibration resulted in a follow-on failure investigation, design changes, and additional development tests. This paper focuses on the results of the failure investigations including identified lessons learned and best practices to aid in future design iterations of the spring strut and to help other mechanism developers avoid similar pitfalls.

  16. On the utilization of engineering knowledge in design optimization

    NASA Technical Reports Server (NTRS)

    Papalambros, P.

    1984-01-01

    Some current research work conducted at the University of Michigan is described to illustrate efforts for incorporating knowledge in optimization in a nontraditional way. The incorporation of available knowledge in a logic structure is examined in two circumstances. The first examines the possibility of introducing global design information in a local active set strategy implemented during the iterations of projection-type algorithms for nonlinearly constrained problems. The technique used combines global and local monotonicity analysis of the objective and constraint functions. The second examines a knowledge-based program which aids the user to create configurations that are most desirable from the manufacturing assembly viewpoint. The data bank used is the classification scheme suggested by Boothroyd. The important aspect of this program is that it is an aid for synthesis intended for use in the design concept phase in a way similar to the so-called idea-triggers in creativity-enhancement techniques like brainstorming. The idea generation, however, is not random but is driven by the goal of achieving the best acceptable configuration.

  17. Analysis and Design of ITER 1 MV Core Snubber

    NASA Astrophysics Data System (ADS)

    Wang, Haitian; Li, Ge

    2012-11-01

    The core snubber, as a passive protection device, can suppress the arc current and absorb the energy stored in stray capacitance during electrical breakdown of the accelerating electrodes of the ITER NBI. In order to design the ITER core snubber, the control parameters of the peak arc current were first analyzed by the Fink-Baker-Owren (FBO) method, which was used for designing the DIII-D 100 kV snubber. The B-H curve can be derived from the measured voltage and current waveforms, and the hysteresis loss of the core snubber can be derived using the revised parallelogram method. The core snubber can be represented in simplified form as an equivalent parallel resistance and inductance, which are neglected by the FBO method. A simulation code including the parallel equivalent resistance and inductance has been set up. The simulations and experiments show dramatically larger arc shorting currents due to the parallel inductance effect. This case shows that a core snubber designed with the FBO method gives a more compact design.
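
    As a rough illustration of deriving a B-H loop and hysteresis loss from measured voltage and current waveforms, the sketch below applies Faraday's and Ampere's laws with placeholder turn count, core cross-section and magnetic path length; none of the values correspond to the ITER snubber.

      import numpy as np

      def bh_from_waveforms(t, v, i, n_turns=1, core_area=1e-2, path_length=1.0):
          """B from the integrated voltage (Faraday), H from the current (Ampere)."""
          flux = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * np.diff(t))))
          B = flux / (n_turns * core_area)
          H = n_turns * i / path_length
          return B, H

      def hysteresis_loss(B, H):
          """Energy per unit volume over one cycle: the area enclosed by the B-H loop."""
          return abs(np.sum(0.5 * (H[1:] + H[:-1]) * np.diff(B)))

      # Synthetic waveforms: one 50 Hz cycle with the current lagging the voltage.
      t = np.linspace(0.0, 0.02, 1000)
      v = 100.0 * np.sin(2 * np.pi * 50 * t)
      i = 5.0 * np.sin(2 * np.pi * 50 * t - 0.3)
      B, H = bh_from_waveforms(t, v, i)
      print(hysteresis_loss(B, H))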

  18. Development of a Pavement Design Catalogue for Sub-Saharan Countries

    NASA Astrophysics Data System (ADS)

    Koubikana Pambou, Claude Hugo

    Pavement surface evaluations in Sub-Saharan Africa (SSA) reveal severe, premature, and costly damage that requires extensive maintenance. This is due to the limitations of the tools used for pavement structural design as well as the lack of calibration for the materials used. It is necessary to search for solutions to these failures in order to: feed the discussion on durable roads for the SSA region and meet the expectations of the trans-African highway projects of the New Partnership for Africa's Development (NEPAD); provide simple and effective tools for pavement design and promote low-cost maintenance of road infrastructure; and provide users with a functional, safe and durable road system. The catalogue that is the object and result of this work was developed through a new structural design tool (OCS-Chaussee) implemented in a Microsoft Excel worksheet. It uses an iterative mechanistic-empirical (ME) approach applied to multilayer linear analysis, with the Odemark-Boussinesq method as its theoretical and conceptual basis for pavement design. The results were verified under a viscoelastic assumption against Quijano's data (2010) and the pavement analysis software WINJULEA developed by the US Army Corps of Engineers (USACE), as well as against backcalculation data from Varik et al. (2002) and local South African data. The lifetime of each proposed pavement structure was estimated using the Asphalt Institute's transfer function and Miner's law. It is hoped that thoughtful use of this catalogue and of OCS-Chaussee will help advance reasonable road engineering solutions, support training, and make better use of the budgets allocated to road construction and rehabilitation in Sub-Saharan Africa.
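
    The life estimate mentioned above (a fatigue transfer function combined with Miner's law) can be illustrated with the toy sketch below; the coefficients and the traffic spectrum are placeholders, not the Asphalt Institute values or the catalogue inputs.

      def allowable_repetitions(tensile_strain, k1=1e-3, k2=3.3):
          """Generic fatigue transfer function: N_f = k1 * (1 / strain)^k2."""
          return k1 * (1.0 / tensile_strain) ** k2

      def miner_damage(traffic):
          """Miner's law: damage = sum(n_i / N_i); failure is predicted at damage >= 1."""
          return sum(n / allowable_repetitions(strain) for strain, n in traffic)

      # Hypothetical annual traffic spectrum: (tensile strain under the axle, passes per year).
      traffic = [(200e-6, 5.0e5), (150e-6, 1.0e6), (100e-6, 2.0e6)]
      annual_damage = miner_damage(traffic)
      print("estimated years to fatigue failure ~", 1.0 / annual_damage)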

  19. Thermal properties variations in unconsolidated material for very shallow geothermal application (ITER project)

    NASA Astrophysics Data System (ADS)

    Sipio, Eloisa Di; Bertermann, David

    2018-04-01

    In engineering, agricultural and meteorological project design, sediment thermal properties are highly important parameters, and thermal conductivity plays a fundamental role when dimensioning ground heat exchangers, especially in very shallow geothermal systems, where the first 2 m of depth below the surface is of critical importance. However, heat transfer in unconsolidated material is difficult to estimate, as it depends on several factors, including particle size, bulk density, water content, mineralogical composition and ground temperature. The performance of a very shallow geothermal system, such as a horizontal collector or heat basket, is strongly correlated with the type of sediment available and decreases rapidly under dry, unsaturated conditions. The available experimental data are often scattered and incomplete and do not fully support thermo-active ground structure modeling. The ITER project, funded by the European Union, contributes to a better knowledge of the relationship between thermal conductivity and water content that is required for understanding the behaviour of very shallow geothermal systems under saturated and unsaturated conditions. To enhance the performance of horizontal geothermal heat exchangers, thermally enhanced backfilling materials were tested in the laboratory, and an overview of the variation of their physical-thermal properties under several moisture and load conditions for different mixtures of natural material is presented here.

  20. To Boldly Go Where No Man has Gone Before: Seeking Gaia's Astrometric Solution with AGIS

    NASA Astrophysics Data System (ADS)

    Lammers, U.; Lindegren, L.; O'Mullane, W.; Hobbs, D.

    2009-09-01

    Gaia is ESA's ambitious space astrometry mission with a foreseen launch date in late 2011. Its main objective is to perform a stellar census of the 1,000 million brightest objects in our galaxy (completeness to V=20 mag), from which an astrometric catalog of micro-arcsec (μas) level accuracy will be constructed. A key element in this endeavor is the Astrometric Global Iterative Solution (AGIS) - the mathematical and numerical framework for combining the ≈80 available observations per star obtained during Gaia's 5 yr lifetime into a single global astrometric solution. AGIS consists of four main algorithmic cores which improve the source astrometric parameters, satellite attitude, calibration, and global parameters in a block-iterative manner. We present and discuss this basic scheme, the algorithms themselves and the overarching system architecture. The latter is a data-driven distributed processing framework designed to achieve an overall system performance that is not I/O limited. AGIS is being developed as a pure Java system by a small number of geographically distributed European groups. We present some of the software engineering aspects of the project and the methodologies and tools used. Finally we will briefly discuss how AGIS is embedded into the overall Gaia data processing architecture.
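
    The block-iterative scheme described above can be illustrated with the toy least-squares sketch below, which alternately re-solves a 'source' block and an 'attitude' block while the other is held fixed; the dimensions and synthetic data are arbitrary and unrelated to AGIS.

      import numpy as np

      rng = np.random.default_rng(1)
      n_obs, n_src, n_att = 200, 10, 5
      S = rng.normal(size=(n_obs, n_src))       # design matrix, source-parameter block
      A = rng.normal(size=(n_obs, n_att))       # design matrix, attitude-parameter block
      obs = (S @ rng.normal(size=n_src) + A @ rng.normal(size=n_att)
             + 0.01 * rng.normal(size=n_obs))   # synthetic observations with noise

      src = np.zeros(n_src)
      att = np.zeros(n_att)
      for _ in range(25):                                            # outer block iterations
          src, *_ = np.linalg.lstsq(S, obs - A @ att, rcond=None)    # update source block
          att, *_ = np.linalg.lstsq(A, obs - S @ src, rcond=None)    # update attitude block
      print(np.linalg.norm(obs - S @ src - A @ att))                 # residual of the joint fit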
