The Effect of Iteration on the Design Performance of Primary School Children
ERIC Educational Resources Information Center
Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.
2015-01-01
Iteration is an essential element of the design process; engineers optimize their designs by iteration. Research on iteration in Primary Design Education is, however, scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practice. Spontaneous playing behavior of children indicates that iteration fits in…
NASA Astrophysics Data System (ADS)
Parvathi, S. P.; Ramanan, R. V.
2018-06-01
An iterative analytical trajectory design technique that includes perturbations in the departure phase of interplanetary orbiter missions is proposed. Perturbations such as the non-spherical gravity of Earth and the third-body perturbations due to the Sun and Moon are included in the analytical design process. First, the design is obtained using the iterative patched conic technique without the perturbations and is then modified to include them. The modification is based on (i) backward analytical propagation, including the perturbations, of the state vector obtained at the sphere of influence from the iterative patched conic technique, and (ii) quantification of the deviations in the orbital elements at periapsis of the departure hyperbolic orbit. The orbital elements at the sphere of influence are changed to nullify the deviations at periapsis. The analytical backward propagation is carried out using a linear approximation technique. The new analytical design technique, named the biased iterative patched conic technique, does not depend upon numerical integration, and all computations are carried out using closed-form expressions. The improved design is very close to the numerical design. The design analysis using the proposed technique provides realistic insight into the mission aspects. Also, the proposed design is an excellent initial guess for numerical refinement and helps arrive at the four distinct design options for a given opportunity.
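As a concrete illustration of the bias-correction loop this abstract describes (adjust the sphere-of-influence elements until the back-propagated periapsis deviations vanish), here is a minimal Python sketch. The deviation model, the sensitivity matrix, and the tolerances are assumptions for illustration only; they stand in for the paper's analytical propagation, not reproduce it.

```python
import numpy as np

# Toy sketch of the bias-correction idea: elements at the sphere of
# influence (SOI) are adjusted until the back-propagated deviations at
# the departure periapsis vanish.  The "propagation" below is a made-up
# linear + quadratic map standing in for the analytical backward
# propagation with perturbations; it is NOT a real trajectory model.

def periapsis_deviation(soi_elements):
    """Hypothetical deviation at periapsis caused by perturbations."""
    A = np.array([[1.2, 0.1], [0.05, 0.9]])   # assumed sensitivity matrix
    bias = np.array([50.0, -0.002])           # assumed perturbation effect
    return A @ soi_elements + bias + 1e-4 * soi_elements**2

# Start from the unperturbed (patched-conic) design: zero bias assumed.
soi = np.zeros(2)
for it in range(50):
    dev = periapsis_deviation(soi)
    if np.linalg.norm(dev) < 1e-8:
        break
    soi -= dev          # bias the SOI elements to nullify the deviation
print(f"converged in {it} iterations, residual deviation {np.linalg.norm(dev):.2e}")
```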
Enhanced Low-Enriched Uranium Fuel Element for the Advanced Test Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, M. A.; DeHart, M. D.; Morrell, S. R.
2015-03-01
Under the current US Department of Energy (DOE) policy and planning scenario, the Advanced Test Reactor (ATR) and its associated critical facility (ATRC) will be reconfigured to operate on low-enriched uranium (LEU) fuel. This effort has produced a conceptual design for an Enhanced LEU Fuel (ELF) element. This fuel features monolithic U-10Mo fuel foils and aluminum cladding separated by a thin zirconium barrier. As with previous iterations of the ELF design, radial power peaking is managed using different U-10Mo foil thicknesses in different plates of the element. The lead fuel element design, ELF Mk1A, features only three fuel meat thicknesses, a reduction from the previous iterations meant to simplify manufacturing. Evaluation of the ELF Mk1A fuel design against reactor performance requirements is ongoing, as are investigations of the impact of manufacturing uncertainty on safety margins. The element design has been evaluated in what are expected to be the most demanding design basis accident scenarios and has met all initial thermal-hydraulic criteria.
Iterative simulated quenching for designing irregular-spot-array generators.
Gillet, J N; Sheng, Y
2000-07-10
We propose a novel, to our knowledge, algorithm of iterative simulated quenching with temperature rescaling for designing diffractive optical elements, based on an analogy between simulated annealing and statistical thermodynamics. The temperature is iteratively rescaled at the end of each quenching process according to ensemble statistics to bring the system back from a frozen imperfect state with a local minimum of energy to a dynamic state in a Boltzmann heat bath in thermal equilibrium at the rescaled temperature. The new algorithm achieves a much lower cost function and reconstruction error and a higher diffraction efficiency than conventional simulated annealing with a fast exponential cooling schedule, and it is easy to program. The algorithm is used to design binary-phase generators of large irregular spot arrays. The diffractive phase elements have trapezoidal apertures of varying heights, which fit ideal arbitrarily shaped apertures better than do trapezoidal apertures of fixed heights.
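To make the quench-and-rescale idea concrete, here is a minimal Python sketch: a binary phase mask is optimized by simulated annealing with fast exponential cooling, and the temperature is reset at the end of each quench from the energy fluctuations seen during that quench. The cost function, array size, cooling rate, and the specific rescaling rule are assumptions standing in for the paper's ensemble-statistics rule, not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cost: how far the Fourier intensity of a binary phase mask is from a
# flat target -- a crude stand-in for a spot-array uniformity error.
N = 32
target = np.ones(N) / N

def cost(phase_bits):
    field = np.exp(1j * np.pi * phase_bits)
    inten = np.abs(np.fft.fft(field))**2
    inten /= inten.sum()
    return np.sum((inten - target)**2)

x = rng.integers(0, 2, N)          # binary phase pattern
E = cost(x)
T = 1e-3                           # initial temperature (assumed)

for quench in range(5):
    energies = []
    for step in range(2000):
        i = rng.integers(N)
        x_new = x.copy()
        x_new[i] ^= 1                            # flip one phase pixel
        E_new = cost(x_new)
        if E_new < E or rng.random() < np.exp(-(E_new - E) / T):
            x, E = x_new, E_new
        energies.append(E)
        T *= 0.995                               # fast exponential quench
    # Rescale: restart from a temperature tied to the energy fluctuations
    # of the last quench (an assumed stand-in for the ensemble-statistics
    # rule), pulling the frozen state back toward thermal equilibrium.
    T = max(np.std(energies), 1e-9)
    print(f"quench {quench}: cost = {E:.3e}, restart T = {T:.2e}")
```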
Diffractive elements for generating microscale laser beam patterns: a Y2K problem
NASA Astrophysics Data System (ADS)
Teiwes, Stephan; Krueger, Sven; Wernicke, Guenther K.; Ferstl, Margit
2000-03-01
Lasers are widely used in industrial fabrication for engraving, cutting, and many other purposes. However, material processing at very small scales is still a matter of concern. Advances in diffractive optics could enable laser systems for engraving or cutting micro-scale patterns at high speeds. In this paper we focus on the design of diffractive elements that can be used for this special application. It is a common desire in material processing to apply 'discrete' as well as 'continuous' beam patterns. The latter case in particular is difficult to handle, as typical micro-scale patterns have poor band-limitation properties and speckles can easily occur in the beam patterns. It is shown in this paper that a standard iterative design method usually fails to obtain diffractive elements that generate diffraction patterns with acceptable quality. Insights gained from an analysis of the design problems are used to optimize the iterative design method. We demonstrate the applicability and success of our approach by designing diffractive phase elements that generate a discrete and a continuous 'Y2K' pattern.
Improvements in surface singularity analysis and design methods [applicable to airfoils]
NASA Technical Reports Server (NTRS)
Bristow, D. R.
1979-01-01
The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.
Li, Jia-Han; Webb, Kevin J; Burke, Gerald J; White, Daniel A; Thompson, Charles A
2006-05-01
A multiresolution direct binary search iterative procedure is used to design small dielectric irregular diffractive optical elements that have subwavelength features and achieve near-field focusing below the diffraction limit. Designs with a single focus or with two foci, depending on wavelength or polarization, illustrate the possible functionalities available from the large number of degrees of freedom. These examples suggest that the concept of such elements may find applications in near-field lithography, wavelength-division multiplexing, spectral analysis, and polarization beam splitters.
The Iterative Design of a Virtual Design Studio
ERIC Educational Resources Information Center
Blevis, Eli; Lim, Youn-kyung; Stolterman, Erik; Makice, Kevin
2008-01-01
In this article, the authors explain how they implemented Design eXchange as a shared collaborative online and physical space for design for their students. Their notion for Design eXchange favors a complex mix of key elements namely: (1) a virtual online studio; (2) a forum for review of all things related to design, especially design with the…
Exploiting parallel computing with limited program changes using a network of microcomputers
NASA Technical Reports Server (NTRS)
Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.
1985-01-01
Network computing and multiprocessor computers are two discernible trends in parallel processing. The computational behavior of an iterative distributed process in which some subtasks are completed later than others because of an imbalance in computational requirements is of significant interest. The effects of asynchronous processing were studied. A small existing program was converted to perform finite element analysis by distributing substructure analysis over a network of four Apple IIe microcomputers connected to a shared disk, simulating a parallel computer. The substructure analysis uses an iterative, fully stressed, structural resizing procedure. A framework of beams divided into three substructures is used as the finite element model. The effects of asynchronous processing on the convergence of the design variables are determined by not resizing particular substructures on various iterations.
ITER structural design criteria and their extension to advanced reactor blankets
NASA Astrophysics Data System (ADS)
Majumdar, S.; Kalinin, G.
2000-12-01
Applications of the recent ITER structural design criteria (ISDC) are illustrated for two components. First, the low-temperature design rules are applied to copper alloys that are particularly prone to irradiation embrittlement at relatively low fluences at certain temperatures. Allowable stresses are derived, and the impact of the embrittlement on the allowable surface heat flux of a simple first-wall/limiter design is demonstrated. Next, the high-temperature design rules of the ISDC are applied to evaporation of lithium and vapor extraction (EVOLVE), a blanket design concept currently being investigated under the US Advanced Power Extraction (APEX) program. A single tungsten first-wall tube is considered for thermal and stress analyses by the finite-element method.
Modules and methods for all photonic computing
Schultz, David R.; Ma, Chao Hung
2001-01-01
A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating the signal through the second optical/electro-optical element with light from the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.
NASA Astrophysics Data System (ADS)
Swami, H. L.; Danani, C.; Shaw, A. K.
2018-06-01
Activation analyses play a vital role in nuclear reactor design. Activation analyses, along with nuclear analyses, provide important information for nuclear safety and maintenance strategies. Activation analyses also help in the selection of materials for a nuclear reactor by providing the radioactivity and dose rate levels after irradiation. This information is important to help define maintenance activity for different parts of the reactor, and to plan decommissioning and radioactive waste disposal strategies. The study of activation analyses of candidate structural materials for near-term fusion reactors or ITER is equally essential, due to the presence of a high-energy neutron environment which makes decisive demands on material selection. This study comprises two parts. In the first part, the activation characteristics in a fusion radiation environment of several elements that are widely present in structural materials are studied. It reveals that the presence of a few specific elements in a material can diminish its feasibility for use in the nuclear environment. The second part of the study concentrates on activation analyses of candidate structural materials for near-term fusion reactors and their comparison under fusion radiation conditions. The structural materials selected for this study, i.e. India-specific Reduced Activation Ferritic-Martensitic steel (IN-RAFMS), P91-grade steel, stainless steel 316LN ITER-grade (SS-316LN-IG), stainless steel 316L and stainless steel 304, are candidates for use in ITER either in vessel components or test blanket systems. Tungsten is also included in this study because of its use for ITER plasma-facing components. The study is carried out using the reference parameters of the ITER fusion reactor. The activation characteristics of the materials are assessed considering irradiation at an ITER equatorial port. The presence of elements like Nb, Mo, Co and Ta in a structural material enhances the activity level as well as the dose level, which has an impact on design considerations. IN-RAFMS was shown to be a more effective low-activation material than SS-316LN-IG.
Design and fabrication of continuous-profile diffractive micro-optical elements as a beam splitter.
Feng, Di; Yan, Yingbai; Jin, Guofan; Fan, Shoushan
2004-10-10
An optimization algorithm that combines a rigorous electromagnetic computation model with an effective iterative method is utilized to design diffractive micro-optical elements that exhibit fast convergence and better design quality. The design example is a two-dimensional 1-to-2 beam splitter that can symmetrically generate two focal lines separated by 80 μm at the observation plane with a small angle separation of ±16°. Experimental results are presented for an element with continuous profiles fabricated into a monocrystalline silicon substrate that has a width of 160 μm and a focal length of 140 μm at a free-space wavelength of 10.6 μm.
3-D Analysis of Flanged Joints Through Various Preload Methods Using ANSYS
NASA Astrophysics Data System (ADS)
Murugan, Jeyaraj Paul; Kurian, Thomas; Jayaprakash, Janardhan; Sreedharapanickar, Somanath
2015-10-01
Flanged joints are employed in aerospace solid rocket motor hardware for the integration of various systems or subsystems. Hence, the design of flanged joints is very important in ensuring the integrity of the motor while functioning. As these joints are subjected to high loads due to internal pressure acting inside the motor chamber, an appropriate preload is required to be applied to the joint before subjecting it to external load. Preload, also known as clamp load, is applied on the fastener and helps to hold the mating flanges together. Generally, preload is simulated as a thermal load, and the exact preload is obtained through a number of iterations. In fact, more iterations are required when considering the material nonlinearity of the bolt. This way of simulation takes more computational time to generate the required preload. Nowadays, most commercial software packages use pretension elements for simulating the preload. This element does not require iterations for inducing the preload, and it can be solved with a single iteration. This approach takes less computational time, and thus one can easily study the characteristics of the joint by varying the preload. When the structure contains a larger number of joints with different fastener sizes, pretension elements are preferable to the thermal-load approach for simulating each fastener size. This paper covers the details of analyses carried out simulating the preload through various options, viz. thermal load, the initial state command, and pretension elements, using the ANSYS finite element package.
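The following Python sketch (not ANSYS input) illustrates why the thermal-load route to preload is iterative: a fictitious temperature change produces a clamp force that depends on the joint compliance and on any bolt nonlinearity, so the required temperature is found by repeated trials. The one-bolt spring model, stiffness values, nonlinearity, and tolerances are all assumed for illustration.

```python
# Toy one-bolt joint: bolt and clamped flange stack modeled as two springs.
# Cooling the bolt by dT shortens it and generates a clamp force, but the
# force also depends on flange compliance and a crude bolt nonlinearity,
# so the dT that produces the target preload is found iteratively.

k_bolt = 2.0e8      # N/m, bolt axial stiffness (assumed)
k_flange = 6.0e8    # N/m, clamped members stiffness (assumed)
alpha = 1.2e-5      # 1/K, bolt thermal expansion coefficient (assumed)
L = 0.05            # m, bolt grip length (assumed)
target = 30.0e3     # N, desired preload

def preload(dT):
    # Free thermal contraction is shared between stretching the bolt and
    # compressing the flanges: F = delta / (1/k_bolt + 1/k_flange).
    delta = alpha * dT * L
    F = delta / (1.0 / k_bolt + 1.0 / k_flange)
    # Crude stand-in for bolt material nonlinearity: extra compliance
    # above a "yield-like" force level (assumed numbers).
    if F > 25.0e3:
        F = 25.0e3 + 0.6 * (F - 25.0e3)
    return F

dT = 100.0                            # initial guess, K
for it in range(50):
    F = preload(dT)
    if abs(F - target) < 1.0:         # 1 N tolerance
        break
    dT *= target / F                  # simple proportional update
print(f"dT = {dT:.1f} K gives preload {F/1e3:.2f} kN after {it} iterations")
```

A pretension element, by contrast, imposes the force directly as a constraint, which is why the abstract notes it needs only a single solution.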
Development of the ITER ICH Transmission Line and Matching System
NASA Astrophysics Data System (ADS)
Rasmussen, D. A.; Goulding, R. H.; Pesavento, P. V.; Peters, B.; Swain, D. W.; Fredd, E. H.; Hosea, J.; Greenough, N.
2011-10-01
The ITER Ion Cyclotron Heating (ICH) System is designed to couple 20 MW of heating power for ion and electron heating. Prototype components for the ITER ICH transmission line and matching system are being designed and tested. The ICH transmission lines are pressurized 300 mm diameter coaxial lines with a water-cooled aluminum outer conductor and a gas- and water-cooled copper inner conductor. Each ICH transmission line is designed to handle 40-55 MHz power at up to 6 MW/line. A total of 8 lines splits into 16 antenna inputs on two ICH antennas. Industrial suppliers have designed coaxial transmission line and matching components, and prototypes will be manufactured. The prototype components will be qualified on a test stand operating at the full power and pulse length needed for ITER. The matching system must accommodate dynamic changes in the plasma loading due to ELMs and the L- to H-mode transition. Passive ELM tolerance will be provided by hybrid couplers and loads, which can absorb the transient reflected power. The system is also designed to compensate for the mutual inductances of the antenna current straps to limit the peak voltages on the antenna array elements.
Novel aspects of plasma control in ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphreys, D.; Jackson, G.; Walker, M.
2015-02-15
ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.
Novel aspects of plasma control in ITER
Humphreys, David; Ambrosino, G.; de Vries, Peter; ...
2015-02-12
ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Finally, issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.
A finite element solver for 3-D compressible viscous flows
NASA Technical Reports Server (NTRS)
Reddy, K. C.; Reddy, J. N.; Nayani, S.
1990-01-01
Computation of the flow field inside a space shuttle main engine (SSME) requires the application of state-of-the-art computational fluid dynamics (CFD) technology. Several computer codes are under development to solve 3-D flow through the hot gas manifold. Some algorithms were designed to solve the unsteady compressible Navier-Stokes equations, either by implicit or explicit factorization methods, using several hundred or thousands of time steps to reach a steady state solution. A new iterative algorithm is being developed for the solution of the implicit finite element equations without assembling global matrices. It is an efficient iteration scheme based on a modified nonlinear Gauss-Seidel iteration with symmetric sweeps. The algorithm is analyzed for a model equation and is shown to be unconditionally stable. Results from a series of test problems are presented. The finite element code was tested for Couette flow, which is flow under a pressure gradient between two parallel plates in relative motion. Another problem that was solved is viscous laminar flow over a flat plate. The general 3-D finite element code was used to compute the flow in an axisymmetric turnaround duct at low Mach numbers.
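As a small, self-contained illustration of a nonlinear Gauss-Seidel iteration with symmetric (forward then backward) sweeps, here is a Python sketch on an assumed model problem, a 1-D diffusion-like system with a cubic source term. It is not the SSME manifold equations or the MHOST/finite-element implementation; the matrix, nonlinearity, and tolerances are assumptions.

```python
import numpy as np

# Model nonlinear system A u + 0.1 u^3 = b, solved unknown-by-unknown with
# forward and backward (symmetric) Gauss-Seidel sweeps; no global direct
# solve or global linearization is performed.

n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian stencil
b = np.ones(n)

def local_update(u, i):
    # Solve the i-th scalar equation for u[i] with the neighbours frozen:
    # a_ii*u_i + 0.1*u_i^3 = b_i - sum_{j != i} a_ij*u_j  (a few Newton steps).
    rhs = b[i] - A[i] @ u + A[i, i] * u[i]
    x = u[i]
    for _ in range(5):
        f = A[i, i] * x + 0.1 * x**3 - rhs
        fp = A[i, i] + 0.3 * x**2
        x -= f / fp
    return x

u = np.zeros(n)
for sweep in range(200):
    for i in range(n):                 # forward sweep
        u[i] = local_update(u, i)
    for i in reversed(range(n)):       # backward sweep
        u[i] = local_update(u, i)
    res = np.linalg.norm(A @ u + 0.1 * u**3 - b)
    if res < 1e-10:
        break
print(f"symmetric Gauss-Seidel: {sweep+1} sweeps, residual {res:.1e}")
```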
Seismic Design of ITER Component Cooling Water System-1 Piping
NASA Astrophysics Data System (ADS)
Singh, Aditya P.; Jadhav, Mahesh; Sharma, Lalit K.; Gupta, Dinesh K.; Patel, Nirav; Ranjan, Rakesh; Gohil, Guman; Patel, Hiren; Dangi, Jinendra; Kumar, Mohit; Kumar, A. G. A.
2017-04-01
The successful performance of the ITER machine depends very much upon the effective removal of heat from the in-vessel components and other auxiliary systems during Tokamak operation. This objective will be accomplished by the design of an effective Cooling Water System (CWS). The optimized piping layout design is an important element in CWS design and is one of the major design challenges owing to the large thermal expansions and seismic accelerations involved, considering safety, accessibility, and maintainability aspects. An important sub-system of the ITER CWS, the Component Cooling Water System-1 (CCWS-1), has very large pipe diameters, up to DN1600, with many intersections to fulfill the process flow requirements of clients for heat removal. A pipe intersection is the weakest link in the layout due to its high stress intensification factor. CCWS-1 piping up to the secondary confinement isolation valves, as well as in between these isolation valves, needs to survive a Seismic Level-2 (SL-2) earthquake during the Tokamak operation period to ensure structural stability of the system in a Safe Shutdown Earthquake (SSE) event. This paper presents the design, qualification, and optimization of the layout of the ITER CCWS-1 loop to withstand an SSE event combined with sustained and thermal loads, as per the load combinations defined by ITER and the allowable limits of ASME B31.3. This paper also highlights the modal and response spectrum analyses done to find the natural frequencies and system behavior during the seismic event.
Advanced Software for Analysis of High-Speed Rolling-Element Bearings
NASA Technical Reports Server (NTRS)
Poplawski, J. V.; Rumbarger, J. H.; Peters, S. M.; Galatis, H.; Flower, R.
2003-01-01
COBRA-AHS is a package of advanced software for analysis of rigid or flexible shaft systems supported by rolling-element bearings operating at high speeds under complex mechanical and thermal loads. These loads can include centrifugal and thermal loads generated by motions of bearing components. COBRA-AHS offers several improvements over prior commercial bearing-analysis programs: It includes innovative probabilistic fatigue-life-estimating software that provides for computation of three-dimensional stress fields and incorporates stress-based (in contradistinction to prior load-based) mathematical models of fatigue life. It interacts automatically with the ANSYS finite-element code to generate finite-element models for estimating distributions of temperature and temperature-induced changes in dimensions in iterative thermal/dimensional analyses: thus, for example, it can be used to predict changes in clearances and thermal lockup. COBRA-AHS provides an improved graphical user interface that facilitates the iterative cycle of analysis and design by providing analysis results quickly in graphical form, enabling the user to control interactive runs without leaving the program environment, and facilitating transfer of plots and printed results for inclusion in design reports. Additional features include roller-edge stress prediction and influence of shaft and housing distortion on bearing performance.
Elastic-plastic mixed-iterative finite element analysis: Implementation and performance assessment
NASA Technical Reports Server (NTRS)
Sutjahjo, Edhi; Chamis, Christos C.
1993-01-01
An elastic-plastic algorithm based on von Mises and associative flow criteria is implemented in MHOST, a mixed iterative finite element analysis computer program developed by NASA Lewis Research Center. The performance of the resulting elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors of 4-node quadrilateral shell finite elements are tested for elastic-plastic performance. Generally, the membrane results are excellent, indicating that the implementation of elastic-plastic mixed-iterative analysis is appropriate.
Jiang, Rui; McKanna, James; Calabrese, Samantha; Seif El-Nasr, Magy
2017-08-01
Herein we describe a methodology for developing a game-based intervention to raise awareness of Chlamydia and other sexually transmitted infections among youth in Boston's underserved communities. We engaged in three design-based experiments. These utilized mixed methods, including playtesting and assessment methods, to examine the overall effectiveness of the game. In this case, effectiveness is defined as (1) engaging the target group, (2) increasing knowledge about Chlamydia, and (3) changing attitudes toward Chlamydia testing. These three experiments were performed using participants from different communities and with slightly different versions of the game, as we iterated through the design/feedback process. Overall, participants who played the game showed a significant increase in knowledge of Chlamydia compared with those in the control group (P = 0.0002). The version of the game that included elements specifically targeting systemic thinking showed a significant improvement in participants' intent to get tested compared with the version of the game without such elements (Stage 2: P > 0.05; Stage 3: P = 0.0045). Furthermore, during both Stage 2 and Stage 3, participants showed high levels of enjoyment, mood, and participation and moderate levels of game engagement and social engagement. During Stage 3, however, participants' game engagement (P = 0.0003), social engagement (P = 0.0003), and participation (P = 0.0003) were significantly higher compared with those of Stage 2. Thus, we believe the design changes from Stage 2 to Stage 3 were also effective in improving motivation. Finally, participants' overall learning effectiveness was correlated with their prepositive affect (r = 0.52) and their postproblem hierarchy (r = -0.54). The game improved considerably from its initial conception through three stages of iterative design and feedback. Our assessment methods for each stage targeted and integrated learning, health, and engagement outcomes. Lessons learned through this iterative design process are a valuable contribution to the games-for-health community, especially in targeting the development of health and learning goals through game design.
Data Sciences Summer Institute Topology Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watts, Seth
DSSI_TOPOPT is a 2D topology optimization code that designs stiff structures made of a single linear elastic material and void space. The code generates a finite element mesh of a rectangular design domain on which the user specifies displacement and load boundary conditions. The code iteratively designs a structure that minimizes the compliance (maximizes the stiffness) of the structure under the given loading, subject to an upper bound on the amount of material used. Depending on user options, the code can evaluate the performance of a user-designed structure, or create a design from scratch. Output includes the finite element mesh, design, and visualizations of the design.
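The density update at the heart of compliance-minimization topology optimization can be sketched compactly. The following Python snippet shows a standard SIMP-style optimality-criteria update with a bisection on the volume-constraint multiplier; the element sensitivities are assumed random numbers standing in for what a finite element solve would return each iteration, and the snippet is not DSSI_TOPOPT itself.

```python
import numpy as np

# One optimality-criteria (OC) density update for compliance minimization.
# In a real code the sensitivities dc come from a finite element solve of
# the loaded design domain and the update is repeated until convergence.

rng = np.random.default_rng(1)
n_el = 200
volfrac = 0.4                         # upper bound on material fraction
x = np.full(n_el, volfrac)            # element densities
dc = -rng.uniform(0.1, 1.0, n_el)     # d(compliance)/d(density), always <= 0

def oc_update(x, dc, volfrac, move=0.2):
    # Bisection on the Lagrange multiplier of the volume constraint.
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-4:
        lmid = 0.5 * (l1 + l2)
        x_new = x * np.sqrt(-dc / lmid)            # optimality criterion
        x_new = np.clip(x_new, x - move, x + move) # move limits
        x_new = np.clip(x_new, 1e-3, 1.0)          # density bounds
        if x_new.mean() > volfrac:
            l1 = lmid                              # too much material: raise multiplier
        else:
            l2 = lmid
        result = x_new                             # keep the last candidate
    return result

x = oc_update(x, dc, volfrac)
print(f"updated mean density = {x.mean():.3f} (target {volfrac})")
```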
ACT Payload Shroud Structural Concept Analysis and Optimization
NASA Technical Reports Server (NTRS)
Zalewski, Bart B.; Bednarcyk, Brett A.
2010-01-01
Aerospace structural applications demand a weight-efficient design to perform in a cost-effective manner. This is particularly true for launch vehicle structures, where weight is the dominant design driver. The design process typically requires many iterations to ensure that a satisfactory minimum weight has been obtained. Although metallic structures can be weight efficient, composite structures can provide additional weight savings due to their lower density and additional design flexibility. This work presents structural analysis and weight optimization of a composite payload shroud for NASA's Ares V heavy lift vehicle. Two concepts, which were previously determined to be efficient for such a structure, are evaluated: a hat stiffened/corrugated panel and a fiber reinforced foam sandwich panel. A composite structural optimization code, HyperSizer, is used to optimize the panel geometry, composite material ply orientations, and sandwich core material. HyperSizer enables an efficient evaluation of thousands of potential designs versus multiple strength and stability-based failure criteria across multiple load cases. The HyperSizer sizing process uses a global finite element model to obtain element forces, which are statistically processed to arrive at panel-level design-to loads. These loads are then used to analyze each candidate panel design. A near optimum design is selected as the one with the lowest weight that also provides all positive margins of safety. The stiffness of each newly sized panel or beam component is taken into account in the subsequent finite element analysis. Iteration of analysis/optimization is performed to ensure a converged design. Sizing results for the hat stiffened panel concept and the fiber reinforced foam sandwich concept are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panayotov, Dobromir; Poitevin, Yves; Grief, Andrew
'Fusion for Energy' (F4E) is designing, developing, and implementing the European Helium-Cooled Lead-Lithium (HCLL) and Helium-Cooled Pebble-Bed (HCPB) Test Blanket Systems (TBSs) for ITER (Nuclear Facility INB-174). Safety demonstration is an essential element for the integration of these TBSs into ITER and accident analysis is one of its critical components. A systematic approach to accident analysis has been developed under the F4E contract on TBS safety analyses. F4E technical requirements, together with Amec Foster Wheeler and INL efforts, have resulted in a comprehensive methodology for fusion breeding blanket accident analysis that addresses the specificity of the breeding blanket designs, materials, and phenomena while remaining consistent with the approach already applied to ITER accident analyses. Furthermore, the methodology phases are illustrated in the paper by its application to the EU HCLL TBS using both MELCOR and RELAP5 codes.
Drawing Analogies to Deepen Learning
ERIC Educational Resources Information Center
Fava, Michelle
2017-01-01
This article offers examples of how drawing can facilitate thinking skills that promote analogical reasoning to enable deeper learning. The instructional design applies cognitive principles, briefly described here. The workshops were developed iteratively, through feedback from student and teacher participants. Elements of the UK National…
Iterative methods for elliptic finite element equations on general meshes
NASA Technical Reports Server (NTRS)
Nicolaides, R. A.; Choudhury, Shenaz
1986-01-01
Iterative methods for arbitrary mesh discretizations of elliptic partial differential equations are surveyed. The methods discussed are preconditioned conjugate gradients, algebraic multigrid, deflated conjugate gradients, element-by-element techniques, and domain decomposition. Computational results are included.
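As a concrete instance of the first class of methods surveyed, here is a minimal Jacobi-preconditioned conjugate gradient sketch in Python on an assumed SPD model matrix (a 1-D Laplacian stand-in for an assembled elliptic finite element system).

```python
import numpy as np

# Jacobi-preconditioned conjugate gradients for an SPD system A u = b.
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # model elliptic matrix
b = np.ones(n)
Minv = 1.0 / np.diag(A)                                  # Jacobi preconditioner

u = np.zeros(n)
r = b - A @ u
z = Minv * r
p = z.copy()
rz = r @ z
for it in range(1000):
    Ap = A @ p
    alpha = rz / (p @ Ap)
    u += alpha * p
    r -= alpha * Ap
    if np.linalg.norm(r) < 1e-8:
        break
    z = Minv * r
    rz_new = r @ z
    p = z + (rz_new / rz) * p
    rz = rz_new
print(f"PCG stopped after {it+1} iterations, residual {np.linalg.norm(r):.1e}")
```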
An Annotated Bibliography of the Manned Systems Measurement Literature
1985-02-01
designs that are considered applicable to assessment of training effectiveness include the classic Solomon four-group design; iterative adaptation to...element (analogue computer) were used for this study. Operators were taken from 3 groups: (1) persons with both licensed flying and driving...conclusions are that the classic four-group design is impractical for most training evaluation; that "adaptive research for big effects" is apt to be
Sensitivity calculations for iteratively solved problems
NASA Technical Reports Server (NTRS)
Haftka, R. T.
1985-01-01
The calculation of sensitivity derivatives of solutions of iteratively solved systems of algebraic equations is investigated. A modified finite difference procedure is presented which improves the accuracy of the calculated derivatives. The procedure is demonstrated for a simple algebraic example as well as an element-by-element preconditioned conjugate gradient iterative solution technique applied to truss examples.
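The general idea, finite-difference sensitivities of a solution that is itself obtained iteratively, can be sketched as follows in Python. Restarting the perturbed solve from the converged baseline keeps the incomplete-convergence error from dominating the difference. The model system, the design parameter, and the restart strategy are assumptions for illustration; this is not the paper's modified procedure.

```python
import numpy as np

def solve_cg(A, b, x0, tol=1e-8, maxit=500):
    """Plain conjugate gradients, started from x0."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# Model design-dependent system: K(a) u = f with K = K0 + a*K1 (assumed).
n = 50
K0 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K1 = np.diag(np.linspace(0.0, 1.0, n))
f = np.ones(n)
a, da = 1.0, 1e-6

u0 = solve_cg(K0 + a * K1, f, np.zeros(n))
# Perturbed solve restarted from the converged baseline, so the iterative
# solution error largely cancels in the finite difference.
u1 = solve_cg(K0 + (a + da) * K1, f, u0)
dudA_fd = (u1 - u0) / da

# Reference: direct (analytical) sensitivity, K du/da = -K1 u.
dudA_exact = np.linalg.solve(K0 + a * K1, -(K1 @ u0))
print("max FD error vs direct sensitivity:", np.abs(dudA_fd - dudA_exact).max())
```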
Iterative methods for mixed finite element equations
NASA Technical Reports Server (NTRS)
Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.
1985-01-01
Iterative strategies for the solution of the indefinite systems of equations arising from the mixed finite element method are investigated in this paper, with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived, which is then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant metric iterations, which do not involve updating the preconditioner; and variable metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.
Corwin, Lisa A; Runyon, Christopher R; Ghanem, Eman; Sandy, Moriah; Clark, Greg; Palmer, Gregory C; Reichler, Stuart; Rodenbusch, Stacia E; Dolan, Erin L
2018-06-01
Course-based undergraduate research experiences (CUREs) provide a promising avenue to attract a larger and more diverse group of students into research careers. CUREs are thought to be distinctive in offering students opportunities to make discoveries, collaborate, engage in iterative work, and develop a sense of ownership of their lab course work. Yet how these elements affect students' intentions to pursue research-related careers remains unexplored. To address this knowledge gap, we collected data on three design features thought to be distinctive of CUREs (discovery, iteration, collaboration) and on students' levels of ownership and career intentions from ∼800 undergraduates who had completed CURE or inquiry courses, including courses from the Freshman Research Initiative (FRI), which has a demonstrated positive effect on student retention in college and in science, technology, engineering, and mathematics. We used structural equation modeling to test relationships among the design features and student ownership and career intentions. We found that discovery, iteration, and collaboration had small but significant effects on students' intentions; these effects were fully mediated by student ownership. Students in FRI courses reported significantly higher levels of discovery, iteration, and ownership than students in other CUREs. FRI research courses alone had a significant effect on students' career intentions.
Design and Stress Analysis of Low-Noise Adjusted Bearing Contact Spiral Bevel Gears
NASA Technical Reports Server (NTRS)
Fuentes, A.; Litvin, F. L.; Mullins, B. R.; Woods, R.; Handschuh, R. F.; Lewicki, David G.
2002-01-01
An integrated computerized approach for design and stress analysis of low-noise spiral bevel gear drives with adjusted bearing contact is proposed. The computational procedure is an iterative process that requires four separate procedures and provides: (a) a parabolic function of transmission errors that is able to reduce the effect of errors of alignment on noise and vibration, and (b) reduction of the shift of bearing contact caused by misalignment. Application of finite element analysis enables us to determine the contact and bending stresses and to investigate the formation of the bearing contact. The design of finite element models and boundary conditions is automated and does not require intermediate CAD programs for the application of a general-purpose finite element analysis program.
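The reason a predesigned parabolic function of transmission errors reduces the effect of alignment errors can be shown in a few lines. The notation below is assumed (following Litvin-style gear geometry texts): φ1 and φ2 are the pinion and gear rotation angles, N1 and N2 the tooth numbers, and a, b coefficients; this is a hedged sketch of the idea, not the paper's derivation.

```latex
% Transmission error and the predesigned parabolic function:
\[
  \Delta\phi_2(\phi_1) \;=\; \phi_2(\phi_1) - \frac{N_1}{N_2}\,\phi_1 ,
  \qquad
  \Delta\phi_2^{\mathrm{design}}(\phi_1) \;=\; -\,a\,\phi_1^{2}.
\]
% A misalignment adds an approximately linear error b*phi_1; the sum remains
% a parabola with the same coefficient, merely shifted:
\[
  -\,a\,\phi_1^{2} + b\,\phi_1 \;=\;
  -\,a\!\left(\phi_1 - \tfrac{b}{2a}\right)^{2} + \tfrac{b^{2}}{4a},
\]
% so the linear error is absorbed rather than producing the discontinuous
% transmission-error function that excites noise and vibration.
```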
NASA Technical Reports Server (NTRS)
Nakazawa, Shohei
1989-01-01
The internal structure of the MHOST finite element program, designed for 3-D inelastic analysis of gas turbine hot section components, is discussed. The computer code is the first implementation of the mixed iterative solution strategy for improved efficiency and accuracy over the conventional finite element method. The control structure of the program is covered, along with the data storage scheme, the memory allocation procedure, and the file handling facilities, including the read and/or write sequences.
A computer program for the design and analysis of low-speed airfoils, supplement
NASA Technical Reports Server (NTRS)
Eppler, R.; Somers, D. M.
1980-01-01
Three new options were incorporated into an existing computer program for the design and analysis of low speed airfoils. These options permit the analysis of airfoils having variable chord (variable geometry), a boundary layer displacement iteration, and the analysis of the effect of single roughness elements. All three options are described in detail and are included in the FORTRAN IV computer program.
Conceptual design and structural analysis for an 8.4-m telescope
NASA Astrophysics Data System (ADS)
Mendoza, Manuel; Farah, Alejandro; Ruiz Schneider, Elfego
2004-09-01
This paper describes the conceptual design of the optics support structures of a telescope with a primary mirror of 8.4 m, the same size as a Large Binocular Telescope (LBT) primary mirror. The design goal is to achieve a structure that supports the primary and secondary mirrors and keeps them joined as rigidly as possible. For this purpose, an optimization with several models was performed. This iterative design process includes specification development, concept generation, and evaluation. The process included Finite Element Analysis (FEA) as well as other analytical calculations. A Quality Function Deployment (QFD) matrix was used to obtain telescope tube and spider specifications. Eight spider and eleven tube geometric concepts were proposed. They were compared in decision matrices using performance indicators and parameters. Tubes and spiders underwent an iterative optimization process. The best tube and spider concepts were assembled together. All assemblies were compared and ranked according to their performance.
NASA Astrophysics Data System (ADS)
Liu, Chao; Yang, Guigeng; Zhang, Yiqun
2015-01-01
The electrostatically controlled deployable membrane reflector (ECDMR) is a promising scheme for constructing large-size, high-precision space deployable reflector antennas. This paper presents a novel design method for large-size, small-F/D ECDMRs that accounts for the coupled structural-electrostatic problem. First, the fully coupled structural-electrostatic system is described by a three-field formulation, in which the structure and the passive electric field are modeled by the finite element method, and the deformation of the electrostatic domain is predicted by a finite element formulation of a fictitious elastic structure. A residual formulation of the structural-electrostatic finite element model is established and solved by the Newton-Raphson method. The coupled structural-electrostatic analysis procedure is summarized. Then, with the aid of this coupled analysis procedure, an integrated optimization method for membrane shape accuracy and stress uniformity is proposed, which is divided into inner and outer iterative loops. An initial state of relatively high shape accuracy and uniform stress distribution is achieved by applying a uniform prestress to the membrane design shape and optimizing the voltages, in which the optimal voltages are computed by a sensitivity analysis. The shape accuracy is further improved by iterative prestress modification using the reposition balance method. Finally, the results of the uncoupled and coupled methods are compared, and the proposed optimization method is applied to design an ECDMR. The results validate the effectiveness of the proposed method.
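Driving a coupled structural-electrostatic residual to zero with Newton-Raphson can be illustrated at toy scale. The Python sketch below uses a one-degree-of-freedom stand-in (a spring-supported membrane pulled toward an electrode by parallel-plate electrostatic pressure); the stiffness, gap, area, and voltage are assumed numbers, and the model is far simpler than the three-field formulation in the abstract.

```python
import numpy as np

# Equilibrium of an elastic membrane (spring k) pulled across a gap g by an
# electrostatic pressure: residual r(x) = k*x - eps*A*V^2 / (2*(g - x)^2).
# Newton-Raphson drives r(x) to zero, mirroring the coupled solve at toy scale.

eps = 8.854e-12     # F/m, permittivity of free space
A = 1e-4            # m^2, electrode area (assumed)
g = 10e-6           # m, initial gap (assumed)
k = 200.0           # N/m, equivalent membrane stiffness (assumed)
V = 5.0             # volts (assumed, below pull-in for these numbers)

def residual(x):
    return k * x - eps * A * V**2 / (2.0 * (g - x)**2)

def jacobian(x):
    return k - eps * A * V**2 / (g - x)**3

x = 0.0
for it in range(50):
    r = residual(x)
    if abs(r) < 1e-12:
        break
    x -= r / jacobian(x)
print(f"Newton converged in {it} iterations: deflection {x*1e6:.3f} um "
      f"of a {g*1e6:.0f} um gap")
```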
Panayotov, Dobromir; Poitevin, Yves; Grief, Andrew; ...
2016-09-23
'Fusion for Energy' (F4E) is designing, developing, and implementing the European Helium-Cooled Lead-Lithium (HCLL) and Helium-Cooled Pebble-Bed (HCPB) Test Blanket Systems (TBSs) for ITER (Nuclear Facility INB-174). Safety demonstration is an essential element for the integration of these TBSs into ITER and accident analysis is one of its critical components. A systematic approach to accident analysis has been developed under the F4E contract on TBS safety analyses. F4E technical requirements, together with Amec Foster Wheeler and INL efforts, have resulted in a comprehensive methodology for fusion breeding blanket accident analysis that addresses the specificity of the breeding blanket designs, materials, and phenomena while remaining consistent with the approach already applied to ITER accident analyses. Furthermore, the methodology phases are illustrated in the paper by its application to the EU HCLL TBS using both MELCOR and RELAP5 codes.
Material nonlinear analysis via mixed-iterative finite element method
NASA Technical Reports Server (NTRS)
Sutjahjo, Edhi; Chamis, Christos C.
1992-01-01
The performance of elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors are tested using 4-node quadrilateral finite elements. The membrane result is excellent, which indicates the implementation of elastic-plastic mixed-iterative analysis is appropriate. On the other hand, further research to improve bending performance of the method seems to be warranted.
Designing magnetic systems for reliability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitzenroeder, P.J.
1991-01-01
Designing a magnetic system is an iterative process in which the requirements are set, a design is developed, materials and manufacturing processes are defined, interrelationships with the various elements of the system are established, engineering analyses are performed, and fault modes and effects are studied. Reliability requires that all elements of the design process, from the seemingly most straightforward, such as utilities connection design and implementation, to the most sophisticated, such as advanced finite element analyses, receive a balanced and appropriate level of attention. D.B. Montgomery's study of magnet failures has shown that magnet failures tend not to be in the most intensively engineered areas, but are associated with insulation, leads, and unanticipated conditions. TFTR, JET, JT-60, and PBX are all major tokamaks which have suffered loss of reliability due to water leaks. Similarly, the majority of causes of loss of magnet reliability at PPPL have not been in the sophisticated areas of the design but are due to difficulties associated with coolant connections, bus connections, and external structural connections. Looking toward the future, the major next-step devices such as BPX and ITER are more costly and complex than any of their predecessors and are pressing the bounds of operating levels, materials, and fabrication. Emphasis on reliability is a must as the fusion program enters a phase where there are fewer, but very costly, devices with the goal of reaching a reactor prototype stage in the next two or three decades. This paper reviews some of the magnet reliability issues which PPPL has faced over the years, the lessons learned from them, and the magnet design and fabrication practices which have been found to contribute to magnet reliability.
Design and Stress Analysis of Low-Noise Adjusted Bearing Contact Spiral Bevel Gears
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Fuentes, Alfonso; Mullins, Baxter R.; Woods, Ron
2002-01-01
An integrated computerized approach for design and stress analysis of low-noise spiral bevel gear drives with adjusted bearing contact has been developed. The computation procedure is an iterative process requiring four separate steps that provide: (a) a parabolic function of transmission errors that is able to reduce the effect of errors of alignment, and (b) reduction of the shift of bearing contact caused by misalignment. Application of finite element analysis permits the contact and bending stresses to be determined and the formation of the bearing contact to be investigated. The design of finite element models and boundary conditions is automated and does not require an intermediate CAD computer program. A commercially available finite element analysis computer program with contact capability was used to conduct the stress analysis. The theory developed is illustrated with numerical examples.
Design, Manufacture, and Experimental Serviceability Validation of ITER Blanket Components
NASA Astrophysics Data System (ADS)
Leshukov, A. Yu.; Strebkov, Yu. S.; Sviridenko, M. N.; Safronov, V. M.; Putrik, A. B.
2017-12-01
In 2014, the Russian Federation and the ITER International Organization signed two Procurement Arrangements (PAs) for ITER blanket components: 1.6.P1ARF.01 "Blanket First Wall" of February 14, 2014, and 1.6.P3.RF.01 "Blanket Module Connections" of December 19, 2014. The first PA stipulates development, manufacture, testing, and delivery to the ITER site of 179 Enhanced Heat Flux (EHF) First Wall (FW) Panels intended for withstanding the heat flux from the plasma up to 4.7 MW/m2. Two Russian institutions, NIIEFA (Efremov Institute) and NIKIET, are responsible for the implementation of this PA. NIIEFA manufactures plasma-facing components (PFCs) of the EHF FW panels and performs the final assembly and testing of the panels, and NIKIET manufactures FW beam structures, load-bearing structures of PFCs, and all elements of the panel attachment system. As for the second PA, NIKIET is the sole official supplier of flexible blanket supports, electrical insulation key pads (EIKPs), and blanket module/vacuum vessel electrical connectors. Joint activities of NIKIET and NIIEFA for implementing PA 1.6.P1ARF.01 are briefly described, and information on implementation of PA 1.6.P3.RF.01 is given. Results of the engineering design and research efforts in the scope of the above PAs in 2015-2016 are reported, and results of developing the technology for manufacturing ITER blanket components are presented.
Integration of rocket turbine design and analysis through computer graphics
NASA Technical Reports Server (NTRS)
Hsu, Wayne; Boynton, Jim
1988-01-01
An interactive approach with engineering computer graphics is used to integrate the design and analysis processes of a rocket engine turbine into a progressive and iterative design procedure. The processes are interconnected through pre- and postprocessors. The graphics are used to generate the blade profiles, their stacking, finite element generation, and analysis presentation through color graphics. Steps of the design process discussed include pitch-line design, axisymmetric hub-to-tip meridional design, and quasi-three-dimensional analysis. The viscous two- and three-dimensional analysis codes are executed after acceptable designs are achieved and estimates of initial losses are confirmed.
Adaptive implicit-explicit and parallel element-by-element iteration schemes
NASA Technical Reports Server (NTRS)
Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.
1989-01-01
Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation more clear, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests performed demonstrate the savings in the CPU time and memory.
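The static grouping underlying GEBE, packing elements into groups with no shared nodes so element-level work within a group is conflict-free, can be sketched with a simple greedy coloring. The toy quadrilateral strip and its node numbering below are assumed for illustration; a production mesh would come from the finite element code.

```python
# Greedy grouping of elements so that no two elements in a group share a
# node; all element-level operations inside one group can then run in
# parallel without write conflicts.

def group_elements(elements):
    groups = []                       # each group: list of element indices
    group_nodes = []                  # node sets already used by each group
    for e, nodes in enumerate(elements):
        nodes = set(nodes)
        for g, used in zip(groups, group_nodes):
            if not (nodes & used):    # no shared node -> safe in this group
                g.append(e)
                used |= nodes
                break
        else:
            groups.append([e])
            group_nodes.append(set(nodes))
    return groups

# Strip of 8 quads: element i has nodes (i, i+1, i+11, i+10) (assumed numbering).
elements = [(i, i + 1, i + 11, i + 10) for i in range(8)]
for g, members in enumerate(group_elements(elements)):
    print(f"group {g}: elements {members}")
# Neighbouring quads share an edge, so they land in different groups; the
# result is two groups of alternating elements (a red/black split).
```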
Analysis of the influence of manufacturing and alignment related errors on an optical tweezer system
NASA Astrophysics Data System (ADS)
Kampmann, R.; Sinzinger, S.
2014-12-01
In this work we present the design process as well as experimental results of an optical system for trapping particles in air. For positioning applications of micro-sized objects onto a glass wafer, we developed a highly efficient optical tweezer. The focus of this paper is the iterative design process, in which we combine classical optical design software with a ray-optics-based force simulation tool. Thus we can find the best compromise that matches the optical system's restrictions with stable trapping conditions. Furthermore, we analyze the influence of manufacturing-related tolerances and of errors in the alignment process of the optical elements on the optical forces. We present the design procedure for the necessary optical elements as well as experimental results for the aligned system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, Bruno; Carvalho, Paulo F.; Rodrigues, A.P.
The ATCA standard specifies a mandatory Shelf Manager (ShM) unit which is a key element for system operation. It includes the Intelligent Platform Management Controller (IPMC), which monitors the system health, retrieves inventory information, and controls the Field Replaceable Units (FRUs). These elements enable intelligent health monitoring, providing high availability and safe operation and ensuring correct system operation. For critical systems like those of the ITER tokamak, these features are mandatory to support long pulse operation. The Nominal Device Support (NDS) was designed and developed for the ITER CODAC Core System (CCS), which will be responsible for plant Instrumentation and Control (I and C), supervising and monitoring on ITER. It generalizes the Enhanced Physics and Industrial Control System (EPICS) device support interface for Data Acquisition (DAQ) and timing devices. However, support for health management features and the ATCA ShM is not yet provided. This paper presents the implementation and test of an NDS for the ATCA ShM, using the ITER Fast Plant System Controller (FPSC) prototype environment. This prototype is fully compatible with the ITER CCS and uses the EPICS Channel Access (CA) protocol as the interface with the Plant Operation Network (PON). The implemented solution, running in an EPICS Input/Output Controller (IOC), provides Process Variables (PVs) with the system information to the PON network. These PVs can be used for control and monitoring by all CA clients, such as EPICS user interface clients and alarm systems. The results are presented, demonstrating the full integration and usability of this solution.
NASA Technical Reports Server (NTRS)
Winget, J. M.; Hughes, T. J. R.
1985-01-01
The particular problems investigated in the present study arise from nonlinear transient heat conduction. One of two types of nonlinearities considered is related to a material temperature dependence which is frequently needed to accurately model behavior over the range of temperature of engineering interest. The second nonlinearity is introduced by radiation boundary conditions. The finite element equations arising from the solution of nonlinear transient heat conduction problems are formulated. The finite element matrix equations are temporally discretized, and a nonlinear iterative solution algorithm is proposed. Algorithms for solving the linear problem are discussed, taking into account the form of the matrix equations, Gaussian elimination, cost, and iterative techniques. Attention is also given to approximate factorization, implementational aspects, and numerical results.
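The two ingredients named here, temporal discretization of the nonlinear heat conduction equations and a nonlinear iteration at each step, can be illustrated on a lumped model. The Python sketch below uses backward Euler with a Newton loop handling a temperature-dependent conductance and a T^4 radiation boundary term; all property values are assumed, and the single-node model stands in for the finite element matrix equations of the abstract.

```python
import numpy as np

# Lumped (single-node) transient heat balance with a temperature-dependent
# conductance to a wall and a radiation boundary term, discretized with
# backward Euler and solved at each step by Newton iteration.

sigma = 5.670e-8              # W/m^2K^4, Stefan-Boltzmann constant
eps_emis, A_rad = 0.8, 0.01   # emissivity, radiating area (assumed)
C = 50.0                      # J/K, lumped heat capacity (assumed)
T_wall, T_env = 300.0, 300.0  # K
Q = 200.0                     # W, imposed heating (assumed)

def k_eff(T):                 # temperature-dependent conductance, W/K (assumed)
    return 0.5 * (1.0 + 1e-3 * (T - 300.0))

def residual(T_new, T_old, dt):
    # Backward-Euler energy balance: C*(T_new-T_old)/dt = Q - conduction - radiation
    cond = k_eff(T_new) * (T_new - T_wall)
    rad = eps_emis * A_rad * sigma * (T_new**4 - T_env**4)
    return C * (T_new - T_old) / dt - Q + cond + rad

T, dt = 300.0, 5.0
for step in range(200):
    T_old, T_new = T, T
    for newton in range(20):                    # nonlinear iteration per step
        r = residual(T_new, T_old, dt)
        if abs(r) < 1e-8:
            break
        dr = 1e-3                               # finite-difference Jacobian step
        J = (residual(T_new + dr, T_old, dt) - r) / dr
        T_new -= r / J
    T = T_new
print(f"temperature after {200*dt:.0f} s: {T:.1f} K")
```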
Low Average Sidelobe Slot Array Antennas for Radiometer Applications
NASA Technical Reports Server (NTRS)
Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.
2012-01-01
In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to the inherently narrow frequency band performance of such elements, the problem is exacerbated by modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed by starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced into the aperture distribution by modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, the array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. A separable Taylor distribution with nbar=4 and a 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E-plane end-fire direction. Because of the alternating slot offsets, grating lobes called butterfly lobes are produced in non-principal planes close to the H-plane. An attempt to reduce the influence of such grating lobes resulted in a symmetric design.
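The Monte Carlo tolerance step can be sketched compactly: perturb a tapered aperture distribution with random amplitude and phase errors and compare the average sidelobe level against the error-free pattern. In the Python sketch below, the element count, spacing, Hamming taper (a stand-in for a Taylor distribution), error levels, and main-beam exclusion region are all assumptions, not the flight design.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 32
d = 0.5                                   # element spacing in wavelengths (assumed)
n = np.arange(N)
taper = np.hamming(N)                     # stand-in for a Taylor taper
theta = np.linspace(-90.0, 90.0, 1801)
u = np.sin(np.radians(theta))
steer = np.exp(1j * 2 * np.pi * d * np.outer(u, n))

def avg_sidelobe_db(weights, exclude_deg=10.0):
    # Simple dB average of the normalized array factor away from the main beam.
    af = np.abs(steer @ weights)
    af_db = 20 * np.log10(af / af.max() + 1e-12)
    mask = np.abs(theta) > exclude_deg
    return af_db[mask].mean()

print("nominal average sidelobe:", round(avg_sidelobe_db(taper), 1), "dB")

levels = []
for trial in range(200):
    amp_err = 1.0 + 0.05 * rng.standard_normal(N)          # 5% amplitude error (assumed)
    phase_err = np.radians(3.0) * rng.standard_normal(N)   # 3 deg phase error (assumed)
    levels.append(avg_sidelobe_db(taper * amp_err * np.exp(1j * phase_err)))
print("Monte Carlo average sidelobe:", round(np.mean(levels), 1), "dB")
```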
The Design of Adult Acute Care Units in U.S. Hospitals
Catrambone, Cathy; Johnson, Mary E.; Mion, Lorraine C.; Minnick, Ann F.
2010-01-01
Purpose To describe the current state of design characteristics determined to be desirable by the Agency for Health Research and Quality (AHRQ) in U.S. adult medical, surgical, and intensive care units (ICUs). Design Descriptive study of patient visibility; distance to hygiene, toileting, charting, and supplies; unit configuration; percentage of private rooms; and presence or absence of carpeting in 56 ICUs and 81 medical-surgical units in six metropolitan areas. Methods Data were collected via observation, measurement, and interviews. Unit configurations were classified via an iterative process. Descriptive data were analyzed according to ICU and non-ICU status using SPSS (Version 15). Findings Analysis of unit configurations indicated eight unit designs. Statistical analysis showed inter- and intrahospital variation in unit configurations, percentage private rooms, carpeting, visibility, and distance to supplies and charting. Few units met the AHRQ designated design elements studied. Conclusions A wide gap exists between desirable characteristics in ICUs and medical-surgical units. Future research is needed to explore operationalization of unit design elements as risk adjustments, how design elements contribute to patient outcomes, and how design elements influence one another. Clinical Relevance There is room for improvement on almost every design variable, particularly on medical-surgical units. Future planning should take into consideration the interaction of bed capacity and unit configuration. PMID:19335681
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1976-01-01
An iterative method for numerically solving the time independent Navier-Stokes equations for viscous compressible flows is presented. The method is based upon partial application of the Gauss-Seidel principle in block form to the systems of nonlinear algebraic equations which arise in construction of finite element (Galerkin) models approximating solutions of fluid dynamic problems. The C0-cubic element on triangles is employed for function approximation. Computational results for a free shear flow at Re = 1,000 indicate significant achievement of economy in iterative convergence rate over finite element and finite difference models which employ the customary time dependent equations and asymptotic time marching procedure to steady solution. Numerical results are in excellent agreement with those obtained for the same test problem employing time marching finite element and finite difference solution techniques.
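A minimal sketch of the (block) Gauss-Seidel principle referenced above, applied here to a generic symmetric positive definite linear system rather than the Galerkin flow equations:

```python
# A minimal sketch of block Gauss-Seidel iteration for A x = b, illustrating the
# principle the abstract applies (in block form) to its nonlinear FE equations.
# The matrix here is a generic SPD example, not a flow model; Gauss-Seidel is
# guaranteed to converge for SPD systems.
import numpy as np

def block_gauss_seidel(A, b, block_size, sweeps=200, tol=1e-10):
    n = len(b)
    x = np.zeros(n)
    blocks = [slice(i, min(i + block_size, n)) for i in range(0, n, block_size)]
    for _ in range(sweeps):
        for blk in blocks:
            # Solve this block's equations using the latest values of all other blocks.
            r = b[blk] - A[blk, :] @ x + A[blk, blk] @ x[blk]
            x[blk] = np.linalg.solve(A[blk, blk], r)
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            break
    return x

rng = np.random.default_rng(1)
G = rng.standard_normal((12, 12))
A = G @ G.T + 12 * np.eye(12)             # SPD test matrix
b = rng.standard_normal(12)
x = block_gauss_seidel(A, b, block_size=3)
print(np.linalg.norm(A @ x - b))
```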
Design sensitivity analysis of boundary element substructures
NASA Technical Reports Server (NTRS)
Kane, James H.; Saigal, Sunil; Gallagher, Richard H.
1989-01-01
The ability to reduce or condense a three-dimensional model exactly, and then iterate on this reduced-size model representing the parts of the design that are allowed to change in an optimization loop, is discussed. The discussion presents the results obtained from an ongoing research effort to exploit the concept of substructuring within the structural shape optimization context using a Boundary Element Analysis (BEA) formulation. The first part contains a formulation for the exact condensation of portions of the overall boundary element model designated as substructures. The use of reduced boundary element models in shape optimization requires that structural sensitivity analysis can be performed. A reduced sensitivity analysis formulation is then presented that allows for the calculation of structural response sensitivities of both the substructured (reduced) and unsubstructured parts of the model. It is shown that this approach produces significant computational economy in the design sensitivity analysis and reanalysis process by facilitating the block triangular factorization and forward reduction and backward substitution of smaller matrices. The implementation of this formulation is discussed and timings and accuracies of representative test cases are presented.
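The algebraic core of the condensation step can be sketched with a Schur complement on a generic symmetric system; the boundary element matrices of the paper are not symmetric in general, so this is only an illustration of the reduce-then-iterate idea.

```python
# A minimal sketch of exact condensation (Schur complement) of the fixed part of
# a model onto the retained "design" unknowns, the algebraic idea behind the
# substructuring described in the abstract (shown for a generic symmetric system
# rather than a boundary element formulation).
import numpy as np

rng = np.random.default_rng(2)
n_keep, n_drop = 4, 8
G = rng.standard_normal((12, 12))
K = G @ G.T + 12 * np.eye(12)           # SPD test matrix
f = rng.standard_normal(12)

keep = slice(0, n_keep)                 # unknowns on the changing (design) region
drop = slice(n_keep, n_keep + n_drop)   # unknowns to condense out

Kkk, Kkd, Kdk, Kdd = K[keep, keep], K[keep, drop], K[drop, keep], K[drop, drop]
K_red = Kkk - Kkd @ np.linalg.solve(Kdd, Kdk)            # condensed stiffness
f_red = f[keep] - Kkd @ np.linalg.solve(Kdd, f[drop])    # condensed load

x_keep = np.linalg.solve(K_red, f_red)                   # iterate on this small system
x_drop = np.linalg.solve(Kdd, f[drop] - Kdk @ x_keep)    # recover condensed DOFs

x_full = np.linalg.solve(K, f)                           # check against the full solve
print(np.allclose(np.concatenate([x_keep, x_drop]), x_full))
```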
3D unstructured-mesh radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morel, J.
1997-12-31
Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations. ATTILA is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation, including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: Sn (discrete-ordinates), Pn (spherical harmonics), and SPn (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard Sn discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.
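A caricature of within-group source iteration, reduced to an infinite homogeneous one-group medium, shows why the scattering ratio governs convergence and why diffusion-synthetic acceleration matters as it approaches one; this is an illustration only, not any of the three codes.

```python
# A minimal sketch of within-group source iteration for the simplest possible
# case: an infinite homogeneous medium, one energy group, isotropic scattering.
# The exact solution is q / (sigma_t - sigma_s); the iteration error shrinks by
# the scattering ratio c = sigma_s / sigma_t each sweep, which is why
# diffusion-synthetic acceleration (mentioned in the abstract) is needed as c -> 1.
sigma_t, sigma_s, q = 1.0, 0.9, 1.0
phi_exact = q / (sigma_t - sigma_s)

phi = 0.0
for k in range(1, 1001):
    phi_new = (sigma_s * phi + q) / sigma_t   # the "transport sweep" collapses to this
    if abs(phi_new - phi) < 1e-10 * abs(phi_new):
        break
    phi = phi_new

print(f"converged in {k} iterations, phi = {phi_new:.6f}, exact = {phi_exact:.6f}")
```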
Networking Theories by Iterative Unpacking
ERIC Educational Resources Information Center
Koichu, Boris
2014-01-01
An iterative unpacking strategy consists of sequencing empirically-based theoretical developments so that at each step of theorizing one theory serves as an overarching conceptual framework, in which another theory, either existing or emerging, is embedded in order to elaborate on the chosen element(s) of the overarching theory. The strategy is…
Optimization of multi-element airfoils for maximum lift
NASA Technical Reports Server (NTRS)
Olsen, L. E.
1979-01-01
Two theoretical methods are presented for optimizing multi-element airfoils to obtain maximum lift. The analyses assume that the shapes of the various high lift elements are fixed. The objective of the design procedures is then to determine the optimum location and/or deflection of the leading and trailing edge devices. The first analysis determines the optimum horizontal and vertical location and the deflection of a leading edge slat. The structure of the flow field is calculated by iteratively coupling potential flow and boundary layer analysis. This design procedure does not require that flow separation effects be modeled. The second analysis determines the slat and flap deflection required to maximize the lift of a three element airfoil. This approach requires that the effects of flow separation from one or more of the airfoil elements be taken into account. The theoretical results are in good agreement with results of a wind tunnel test used to corroborate the predicted optimum slat and flap positions.
High-performance equation solvers and their impact on finite element analysis
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. Dale, Jr.
1990-01-01
The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.
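A minimal sketch of the second class of solvers mentioned, conjugate gradients with a diagonal (Jacobi) preconditioner, on a stand-in SPD matrix; the production solvers' storage formats and vectorization are not represented.

```python
# A minimal sketch of conjugate gradients with a diagonal (Jacobi) preconditioner,
# the first of the two preconditioning strategies mentioned in the abstract
# (a sparse incomplete-factorization preconditioner would replace apply_M below).
import numpy as np

def pcg(A, b, apply_M, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_M(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = apply_M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(3)
G = rng.standard_normal((50, 50))
A = G @ G.T + 50 * np.eye(50)          # SPD stand-in for a stiffness matrix
b = rng.standard_normal(50)
diag_inv = 1.0 / np.diag(A)
x = pcg(A, b, lambda r: diag_inv * r)  # Jacobi preconditioner: M^-1 = diag(A)^-1
print(np.linalg.norm(A @ x - b))
```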
High-performance equation solvers and their impact on finite element analysis
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. D., Jr.
1992-01-01
The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.
DIII-D accomplishments and plans in support of fusion next steps
Buttery, R. J; Eidietis, N.; Holcomb, C.; ...
2013-06-01
DIII-D is using its flexibility and diagnostics to address the critical science required to enable next-step fusion devices. Operating scenarios for ITER have been adapted to low torque and are now being optimized for transport. Three ELM mitigation scenarios have been developed to near-ITER parameters. New control techniques are managing the most challenging plasma instabilities. Disruption mitigation tools show promising dissipation strategies for runaway electrons and heat load. An off-axis neutral beam upgrade has enabled sustainment of high-βN-capable steady-state regimes. Divertor research is identifying the challenge, physics and candidate solutions for handling the hot plasma exhaust, with notable progress in heat flux reduction using the snowflake configuration. Our work is helping optimize design choices and prepare the scientific tools for operation in ITER, and resolve key elements of the plasma configuration and divertor solution for an FNSF.
ERIC Educational Resources Information Center
Ahrens, Fred; Mistry, Rajendra
2005-01-01
In product engineering there often arise design analysis problems for which a commercial software package is either unavailable or cost prohibitive. Further, these calculations often require successive iterations that can be time intensive when performed by hand, thus development of a software application is indicated. This case relates to the…
ERIC Educational Resources Information Center
Lowrie, Tom; Diezmann, Carmel M.; Logan, Tracy
2012-01-01
Graphical tasks have become a prominent aspect of mathematics assessment. From a conceptual stance, the purpose of this study was to better understand the composition of graphical tasks commonly used to assess students' mathematics understandings. Through an iterative design, the investigation described the sense making of 11-12-year-olds as they…
Parallel iterative solution for h and p approximations of the shallow water equations
Barragy, E.J.; Walters, R.A.
1998-01-01
A p finite element scheme and parallel iterative solver are introduced for a modified form of the shallow water equations. The governing equations are the three-dimensional shallow water equations. After a harmonic decomposition in time and rearrangement, the resulting equations are a complex Helmholtz problem for surface elevation, and a complex momentum equation for the horizontal velocity. Both equations are nonlinear and the resulting system is solved using Picard iteration combined with a preconditioned biconjugate gradient (PBCG) method for the linearized subproblems. A subdomain-based parallel preconditioner is developed which uses incomplete LU factorization with thresholding (ILUT) methods within subdomains, overlapping ILUT factorizations for subdomain boundaries and under-relaxed iteration for the resulting block system. The method builds on techniques successfully applied to linear elements by introducing ordering and condensation techniques to handle uniform p refinement. The combined methods show good performance for a range of p (element order), h (element size), and N (number of processors). Performance and scalability results are presented for a field-scale problem where up to 512 processors are used.
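A single-domain sketch of this solution strategy follows: Picard linearization of a small nonlinear diffusion problem, with each linear subproblem solved by an ILU-preconditioned Krylov method (scipy's spilu standing in for the subdomain ILUT preconditioner); the parallel, overlapping-subdomain machinery is not represented.

```python
# A minimal single-domain sketch of the strategy in the abstract: Picard
# (fixed-point) linearization of a nonlinear problem, with each linear
# subproblem solved by a Krylov method preconditioned by an incomplete LU
# factorization. The 1-D nonlinear diffusion model below is illustrative only.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
h = 1.0 / (n + 1)
f = np.ones(n)                         # source term
u = np.zeros(n)                        # initial Picard guess

def assemble(u_old):
    # -d/dx( k(u) du/dx ) = f with k(u) = 1 + u^2, Dirichlet u(0) = u(1) = 0
    k = 1.0 + u_old**2
    k_face = np.empty(n + 1)
    k_face[1:-1] = 0.5 * (k[:-1] + k[1:])
    k_face[0], k_face[-1] = k[0], k[-1]
    main = (k_face[:-1] + k_face[1:]) / h**2
    off = -k_face[1:-1] / h**2
    return sp.diags([off, main, off], [-1, 0, 1], format="csc")

for it in range(30):                   # Picard iterations
    A = assemble(u)
    ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)    # ILU with threshold
    M = spla.LinearOperator(A.shape, matvec=ilu.solve)
    u_new, info = spla.bicgstab(A, f, M=M)                # preconditioned Krylov solve
    if np.linalg.norm(u_new - u) < 1e-8 * np.linalg.norm(u_new):
        u = u_new
        break
    u = u_new

print(f"Picard iterations: {it + 1}, max u: {u.max():.4f}")
```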
FENDL: International reference nuclear data library for fusion applications
NASA Astrophysics Data System (ADS)
Pashchenko, A. B.; Wienke, H.; Ganesan, S.
1996-10-01
The IAEA Nuclear Data Section, in co-operation with several national nuclear data centres and research groups, has created the first version of an internationally available Fusion Evaluated Nuclear Data Library (FENDL-1). The FENDL library has been selected to serve as a comprehensive source of processed and tested nuclear data tailored to the requirements of the engineering design activity (EDA) of the ITER project and other fusion-related development projects. The present version of FENDL consists of the following sublibraries covering the necessary nuclear input for all physics and engineering aspects of the material development, design, operation and safety of the ITER project in its current EDA phase: FENDL/A-1.1: neutron activation cross-sections, selected from different available sources, for 636 nuclides; FENDL/D-1.0: nuclear decay data for 2900 nuclides in ENDF-6 format; FENDL/DS-1.0: neutron activation data for dosimetry by foil activation; FENDL/C-1.0: data for the fusion reactions D(d,n), D(d,p), T(d,n), T(t,2n), He-3(d,p) extracted from ENDF/B-6 and processed; FENDL/E-1.0: data for coupled neutron-photon transport calculations, including a data library for neutron interaction and photon production for 63 elements or isotopes, selected from ENDF/B-6, JENDL-3, or BROND-2, and a photon-atom interaction data library for 34 elements. The benchmark validation of FENDL-1 as required by the customer, i.e. the ITER team, is considered to be a task of high priority in the coming months. The well tested and validated nuclear data libraries in processed form of FENDL-2 are expected to be ready by mid 1996 for use by the ITER team in the final phase of ITER EDA after extensive benchmarking and integral validation studies in the 1995-1996 period. The FENDL data files can be electronically transferred to users from the IAEA nuclear data section online system through INTERNET. A grand total of 54 (sub)directories with 845 files with a total size of about 2 million blocks or about 1 Gigabyte (1 block = 512 bytes) of numerical data is currently available on-line.
NASA Astrophysics Data System (ADS)
Yang, C.; Zheng, W.; Zhang, M.; Yuan, T.; Zhuang, G.; Pan, Y.
2016-06-01
Real-time measurement and control of the plasma are critical for advanced tokamak operation and require high-speed real-time data acquisition and processing. ITER has designed the Fast Plant System Controllers (FPSC) for these purposes. At the J-TEXT tokamak, a real-time data acquisition and processing framework has been designed and implemented using standard ITER FPSC technologies. The main hardware components of this framework are an Industrial Personal Computer (IPC) with a real-time system and FlexRIO devices based on FPGA. With FlexRIO devices, data can be processed by the FPGA in real-time before they are passed to the CPU. The software elements are based on a real-time framework which runs under Red Hat Enterprise Linux MRG-R and uses the Experimental Physics and Industrial Control System (EPICS) for monitoring and configuration, which makes the framework conform to ITER FPSC standard technology. With this framework, any kind of data acquisition and processing FlexRIO FPGA program can be configured with an FPSC. An application using the framework has been implemented for the polarimeter-interferometer diagnostic system on J-TEXT. The application is able to extract phase-shift information from the intermediate frequency signal produced by the polarimeter-interferometer diagnostic system and calculate the plasma density profile in real-time. Different algorithm implementations on the FlexRIO FPGA are compared in the paper.
NASA Technical Reports Server (NTRS)
Meyer, Marit Elisabeth
2015-01-01
A thermal precipitator (TP) was designed to collect smoke aerosol particles for microscopic analysis in fire characterization research. Information on particle morphology, size and agglomerate structure obtained from these tests supplements additional aerosol data collected. Modeling of the thermal precipitator throughout the design process was performed with the COMSOL Multiphysics finite element software package, including the Eulerian flow field and thermal gradients in the fluid. The COMSOL Particle Tracing Module was subsequently used to determine particle deposition. Modeling provided optimized design parameters such as geometry, flow rate and temperatures. The thermal precipitator was built and testing verified the performance of the first iteration of the device. The thermal precipitator was successfully operated and provided quality particle samples for microscopic analysis, which furthered the body of knowledge on smoke particulates. This information is a key element of smoke characterization and will be useful for future spacecraft fire detection research.
NASA Astrophysics Data System (ADS)
Song, Yang; Liu, Zhigang; Wang, Hongrui; Lu, Xiaobing; Zhang, Jing
2015-10-01
Due to the intrinsic nonlinear characteristics and complex structure of the high-speed catenary system, a modelling method is proposed based on the analytical expressions of nonlinear cable and truss elements. The calculation procedure for solving the initial equilibrium state is proposed based on the Newton-Raphson iteration method. The deformed configuration of the catenary system as well as the initial length of each wire can be calculated. Its accuracy and validity in computing the initial equilibrium state are verified by comparison with the separate model method, the absolute nodal coordinate formulation and other methods in the previous literature. Then, the proposed model is combined with a lumped pantograph model and a dynamic simulation procedure is proposed. The accuracy is guaranteed by the multiple iterative calculations in each time step. The dynamic performance of the proposed model is validated by comparison with EN 50318, the results of finite element method software and a SIEMENS simulation report, respectively. Finally, the influence of the catenary design parameters (such as the reserved sag and pre-tension) on the dynamic performance is preliminarily analysed using the proposed model.
Wind turbine generator application places unique demands on tower design and materials
NASA Technical Reports Server (NTRS)
Kita, J. P.
1978-01-01
The most relevant contractual tower design requirements and goal for the Mod-1 tower are related to steel truss tower construction, cost-effective state-of-the-art design, a design life of 30 years, and maximum wind conditions of 120 mph at 30 feet elevation. The Mod-1 tower design approach was an iterative process. Static design loads were calculated and member sizes and overall geometry chosen with the use of finite element computer techniques. Initial tower dynamic characteristics were then combined with the dynamic properties of the other wind turbine components, and a series of complex dynamic computer programs were run to establish a dynamic load set and then a second tower design.
NASA Technical Reports Server (NTRS)
West, Jeff; Westra, Doug; Lin, Jeff; Tucker, Kevin
2006-01-01
A robust rocket engine combustor design and development process must include tools which can accurately predict the multi-dimensional thermal environments imposed on solid surfaces by the hot combustion products. Currently, empirical methods used in the design process are typically one dimensional and do not adequately account for the heat flux rise rate in the near-injector region of the chamber. Computational Fluid Dynamics holds promise to meet the design tool requirement, but requires accuracy quantification, or validation, before it can be confidently applied in the design process. This effort presents the beginning of such a validation process for the Loci-CHEM CFD code. The model problem examined here is a gaseous oxygen (GO2)/gaseous hydrogen (GH2) shear coaxial single element injector operating at a chamber pressure of 5.42 MPa. The GO2/GH2 propellant combination in this geometry represents one of the simplest rocket model problems and is thus foundational to subsequent validation efforts for more complex injectors. Multiple steady state solutions have been produced with Loci-CHEM employing different hybrid grids and two-equation turbulence models. Iterative convergence for each solution is demonstrated via mass conservation, flow variable monitoring at discrete flow field locations as a function of solution iteration and overall residual performance. A baseline hybrid grid was used and then locally refined to demonstrate grid convergence. Solutions were obtained with three variations of the k-omega turbulence model.
NASA Technical Reports Server (NTRS)
West, Jeff; Westra, Doug; Lin, Jeff; Tucker, Kevin
2006-01-01
A robust rocket engine combustor design and development process must include tools which can accurately predict the multi-dimensional thermal environments imposed on solid surfaces by the hot combustion products. Currently, empirical methods used in the design process are typically one dimensional and do not adequately account for the heat flux rise rate in the near-injector region of the chamber. Computational Fluid Dynamics holds promise to meet the design tool requirement, but requires accuracy quantification, or validation, before it can be confidently applied in the design process. This effort presents the beginning of such a validation process for the Loci-CHEM CFD code. The model problem examined here is a gaseous oxygen (GO2)/gaseous hydrogen (GH2) shear coaxial single element injector operating at a chamber pressure of 5.42 MPa. The GO2/GH2 propellant combination in this geometry represents one of the simplest rocket model problems and is thus foundational to subsequent validation efforts for more complex injectors. Multiple steady state solutions have been produced with Loci-CHEM employing different hybrid grids and two-equation turbulence models. Iterative convergence for each solution is demonstrated via mass conservation, flow variable monitoring at discrete flow field locations as a function of solution iteration and overall residual performance. A baseline hybrid grid was used and then locally refined to demonstrate grid convergence. Solutions were also obtained with three variations of the k-omega turbulence model.
NASA Technical Reports Server (NTRS)
Nakazawa, Shohei
1991-01-01
Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel concept of the mixed iterative solution technique for the efficient 3-D computations of turbine engine hot section components. The general framework of the variational formulation and solution algorithms, derived from the mixed three-field Hu-Washizu principle, is discussed. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. The algorithmic description of the mixed iterative method includes variations for quasi-static, transient dynamic and buckling analyses. The global-local analysis procedure referred to as subelement refinement is developed in the framework of the mixed iterative solution, and its details are presented. The numerically integrated isoparametric elements implemented in this framework are discussed. Methods to filter certain parts of the strain and to project the element-discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for the linear and nonlinear equations included in the MHOST program.
NASA Technical Reports Server (NTRS)
Joncas, K. P.
1972-01-01
Concepts and techniques for identifying and simulating both the steady state and dynamic characteristics of electrical loads for use during integrated system test and evaluation are discussed. The investigations showed that it is feasible to design and develop interrogation and simulation equipment to perform the desired functions. During the evaluation, actual spacecraft loads were interrogated by stimulating the loads with their normal input voltage and measuring the resultant voltage and current time histories. Elements of the circuits were optimized by an iterative process of selecting element values and comparing the time-domain response of the model with those obtained from the real equipment during interrogation.
NASA Technical Reports Server (NTRS)
Gartling, D. K.; Roache, P. J.
1978-01-01
The efficiency characteristics of finite element and finite difference approximations for the steady-state solution of the Navier-Stokes equations are examined. The finite element method discussed is a standard Galerkin formulation of the incompressible, steady-state Navier-Stokes equations. The finite difference formulation uses simple centered differences that are O(delta x-squared). Operation counts indicate that a rapidly converging Newton-Raphson-Kantorovitch iteration scheme is generally preferable over a Picard method. A split NOS Picard iterative algorithm for the finite difference method was most efficient.
Analysis of truss, beam, frame, and membrane components. [composite structures
NASA Technical Reports Server (NTRS)
Knoell, A. C.; Robinson, E. Y.
1975-01-01
Truss components are considered, taking into account composite truss structures, truss analysis, column members, and truss joints. Beam components are discussed, giving attention to composite beams, laminated beams, and sandwich beams. Composite frame components and composite membrane components are examined. A description is given of examples of flat membrane components and examples of curved membrane elements. It is pointed out that composite structural design and analysis is a highly interactive, iterative procedure which does not lend itself readily to characterization by design or analysis function only.
NASA Astrophysics Data System (ADS)
Escourbiac, F.; Richou, M.; Guigon, R.; Constans, S.; Durocher, A.; Merola, M.; Schlosser, J.; Riccardi, B.; Grosman, A.
2009-12-01
Experience has shown that a critical part of the high-heat flux (HHF) plasma-facing component (PFC) is the armour to heat sink bond. An experimental study was performed in order to define acceptance criteria with regards to thermal hydraulics and fatigue performance of the International Thermonuclear Experimental Reactor (ITER) divertor PFCs. This study, which includes the manufacturing of samples with calibrated artificial defects relevant to the divertor design, is reported in this paper. In particular, it was concluded that defects detectable with non-destructive examination (NDE) techniques appeared to be acceptable during HHF experiments relevant to heat fluxes expected in the ITER divertor. On the basis of these results, a set of acceptance criteria was proposed and applied to the European vertical target medium-size qualification prototype: 98% of the inspected carbon fibre composite (CFC) monoblocks and 100% of tungsten (W) monoblock and flat tiles elements (i.e. 80% of the full units) were declared acceptable.
AERODYNAMIC AND BLADING DESIGN OF MULTISTAGE AXIAL FLOW COMPRESSORS
NASA Technical Reports Server (NTRS)
Crouse, J. E.
1994-01-01
The axial-flow compressor is used for aircraft engines because it has distinct configuration and performance advantages over other compressor types. However, good potential performance is not easily obtained. The designer must be able to model the actual flows well enough to adequately predict aerodynamic performance. This computer program has been developed for computing the aerodynamic design of a multistage axial-flow compressor and, if desired, the associated blading geometry input for internal flow analysis. The aerodynamic solution gives velocity diagrams on selected streamlines of revolution at the blade row edges. The program yields aerodynamic and blading design results that can be directly used by flow and mechanical analysis codes. Two such codes are TSONIC, a blade-to-blade channel flow analysis code (COSMIC program LEW-10977), and MERIDL, a more detailed hub-to-shroud flow analysis code (COSMIC program LEW-12966). The aerodynamic and blading design program can reduce the time and effort required to obtain acceptable multistage axial-flow compressor configurations by generating good initial solutions and by being compatible with available analysis codes. The aerodynamic solution assumes steady, axisymmetric flow so that the problem is reduced to solving the two-dimensional flow field in the meridional plane. The streamline curvature method is used for the iterative aerodynamic solution at stations outside of the blade rows. If a blade design is desired, the blade elements are defined and stacked within the aerodynamic solution iteration. The blade element inlet and outlet angles are established by empirical incidence and deviation angles to the relative flow angles of the velocity diagrams. The blade element centerline is composed of two segments tangentially joined at a transition point. The local blade angle variation of each element can be specified as a fourth-degree polynomial function of path distance. Blade element thickness can also be specified with fourth-degree polynomial functions of path distance from the maximum thickness point. Input to the aerodynamic and blading design program includes the annulus profile, the overall compressor mass flow, the pressure ratio, and the rotative speed. A number of input parameters are also used to specify and control the blade row aerodynamics and geometry. The output from the aerodynamic solution has an overall blade row and compressor performance summary followed by blade element parameters for the individual blade rows. If desired, the blade coordinates in the streamwise direction for internal flow analysis codes and the coordinates on plane sections through blades for fabrication drawings may be stored and printed. The aerodynamic and blading design program for multistage axial-flow compressors is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 series computer with a central memory requirement of approximately 470K of 8 bit bytes. This program was developed in 1981.
Optimization of an Offset Receiver Optics for Radio Telescopes
NASA Astrophysics Data System (ADS)
Yeap, Kim Ho; Tham, Choy Yoong
2018-01-01
The latest generation of Cassegrain radio astronomy antennas is designed for multiple frequency bands, with receivers for individual bands offset from the antenna axis. The offset feed arrangement typically has two focusing elements in the form of ellipsoidal mirrors in the optical path between the feed horn and the antenna focus. This arrangement aligns the beam from the offset feed horn to illuminate the subreflector. The additional focusing elements increase the number of design variables, namely the distances between the horn aperture and the first mirror and between the two mirrors, and their focal lengths. There is a huge number of possible combinations of these four variables that the optics system can take on. The design aim is to seek the combination that gives the optimum antenna efficiency, not only at the centre frequency of the particular band but also across its bandwidth. Picking the optimum combination of the variables requires working through, by computational means, a continuum of variable values at different frequencies which will fit the optics system within the allocated physical space. Physical optics (PO) is a common technique used in optics design. However, due to the huge number of computations involved in the repeated iterations, the use of PO alone is not feasible. We present a procedure based on using multimode Gaussian optics to pick the optimum design and using PO for final verification of the system performance. The best antenna efficiency is achieved when the beam illuminating the subreflector is truncated with the optimum edge taper. The optimization procedure uses the beam's edge taper at the subreflector as the iteration target. The band 6 receiver optics design for the Atacama Large Millimetre Array (ALMA) antenna is used to illustrate the optimization procedure.
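The edge-taper target can be made concrete with the standard fundamental-mode Gaussian-beam relations; the radii below are illustrative assumptions, not the ALMA band 6 values.

```python
# A worked sketch of the standard Gaussian-beam edge-taper relations used as the
# iteration target in quasi-optical designs like the one in the abstract
# (values are illustrative; a real design would evaluate these across the band).
import numpy as np

def edge_taper_db(r_edge, w):
    """Edge taper (dB) of a fundamental Gaussian beam of radius w truncated at r_edge."""
    return 8.686 * (r_edge / w) ** 2          # from I(r) = I0 * exp(-2 r^2 / w^2)

def fractional_power(r_edge, w):
    """Fraction of the beam power intercepted inside radius r_edge (circular stop)."""
    return 1.0 - np.exp(-2.0 * (r_edge / w) ** 2)

r_sub = 0.375                                  # subreflector radius [m], illustrative
for w in (0.16, 0.18, 0.20):                   # candidate beam radii at the subreflector
    te = edge_taper_db(r_sub, w)
    print(f"w = {w:.2f} m: edge taper = {te:4.1f} dB, "
          f"power intercepted = {100 * fractional_power(r_sub, w):6.2f} %")
```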
McLeod, Bryce D; Sutherland, Kevin S; Martinez, Ruben G; Conroy, Maureen A; Snyder, Patricia A; Southam-Gerow, Michael A
2017-02-01
Educators are increasingly being encouraged to implement evidence-based interventions and practices to address the social, emotional, and behavioral needs of young children who exhibit problem behavior in early childhood settings. Given the nature of social-emotional learning during the early childhood years and the lack of a common set of core evidence-based practices within the early childhood literature, selection of instructional practices that foster positive social, emotional, and behavioral outcomes for children in early childhood settings can be difficult. The purpose of this paper is to report findings from a study designed to identify common practice elements found in comprehensive intervention models (i.e., manualized interventions that include a number of components) or discrete practices (i.e., a specific behavior or action) designed to target social, emotional, and behavioral learning of young children who exhibit problem behavior. We conducted a systematic review of early childhood classroom interventions that had been evaluated in randomized group designs, quasi-experimental designs, and single-case experimental designs. A total of 49 published articles were identified, and an iterative process was used to identify common practice elements. The practice elements were subsequently reviewed by experts in social-emotional and behavioral interventions for young children. Twenty-four practice elements were identified and classified into content (the goal or general principle that guides a practice element) and delivery (the way in which a teacher provides instruction to the child) categories. We discuss implications that the identification of these practice elements found in the early childhood literature has for efforts to implement models and practices.
Calculating massive 3-loop graphs for operator matrix elements by the method of hyperlogarithms
NASA Astrophysics Data System (ADS)
Ablinger, Jakob; Blümlein, Johannes; Raab, Clemens; Schneider, Carsten; Wißbrock, Fabian
2014-08-01
We calculate convergent 3-loop Feynman diagrams containing a single massive loop equipped with twist τ=2 local operator insertions corresponding to spin N. They contribute to the massive operator matrix elements in QCD describing the massive Wilson coefficients for deep-inelastic scattering at large virtualities. Diagrams of this kind can be computed using an extended version of the method of hyperlogarithms, originally designed for massless Feynman diagrams without operators. The method is applied to Benz- and V-type graphs, belonging to the genuine 3-loop topologies. In the case of the V-type graphs with five massive propagators, new types of nested sums and iterated integrals emerge. The sums are given in terms of finite binomially and inverse-binomially weighted generalized cyclotomic sums, while the 1-dimensionally iterated integrals are based on a set of ~30 square-root valued letters. We also derive the asymptotic representations of the nested sums and present the solution for N ∈ ℂ. Integrals with a power-like divergence in N-space, ∝ a^N with a ∈ ℝ, a > 1, emerge for large values of N. They still possess a representation in x-space, which is given in terms of root-valued iterated integrals in the present case. The method of hyperlogarithms is also used to calculate higher moments for crossed box graphs with different operator insertions.
NASA Technical Reports Server (NTRS)
Hrinda, Glenn A.; Nguyen, Duc T.
2008-01-01
A technique for the optimization of stability constrained geometrically nonlinear shallow trusses with snap through behavior is demonstrated using the arc length method and a strain energy density approach within a discrete finite element formulation. The optimization method uses an iterative scheme that evaluates the design variables' performance and then updates them according to a recursive formula controlled by the arc length method. A minimum weight design is achieved when a uniform nonlinear strain energy density is found in all members. This minimal condition places the design load just below the critical limit load causing snap through of the structure. The optimization scheme is programmed into a nonlinear finite element algorithm to find the large strain energy at critical limit loads. Examples of highly nonlinear trusses found in literature are presented to verify the method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tresemer, K. R.
2015-07-01
ITER is an international project under construction in France that will demonstrate nuclear fusion at a power plant-relevant scale. The Toroidal Interferometer and Polarimeter (TIP) Diagnostic will be used to measure the plasma electron line density along 5 laser-beam chords. This line-averaged density measurement will be input to the ITER feedback-control system. The TIP is considered the primary diagnostic for these measurements, which are needed for basic ITER machine control. Therefore, system reliability & accuracy is a critical element in TIP's design. There are two major challenges to the reliability of the TIP system. First is the survivability and performance of in-vessel optics and second is maintaining optical alignment over long optical paths and large vessel movements. Both of these issues greatly depend on minimizing the overall distortion due to neutron & gamma heating of the Corner Cube Retroreflectors (CCRs). These are small optical mirrors embedded in five first wall locations around the vacuum vessel, corresponding to certain plasma tangency radii. During the development of the design and location of these CCRs, several iterations of neutronics analyses were performed to determine and minimize the total distortion due to nuclear heating of the CCRs. The CCR corresponding to TIP Channel 2 was chosen for analysis as a good middle-road case, being an average distance from the plasma (of the five channels) and having moderate neutron shielding from its blanket shield housing. Results show that Channel 2 meets the requirements of the TIP Diagnostic, but barely. These results suggest other CCRs might be at risk of exceeding thermal deformation limits due to nuclear heating.
Generative Representations for Automated Design of Robots
NASA Technical Reports Server (NTRS)
Homby, Gregory S.; Lipson, Hod; Pollack, Jordan B.
2007-01-01
A method of automated design of complex, modular robots involves an evolutionary process in which generative representations of designs are used. The term generative representations as used here signifies, loosely, representations that consist of or include algorithms, computer programs, and the like, wherein encoded designs can reuse elements of their encoding and thereby evolve toward greater complexity. Automated design of robots through synthetic evolutionary processes has already been demonstrated, but it is not clear whether genetically inspired search algorithms can yield designs that are sufficiently complex for practical engineering. The ultimate success of such algorithms as tools for automation of design depends on the scaling properties of representations of designs. A nongenerative representation (one in which each element of the encoded design is used at most once in translating to the design) scales linearly with the number of elements. Search algorithms that use nongenerative representations quickly become intractable (search times vary approximately exponentially with numbers of design elements), and thus are not amenable to scaling to complex designs. Generative representations are compact representations and were devised as means to circumvent the above-mentioned fundamental restriction on scalability. In the present method, a robot is defined by a compact programmatic form (its generative representation) and the evolutionary variation takes place on this form. The evolutionary process is an iterative one, wherein each cycle consists of the following steps: 1. Generative representations are generated in an evolutionary subprocess. 2. Each generative representation is a program that, when compiled, produces an assembly procedure. 3. In a computational simulation, a constructor executes an assembly procedure to generate a robot. 4. A physical-simulation program tests the performance of a simulated constructed robot, evaluating the performance according to a fitness criterion to yield a figure of merit that is fed back into the evolutionary subprocess of the next iteration. In comparison with prior approaches to automated evolutionary design of robots, the use of generative representations offers two advantages: First, a generative representation enables the reuse of components in regular and hierarchical ways and thereby serves as a systematic means of creating more complex modules out of simpler ones. Second, the evolved generative representation may capture intrinsic properties of the design problem, so that variations in the representations move through the design space more effectively than do equivalent variations in a nongenerative representation. This method has been demonstrated by using it to design some robots that move, variously, by walking, rolling, or sliding. Some of the robots were built (see figure). Although these robots are very simple, in comparison with robots designed by humans, their structures are more regular, modular, hierarchical, and complex than are those of evolved designs of comparable functionality synthesized by use of nongenerative representations.
NASA Astrophysics Data System (ADS)
McDonald, Geoff L.; Zhao, Qing
2017-01-01
Minimum Entropy Deconvolution (MED) has been applied successfully to rotating machine fault detection from vibration data; however, this method has limitations. A convolution adjustment to the MED definition and solution is proposed in this paper to address the discontinuity at the start of the signal - in some cases causing spurious impulses to be erroneously deconvolved. A problem with the MED solution is that it is an iterative selection process, and will not necessarily design an optimal filter for the posed problem. Additionally, the problem goal in MED prefers to deconvolve a single impulse, while in rotating machine faults we expect one impulse-like vibration source per rotational period of the faulty element. Maximum Correlated Kurtosis Deconvolution was proposed to address some of these problems, and although it solves the target goal of multiple periodic impulses, it is still an iterative non-optimal solution to the posed problem and only solves for a limited set of impulses in a row. Ideally, the problem goal should target an impulse train as the output goal, and should directly solve for the optimal filter in a non-iterative manner. To meet these goals, we propose a non-iterative deconvolution approach called Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA). MOMEDA poses a deconvolution problem with an infinite impulse train as the goal, and the optimal filter solution can be solved for directly. From experimental data on a gearbox with and without a gear tooth chip, we show that MOMEDA and its deconvolution spectra according to the period between the impulses can be used to detect faults and study the health of rotating machine elements effectively.
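A compact sketch of the direct (non-iterative) filter solve described above, on a synthetic fault signal: with a periodic impulse train as the target, the correlation-maximizing FIR filter is proportional to (X0 X0^T)^(-1) X0 t. Signal parameters are illustrative, and the exact normalization and spectrum construction of the published method are not reproduced here.

```python
# A minimal sketch of the non-iterative deconvolution idea in the abstract:
# choose a periodic impulse train t as the target and solve directly for the
# FIR filter f that maximizes correlation between the filtered output and t.
# The fault signal below is synthetic; parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(4)
N, period, L = 2000, 97, 60            # samples, assumed fault period, filter length

# Synthetic "bearing fault" signal: periodic impulses through a decaying resonance.
impulses = np.zeros(N)
impulses[::period] = 1.0
h = np.exp(-np.arange(80) / 8.0) * np.sin(2 * np.pi * 0.2 * np.arange(80))
x = np.convolve(impulses, h)[:N] + 0.2 * rng.standard_normal(N)

# Delayed-signal matrix X0 (L x (N-L+1)): row l holds x shifted by l samples.
K = N - L + 1
X0 = np.array([x[l:l + K] for l in range(L)])

# Target: an impulse train at the assumed fault period, over the output length K.
t = np.zeros(K)
t[::period] = 1.0

f = np.linalg.solve(X0 @ X0.T, X0 @ t)   # closed-form filter (up to a scale factor)
y = X0.T @ f                             # deconvolved output

print("kurtosis before/after:",
      round(float(((x - x.mean())**4).mean() / x.var()**2), 2),
      round(float(((y - y.mean())**4).mean() / y.var()**2), 2))
```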
Genetic Algorithm Design of a 3D Printed Heat Sink
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Tong; Ozpineci, Burak; Ayers, Curtis William
2016-01-01
In this paper, a genetic algorithm- (GA-) based approach is discussed for designing heat sinks based on total heat generation and dissipation for a pre-specified size and shape. This approach combines random iteration processes and genetic algorithms with finite element analysis (FEA) to design the optimized heat sink. With an approach that prefers 'survival of the fittest', a more powerful heat sink can be designed which can cool power electronics more efficiently. Some of the resulting designs can only be 3D printed due to their complexity. In addition to describing the methodology, this paper also includes comparisons of different cases to evaluate the performance of the newly designed heat sink compared to commercially available heat sinks.
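A toy version of the GA loop follows, with a cheap analytic score standing in for the finite element thermal analysis used to rank candidate heat sinks; the design variables (fin heights) and GA settings are illustrative assumptions.

```python
# A toy sketch of the GA loop described in the abstract, with an analytic
# fitness function standing in for the finite element thermal analysis that the
# actual workflow uses to score each candidate heat sink (fin heights here are
# purely illustrative design variables).
import numpy as np

rng = np.random.default_rng(5)
n_fins, pop_size, n_gen = 10, 40, 60

def fitness(fin_heights):
    # Placeholder: reward surface area, penalize mass and crowding of tall fins.
    area = fin_heights.sum()
    mass_penalty = 0.3 * (fin_heights**2).sum()
    crowding = 0.2 * np.abs(np.diff(fin_heights)).sum()
    return area - mass_penalty - crowding

pop = rng.uniform(0.0, 2.0, size=(pop_size, n_fins))      # fin heights in cm
for gen in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[: pop_size // 2]]                  # survival of the fittest
    children = []
    while len(children) < pop_size - len(parents):
        i, j = rng.integers(len(parents), size=2)
        cut = rng.integers(1, n_fins)                      # one-point crossover
        child = np.concatenate([parents[i][:cut], parents[j][cut:]])
        child += 0.05 * rng.standard_normal(n_fins)        # mutation
        children.append(np.clip(child, 0.0, 2.0))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best fin heights:", np.round(best, 2))
```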
Krylov subspace iterative methods for boundary element method based near-field acoustic holography.
Valdivia, Nicolas; Williams, Earl G
2005-02-01
The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resultant matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence," i.e., the optimal regularization solution is obtained after a few iterations. If the iteration is not stopped, the method converges to a solution that generally is totally corrupted by errors on the measurements. For these methods the number of iterations plays the role of the regularization parameter. We will focus our attention on the study of the regularizing properties of Krylov subspace methods like conjugate gradients, least squares QR and the recently proposed Hybrid method. A discussion and comparison of the available stopping rules will be included. A vibrating plate is considered as an example to validate our results.
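Semi-convergence can be demonstrated on a small discrete ill-posed problem with scipy's LSQR, using the iteration limit as the regularization parameter; the Gaussian smoothing kernel below stands in for the boundary-element NAH matrix, and the noise level is an assumption.

```python
# A minimal sketch of Krylov semi-convergence on a small discrete ill-posed
# problem (a smoothing kernel, standing in for the NAH boundary-element matrix):
# the LSQR error first drops, then grows again as noise is amplified, so the
# iteration count acts as the regularization parameter.
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(6)
n = 200
s = np.linspace(0, 1, n)
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * 0.03**2))   # ill-conditioned kernel
x_true = np.sin(2 * np.pi * s) + 0.5 * np.sin(5 * np.pi * s)
b_clean = A @ x_true
b = b_clean + 0.01 * np.linalg.norm(b_clean) / np.sqrt(n) * rng.standard_normal(n)

errors = []
for k in range(1, 41):                       # run LSQR to a fixed iteration count k
    xk = lsqr(A, b, iter_lim=k)[0]
    errors.append(np.linalg.norm(xk - x_true) / np.linalg.norm(x_true))

k_best = int(np.argmin(errors)) + 1
print(f"best relative error {errors[k_best - 1]:.3f} at iteration {k_best}; "
      f"error at iteration 40: {errors[-1]:.3f}")
```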
Application of a neural network to simulate analysis in an optimization process
NASA Technical Reports Server (NTRS)
Rogers, James L.; Lamarsh, William J., II
1992-01-01
A new experimental software package called NETS/PROSSS aimed at reducing the computing time required to solve a complex design problem is described. The software combines a neural network for simulating the analysis program with an optimization program. The neural network is applied to approximate results of a finite element analysis program to quickly obtain a near-optimal solution. Results of the NETS/PROSSS optimization process can also be used as an initial design in a normal optimization process and make it possible to converge to an optimum solution with significantly fewer iterations.
Wang, Hai-Yan; Liu, Cheng; Veetil, Suhas P; Pan, Xing-Chen; Zhu, Jian-Qiang
2014-01-27
Wavefront control is a significant parameter in inertial confinement fusion (ICF). The complex transmittance of the large optical elements often used in ICF is obtained by computing the phase difference of the illuminating and transmitted fields using the Ptychographical Iterative Engine (PIE). This can accurately and effectively measure the transmittance of large optical elements with irregular surface profiles, which are otherwise not measurable using commonly used interferometric techniques due to the lack of a standard reference plate. Experiments are done with a Continuous Phase Plate (CPP) to illustrate the feasibility of this method.
Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.
2016-01-01
Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method representing the baseline solver technology.
Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frinks, Neal T.
2016-01-01
Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method representing the baseline solver technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobromir Panayotov; Andrew Grief; Brad J. Merrill
'Fusion for Energy' (F4E) develops designs and implements the European Test Blanket Systems (TBS) in ITER - Helium-Cooled Lithium-Lead (HCLL) and Helium-Cooled Pebble-Bed (HCPB). Safety demonstration is an essential element for the integration of the TBS in ITER, and accident analyses are one of its critical segments. A systematic approach to the accident analyses has been acquired under the F4E contract on TBS safety analyses. F4E technical requirements and AMEC and INL efforts resulted in the development of a comprehensive methodology for fusion breeding blanket accident analyses. It addresses the specificity of the breeding blanket designs, materials and phenomena and at the same time is consistent with the methodology already applied to ITER accident analyses. The methodology consists of several phases. First, the reference scenarios are selected on the basis of FMEA studies. Second, in elaborating the accident analysis specifications, phenomena identification and ranking tables are used to identify the requirements to be met by the code(s) and TBS models. In this way the limitations of the codes are identified and possible solutions to be built into the models are proposed; these include, among others, the loose coupling of different codes or code versions in order to simulate multi-fluid flows and phenomena. The code selection and the issue of the accident analysis specifications conclude this second step. Next, the breeding blanket and ancillary system models are built. In this work, challenges met and solutions used in the development of both MELCOR and RELAP5 code models of the HCLL and HCPB TBSs will be shared. The developed models are then qualified by comparison with finite element analyses, by code-to-code comparison and by sensitivity studies. Finally, the qualified models are used for the execution of the accident analyses of specific scenarios. Where possible, the methodology phases will be illustrated in the paper by a limited number of tables and figures. Descriptions of each phase and its results in detail, as well as the methodology applications to the EU HCLL and HCPB TBSs, will be published in separate papers. The developed methodology is applicable to accident analyses of other TBSs to be tested in ITER as well as to DEMO breeding blankets.
Updating finite element dynamic models using an element-by-element sensitivity methodology
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Hemez, Francois M.
1993-01-01
A sensitivity-based methodology for improving the finite element model of a given structure using test modal data and a few sensors is presented. The proposed method searches for both the location and sources of the mass and stiffness errors and does not interfere with the theory behind the finite element model while correcting these errors. The updating algorithm is derived from the unconstrained minimization of the squared L2 norms of the modal dynamic residuals via an iterative two-step staggered procedure. At each iteration, the measured mode shapes are first expanded assuming that the model is error free, then the model parameters are corrected assuming that the expanded mode shapes are exact. The numerical algorithm is implemented in an element-by-element fashion and is capable of 'zooming' on the detected error locations. Several simulation examples which demonstrate the potential of the proposed methodology are discussed.
Blacker, Teddy D.
1994-01-01
An automatic quadrilateral surface discretization method and apparatus is provided for automatically discretizing a geometric region without decomposing the region. The automated quadrilateral surface discretization method and apparatus automatically generates a mesh of all quadrilateral elements which is particularly useful in finite element analysis. The generated mesh of all quadrilateral elements is boundary sensitive, orientation insensitive and has few irregular nodes on the boundary. A permanent boundary of the geometric region is input and rows are iteratively layered toward the interior of the geometric region. Also, an exterior permanent boundary and an interior permanent boundary for a geometric region may be input and the rows are iteratively layered inward from the exterior boundary in a first counter clockwise direction while the rows are iteratively layered from the interior permanent boundary toward the exterior of the region in a second clockwise direction. As a result, a high quality mesh for an arbitrary geometry may be generated with a technique that is robust and fast for complex geometric regions and extreme mesh gradations.
NASA Astrophysics Data System (ADS)
Zhang, Fei; Huang, Weizhang; Li, Xianping; Zhang, Shicheng
2018-03-01
A moving mesh finite element method is studied for the numerical solution of a phase-field model for brittle fracture. The moving mesh partial differential equation approach is employed to dynamically track crack propagation. Meanwhile, the decomposition of the strain tensor into tensile and compressive components is essential for the success of the phase-field modeling of brittle fracture but results in a non-smooth elastic energy and stronger nonlinearity in the governing equation. This makes the governing equation much more difficult to solve and, in particular, Newton's iteration often fails to converge. Three regularization methods are proposed to smooth out the decomposition of the strain tensor. Numerical examples of fracture propagation under quasi-static load demonstrate that all of the methods can effectively improve the convergence of Newton's iteration for relatively small values of the regularization parameter but without compromising the accuracy of the numerical solution. They also show that the moving mesh finite element method is able to adaptively concentrate the mesh elements around propagating cracks and handle multiple and complex crack systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boozer, Allen H., E-mail: ahb17@columbia.edu
2015-03-15
The plasma current in ITER cannot be allowed to transfer from thermal to relativistic electron carriers. The potential for damage is too great. Before the final design is chosen for the mitigation system to prevent such a transfer, it is important that the parameters that control the physics be understood. Equations that determine these parameters and their characteristic values are derived. The mitigation benefits of injecting impurities with the highest possible atomic number Z, and of slowing plasma cooling during halo current mitigation to ≳40 ms in ITER, are discussed. The highest possible Z increases the poloidal flux consumption required for each e-fold in the number of relativistic electrons and reduces the number of high energy seed electrons from which exponentiation builds. Slow cooling of the plasma during halo current mitigation also reduces the electron seed. Existing experiments could test physics elements required for mitigation but cannot carry out an integrated demonstration. ITER itself cannot carry out an integrated demonstration without excessive danger of damage unless the probability of successful mitigation is extremely high. The probability of success depends on the reliability of the theory. Equations required for a reliable Monte Carlo simulation are derived.
NASA Astrophysics Data System (ADS)
Humer, K.; Raff, S.; Prokopec, R.; Weber, H. W.
2008-03-01
A glass fiber reinforced plastic laminate, which consists of half-overlapped wrapped Kapton/R-glass-fiber reinforcing tapes vacuum-pressure impregnated in a cyanate ester/epoxy blend, is proposed as the insulation system for the ITER Toroidal Field coils. In order to assess its mechanical performance under the actual operating conditions, cryogenic (77 K) tensile and interlaminar shear tests were done after irradiation to the ITER design fluence of 1×10^22 m^-2 (E > 0.1 MeV). The data were then used for a Finite Element Method (FEM) stress analysis. We find that the mechanical strength and the fracture behavior as well as the stress distribution and the failure criteria are strongly influenced by the winding direction and the wrapping technique of the reinforcing tapes.
Ruffato, Gianluca; Rossi, Roberto; Massari, Michele; Mafakheri, Erfan; Capaldo, Pietro; Romanato, Filippo
2017-12-21
In this paper, we present the design, fabrication and optical characterization of computer-generated holograms (CGH) encoding information for light beams carrying orbital angular momentum (OAM). Through the use of a numerical code, based on an iterative Fourier transform algorithm, a phase-only diffractive optical element (PO-DOE) specifically designed for OAM illumination has been computed, fabricated and tested. In order to shape the incident beam into a helicoidal phase profile and generate light carrying phase singularities, a method based on transmission through high-order spiral phase plates (SPPs) has been used. The phase pattern of the designed holographic DOEs has been fabricated using high-resolution Electron-Beam Lithography (EBL) over glass substrates coated with a positive photoresist layer (polymethylmethacrylate). To the best of our knowledge, the present study is the first attempt, in a comprehensive work, to design, fabricate and characterize computer-generated holograms encoding information for structured light carrying OAM and phase singularities. These optical devices appear promising as high-security optical elements for anti-counterfeiting applications.
Finite element analysis as a design tool for thermoplastic vulcanizate glazing seals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gase, K.M.; Hudacek, L.L.; Pesevski, G.T.
1998-12-31
There are three materials that are commonly used in commercial glazing seals: EPDM, silicone and thermoplastic vulcanizates (TPVs). TPVs are a high performance class of thermoplastic elastomers (TPEs), where TPEs have elastomeric properties with thermoplastic processability. TPVs have emerged as materials well suited for use in glazing seals due to ease of processing, economics and part design flexibility. The part design and development process is critical to ensure that the chosen TPV provides economics, quality and function in demanding environments. In the design and development process, there is great value in utilizing dual durometer systems to capitalize on the benefits of soft and rigid materials. Computer-aided design tools, such as Finite Element Analysis (FEA), are effective in minimizing development time and predicting system performance. Examples of TPV glazing seals will illustrate the benefits of utilizing FEA to take full advantage of the material characteristics, which results in functional performance and quality while reducing development iterations. FEA will be performed on two glazing seal profiles to confirm optimum geometry.
Resizing procedure for optimum design of structures under combined mechanical and thermal loading
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Narayanaswami, R.
1976-01-01
An algorithm is reported for resizing structures subjected to combined thermal and mechanical loading. The algorithm is applicable to uniaxial stress elements (rods) and membrane biaxial stress members. Thermal Fully Stressed Design (TFSD) is based on the basic difference between mechanical and thermal stresses in their response to resizing. The TFSD technique is found to converge in fewer iterations than ordinary fully stressed design for problems where thermal stresses are comparable to the mechanical stresses. The improved convergence is demonstrated by example with a study of a simplified wing structure, built-up with rods and membranes and subjected to a combination of mechanical loads and a three dimensional temperature distribution.
Overview of International Thermonuclear Experimental Reactor (ITER) engineering design activities*
NASA Astrophysics Data System (ADS)
Shimomura, Y.
1994-05-01
The International Thermonuclear Experimental Reactor (ITER) [International Thermonuclear Experimental Reactor (ITER) (International Atomic Energy Agency, Vienna, 1988), ITER Documentation Series, No. 1] project is a multiphased project, presently proceeding under the auspices of the International Atomic Energy Agency according to the terms of a four-party agreement among the European Atomic Energy Community (EC), the Government of Japan (JA), the Government of the Russian Federation (RF), and the Government of the United States (US), ``the Parties.'' The ITER project is based on the tokamak, a Russian invention that has since been brought to a high level of development in all major fusion programs in the world. The objective of ITER is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. The ITER design is being developed by the Joint Central Team, with support from the Parties' four Home Teams. An overview of ITER design activities is presented.
NASA Technical Reports Server (NTRS)
Jandhyala, Vikram (Inventor); Chowdhury, Indranil (Inventor)
2011-01-01
An approach that efficiently solves for a desired parameter of a system or device that can include both electrically large fast multipole method (FMM) elements, and electrically small QR elements. The system or device is setup as an oct-tree structure that can include regions of both the FMM type and the QR type. An iterative solver is then used to determine a first matrix vector product for any electrically large elements, and a second matrix vector product for any electrically small elements that are included in the structure. These matrix vector products for the electrically large elements and the electrically small elements are combined, and a net delta for a combination of the matrix vector products is determined. The iteration continues until a net delta is obtained that is within predefined limits. The matrix vector products that were last obtained are used to solve for the desired parameter.
ITER Magnet Feeder: Design, Manufacturing and Integration
NASA Astrophysics Data System (ADS)
CHEN, Yonghua; ILIN, Y.; SU, M.; NICHOLAS, C.; BAUER, P.; JAROMIR, F.; LU, Kun; CHENG, Yong; SONG, Yuntao; LIU, Chen; HUANG, Xiongyi; ZHOU, Tingzhi; SHEN, Guang; WANG, Zhongwei; FENG, Hansheng; SHEN, Junsong
2015-03-01
The International Thermonuclear Experimental Reactor (ITER) feeder procurement is now well underway. The feeder design has been improved by the feeder teams at the ITER Organization (IO) and the Institute of Plasma Physics, Chinese Academy of Sciences (ASIPP) over the last 2 years, along with analyses and qualification activities. The feeder design is being progressively finalized. In addition, the preparation of qualification and manufacturing is well scheduled at ASIPP. This paper mainly presents the design, an overview of manufacturing, and the status of integration of the ITER magnet feeders. This work was supported by the National Special Support for R&D on Science and Technology for ITER (Ministry of Public Security of the People's Republic of China-MPS) (No. 2008GB102000).
Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian
2016-03-20
We show that with an appropriate combination of two optical simulation techniques (classical ray tracing and the finite-difference time-domain method), an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.
Control system design for flexible structures using data models
NASA Technical Reports Server (NTRS)
Irwin, R. Dennis; Frazier, W. Garth; Mitchell, Jerrel R.; Medina, Enrique A.; Bukley, Angelia P.
1993-01-01
The dynamics and control of flexible aerospace structures exercise many of the engineering disciplines. In recent years there has been considerable research into the development and tailoring of control system design techniques for these structures. This problem involves designing a control system for a multi-input, multi-output (MIMO) system that satisfies various performance criteria, such as vibration suppression, disturbance and noise rejection, attitude control and slewing control. Considerable progress has been made and demonstrated in control system design techniques for these structures. The key to designing control systems for these structures that meet stringent performance requirements is an accurate model. It has become apparent that theoretically derived and finite-element-generated models do not provide the needed accuracy; almost all successful demonstrations of control system design techniques have involved using test results for fine-tuning a model or for extracting a model using system ID techniques. This paper describes past and ongoing efforts at Ohio University and NASA MSFC to design controllers using 'data models.' The basic philosophy of this approach is to start with a stabilizing controller and frequency response data that describes the plant; then, iteratively vary the free parameters of the controller so that performance measures become closer to satisfying design specifications. The frequency response data can be either experimentally derived or analytically derived. One 'design-with-data' algorithm presented in this paper is called the Compensator Improvement Program (CIP). The current CIP designs controllers for MIMO systems so that classical gain, phase, and attenuation margins are achieved. The centerpiece of the CIP algorithm is the constraint improvement technique, which is used to calculate a parameter change vector that guarantees an improvement in all unsatisfied, feasible performance metrics from iteration to iteration. The paper also presents a recently demonstrated CIP-type algorithm, called the Model and Data Oriented Computer-Aided Design System (MADCADS), developed for achieving H(sub infinity) type design specifications using data models. Control system designs for the NASA/MSFC Single Structure Control Facility are demonstrated for both CIP and MADCADS. Advantages of design-with-data algorithms over techniques that require analytical plant models are also presented.
An iterative requirements specification procedure for decision support systems.
Brookes, C H
1987-08-01
Requirements specification is a key element in a DSS development project because it not only determines what is to be done, it also drives the evolution process. A procedure for requirements elicitation is described that is based on the decomposition of the DSS design task into a number of functions, subfunctions, and operators. It is postulated that the procedure facilitates the building of a DSS that is complete and integrates MIS, modelling and expert system components. Some examples given are drawn from the health administration field.
A wideband FMBEM for 2D acoustic design sensitivity analysis based on direct differentiation method
NASA Astrophysics Data System (ADS)
Chen, Leilei; Zheng, Changjun; Chen, Haibo
2013-09-01
This paper presents a wideband fast multipole boundary element method (FMBEM) for two dimensional acoustic design sensitivity analysis based on the direct differentiation method. The wideband fast multipole method (FMM) formed by combining the original FMM and the diagonal form FMM is used to accelerate the matrix-vector products in the boundary element analysis. The Burton-Miller formulation is used to overcome the fictitious frequency problem when using a single Helmholtz boundary integral equation for exterior boundary-value problems. The strongly singular and hypersingular integrals in the sensitivity equations can be evaluated explicitly and directly by using the piecewise constant discretization. The iterative solver GMRES is applied to accelerate the solution of the linear system of equations. A set of optimal parameters for the wideband FMBEM design sensitivity analysis are obtained by observing the performances of the wideband FMM algorithm in terms of computing time and memory usage. Numerical examples are presented to demonstrate the efficiency and validity of the proposed algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Y.; Loesser, G.; Smith, M.
ITER diagnostic first walls (DFWs) and diagnostic shield modules (DSMs) inside the port plugs (PPs) are designed to protect diagnostic instruments and components from a harsh plasma environment and provide structural support while allowing for diagnostic access to the plasma. The design of the DFWs and DSMs is driven by 1) plasma radiation and nuclear heating during normal operation and 2) electromagnetic loads during plasma events and the associated component structural responses. A multi-physics engineering analysis protocol for the design has been established at Princeton Plasma Physics Laboratory, and it was used for the design of the ITER DFWs and DSMs. The analyses were performed to address challenging design issues based on resultant stresses and deflections of the DFW-DSM-PP assembly for the main load cases. The ITER Structural Design Criteria for In-Vessel Components (SDC-IC) required for design by analysis and three major issues driving the mechanical design of the ITER DFWs are discussed. General guidelines for the DSM design have been established as a result of design parametric studies.
Thermal Analysis of Small Re-Entry Probe
NASA Technical Reports Server (NTRS)
Agrawal, Parul; Prabhu, Dinesh K.; Chen, Y. K.
2012-01-01
The Small Probe Reentry Investigation for TPS Engineering (SPRITE) concept was developed at NASA Ames Research Center to facilitate arc-jet testing of a fully instrumented prototype probe at flight scale. Besides demonstrating the feasibility of testing a flight-scale model and the capability of an on-board data acquisition system, another objective for this project was to investigate the capability of simulation tools to predict thermal environments of the probe/test article and its interior. This paper focuses on finite-element thermal analyses of the SPRITE probe during the arcjet tests. Several iterations were performed during the early design phase to provide critical design parameters and guidelines for testing. The thermal effects of ablation and pyrolysis were incorporated into the final higher-fidelity modeling approach by coupling the finite-element analyses with a two-dimensional thermal protection materials response code. Model predictions show good agreement with thermocouple data obtained during the arcjet test.
Application of optimized multiscale mathematical morphology for bearing fault diagnosis
NASA Astrophysics Data System (ADS)
Gong, Tingkai; Yuan, Yanbin; Yuan, Xiaohui; Wu, Xiaotao
2017-04-01
In order to suppress noise effectively and extract the impulsive features in the vibration signals of faulty rolling element bearings, an optimized multiscale morphology (OMM) based on conventional multiscale morphology (CMM) and iterative morphology (IM) is presented in this paper. Firstly, the operator used in the IM method must be non-idempotent; therefore, an optimized difference (ODIF) operator has been designed. Furthermore, in the iterative process the current operation is performed on the basis of the previous one, which means that if a larger scale is employed, more fault features are inhibited. Therefore, a unit scale is proposed as the structuring element (SE) scale in IM. According to the above definitions, the IM method is applied to the results obtained by CMM over different scales. The validity of the proposed method is first evaluated on a simulated signal. Subsequently, for an outer race fault, two vibration signals sampled by different accelerometers are analyzed by OMM and CMM, respectively; the same is done for an inner race fault. The results show that the optimized method is effective in diagnosing the two bearing faults. Compared with the CMM method, the OMM method can extract many more fault features under a strong noise background.
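A generic flavour of multiscale morphological difference filtering can be sketched with flat structuring elements of increasing length; the ODIF operator, the unit-scale iteration and the scale set used in the paper are replaced here by simplified assumptions.

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def multiscale_difference(signal, scales=(2, 3, 4, 5, 6, 7)):
    """Average of (closing - opening) over several flat structuring-element
    lengths: a generic multiscale morphological difference filter that
    enhances impulses. A simplified stand-in for the ODIF/OMM scheme above,
    not the authors' exact operator or iteration."""
    scales = list(scales)
    out = np.zeros(len(signal))
    for L in scales:
        out += grey_closing(signal, size=L) - grey_opening(signal, size=L)
    return out / len(scales)

# Synthetic faulty-bearing-like signal: broadband noise plus periodic impulses.
rng = np.random.default_rng(0)
x = 0.3 * rng.standard_normal(4096)
x[::256] += 2.0                                  # fault impulses every 256 samples
y = multiscale_difference(x)
print("peak-to-mean ratio: raw %.1f  filtered %.1f"
      % (x.max() / np.abs(x).mean(), y.max() / np.abs(y).mean()))
```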
Lewis, Peter A; Mai, Van Anh Thi; Gray, Genevieve
2012-04-01
The advent of eLearning has seen online discussion forums widely used in both undergraduate and postgraduate nursing education. This paper reports an Australian university experience of design, delivery and redevelopment of a distance education module developed for Vietnamese nurse academics. The teaching experience of Vietnamese nurse academics is mixed and frequently limited. It was decided that the distance module should attempt to utilise the experience of senior Vietnamese nurse academics - asynchronous online discussion groups were used to facilitate this. Online discussion occurred in both Vietnamese and English and was moderated by an Australian academic working alongside a Vietnamese translator. This paper will discuss the design of an online learning environment for foreign correspondents, the resources and translation required to maximise the success of asynchronous online discussion groups, as well as the rationale of delivering complex content in a foreign language. While specifically addressing the first iteration of the first distance module designed, this paper will also address subsequent changes made for the second iteration of the module and comment on their success. While a translator is clearly a key component of success, the elements of simplicity and clarity combined with supportive online moderation must not be overlooked. Copyright © 2011 Elsevier Ltd. All rights reserved.
DENSITY-DEPENDENT FLOW IN ONE-DIMENSIONAL VARIABLY-SATURATED MEDIA
A one-dimensional finite element is developed to simulate density-dependent flow of saltwater in variably saturated media. The flow and solute equations were solved in a coupled mode (iterative), in a partially coupled mode (non-iterative), and in a completely decoupled mode. P...
Computational aspects of helicopter trim analysis and damping levels from Floquet theory
NASA Technical Reports Server (NTRS)
Gaonkar, Gopal H.; Achar, N. S.
1992-01-01
Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
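The damped Newton idea itself is compact: take the full Newton step only if it reduces the residual, otherwise shrink it. The sketch below uses a simple backtracking rule on a toy 2x2 system; the optimal damping-parameter selection and the temporal finite element residuals of the paper are not reproduced.

```python
import numpy as np

def damped_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Newton iteration with a backtracking damping parameter: accept the
    largest step fraction (1, 1/2, 1/4, ...) that reduces ||F||. A generic
    sketch of damped Newton, not the trim-analysis code from the paper."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        dx = np.linalg.solve(J(x), -f)
        lam = 1.0
        while lam > 1e-4 and np.linalg.norm(F(x + lam * dx)) >= np.linalg.norm(f):
            lam *= 0.5                       # damp the step until the residual drops
        x = x + lam * dx
    return x

# Example: a mildly stiff 2x2 nonlinear system.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, np.exp(x[0]) + x[1] - 1.0])
J = lambda x: np.array([[2*x[0], 2*x[1]], [np.exp(x[0]), 1.0]])
print(damped_newton(F, J, x0=[-2.0, 2.0]))
```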
A hybrid Gerchberg-Saxton-like algorithm for DOE and CGH calculation
NASA Astrophysics Data System (ADS)
Wang, Haichao; Yue, Weirui; Song, Qiang; Liu, Jingdan; Situ, Guohai
2017-02-01
The Gerchberg-Saxton (GS) algorithm is widely used in various disciplines of modern science and technology where phase retrieval is required. However, this legendary algorithm often stagnates after a few iterations. Many efforts have been made to improve this situation. Here we propose to introduce a gradient-descent strategy and a weighting technique into the GS algorithm, and demonstrate it using two examples: design of a diffractive optical element (DOE) to achieve off-axis illumination in lithographic tools, and design of a computer generated hologram (CGH) for holographic display. Both numerical simulation and optical experiments are carried out for demonstration.
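A minimal weighted Gerchberg-Saxton loop conveys how a weighting technique counteracts stagnation: orders that fall below the target are boosted before the amplitude constraint is re-imposed. This is a generic illustration under assumed parameters (FFT propagation, exponent gamma, a small off-axis target), not the hybrid gradient-descent algorithm proposed in the paper.

```python
import numpy as np

def weighted_gs(target_amp, n_iter=100, gamma=0.5, seed=0):
    """Minimal weighted Gerchberg-Saxton loop for a phase-only element: the
    target amplitude is re-weighted each pass so that stagnating orders are
    boosted. Generic illustration, not the paper's hybrid algorithm."""
    rng = np.random.default_rng(seed)
    mask = target_amp > 0
    weights = target_amp.copy()
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)   # initial DOE phase
    achieved = np.zeros_like(target_amp)
    for _ in range(n_iter):
        far = np.fft.fft2(np.exp(1j * phase))                  # propagate to far field
        achieved = np.abs(far) * (np.linalg.norm(target_amp) / np.linalg.norm(far))
        # Weighting technique: boost orders that fell below the target.
        weights[mask] *= (target_amp[mask] / (achieved[mask] + 1e-12)) ** gamma
        # Impose the weighted amplitude inside the signal window, leave the rest free.
        constrained = np.where(mask, weights * np.exp(1j * np.angle(far)), far)
        phase = np.angle(np.fft.ifft2(constrained))            # keep phase only
    return phase, achieved

# 32x32 far field with a small off-axis 3x3 spot array as the target.
target = np.zeros((32, 32))
target[4:7, 4:7] = 1.0
phase, achieved = weighted_gs(target)
spots = achieved[4:7, 4:7]
print("spot uniformity error:", round(float(spots.std() / spots.mean()), 3))
```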
Structures for handling high heat fluxes
NASA Astrophysics Data System (ADS)
Watson, R. D.
1990-12-01
The divertor is recognized as one of the main performance limiting components for ITER. This paper reviews the critical issues for structures that are designed to withstand heat fluxes > 5 MW/m^2. High velocity, sub-cooled water with twisted tape inserts for enhanced heat transfer provides a critical heat flux limit of 40-60 MW/m^2. Uncertainties in physics and engineering heat flux peaking factors require that the design heat flux not exceed 10 MW/m^2 to maintain an adequate burnout safety margin. Armor tiles and heat sink materials must have a well matched thermal expansion coefficient to minimize stresses. The divertor lifetime from sputtering erosion is highly uncertain. The number of disruptions specified for ITER must be reduced to achieve a credible design. In-situ plasma spray repair with thick metallic coatings may reduce the problems of erosion. Runaway electrons in ITER have the potential to melt actively cooled components in a single event. A water leak is a serious accident because of steam reactions with hot carbon, beryllium, or tungsten that can mobilize large amounts of tritium and radioactive elements. If the plasma does not shut down immediately, the divertor can melt in 1-10 s after a loss of coolant accident. Very high reliability of carbon tile braze joints will be required to achieve adequate safety and performance goals. Most of these critical issues will be addressed in the near future by operation of the Tore Supra pump limiters and the JET pumped divertor. An accurate understanding of the power flow out of the edge of a DT burning plasma is essential to the successful design of high heat flux components.
2016-01-01
Many excellent methods exist that incorporate cryo-electron microscopy (cryoEM) data to constrain computational protein structure prediction and refinement. Previously, it was shown that iteration of two such orthogonal sampling and scoring methods – Rosetta and molecular dynamics (MD) simulations – facilitated exploration of conformational space in principle. Here, we go beyond a proof-of-concept study and address significant remaining limitations of the iterative MD–Rosetta protein structure refinement protocol. Specifically, all parts of the iterative refinement protocol are now guided by medium-resolution cryoEM density maps, and previous knowledge about the native structure of the protein is no longer necessary. Models are identified solely based on score or simulation time. All four benchmark proteins showed substantial improvement through three rounds of the iterative refinement protocol. The best-scoring final models of two proteins had sub-Ångstrom RMSD to the native structure over residues in secondary structure elements. Molecular dynamics was most efficient in refining secondary structure elements and was thus highly complementary to the Rosetta refinement which is most powerful in refining side chains and loop regions. PMID:25883538
NASA Technical Reports Server (NTRS)
Howard, S. Adam; Dellacorte, Christopher
2015-01-01
Rolling element bearings utilized in precision rotating machines require proper alignment, preload, and interference fits to ensure overall optimum performance. Hence, careful attention must be given to bearing installation and disassembly procedures to ensure the above conditions are met. Usually, machines are designed in such a way that bearings can be pressed into housings or onto shafts through the races without loading the rolling elements. However, in some instances, either due to limited size or access, a bearing must be installed or removed in such a way that the load path travels through the rolling elements. This can cause high contact stresses between the rolling elements and the races and introduces the potential for Brinell denting of the races. This paper is a companion to the Part I paper by the authors that discusses material selection and the general design philosophy for the bearing. Here, a more in-depth treatment is given to the design of a dent-resistant bearing utilizing a superelastic alloy, 60NiTi, for the races. A common bearing analysis tool based on rigid body dynamics is used in combination with finite element simulations to design the superelastic bearing. The primary design constraints are prevention of denting and avoiding the balls riding over the edge of the race groove during a blind disassembly process where the load passes through the rolling elements. Through an iterative process, the resulting bearing geometry is tailored to improve axial static load capability compared to a deep-groove ball bearing of the same size. The results suggest that careful selection of materials and bearing geometry can enable blind disassembly without damage to the raceways, which is necessary in the current application (a compressor in the International Space Station Environmental Control and Life Support System), and results in potential design flexibility for other applications, especially small machines with miniature bearings.
Relativistic electron kinetic effects on laser diagnostics in burning plasmas
NASA Astrophysics Data System (ADS)
Mirnov, V. V.; Den Hartog, D. J.
2018-02-01
Toroidal interferometry/polarimetry (TIP), poloidal polarimetry (PoPola), and Thomson scattering systems (TS) are major optical diagnostics being designed and developed for ITER. Each of them relies upon a sophisticated quantitative understanding of the electron response to laser light propagating through a burning plasma. A review of the theoretical results for two different applications is presented: interferometry/polarimetry (I/P) and polarization of Thomson scattered light, unified by the importance of relativistic (quadratic in v_Te/c) electron kinetic effects. For I/P applications, rigorous analytical results are obtained perturbatively by expansion in powers of the small parameter τ = Te/(me c^2), where Te is the electron temperature and me is the electron rest mass. Experimental validation of the analytical models has been made by analyzing data of more than 1200 pulses collected from high-Te JET discharges. Based on this validation the relativistic analytical expressions are included in the error analysis and design projects of the ITER TIP and PoPola systems. The polarization properties of incoherent Thomson scattered light are being examined as a method of Te measurement relevant to ITER operational regimes. The theory is based on Stokes vector transformation and the Mueller matrix formalism. The general approach is subdivided into frequency-integrated and frequency-resolved cases. For each of them, the exact analytical relativistic solutions are presented in the form of Mueller matrix elements averaged over the relativistic Maxwellian distribution function. New results related to the detailed verification of the frequency-resolved solutions are reported. The precise analytic expressions provide output much more rapidly than relativistic kinetic numerical codes, allowing for direct real-time feedback control of ITER device operation.
NASA Astrophysics Data System (ADS)
Shuxia, ZHAO; Lei, ZHANG; Jiajia, HOU; Yang, ZHAO; Wangbao, YIN; Weiguang, MA; Lei, DONG; Liantuan, XIAO; Suotang, JIA
2018-03-01
The chemical composition of alloys directly determines their mechanical behaviors and application fields. Accurate and rapid analysis of both major and minor elements in alloys plays a key role in metallurgy quality control and material classification processes. A quantitative calibration-free laser-induced breakdown spectroscopy (CF-LIBS) analysis method, which carries out combined correction of plasma temperature and spectral intensity by using a second-order iterative algorithm and two boundary standard samples, is proposed to realize accurate composition measurements. Experimental results show that, compared to conventional CF-LIBS analysis, the relative errors for the major elements Cu and Zn and the minor element Pb in copper-lead alloys have been reduced from 12%, 26% and 32% to 1.8%, 2.7% and 13.4%, respectively. The measurement accuracy for all elements has been improved substantially.
NASA Technical Reports Server (NTRS)
Cooke, C. H.; Blanchard, D. K.
1975-01-01
A finite element algorithm for solution of fluid flow problems characterized by the two-dimensional compressible Navier-Stokes equations was developed. The program is intended for viscous compressible high speed flow; hence, primitive variables are utilized. The physical solution was approximated by trial functions which at a fixed time are piecewise cubic on triangular elements. The Galerkin technique was employed to determine the finite-element model equations. A leapfrog time integration is used for marching asymptotically from initial to steady state, with iterated integrals evaluated by numerical quadratures. The nonsymmetric linear systems of equations governing time transition from step-to-step are solved using a rather economical block iterative triangular decomposition scheme. The concept was applied to the numerical computation of a free shear flow. Numerical results of the finite-element method are in excellent agreement with those obtained from a finite difference solution of the same problem.
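The leapfrog time-marching scheme named above is easy to state in isolation: u^{n+1} = u^{n-1} + 2*dt*f(u^n), started with a single forward-Euler step. The sketch below applies it to a linear oscillator; the Galerkin Navier-Stokes discretisation and the block iterative solver of the paper are not reproduced.

```python
import numpy as np

def leapfrog_march(f, u0, dt, n_steps):
    """Leapfrog (explicit three-level) time marching u^{n+1} = u^{n-1} + 2*dt*f(u^n),
    started with one forward-Euler step. A generic sketch of the scheme named above,
    not the Navier-Stokes Galerkin code itself."""
    u_prev = np.asarray(u0, dtype=float)
    u = u_prev + dt * f(u_prev)                 # starter step
    history = [u_prev.copy(), u.copy()]
    for _ in range(n_steps - 1):
        u_prev, u = u, u_prev + 2.0 * dt * f(u)
        history.append(u.copy())
    return np.array(history)

# Example: linear oscillator u'' = -u written as a first-order system.
f = lambda u: np.array([u[1], -u[0]])
traj = leapfrog_march(f, [1.0, 0.0], dt=0.01, n_steps=500)
print("energy drift:", abs(float((traj[-1] ** 2).sum()) - 1.0))
```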
Forward and inverse solutions for three-element Risley prism beam scanners.
Li, Anhu; Liu, Xingsheng; Sun, Wansong
2017-04-03
Scan blind zone and control singularity are two adverse issues for the beam scanning performance in double-prism Risley systems. In this paper, a theoretical model which introduces a third prism is developed. The critical condition for a fully eliminated scan blind zone is determined through a geometric derivation, providing several useful formulae for three-Risley-prism system design. Moreover, inverse solutions for a three-prism system are established, based on the damped least-squares iterative refinement by a forward ray tracing method. It is shown that the efficiency of this iterative calculation of the inverse solutions can be greatly enhanced by a numerical differentiation method. In order to overcome the control singularity problem, the motion law of any one prism in a three-prism system needs to be conditioned, resulting in continuous and steady motion profiles for the other two prisms.
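The damped least-squares refinement with a numerically differentiated forward model follows a standard Levenberg-Marquardt pattern. In the sketch below the three-prism ray-tracing forward model is replaced by an assumed smooth placeholder map from three rotation angles to a 2-D pointing direction; only the iteration structure reflects the approach described above.

```python
import numpy as np

def damped_least_squares(forward, theta0, target, mu=1e-2, n_iter=50, h=1e-6):
    """Generic damped least-squares (Levenberg-Marquardt style) inversion with
    a finite-difference Jacobian, in the spirit of the iterative refinement
    described above. 'forward' maps prism rotation angles to beam pointing."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = forward(theta) - target
        if np.linalg.norm(r) < 1e-12:
            break
        # Numerical differentiation of the forward (ray-tracing) model.
        J = np.column_stack([(forward(theta + h * e) - forward(theta)) / h
                             for e in np.eye(theta.size)])
        theta = theta + np.linalg.solve(J.T @ J + mu * np.eye(theta.size), -J.T @ r)
    return theta

# Placeholder forward model (an assumption, not the paper's ray tracer):
# a smooth nonlinear map from three prism angles to a 2-D pointing direction.
forward = lambda th: np.array([np.sin(th[0]) + 0.5 * np.sin(th[1] + th[2]),
                               np.cos(th[0]) * np.sin(th[1]) - 0.3 * np.sin(th[2])])
target = forward(np.array([0.3, -0.2, 0.7]))
print(forward(damped_least_squares(forward, [0.0, 0.0, 0.0], target)) - target)
```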
Optimal Orbit Maneuvers with Electrodynamic Tethers
2006-06-01
orbital elements, which completely describe a unique orbit; equinoctial elements are not employed but are left for future iterations of the formulation...periods in the maneuver. Follow-on work, such as the transformation of this state vector from classical orbital elements to the equinoctial set of...
JPL-IDEAS - ITERATIVE DESIGN OF ANTENNA STRUCTURES
NASA Technical Reports Server (NTRS)
Levy, R.
1994-01-01
The Iterative DEsign of Antenna Structures (IDEAS) program is a finite element analysis and design optimization program with special features for the analysis and design of microwave antennas and associated sub-structures. As the principal structure analysis and design tool for the Jet Propulsion Laboratory's Ground Antenna and Facilities Engineering section of NASA's Deep Space Network, IDEAS combines flexibility with easy use. The relatively small bending stiffness of the components of large, steerable reflector antennas allows IDEAS to use pinjointed (three translational degrees of freedom per joint) models for modeling the gross behavior of these antennas when subjected to static and dynamic loading. This facilitates the formulation of the redesign algorithm which has only one design variable per structural element. Input data deck preparation has been simplified by the use of NAMELIST inputs to promote clarity of data input for problem defining parameters, user selection of execution and design options and output requests, and by the use of many attractive and familiar features of the NASTRAN program (in many cases, NASTRAN and IDEAS formatted bulk data cards are interchangeable). Features such as simulation of a full symmetric structure based on analyses of only half the structure make IDEAS a handy and efficient analysis tool, with many features unavailable in any other finite element analysis program. IDEAS can choose design variables such as areas of rods and thicknesses of plates to minimize total structure weight, constrain the structure weight to a specified value while maximizing a natural frequency or minimizing compliance measures, and can use a stress ratio algorithm to size each structural member so that it is at maximum or minimum stress level for at least one of the applied loads. Calculations of total structure weight can be broken down according to material. Center of gravity weight balance, static first and second moments about the center of mass and optionally about a user-specified gridpoint, and lumped structure weight at grid points can also be calculated. Other analysis outputs include calculation of reactions, displacements, and element stresses due to specified gravity, thermal, and external applied loads; calculations of linear combinations of specific node displacements (e.g. to represent motions of rigid attachments not included in the structure model), natural frequency eigenvalues and eigenvectors, structure reactions and element stresses, and coordinates of effective modal masses. Cassegrain antenna boresight error analysis of a best fitting paraboloid and Cassegrain microwave antenna root mean square half-pathlength error analysis of a best fitting paraboloid are also performed. The IDEAS program is written in ATHENA FORTRAN and ASSEMBLER for an EXEC 8 operating system and was implemented on a UNIVAC 1100 series computer. The minimum memory requirement for the program is approximately 42,000 36-bit words. This program is available on a 9-track 1600 BPI magnetic tape in UNIVAC FURPUR format only; since JPL-IDEAS will not run on other platforms, COSMIC will not reformat the code to be readable on other platforms. The program was developed in 1988.
NASA Astrophysics Data System (ADS)
1990-09-01
The main purpose of the International Thermonuclear Experimental Reactor (ITER) is to develop an experimental fusion reactor through the united efforts of many technologically advanced countries. The ITER terms of reference, issued jointly by the European Community, Japan, the USSR, and the United States, call for an integrated international design activity and constitute the basis of current activities. Joint work on ITER is carried out under the auspices of the International Atomic Energy Agency (IAEA), according to the terms of a quadripartite agreement reached among the European Community, Japan, the USSR, and the United States. The site for joint technical work sessions is at the Max Planck Institute of Plasma Physics, Garching, Federal Republic of Germany. The ITER activities have two phases: a definition phase performed in 1988 and the present design phase (1989 to 1990). During the definition phase, a set of ITER technical characteristics and supporting research and development (R and D) activities were developed and reported. The present conceptual design phase of ITER lasts until the end of 1990. The objectives of this phase are to develop the design of ITER, perform a safety and environmental analysis, develop site requirements, define future R and D needs, and estimate cost, manpower, and schedule for construction and operation. A final report will be submitted at the end of 1990. This paper summarizes progress in the ITER program during the 1989 design phase.
Cleaning of first mirrors in ITER by means of radio frequency discharges.
Leipold, F; Reichle, R; Vorpahl, C; Mukhin, E E; Dmitriev, A M; Razdobarin, A G; Samsonov, D S; Marot, L; Moser, L; Steiner, R; Meyer, E
2016-11-01
First mirrors of optical diagnostics in ITER are subject to charge exchange fluxes of Be, W, and potentially other elements. This may degrade the optical performance significantly via erosion or deposition. In order to restore reflectivity, cleaning by applying radio frequency (RF) power to the mirror itself, and thus creating a discharge in front of the mirror, will be used. The plasma generated in front of the mirror surface sputters off the deposits, restoring its reflectivity. Although the functionality of such a mirror cleaning technique is proven in laboratory experiments, the technical implementation in ITER revealed obstacles which need to be overcome: since the discharge as an RF load is in general not very well matched to the power generator and transmission line, power reflections will occur, leading to a thermal load on the cable. Its implementation for ITER requires additional R&D. This includes the design of mirrors as RF electrodes, as well as feeders and matching networks inside the vacuum vessel. Mitigation solutions will be evaluated and discussed. Furthermore, technical obstacles (i.e., cooling water pipes for the mirrors) need to be solved. Since cooling water lines are usually on ground potential at the feedthrough of the vacuum vessel, a solution to decouple the ground potential from the mirror would be a major simplification. Such a solution will be presented.
Saab, Xavier E; Griggs, Jason A; Powers, John M; Engelmeier, Robert L
2007-02-01
Angled abutments are often used to restore dental implants placed in the anterior maxilla due to esthetic or spatial needs. The effect of abutment angulation on bone strain is unknown. The purpose of the current study was to measure and compare the strain distribution on the bone around an implant in the anterior maxilla using 2 different abutments by means of finite element analysis. Two-dimensional finite element models were designed using software (ANSYS) for 2 situations: (1) an implant with a straight abutment in the anterior maxilla, and (2) an implant with an angled abutment in the anterior maxilla. The implant used was 4x13 mm (MicroThread). The maxillary bone was modeled as type 3 bone with a cortical layer thickness of 0.5 mm. Oblique loads of 178 N were applied on the cingulum area of both models. Seven consecutive iterations of mesh refinement were performed in each model to observe the convergence of the results. The greatest strain was found on the cancellous bone, adjacent to the 3 most apical microthreads on the palatal side of the implant where tensile forces were created. The same strain distribution was observed around both the straight and angled abutments. After several iterations, the results converged to a value for the maximum first principal strain on the bone of both models, which was independent of element size. Most of the deformation occurred in the cancellous bone and ranged between 1000 and 3500 microstrain. Small areas of cancellous bone experienced strain above the physiologic limit (4000 microstrain). The model predicted a 15% higher maximum bone strain for the straight abutment compared with the angled abutment. The results converged after several iterations of mesh refinement, which confirmed the lack of dependence of the maximum strain at the implant-bone interface on mesh density. Most of the strain produced on the cancellous and cortical bone was within the range that has been reported to increase bone mass and mineralization.
The L sub 1 finite element method for pure convection problems
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan
1991-01-01
The least squares (L sub 2) finite element method is introduced for 2-D steady state pure convection problems with smooth solutions. It is proven that the L sub 2 method has the same stability estimate as the original equation, i.e., the L sub 2 method has better control of the streamline derivative. Numerical convergence rates are given to show that the L sub 2 method is almost optimal. This L sub 2 method was then used as a framework to develop an iteratively reweighted L sub 2 finite element method to obtain a least absolute residual (L sub 1) solution for problems with discontinuous solutions. This L sub 1 finite element method produces a nonoscillatory, nondiffusive and highly accurate numerical solution that has a sharp discontinuity in one element on both coarse and fine meshes. A robust reweighting strategy was also devised to obtain the L sub 1 solution in a few iterations. A number of examples solved by using triangle and bilinear elements are presented.
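The reweighting idea carries over directly to a small algebraic example: solving a weighted L sub 2 problem whose weights are the reciprocals of the current residual magnitudes drives the fit toward the least-absolute-residual (L sub 1) solution. The sketch below is a generic IRLS loop on an outlier-contaminated line fit, not the element-level finite element formulation of the paper.

```python
import numpy as np

def irls_l1(A, b, n_iter=30, eps=1e-6):
    """Iteratively reweighted least squares: each pass solves a weighted L2
    problem with weights 1/(|residual| + eps), driving the solution toward
    the least-absolute-residual (L1) fit. Generic sketch, not the FE code."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # ordinary L2 start
    for _ in range(n_iter):
        w = 1.0 / (np.abs(b - A @ x) + eps)           # reweighting
        W = np.sqrt(w)[:, None]
        x = np.linalg.lstsq(W * A, np.sqrt(w) * b, rcond=None)[0]
    return x

# Line fit with one gross outlier: the L1 fit ignores it, the L2 fit does not.
t = np.linspace(0, 1, 20)
A = np.column_stack([t, np.ones_like(t)])
b = 2.0 * t + 1.0
b[10] += 25.0                                         # outlier ("discontinuity")
print("L2 fit:", np.linalg.lstsq(A, b, rcond=None)[0])
print("L1 fit:", irls_l1(A, b))
```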
NASA Astrophysics Data System (ADS)
Elkurdi, Yousef; Fernández, David; Souleimanov, Evgueni; Giannacopoulos, Dennis; Gross, Warren J.
2008-04-01
The Finite Element Method (FEM) is a computationally intensive scientific and engineering analysis tool that has diverse applications ranging from structural engineering to electromagnetic simulation. The trends in floating-point performance are moving in favor of Field-Programmable Gate Arrays (FPGAs); hence, interest has grown in the scientific community to exploit this technology. We present an architecture and implementation of an FPGA-based sparse matrix-vector multiplier (SMVM) for use in the iterative solution of large, sparse systems of equations arising from FEM applications. FEM matrices display specific sparsity patterns that can be exploited to improve the efficiency of hardware designs. Our architecture exploits FEM matrix sparsity structure to achieve a balance between performance and hardware resource requirements by relying on external SDRAM for data storage while utilizing the FPGA's computational resources in a stream-through systolic approach. The architecture is based on a pipelined linear array of processing elements (PEs) coupled with a hardware-oriented matrix striping algorithm and a partitioning scheme which enables it to process arbitrarily big matrices without changing the number of PEs in the architecture. Therefore, this architecture is only limited by the amount of external RAM available to the FPGA. The implemented SMVM-pipeline prototype contains 8 PEs and is clocked at 110 MHz, obtaining a peak performance of 1.76 GFLOPS. For 8 GB/s of memory bandwidth typical of recent FPGA systems, this architecture can achieve 1.5 GFLOPS sustained performance. Using multiple instances of the pipeline, linear scaling of the peak and sustained performance can be achieved. Our stream-through architecture provides the added advantage of enabling an iterative implementation of the SMVM computation required by iterative solution techniques such as the conjugate gradient method, avoiding initialization time due to data loading and setup inside the FPGA internal memory.
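For orientation, the operation streamed through the PE pipeline is an ordinary compressed-sparse-row matrix-vector product; a scalar software reference version is sketched below (the FPGA datapath, striping and partitioning schemes are not modelled).

```python
import numpy as np
from scipy.sparse import random as sparse_random

def csr_spmv(indptr, indices, data, x):
    """Reference CSR sparse matrix-vector product y = A @ x: the same
    multiply-accumulate stream that the FPGA processing elements pipeline,
    written here as plain software for orientation."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        acc = 0.0
        for k in range(indptr[row], indptr[row + 1]):   # nonzeros of this row
            acc += data[k] * x[indices[k]]
        y[row] = acc
    return y

# Small FEM-like sparse matrix in CSR form, checked against SciPy's own product.
A = sparse_random(6, 6, density=0.4, format="csr")
x = np.arange(6, dtype=float)
print(np.allclose(csr_spmv(A.indptr, A.indices, A.data, x), A @ x))   # True
```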
NASA Astrophysics Data System (ADS)
Weiss, Chester J.
2013-08-01
An essential element for computational hypothesis testing, data inversion and experiment design for electromagnetic geophysics is a robust forward solver, capable of easily and quickly evaluating the electromagnetic response of arbitrary geologic structure. The usefulness of such a solver hinges on the balance among competing desires like ease of use, speed of forward calculation, scalability to large problems or compute clusters, parsimonious use of memory access, accuracy and, by necessity, the ability to faithfully accommodate a broad range of geologic scenarios over extremes in length scale and frequency content. This is indeed a tall order. The present study addresses recent progress toward the development of a forward solver with these properties. Based on the Lorenz-gauged Helmholtz decomposition, a new finite volume solution over Cartesian model domains endowed with complex-valued electrical properties is shown to be stable over the frequency range 10^-2 to 10^10 Hz and over length scales from 10^-3 to 10^5 m. Benchmark examples are drawn from magnetotellurics, exploration geophysics, geotechnical mapping and laboratory-scale analysis, showing excellent agreement with reference analytic solutions. Computational efficiency is achieved through use of a matrix-free implementation of the quasi-minimal residual (QMR) iterative solver, which eliminates explicit storage of finite volume matrix elements in favor of "on the fly" computation as needed by the iterative Krylov sequence. Further efficiency is achieved through sparse coupling matrices between the vector and scalar potentials whose non-zero elements arise only in those parts of the model domain where the conductivity gradient is non-zero. Multi-thread parallelization in the QMR solver through OpenMP pragmas is used to reduce the computational cost of its most expensive step: the single matrix-vector product at each iteration. High-level MPI communicators farm independent processes to available compute nodes for simultaneous computation of multi-frequency or multi-transmitter responses.
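The matrix-free pattern described above can be sketched with a library QMR routine and an operator whose matrix-vector product is computed on the fly; the 1-D operator below is an assumed stand-in, not the Lorenz-gauged finite volume operator of the study.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, qmr

n = 200
diag = 2.0 + np.linspace(0, 1, n)          # stand-in for cell conductivities

def matvec(v):
    """Apply a 1-D finite-volume-style tridiagonal operator on the fly: no
    matrix is ever assembled or stored, mirroring the matrix-free QMR usage."""
    y = diag * v
    y[:-1] -= v[1:]
    y[1:] -= v[:-1]
    return y

# The toy operator is symmetric, so the transpose product equals the product.
A = LinearOperator((n, n), matvec=matvec, rmatvec=matvec)
b = np.ones(n)
x, info = qmr(A, b)
print("QMR converged:", info == 0, " residual:", np.linalg.norm(matvec(x) - b))
```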
λ elements for one-dimensional singular problems with known strength of singularity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, K.K.; Surana, K.S.
1996-10-01
This paper presents a new and general procedure for designing special elements, called λ elements, for one-dimensional singular problems where the strength of the singularity is known. The λ elements presented here are of type C^0. These elements also provide inter-element C^0 continuity with p-version elements. The λ elements do not require a precise knowledge of the extent of the singular zone, i.e., their use may be extended beyond the singular zone. When λ elements are used at the singularity, a singular problem behaves like a smooth problem, thereby eliminating the need for h- and p-adaptive processes altogether. One-dimensional steady state radial flow of an upper convected Maxwell fluid is considered as a sample problem. A least squares approach (least squares finite element formulation: LSFEF) is used to construct the integral form (error functional I) from the differential equations. Numerical results presented for radially inward flow with inner radius r_i = 0.1, 0.01, 0.001, 0.0001, 0.00001, and Deborah number of 2 (De = 2) demonstrate the accuracy, faster convergence of the iterative solution procedure, faster convergence rate of the error functional, and mesh independent characteristics of the λ elements regardless of the severity of the singularity.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Watson, Willie R. (Technical Monitor)
2005-01-01
The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software for solving large-scale acoustic problems arising from the unified frameworks of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems, and though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
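A toy illustration of the domain decomposition idea: factor each subdomain block once and reuse the factorizations as a one-level additive Schwarz (block-Jacobi) preconditioner inside a Krylov iteration. The deterministic 1-D system and two-block split below are assumptions for illustration; the intrusive polynomial chaos machinery and multi-level solvers of the paper are not reproduced.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg, splu

# 1-D Poisson-type system split into two "subdomains" (a stand-in for the
# stochastic FE systems discussed above).
n = 400
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

half = n // 2
lu_blocks = [splu(A[:half, :half]), splu(A[half:, half:])]   # factor each subdomain block

def apply_precond(r):
    """One-level additive Schwarz / block-Jacobi preconditioner: solve each
    subdomain problem independently and combine the corrections."""
    z = np.empty_like(r)
    z[:half] = lu_blocks[0].solve(r[:half])
    z[half:] = lu_blocks[1].solve(r[half:])
    return z

M = LinearOperator((n, n), matvec=apply_precond)
x, info = cg(A, b, M=M)
print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))
```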
O'Connor, Sydney; Ayres, Alison; Cortellini, Lynelle; Rosand, Jonathan; Rosenthal, Eric; Kimberly, W Taylor
2012-08-01
Reliable and efficient data repositories are essential for the advancement of research in Neurocritical care. Various factors, such as the large volume of patients treated within the neuro ICU, their differing length and complexity of hospital stay, and the substantial amount of desired information can complicate the process of data collection. We adapted the tools of process improvement to the data collection and database design of a research repository for a Neuroscience intensive care unit. By the Shewhart-Deming method, we implemented an iterative approach to improve the process of data collection for each element. After an initial design phase, we re-evaluated all data fields that were challenging or time-consuming to collect. We then applied root-cause analysis to optimize the accuracy and ease of collection, and to determine the most efficient manner of collecting the maximal amount of data. During a 6-month period, we iteratively analyzed the process of data collection for various data elements. For example, the pre-admission medications were found to contain numerous inaccuracies after comparison with a gold standard (sensitivity 71% and specificity 94%). Also, our first method of tracking patient admissions and discharges contained higher than expected errors (sensitivity 94% and specificity 93%). In addition to increasing accuracy, we focused on improving efficiency. Through repeated incremental improvements, we reduced the number of subject records that required daily monitoring from 40 to 6 per day, and decreased daily effort from 4.5 to 1.5 h/day. By applying process improvement methods to the design of a Neuroscience ICU data repository, we achieved a threefold improvement in efficiency and increased accuracy. Although individual barriers to data collection will vary from institution to institution, a focus on process improvement is critical to overcoming these barriers.
NASA Astrophysics Data System (ADS)
Lavery, N.; Taylor, C.
1999-07-01
Multigrid and iterative methods are used to reduce the solution time of the matrix equations which arise from the finite element (FE) discretisation of the time-independent equations of motion of an incompressible fluid in turbulent motion. Incompressible flow is solved by using the method of reduced interpolation for the pressure to satisfy the Brezzi-Babuska condition. The k-l model is used to complete the turbulence closure problem. The non-symmetric iterative matrix methods examined are the methods of least squares conjugate gradient (LSCG), biconjugate gradient (BCG), conjugate gradient squared (CGS), and the biconjugate gradient stabilised (BCGSTAB). The multigrid algorithm applied is based on the FAS algorithm of Brandt, and uses two and three levels of grids with a V-cycling schedule. These methods are all compared to the non-symmetric frontal solver.
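The calling pattern for the non-symmetric Krylov solvers compared in the paper can be shown on a small convection-diffusion-like surrogate matrix (an assumption standing in for the finite element turbulence systems); library implementations of BCG, CGS and BiCGSTAB are used, and the least-squares conjugate gradient and multigrid components are omitted.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicg, cgs, bicgstab

# Non-symmetric, diagonally dominant tridiagonal test matrix (a stand-in for
# the FE turbulence systems of the paper).
n = 500
A = diags([-1.3, 2.2, -0.7], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

for name, solver in [("BCG", bicg), ("CGS", cgs), ("BiCGSTAB", bicgstab)]:
    x, info = solver(A, b)
    print(f"{name:9s} info={info} residual={np.linalg.norm(A @ x - b):.2e}")
```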
NASA Astrophysics Data System (ADS)
Tupa, Peter R.; Quirin, S.; DeLeo, G. G.; McCluskey, G. E., Jr.
2007-12-01
We present a modified Fourier transform approach to determine the orbital parameters of detached visual binary stars. Originally inspired by Monet (ApJ 234, 275, 1979), this new method utilizes an iterative routine of refining higher order Fourier terms in a manner consistent with Keplerian motion. In most cases, this approach is not sensitive to the starting orbital parameters in the iterative loop. In many cases we have determined orbital elements even with small fragments of orbits and noisy data, although some systems show computational instabilities. The algorithm was constructed using the MAPLE mathematical software code and tested on artificially created orbits and many real binary systems, including Gliese 22 AC, Tau 51, and BU 738. This work was supported at Lehigh University by NSF-REU grant PHY-9820301.
Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER
NASA Astrophysics Data System (ADS)
Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.
2014-08-01
In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are carried out during development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies and the practical application of the developed diagnostics on ITER will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasma. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy including first-mirror protection and cleaning techniques, reflectometry, refractometry, and tritium retention measurements, are discussed.
Parallel iterative methods for sparse linear and nonlinear equations
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
As three-dimensional models are gaining importance, iterative methods will become almost mandatory. Among these, preconditioned Krylov subspace methods have been viewed as the most efficient and reliable for solving linear as well as nonlinear systems of equations. Several different approaches have been taken to adapt iterative methods for supercomputers. Some of these approaches are discussed, and the methods that deal more specifically with general unstructured sparse matrices, such as those arising from finite element methods, are emphasized.
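A minimal sketch of the preconditioned Krylov idea, assuming SciPy as a stand-in for a supercomputer implementation: an incomplete-LU factorization of a general sparse matrix (here a random, diagonally dominant placeholder) is wrapped as a preconditioner for GMRES.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
A = (sp.random(n, n, density=0.01, format="csr", random_state=0)
     + 5.0 * sp.identity(n)).tocsc()          # general sparse matrix, made diagonally dominant
b = np.ones(n)

ilu = spla.spilu(A)                            # incomplete LU factorization
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x_plain, info_plain = spla.gmres(A, b)         # unpreconditioned Krylov solve
x_prec, info_prec = spla.gmres(A, b, M=M)      # ILU-preconditioned solve (typically far fewer iterations)
print("residual, no preconditioner :", np.linalg.norm(b - A @ x_plain))
print("residual, ILU preconditioner:", np.linalg.norm(b - A @ x_prec))
```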
Programmable Iterative Optical Image And Data Processing
NASA Technical Reports Server (NTRS)
Jackson, Deborah J.
1995-01-01
Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.
Parallel Ellipsoidal Perfectly Matched Layers for Acoustic Helmholtz Problems on Exterior Domains
Bunting, Gregory; Prakash, Arun; Walsh, Timothy; ...
2018-01-26
Exterior acoustic problems occur in a wide range of applications, making the finite element analysis of such problems a common practice in the engineering community. Various methods for truncating infinite exterior domains have been developed, including absorbing boundary conditions, infinite elements, and more recently, perfectly matched layers (PML). PML are gaining popularity due to their generality, ease of implementation, and effectiveness as an absorbing boundary condition. PML formulations have been developed in Cartesian, cylindrical, and spherical geometries, but not ellipsoidal. In addition, the parallel solution of PML formulations with iterative solvers for the solution of the Helmholtz equation, and how this compares with more traditional strategies such as infinite elements, has not been adequately investigated. In this study, we present a parallel, ellipsoidal PML formulation for acoustic Helmholtz problems. To facilitate the meshing process, the ellipsoidal PML layer is generated with an on-the-fly mesh extrusion. Though the complex stretching is defined along ellipsoidal contours, we modify the Jacobian to include an additional mapping back to Cartesian coordinates in the weak formulation of the finite element equations. This allows the equations to be solved in Cartesian coordinates, which is more compatible with existing finite element software, but without the necessity of dealing with corners in the PML formulation. Herein we also compare the conditioning and performance of the PML Helmholtz problem with an infinite element approach based on high order basis functions. On a set of representative exterior acoustic examples, we show that high order infinite element basis functions lead to an increasing number of Helmholtz solver iterations, whereas for PML the number of iterations remains constant for the same level of accuracy. This provides an additional advantage of PML over the infinite element approach.
Alternate Design of ITER Cryostat Skirt Support System
NASA Astrophysics Data System (ADS)
Pandey, Manish Kumar; Jha, Saroj Kumar; Gupta, Girish Kumar; Bhattacharya, Avik; Jogi, Gaurav; Bhardwaj, Anil Kumar
2017-04-01
The skirt support of the ITER cryostat is a support system which takes all the load of the cryostat cylinder and dome during normal and operational conditions. The present design of the skirt support has full penetration weld joints at the bottom (shell to horizontal plate joint). To fulfil the tolerance requirements and control the welding distortions, we have proposed to change the full penetration weld into a fillet weld. A detailed calculation was done to check the feasibility and structural impact of the proposed design, providing the size requirements of the fillet weld. To verify the structural integrity during the most severe load case, finite element analysis (FEA) has been done in line with ASME Section VIII Division 2 [1]. By FEA, the 'Plastic Collapse' and 'Local Failure' modes have been assessed. A 5° sector of the skirt clamp has been modelled in CATIA V5 R21 and used in the FEA. The fillet weld at the shell to horizontal plate joint has been modelled and a symmetry boundary condition applied at ±2.5°. An 'Elastic Plastic Analysis' has been performed for the most severe loading case, i.e. Category IV loading. The alternate design of the cryostat skirt support system has been found safe by analysis against the Plastic Collapse and Local Failure modes with a load proportionality factor of 2.3. The alternate design of the cryostat skirt support system has thus been validated by FEA, and the fillet weld proposal has been implemented in manufacturing.
Elasto-Plastic Behavior of Aluminum Foams Subjected to Compression Loading
NASA Astrophysics Data System (ADS)
Silva, H. M.; Carvalho, C. D.; Peixinho, N. R.
2017-05-01
The non-linear behavior of uniform-size aluminum cellular foams subjected to compressive loads is investigated, comparing numerical results obtained with the finite element method (FEM) software packages ANSYS Workbench and ANSYS Mechanical APDL (ANSYS Parametric Design Language). The numerical model is built in AUTODESK INVENTOR, imported into ANSYS, and solved by the Newton-Raphson iterative method. Conditions in ANSYS Mechanical and ANSYS Workbench were kept as similar as possible. The numerical results obtained and the differences between the two programs are presented and discussed.
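The Newton-Raphson iteration mentioned above can be summarized with a small stand-alone sketch; the two-equation residual below is an invented toy, not the ANSYS foam model.

```python
import numpy as np

def residual(u):
    # toy nonlinear "internal minus external force" vector, R(u)
    return np.array([u[0]**3 + u[1] - 1.0,
                     u[0] + 2.0 * u[1]**3 - 2.0])

def tangent(u):
    # consistent tangent (Jacobian) of the toy residual
    return np.array([[3.0 * u[0]**2, 1.0],
                     [1.0, 6.0 * u[1]**2]])

u = np.zeros(2)                           # initial guess
for it in range(1, 21):
    r = residual(u)
    if np.linalg.norm(r) < 1e-10:         # convergence check on the residual norm
        break
    u += np.linalg.solve(tangent(u), -r)  # Newton-Raphson update
print(f"stopped after {it} iterations, u = {u}")
```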
NASA Technical Reports Server (NTRS)
Miller, W. S.
1974-01-01
The cryogenic refrigerator thermal design calculations establish the design approach and basic sizing of the machine's elements. After the basic design is defined, effort concentrates on matching the thermodynamic design with that of the heat transfer devices (heat exchangers and regenerators). Typically, the heat transfer device configurations and volumes are adjusted to improve their heat transfer and pressure drop characteristics. These adjustments imply that changes be made to the active displaced volumes, compensating for the influence of the heat transfer devices on the thermodynamic processes of the working fluid. Then, once the active volumes are changed, the heat transfer devices require adjustment to account for the variations in flows, pressure levels, and heat loads. This iterative process is continued until the thermodynamic cycle parameters match the design of the heat transfer devices. By examining several matched designs, a near-optimum refrigerator is selected.
Iterated reaction graphs: simulating complex Maillard reaction pathways.
Patel, S; Rabone, J; Russell, S; Tissen, J; Klaffke, W
2001-01-01
This study investigates a new method of simulating a complex chemical system including feedback loops and parallel reactions. The practical purpose of this approach is to model the actual reactions that take place in the Maillard process, a set of food browning reactions, in sufficient detail to be able to predict the volatile composition of the Maillard products. The developed framework, called iterated reaction graphs, consists of two main elements: a soup of molecules and a reaction base of Maillard reactions. An iterative process loops through the reaction base, taking reactants from and feeding products back to the soup. This produces a reaction graph, with molecules as nodes and reactions as arcs. The iterated reaction graph is updated and validated by comparing output with the main products found by classical gas-chromatographic/mass spectrometric analysis. To ensure a realistic output and convergence to desired volatiles only, the approach contains a number of novel elements: rate kinetics are treated as reaction probabilities; only a subset of the true chemistry is modeled; and the reactions are blocked into groups.
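A toy sketch of the iterated-reaction-graph loop described above; the species, reactions and probabilities are invented placeholders rather than actual Maillard chemistry.

```python
import random

soup = {"glucose": 50, "glycine": 50}          # the "soup" of molecule counts
reaction_base = [
    # (reactants, products, firing probability standing in for rate kinetics)
    (("glucose", "glycine"), ("amadori",), 0.6),
    (("amadori",), ("deoxyosone", "glycine"), 0.4),
    (("deoxyosone",), ("pyrazine",), 0.3),
]

graph_arcs = []                                # (iteration, reactants, products) arcs
random.seed(1)
for step in range(200):
    reactants, products, p = random.choice(reaction_base)
    if all(soup.get(m, 0) > 0 for m in reactants) and random.random() < p:
        for m in reactants:                    # take reactants from the soup
            soup[m] -= 1
        for m in products:                     # feed products back to the soup
            soup[m] = soup.get(m, 0) + 1
        graph_arcs.append((step, reactants, products))

print("final soup:", {m: n for m, n in soup.items() if n > 0})
print("reactions fired:", len(graph_arcs))
```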
NASA Technical Reports Server (NTRS)
Graf, Wiley E.
1991-01-01
A mixed formulation is chosen to overcome deficiencies of the standard displacement-based shell model. Element development is traced from the incremental variational principle on through to the final set of equilibrium equations. Particular attention is paid to developing specific guidelines for selecting the optimal set of strain parameters. A discussion of constraint index concepts and their predictive capability related to locking is included. Performance characteristics of the elements are assessed in a wide variety of linear and nonlinear plate/shell problems. Despite limiting the study to geometric nonlinear analysis, a substantial amount of additional insight concerning the finite element modeling of thin plate/shell structures is provided. For example, in nonlinear analysis, given the same mesh and load step size, mixed elements converge in fewer iterations than equivalent displacement-based models. It is also demonstrated that, in mixed formulations, lower order elements are preferred. Additionally, meshes used to obtain accurate linear solutions do not necessarily converge to the correct nonlinear solution. Finally, a new form of locking was identified associated with employing elements designed for biaxial bending in uniaxial bending applications.
Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation
NASA Astrophysics Data System (ADS)
Litaker, Eric T.
1994-12-01
The axisymmetric heat equation, resulting from a point-source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
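The single-grid relaxation step can be illustrated with a short sketch that compares Jacobi and Gauss-Seidel sweeps on a plain 2-D Cartesian Laplace problem, which is an assumption standing in for the axisymmetric FVE stencils of the paper.

```python
import numpy as np

n = 50
f = np.ones((n, n))              # source term (arbitrary), with h^2 absorbed into f
u_j = np.zeros((n + 2, n + 2))   # Jacobi iterate, padded with boundary values
u_gs = np.zeros((n + 2, n + 2))  # Gauss-Seidel iterate

for sweep in range(200):
    # Jacobi: update every point from the previous iterate only
    u_new = u_j.copy()
    u_new[1:-1, 1:-1] = 0.25 * (u_j[:-2, 1:-1] + u_j[2:, 1:-1] +
                                u_j[1:-1, :-2] + u_j[1:-1, 2:] + f)
    u_j = u_new
    # Gauss-Seidel: lexicographic sweep using freshly updated neighbours
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            u_gs[i, j] = 0.25 * (u_gs[i - 1, j] + u_gs[i + 1, j] +
                                 u_gs[i, j - 1] + u_gs[i, j + 1] + f[i - 1, j - 1])

def residual(u):
    r = f - (4 * u[1:-1, 1:-1] - u[:-2, 1:-1] - u[2:, 1:-1]
             - u[1:-1, :-2] - u[1:-1, 2:])
    return np.linalg.norm(r)

print("Jacobi residual:      ", residual(u_j))
print("Gauss-Seidel residual:", residual(u_gs))
```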
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imhoff, Seth D.
LANL was approached to provide material and design guidance for a fan-shaped fuel element. A total of at least three castings were planned. The first casting is a simple billet mold to be made from high-carbon DU-10Mo charge material. The second and third castings are for optimization of the actual fuel plate mold. The experimental scope for optimization is only broad enough for a second iteration of the mold design. It is important to note that partway through FY17 this project was cancelled by the sponsor. This report is being written in order to capture the knowledge gained should this project resume at a later date.
The Iterative Design Process in Research and Development: A Work Experience Paper
NASA Technical Reports Server (NTRS)
Sullivan, George F. III
2013-01-01
The iterative design process is one of many strategies used in new product development. Top-down development strategies, like waterfall development, place a heavy emphasis on planning and simulation. The iterative process, on the other hand, is better suited to the management of small to medium scale projects. Over the past four months, I have worked with engineers at Johnson Space Center on a multitude of electronics projects. By describing the work I have done these last few months, analyzing the factors that have driven design decisions, and examining the testing and verification process, I will demonstrate that iterative design is the obvious choice for research and development projects.
A High Order, Locally-Adaptive Method for the Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Chan, Daniel
1998-11-01
I have extended the FOSLS method of Cai, Manteuffel and McCormick (1997) and implemented it within the framework of a spectral element formulation using the Legendre polynomial basis function. The FOSLS method solves the Navier-Stokes equations as a system of coupled first-order equations and provides the ellipticity that is needed for fast iterative matrix solvers like multigrid to operate efficiently. Each element is treated as an object and its properties are self-contained. Only C^0 continuity is imposed across element interfaces; this design allows local grid refinement and coarsening without the burden of an elaborate data structure, since only information along element boundaries is needed. With the FORTRAN 90 programming environment, I can maintain high computational efficiency by employing a hybrid parallel processing model: OpenMP directives provide loop-level parallelism executed on a shared-memory SMP, and the MPI protocol allows the distribution of elements to a cluster of SMPs connected via a commodity network. This talk will provide timing results and a comparison with a second order finite difference method.
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan
1993-01-01
A comparative description is presented for the least-squares FEM (LSFEM) for 2D steady-state pure convection problems. In addition to exhibiting better control of the streamline derivative than the streamline upwinding Petrov-Galerkin method, numerical convergence rates are obtained which show the LSFEM to be virtually optimal. The LSFEM is used as a framework for an iteratively reweighted LSFEM yielding nonoscillatory and nondiffusive solutions for problems with contact discontinuities; this method is shown to convect contact discontinuities without error when using triangular and bilinear elements.
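The iterative-reweighting idea can be sketched in a purely algebraic setting (not the full LSFEM): weights are recomputed from the current residuals so that rows with large residuals, e.g. near a discontinuity, are progressively de-emphasised. The system below is an invented least-squares problem with a few outliers.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
b[::10] += 5.0                              # a few "discontinuity-like" outliers

x = np.linalg.lstsq(A, b, rcond=None)[0]    # plain least-squares start
for _ in range(20):
    r = b - A @ x
    w = 1.0 / np.maximum(np.abs(r), 1e-6)   # reweight: large residuals get small weights
    sw = np.sqrt(w)
    x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]

print("reweighted solution:", x)            # close to x_true despite the outliers
```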
Chen, Tinggui; Xiao, Renbin
2014-01-01
Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases of product cost and delays of development time as well, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings existing in WTM model are discussed and tearing approach as well as inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is also introduced to find out the optimal decoupling schemes. In this paper, firstly, tearing approach and inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two technologies is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness.
Rapid iterative reanalysis for automated design
NASA Technical Reports Server (NTRS)
Bhatia, K. G.
1973-01-01
A method for iterative reanalysis in automated structural design is presented for a finite-element analysis using the direct stiffness approach. A basic feature of the method is that the generalized stiffness and inertia matrices are expressed as functions of structural design parameters, and these generalized matrices are expanded in Taylor series about the initial design. Only the linear terms are retained in the expansions. The method is approximate because it uses static condensation, modal reduction, and the linear Taylor series expansions. The exact linear representation of the expansions of the generalized matrices is also described and a basis for the present method is established. Results of applications of the present method to the recalculation of the natural frequencies of two simple platelike structural models are presented and compared with results obtained by using a commonly applied analysis procedure used as a reference. In general, the results are in good agreement. A comparison of the computer times required for the use of the present method and the reference method indicated that the present method required substantially less time for reanalysis. Although the results presented are for relatively small-order problems, the present method will become more efficient relative to the reference method as the problem size increases. An extension of the present method to static reanalysis is described, and a basis for unifying the static and dynamic reanalysis procedures is presented.
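A toy version of the Taylor-series reanalysis is sketched below for a two-degree-of-freedom spring-mass model whose stiffness and mass scale with a thickness-like parameter; the cubic/linear scaling is an assumption made only for illustration, not the structural models of the report.

```python
import numpy as np
from scipy.linalg import eigh

K_base = np.array([[2.0, -1.0], [-1.0, 2.0]])
M_base = np.eye(2)

def exact_matrices(t):
    # assumed scaling: stiffness ~ t^3 (bending), mass ~ t
    return t**3 * K_base, t * M_base

t0, dt = 1.0, 0.2
K0, M0 = exact_matrices(t0)
dK = 3.0 * t0**2 * K_base                          # dK/dt evaluated at the initial design
dM = M_base                                        # dM/dt evaluated at the initial design

K_lin, M_lin = K0 + dK * dt, M0 + dM * dt          # first-order Taylor reanalysis
K_ex, M_ex = exact_matrices(t0 + dt)               # full reanalysis for comparison

freq_lin = np.sqrt(eigh(K_lin, M_lin, eigvals_only=True))
freq_ex = np.sqrt(eigh(K_ex, M_ex, eigvals_only=True))
print("approximate natural frequencies:", freq_lin)
print("exact natural frequencies:      ", freq_ex)
```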
Preconditioned Mixed Spectral Element Methods for Elasticity and Stokes Problems
NASA Technical Reports Server (NTRS)
Pavarino, Luca F.
1996-01-01
Preconditioned iterative methods for the indefinite systems obtained by discretizing the linear elasticity and Stokes problems with mixed spectral elements in three dimensions are introduced and analyzed. The resulting stiffness matrices have the structure of saddle point problems with a penalty term, which is associated with the Poisson ratio for elasticity problems or with stabilization techniques for Stokes problems. The main results of this paper show that the convergence rate of the resulting algorithms is independent of the penalty parameter and the number of spectral elements Nu, and only mildly dependent on the spectral degree eta via the inf-sup constant. The preconditioners proposed for the whole indefinite system are block-diagonal and block-triangular. Numerical experiments presented in the final section show that these algorithms are a practical and efficient strategy for the iterative solution of the indefinite problems arising from mixed spectral element discretizations of elliptic systems.
NASA Astrophysics Data System (ADS)
Huismann, Immo; Stiller, Jörg; Fröhlich, Jochen
2017-10-01
The paper proposes a novel factorization technique for static condensation of a spectral-element discretization matrix that yields a linear operation count of just 13N multiplications for the residual evaluation, where N is the total number of unknowns. In comparison to previous work it saves a factor larger than 3 and outpaces unfactored variants for all polynomial degrees. Using the new technique as a building block for a preconditioned conjugate gradient method yields linear scaling of the runtime with N which is demonstrated for polynomial degrees from 2 to 32. This makes the spectral-element method cost effective even for low polynomial degrees. Moreover, the dependence of the iterative solution on the element aspect ratio is addressed, showing only a slight increase in the number of iterations for aspect ratios up to 128. Hence, the solver is very robust for practical applications.
NASA Technical Reports Server (NTRS)
Koppenhoefer, Kyle C.; Gullerud, Arne S.; Ruggieri, Claudio; Dodds, Robert H., Jr.; Healy, Brian E.
1998-01-01
This report describes theoretical background material and commands necessary to use the WARP3D finite element code. WARP3D is under continuing development as a research code for the solution of very large-scale, 3-D solid models subjected to static and dynamic loads. Specific features in the code oriented toward the investigation of ductile fracture in metals include a robust finite strain formulation, a general J-integral computation facility (with inertia, face loading), an element extinction facility to model crack growth, nonlinear material models including viscoplastic effects, and the Gurson-Tvergaard dilatant plasticity model for void growth. The nonlinear, dynamic equilibrium equations are solved using an incremental-iterative, implicit formulation with full Newton iterations to eliminate residual nodal forces. The history integration of the nonlinear equations of motion is accomplished with Newmark's Beta method. A central feature of WARP3D involves the use of a linear-preconditioned conjugate gradient (LPCG) solver implemented in an element-by-element format to replace a conventional direct linear equation solver. This software architecture dramatically reduces both the memory requirements and CPU time for very large, nonlinear solid models since formation of the assembled (dynamic) stiffness matrix is avoided. Analyses thus exhibit the numerical stability for large time (load) steps provided by the implicit formulation coupled with the low memory requirements characteristic of an explicit code. In addition to the much lower memory requirements of the LPCG solver, the CPU time required for solution of the linear equations during each Newton iteration is generally one-half or less of the CPU time required for a traditional direct solver. All other computational aspects of the code (element stiffnesses, element strains, stress updating, element internal forces) are implemented in the element-by-element, blocked architecture. This greatly improves vectorization of the code on uni-processor hardware and enables straightforward parallel-vector processing of element blocks on multi-processor hardware.
Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.
Junker, André; Brenner, Karl-Heinz
2018-03-01
The application of rigorous optical simulation algorithms, both in the modal as well as in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach, which, under certain conditions, allows large-size problems to be solved approximation-free. We achieve this in the case of a modal representation by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N³) to O(N log N), enabling a simulation of structures like certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.
NASA Astrophysics Data System (ADS)
Whiteley, J. P.
2017-10-01
Large, incompressible elastic deformations are governed by a system of nonlinear partial differential equations. The finite element discretisation of these partial differential equations yields a system of nonlinear algebraic equations that are usually solved using Newton's method. On each iteration of Newton's method, a linear system must be solved. We exploit the structure of the Jacobian matrix to propose a preconditioner, comprising two steps. The first step is the solution of a relatively small, symmetric, positive definite linear system using the preconditioned conjugate gradient method. This is followed by a small number of multigrid V-cycles for a larger linear system. Through the use of exemplar elastic deformations, the preconditioner is demonstrated to facilitate the iterative solution of the linear systems arising. The number of GMRES iterations required has only a very weak dependence on the number of degrees of freedom of the linear systems.
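A heavily simplified sketch of the two-stage preconditioning idea: an outer GMRES iteration on a 2x2 block system whose preconditioner solves the smaller SPD block with conjugate gradients and approximates the larger block with a diagonal solve standing in for the multigrid V-cycles of the paper; block sizes and matrices are arbitrary placeholders.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n1, n2 = 50, 200
A11 = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n1, n1))   # small SPD block
A22 = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n2, n2))   # large block
B = 0.1 * sp.random(n1, n2, density=0.05, random_state=0)       # coupling block
A = sp.bmat([[A11, B], [B.T, A22]]).tocsr()
b = np.ones(n1 + n2)

def apply_preconditioner(r):
    r1, r2 = r[:n1], r[n1:]
    z1, _ = spla.cg(A11, r1)              # inner conjugate-gradient solve on the SPD block
    z2 = r2 / A22.diagonal()              # crude stand-in for the multigrid V-cycles
    return np.concatenate([z1, z2])

M = spla.LinearOperator((n1 + n2, n1 + n2), matvec=apply_preconditioner)
x, info = spla.gmres(A, b, M=M)
print("GMRES info:", info, " residual:", np.linalg.norm(b - A @ x))
```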
Design of a novel instrument for active neutron interrogation of artillery shells.
Bélanger-Champagne, Camille; Vainionpää, Hannes; Peura, Pauli; Toivonen, Harri; Eerola, Paula; Dendooven, Peter
2017-01-01
The most common explosives can be uniquely identified by measuring the elemental H/N ratio with a precision better than 10%. Monte Carlo simulations were used to design two variants of a new prompt gamma neutron activation instrument that can achieve this precision. The instrument features an intense pulsed neutron generator with precise timing. Measuring the hydrogen peak from the target explosive is especially challenging because the instrument itself contains hydrogen, which is needed for neutron moderation and shielding. By iterative design optimization, the fraction of the hydrogen peak counts coming from the explosive under interrogation increased from 53 (+7/−7)% to 74 (+8/−10)% (statistical only) for the benchmark design. In the optimized design variants, the hydrogen signal from a high-explosive shell can be measured to a statistics-only precision better than 1% in less than 30 minutes for an average neutron production yield of 10⁹ n/s.
[Numerical finite element modeling of custom car seat using computer aided design].
Huang, Xuqi; Singare, Sekou
2014-02-01
A good cushion can not only provide the sitter with high comfort, but also control the distribution of hip pressure to reduce the incidence of disease. The purpose of this study is to introduce a computer-aided design (CAD) modeling method for the buttocks-cushion system using numerical finite element (FE) simulation to predict the pressure distribution on the buttocks-cushion interface. The buttock and cushion model geometries were acquired with a laser scanner, and CAD software was used to create the solid model. The FE model of a true seated individual was developed using ANSYS software (ANSYS Inc, Canonsburg, PA). The model is divided into two parts, i.e. the cushion model made of foam and the buttock model represented by the pelvis covered with a soft tissue layer. Loading simulations consisted of imposing a vertical force of 520 N on the pelvis, corresponding to the weight of the user's upper extremity, and then iteratively solving the system.
Optimized growth and reorientation of anisotropic material based on evolution equations
NASA Astrophysics Data System (ADS)
Jantos, Dustin R.; Junker, Philipp; Hackl, Klaus
2018-07-01
Modern high-performance materials have inherent anisotropic elastic properties. The local material orientation can thus be considered to be an additional design variable for the topology optimization of structures containing such materials. In our previous work, we introduced a variational growth approach to topology optimization for isotropic, linear-elastic materials. We solved the optimization problem purely by application of Hamilton's principle. In this way, we were able to determine an evolution equation for the spatial distribution of mass density, which can be evaluated in an iterative process within a solitary finite element environment. We now add the local material orientation described by a set of three Euler angles as additional design variables into the three-dimensional model. This leads to three additional evolution equations that can be separately evaluated for each (material) point. Thus, no additional field unknown within the finite element approach is needed, and the evolution of the spatial distribution of mass density and the evolution of the Euler angles can be evaluated simultaneously.
NASA Astrophysics Data System (ADS)
Kholish Rumayshah, Khodijah; Prayoga, Aditya; Mochammad Agoes Moelyadi, Ing., Dr.
2018-04-01
Research on a High Altitude Long Endurance (HALE) Unmanned Aerial Vehicle (UAV) is currently being conducted at Bandung Institute of Technology (ITB). Previously, the 1st generation HALE UAV ITB used balsa wood for most of its structure. Flight tests resulted in broken wings due to extreme side-wind, which caused large bending of the high-aspect-ratio wing. This paper presents a study on the design of the 2nd generation HALE UAV ITB, which uses composite materials to substitute balsa wood in some critical parts of the wing structure. The finite element software ABAQUS/CAE is used to predict the stresses and deformations that occur. Tsai-Wu and von Mises failure criteria were applied to check whether the structure failed or not. The initial configuration resulted in material failure, so a second iteration proposed a new configuration, which was proven safe against the given loads.
ITER Construction—Plant System Integration
NASA Astrophysics Data System (ADS)
Tada, E.; Matsuda, S.
2009-02-01
This brief paper introduces how ITER will be built through international collaboration. The ITER Organization plays a central role in constructing ITER and leading it into operation. Since most of the ITER components are to be provided in kind by the member countries, integrated project management should be scoped in advance of the real work; this includes the design, procurement, system assembly, testing, licensing and commissioning of ITER.
Development of the ITER magnetic diagnostic set and specification.
Vayakis, G; Arshad, S; Delhom, D; Encheva, A; Giacomin, T; Jones, L; Patel, K M; Pérez-Lasala, M; Portales, M; Prieto, D; Sartori, F; Simrock, S; Snipes, J A; Udintsev, V S; Watts, C; Winter, A; Zabeo, L
2012-10-01
ITER magnetic diagnostics are now in their detailed design and R&D phase. They have passed their conceptual design reviews and a working diagnostic specification has been prepared aimed at the ITER project requirements. This paper highlights specific design progress, in particular, for the in-vessel coils, steady state sensors, saddle loops and divertor sensors. Key changes in the measurement specifications, and a working concept of software and electronics are also outlined.
Wang, G; Doyle, E J; Peebles, W A
2016-11-01
A monostatic antenna array arrangement has been designed for the microwave front-end of the ITER low-field-side reflectometer (LFSR) system. This paper presents details of the antenna coupling coefficient analyses performed using GENRAY, a 3-D ray tracing code, to evaluate the plasma height accommodation capability of such an antenna array design. Utilizing modeled data for the plasma equilibrium and profiles for the ITER baseline and half-field scenarios, a design study was performed for measurement locations varying from the plasma edge to inside the top of the pedestal. A front-end antenna configuration is recommended for the ITER LFSR system based on the results of this coupling analysis.
Conceptual Design Oriented Wing Structural Analysis and Optimization
NASA Technical Reports Server (NTRS)
Lau, May Yuen
1996-01-01
Airplane optimization has always been the goal of airplane designers. In the conceptual design phase, a designer's goal could be tradeoffs between maximum structural integrity, minimum aerodynamic drag, or maximum stability and control, many times achieved separately. Bringing all of these factors into an iterative preliminary design procedure was time-consuming, tedious, and not always accurate. For example, the final weight estimate would often be based upon statistical data from past airplanes. The new design would be classified based on gross characteristics, such as number of engines, wingspan, etc., to see which airplanes of the past most closely resembled the new design. This procedure works well for conventional airplane designs, but not very well for new innovative designs. With the computing power of today, new methods are emerging for the conceptual design phase of airplanes. Using finite element methods, computational fluid dynamics, and other computer techniques, designers can make very accurate disciplinary analyses of an airplane design. These tools are computationally intensive, and when used repeatedly, they consume a great deal of computing time. In order to reduce the time required to analyze a design and still bring together all of the disciplines (such as structures, aerodynamics, and controls) into the analysis, simplified design computer analyses are linked together into one computer program. These design codes are very efficient for conceptual design. The work in this thesis is focused on a finite-element-based, conceptual-design-oriented structural synthesis capability (CDOSS) tailored to be linked into ACSYNT.
Topology optimization of natural convection: Flow in a differentially heated cavity
NASA Astrophysics Data System (ADS)
Saglietti, Clio; Schlatter, Philipp; Berggren, Martin; Henningson, Dan
2017-11-01
The goal of the present work is to develop methods for optimization of the design of natural convection cooled heat sinks, using resolved simulation of both fluid flow and heat transfer. We rely on mathematical programming techniques combined with direct numerical simulations in order to iteratively update the topology of a solid structure towards optimality, i.e. until the design yielding the best performance is found, while satisfying a specific set of constraints. The investigated test case is a two-dimensional differentially heated cavity, in which the two vertical walls are held at different temperatures. The buoyancy force induces a swirling convective flow around a solid structure, whose topology is optimized to maximize the heat flux through the cavity. We rely on the spectral-element code Nek5000 to compute a high-order accurate solution of the natural convection flow arising from the conjugate heat transfer in the cavity. The laminar, steady-state solution of the problem is evaluated with a time-marching scheme that has an increased convergence rate; the actual iterative optimization is obtained using a steepest-descent algorithm, and the gradients are conveniently computed using the continuous adjoint equations for convective heat transfer.
NASA Astrophysics Data System (ADS)
Liansheng, Sui; Yin, Cheng; Bing, Li; Ailing, Tian; Krishna Asundi, Anand
2018-07-01
A novel computational ghost imaging scheme based on specially designed phase-only masks, which can be efficiently applied to encrypt an original image into a series of measured intensities, is proposed in this paper. First, a Hadamard matrix with a certain order is generated, where the number of elements in each row is equal to the size of the original image to be encrypted. Each row of the matrix is rearranged into the corresponding 2D pattern. Then, each pattern is encoded into the phase-only masks by making use of an iterative phase retrieval algorithm. These specially designed masks can be wholly or partially used in the process of computational ghost imaging to reconstruct the original information with high quality. When a significantly small number of phase-only masks are used to record the measured intensities in a single-pixel bucket detector, the information can be authenticated without clear visualization by calculating the nonlinear correlation map between the original image and its reconstruction. The results illustrate the feasibility and effectiveness of the proposed computational ghost imaging mechanism, which will provide an effective alternative for enriching the related research on the computational ghost imaging technique.
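The pattern-based part of the scheme can be sketched as follows (the phase-retrieval step that encodes each pattern into a phase-only mask is omitted): Hadamard rows serve as illumination patterns, single-pixel bucket intensities are simulated, and the image is recovered by correlating the patterns with the measured intensities. Image size and content are arbitrary placeholders.

```python
import numpy as np
from scipy.linalg import hadamard

N = 16                                   # the image is N x N pixels
H = hadamard(N * N)                      # one illumination pattern per row
patterns = (H + 1) / 2.0                 # shift entries from {-1, +1} to {0, 1}

img = np.zeros((N, N))                   # toy "original image" to be measured
img[4:12, 4:12] = 1.0
obj = img.ravel()

bucket = patterns @ obj                  # simulated single-pixel bucket intensities

# correlation reconstruction: <(B - <B>) * pattern> over the pattern ensemble
recon = ((bucket - bucket.mean())[:, None] * patterns).mean(axis=0).reshape(N, N)
print("mean reconstruction error:", np.abs(recon / recon.max() - img).mean())
```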
NASA Technical Reports Server (NTRS)
Barnes, Bruce W.; Sessions, Alaric M.; Beyon, Jeffrey; Petway, Larry B.
2014-01-01
Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration was able to reduce the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky, over-performing components with smaller components custom-designed for the power system. The existing power system was analyzed to rank components in terms of inefficiency, power dissipation, footprint and mass. Design considerations and priorities are compared along with the results of each design iteration. Overall power system improvements are summarized for design implementations.
Probabilistic Structures Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
The basic formulation for probabilistic finite element analysis is described and demonstrated on a few sample problems. This formulation is based on iterative perturbation that uses the factorized stiffness of the unperturbed system as the iteration preconditioner for obtaining the solution to the perturbed problem. This approach eliminates the need to compute, store and manipulate explicit partial derivatives of the element matrices and force vector, which not only reduces memory usage considerably, but also greatly simplifies the coding and validation tasks. All aspects of the proposed formulation were combined in a demonstration problem using a simplified model of a curved turbine blade discretized with 48 shell elements, and having random pressure and temperature fields with partial correlation, random uniform thickness, and random stiffness at the root.
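The iterative-perturbation idea can be sketched on a small dense system: the factorization of the unperturbed stiffness K0 is reused, without refactoring, to solve the perturbed system (K0 + dK)x = f by fixed-point iteration. The matrices below are random SPD stand-ins, not the turbine blade model.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
n = 100
Q = rng.standard_normal((n, n))
K0 = Q @ Q.T + n * np.eye(n)             # unperturbed stiffness (symmetric positive definite)
dK = 0.05 * (Q + Q.T)                    # small random perturbation
f = rng.standard_normal(n)

factor = cho_factor(K0)                  # factorize K0 once
x = cho_solve(factor, f)                 # unperturbed solution as the starting iterate
for k in range(50):
    x_new = cho_solve(factor, f - dK @ x)        # reuse the factorization as the iteration operator
    if np.linalg.norm(x_new - x) < 1e-10 * np.linalg.norm(x_new):
        x = x_new
        break
    x = x_new

print("iterations:", k + 1,
      " error vs direct solve:", np.linalg.norm(x - np.linalg.solve(K0 + dK, f)))
```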
NASA Astrophysics Data System (ADS)
Fursdon, M.; Barrett, T.; Domptail, F.; Evans, Ll M.; Luzginova, N.; Greuner, N. H.; You, J.-H.; Li, M.; Richou, M.; Gallay, F.; Visca, E.
2017-12-01
The design and development of a novel plasma facing component (for fusion power plants) is described. The component uses the existing 'monoblock' construction which consists of a tungsten 'block' joined via a copper interlayer to a through CuCrZr cooling pipe. In the new concept the interlayer stiffness and conductivity properties are tuned so that stress in the principal structural element of the component (the cooling pipe) is reduced. Following initial trials with off-the-shelf materials, the concept was realized by machined features in an otherwise solid copper interlayer. The shape and distribution of the features were tuned by finite element analyses subject to the ITER Structural Design Criteria for In-vessel Components (SDC-IC) design rules. Proof of concept mock-ups were manufactured using a two-stage brazing process verified by tomography and micrographic inspection. Full assemblies were inspected using ultrasound and thermographic (SATIR) test methods at ENEA and CEA respectively. High heat flux tests using IPP's GLADIS facility showed that 200 cycles at 20 MW m⁻² and five cycles at 25 MW m⁻² could be sustained without apparent component damage. Further testing and component development are planned.
Aerodynamic optimization by simultaneously updating flow variables and design parameters
NASA Technical Reports Server (NTRS)
Rizk, M. H.
1990-01-01
The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
Multidisciplinary systems optimization by linear decomposition
NASA Technical Reports Server (NTRS)
Sobieski, J.
1984-01-01
In a typical design process major decisions are made sequentially. An illustrated example is given for an aircraft design in which the aerodynamic shape is usually decided first, then the airframe is sized for strength and so forth. An analogous sequence could be laid out for any other major industrial product, for instance, a ship. The loops in the discipline boxes symbolize iterative design improvements carried out within the confines of a single engineering discipline, or subsystem. The loops spanning several boxes depict multidisciplinary design improvement iterations. Omitted for graphical simplicity is parallelism of the disciplinary subtasks. The parallelism is important in order to develop a broad workfront necessary to shorten the design time. If all the intradisciplinary and interdisciplinary iterations were carried out to convergence, the process could yield a numerically optimal design. However, it usually stops short of that because of time and money limitations. This is especially true for the interdisciplinary iterations.
Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine
NASA Astrophysics Data System (ADS)
Sharma, Gulshan B.; Robertson, Douglas D.
2013-07-01
Shoulder arthroplasty success has been attributed to many factors, including bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design should withstand a lifetime of use. Finite element (FE) analyses have been extensively used to study stresses and strains produced in implants and bone. However, these static analyses only measure a moment in time and not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three-dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally, simulating bone remodeling using an intact human scapula, initially resetting the scapular bone material properties to be uniform, numerically simulating sequential loading, and comparing the bone remodeling simulation results to the actual scapula's material properties. A three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint load and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element's remodeling stimulus was compared to its corresponding reference stimulus and its material properties modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent density were plotted and compared. The locations of high and low predicted bone density were comparable to those of the actual specimen. High predicted bone density was greater than in the actual specimen; low predicted bone density was lower than in the actual specimen. Differences were probably due to applied muscle and joint reaction loads, boundary conditions, and values of constants used. Work is underway to study this. Nonetheless, the results demonstrate three-dimensional bone remodeling simulation validity and potential. Such adaptive predictions take physiological bone remodeling simulations one step closer to reality. Computational analyses are needed that integrate biological remodeling rules and predict how bone will respond over time. We expect the combination of computational static stress analyses together with adaptive bone remodeling simulations to become effective tools for regenerative medicine research.
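A toy version of such a remodeling loop is sketched below for a one-dimensional bar of elements under an axial load rather than a scapula FE model; the density-to-modulus law, reference stimulus and update rule are assumptions chosen only to make the loop concrete.

```python
import numpy as np

n_el = 20
force = 1000.0                                 # N, constant axial load
area = np.linspace(1e-4, 3e-4, n_el)           # m^2, varying cross-section per element
rho = np.full(n_el, 800.0)                     # kg/m^3, start from a homogeneous density
rho_min, rho_max = 100.0, 1800.0
stimulus_ref = 31.0                            # J/kg, assumed reference stimulus

for iteration in range(100):
    E = 3.0e9 * (rho / 1800.0) ** 2            # assumed density-to-modulus power law, Pa
    stress = force / area                      # statically determinate bar: element stress, Pa
    sed = stress**2 / (2.0 * E)                # strain energy density, J/m^3
    stimulus = sed / rho                       # remodeling stimulus per unit mass, J/kg
    # nudge density up where the stimulus exceeds the reference, down otherwise
    rho *= 1.0 + 0.1 * np.tanh((stimulus - stimulus_ref) / stimulus_ref)
    rho = np.clip(rho, rho_min, rho_max)

print("converged density range: %.0f - %.0f kg/m^3" % (rho.min(), rho.max()))
```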
Design Optimization Programmable Calculators versus Campus Computers.
ERIC Educational Resources Information Center
Savage, Michael
1982-01-01
A hypothetical design optimization problem and technical information on the three design parameters are presented. Although this nested iteration problem can be solved on a computer (flow diagram provided), this article suggests that several hand held calculators can be used to perform the same design iteration. (SK)
Optimization of a Lunar Pallet Lander Reinforcement Structure Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Burt, Adam O.; Hull, Patrick V.
2014-01-01
This paper presents a design automation process using optimization via a genetic algorithm to design the conceptual structure of a Lunar Pallet Lander. The goal is to determine a design that will have the primary natural frequencies at or above a target value as well as minimize the total mass. Several iterations of the process are presented. First, a concept optimization is performed to determine what class of structure would produce suitable candidate designs. From this a stiffened sheet metal approach was selected leading to optimization of beam placement through generating a two-dimensional mesh and varying the physical location of reinforcing beams. Finally, the design space is reformulated as a binary problem using 1-dimensional beam elements to truncate the design space to allow faster convergence and additional mechanical failure criteria to be included in the optimization responses. Results are presented for each design space configuration. The final flight design was derived from these results.
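A minimal sketch of the binary genetic-algorithm formulation: each gene switches one candidate reinforcing beam on or off, and the fitness rewards a toy stiffness-to-mass frequency proxy while penalising mass. The fitness model is a placeholder, not a finite element solve.

```python
import random

random.seed(0)
N_BEAMS, POP, GENS = 24, 40, 60
gain = [random.uniform(0.5, 2.0) for _ in range(N_BEAMS)]   # stiffness gain per candidate beam
mass = [random.uniform(0.5, 2.0) for _ in range(N_BEAMS)]   # mass cost per candidate beam

def fitness(genome):
    stiffness = sum(g for g, on in zip(gain, genome) if on)
    total_mass = 1.0 + sum(m for m, on in zip(mass, genome) if on)
    freq_proxy = (stiffness / total_mass) ** 0.5            # ~ sqrt(k/m) frequency surrogate
    return freq_proxy - 0.05 * total_mass                   # penalise heavy designs

population = [[random.randint(0, 1) for _ in range(N_BEAMS)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]
    children = []
    while len(children) < POP - len(survivors):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, N_BEAMS)                  # one-point crossover
        child = a[:cut] + b[cut:]
        child[random.randrange(N_BEAMS)] ^= 1               # single-bit mutation
        children.append(child)
    population = survivors + children

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 3), " beams used:", sum(best))
```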
Iteration in Early-Elementary Engineering Design
NASA Astrophysics Data System (ADS)
McFarland Kendall, Amber Leigh
K-12 standards and curricula are beginning to include engineering design as a key practice within Science Technology Engineering and Mathematics (STEM) education. However, there is little research on how the youngest students engage in engineering design within the elementary classroom. This dissertation focuses on iteration as an essential aspect of engineering design, and because research at the college and professional level suggests iteration improves the designer's understanding of problems and the quality of design solutions. My research presents qualitative case studies of students in kindergarten and third-grade as they engage in classroom engineering design challenges which integrate with traditional curricula standards in mathematics, science, and literature. I discuss my results through the lens of activity theory, emphasizing practices, goals, and mediating resources. Through three chapters, I provide insight into how early-elementary students iterate upon their designs by characterizing the ways in which lesson design impacts testing and revision, by analyzing the plan-driven and experimentation-driven approaches that student groups use when solving engineering design challenges, and by investigating how students attend to constraints within the challenge. I connect these findings to teacher practices and curriculum design in order to suggest methods of promoting iteration within open-ended, classroom-based engineering design challenges. This dissertation contributes to the field of engineering education by providing evidence of productive engineering practices in young students and support for the value of engineering design challenges in developing students' participation and agency in these practices.
The Design Implementation Framework: Iterative Design from the Lab to the Classroom
ERIC Educational Resources Information Center
Stone, Melissa L.; Kent, Kevin M.; Roscoe, Rod D.; Corley, Kathleen M.; Allen, Laura K.; McNamara, Danielle S.
2017-01-01
This chapter explores three broad principles of user-centered design methodologies: participatory design, iteration, and usability considerations. The authors highlight the importance of considering teachers as a prominent type of ITS end user, by describing the barriers teachers face as users and their role in educational technology design. To…
Progress in the Design and Development of the ITER Low-Field Side Reflectometer (LFSR) System
NASA Astrophysics Data System (ADS)
Doyle, E. J.; Wang, G.; Peebles, W. A.; US LFSR Team
2015-11-01
The US has formed a team, comprising personnel from PPPL, ORNL, GA and UCLA, to develop the LFSR system for ITER. The LFSR system will contribute to the measurement of a number of plasma parameters on ITER, including edge plasma electron density profiles; it will also monitor Edge Localized Modes (ELMs) and L-H transitions, and provide physics measurements relating to high frequency instabilities, plasma flows, and other density transients. An overview of the status of design activities and component testing for the system will be presented. Since the 2011 conceptual design review, the number of microwave transmission lines (TLs) and antennas has been reduced from twelve (12) to seven (7) due to space constraints in the ITER Tokamak Port Plug. This change has required a reconfiguration and recalculation of the performance of the front-end antenna design, which now includes the use of monostatic transmission lines and antennas. Work supported by US ITER/PPPL Subcontracts S013252-C and S012340, and PO 4500051400 from GA to UCLA.
NASA Astrophysics Data System (ADS)
Cao, Huijun; Cao, Yong; Chu, Yuchuan; He, Xiaoming; Lin, Tao
2018-06-01
Surface evolution is an unavoidable issue in engineering plasma applications. In this article an iterative method for modeling plasma-surface interactions with a moving interface is proposed and validated. In this method, the plasma dynamics is simulated by an immersed finite element particle-in-cell (IFE-PIC) method, and the surface evolution is modeled by the Huygens wavelet method, which is coupled with the iteration of the IFE-PIC method. Numerical experiments, including prototypical engineering applications such as the erosion of a Hall thruster channel wall, are presented to demonstrate the features of this Huygens IFE-PIC method for simulating dynamic plasma-surface interactions.
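The coupling loop described above can be pictured with a minimal Python sketch. The plasma solve and the Huygens wavelet surface update are replaced here by crude stand-ins (a fabricated flux profile and a simple advance of the surface nodes along their local normals); all function names, parameter values, and the geometry are illustrative assumptions and do not reproduce the actual IFE-PIC or wavelet algorithms.

    import numpy as np

    def plasma_step(surface):
        # Stand-in for an IFE-PIC solve: return a fake ion flux per surface node,
        # here simply increasing toward the rightmost nodes of the wall.
        x = surface[:, 0]
        return 1.0 + 0.5 * (x - x.min()) / (x.max() - x.min())

    def surface_step(surface, flux, dt, yield_per_flux=0.01):
        # Stand-in for the Huygens wavelet update: advance each node along its
        # local normal by an erosion distance proportional to the incident flux.
        tangents = np.gradient(surface, axis=0)
        tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
        normals = np.column_stack([-tangents[:, 1], tangents[:, 0]])
        return surface + normals * (yield_per_flux * flux * dt)[:, None]

    # Initial wall profile (a straight channel wall) and the coupled iteration.
    surface = np.column_stack([np.linspace(0.0, 1.0, 51), np.zeros(51)])
    for step in range(100):
        flux = plasma_step(surface)                     # plasma dynamics on the current geometry
        surface = surface_step(surface, flux, dt=0.1)   # geometry evolves and is fed back to the plasma solve
    print("max recession:", surface[:, 1].max())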
ITER ECE Diagnostic: Design Progress of IN-DA and the diagnostic role for Physics
NASA Astrophysics Data System (ADS)
Pandya, H. K. B.; Kumar, Ravinder; Danani, S.; Shrishail, P.; Thomas, Sajal; Kumar, Vinay; Taylor, G.; Khodak, A.; Rowan, W. L.; Houshmandyar, S.; Udintsev, V. S.; Casal, N.; Walsh, M. J.
2017-04-01
The ECE Diagnostic system in ITER will be used for measuring the electron temperature profile evolution, electron temperature fluctuations, the runaway electron spectrum, and the radiated power in the electron cyclotron frequency range (70-1000 GHz). These measurements will be used for advanced real-time plasma control (e.g. steering the electron cyclotron heating beams) and for physics studies. The scope of the Indian Domestic Agency (IN-DA) is to design and develop the polarizer splitter units; the broadband (70 to 1000 GHz) transmission lines; a high temperature calibration source in the Diagnostics Hall; two Michelson interferometers (70 to 1000 GHz); and a 122-230 GHz radiometer. The remainder of the ITER ECE diagnostic system is the responsibility of the US domestic agency and the ITER Organization (IO). The design needs to conform to the ITER Organization’s strict requirements for reliability, availability, maintainability and inspectability. Progress in the design and development of the various subsystems and components, considering the associated engineering challenges and solutions, is discussed in this paper. This paper will also highlight how the various ECE measurements can enhance understanding of plasma physics in ITER.
NASA Astrophysics Data System (ADS)
Chen, Lei; Liu, Xiang; Lian, Youyun; Cai, Laizhong
2015-09-01
The hypervapotron (HV), an enhanced heat transfer technique, will be used for ITER divertor components in the dome region as well as for the enhanced heat flux first wall panels. W-Cu brazing technology has been developed at SWIP (Southwestern Institute of Physics), and one W/CuCrZr/316LN component of 450 mm×52 mm×166 mm with HV cooling channels will be fabricated for high heat flux (HHF) tests. Before that, an analysis was carried out to optimize the structure of the divertor component elements. ANSYS-CFX was used for the CFD analysis and ABAQUS was adopted for the thermal-mechanical calculations. The commercial code FE-SAFE was used to compute the fatigue life of the component. The tile size, the thickness of the tungsten tiles, and the slit width between tungsten tiles were optimized, and the HHF performance under International Thermonuclear Experimental Reactor (ITER) loading conditions was simulated. A brand-new tokamak, HL-2M, with an advanced divertor configuration is under construction at SWIP, where ITER-like flat-tile divertor components are adopted. The optimized design is expected to supply valuable data for the HL-2M tokamak. Supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2011GB110001 and 2011GB110004).
2017-10-01
…result in missed opportunities to intervene to prevent chronic mental and physical health problems. The project aims to: (1) iteratively design a new web-based PTS and Motivational Interviewing…
Boundary formulations for sensitivity analysis without matrix derivatives
NASA Technical Reports Server (NTRS)
Kane, J. H.; Guru Prasad, K.
1993-01-01
A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique computes economical response of univariately perturbed models without factoring perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.
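A generic linear-algebra analog of this reanalysis idea is sketched below: the factorization of the unperturbed system is reused as a preconditioner in a stationary iteration for the perturbed system, so the perturbed matrix is never factored, and a univariate finite-difference sensitivity follows. The matrices here are random stand-ins rather than BEA coefficient matrices, and the routine names are illustrative assumptions.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def reanalysis_solve(lu_piv, A_pert, b, x0, tol=1e-10, max_it=50):
        # Iterative reanalysis: solve A_pert x = b using the factorization of the
        # unperturbed matrix as a preconditioner, so A_pert is never factored.
        x = x0.copy()
        for _ in range(max_it):
            r = b - A_pert @ x
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            x += lu_solve(lu_piv, r)
        return x

    rng = np.random.default_rng(0)
    n, h = 40, 1e-4
    A = np.eye(n) * n + rng.standard_normal((n, n))    # stand-in for a BEA system matrix
    b = rng.standard_normal(n)
    dA = rng.standard_normal((n, n))                   # perturbation direction (a shape change)

    lu_piv = lu_factor(A)                              # factor the baseline system once
    u = lu_solve(lu_piv, b)                            # baseline response
    u_pert = reanalysis_solve(lu_piv, A + h * dA, b, u)  # perturbed response, no new factorization
    du_dp = (u_pert - u) / h                           # univariate finite-difference sensitivity
    print(du_dp[:3])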
Effect of Geometrical Imperfection on Buckling Failure of ITER VVPSS Tank
NASA Astrophysics Data System (ADS)
Jha, Saroj Kumar; Gupta, Girish Kumar; Pandey, Manish Kumar; Bhattacharya, Avik; Jogi, Gaurav; Bhardwaj, Anil Kumar
2017-04-01
The ‘Vacuum Vessel Pressure Suppression System’ (VVPSS) is the part of the ITER machine designed to protect the ITER Vacuum Vessel and its connected systems from an over-pressure situation. It comprises a partially evacuated stainless steel tank approximately 46 m long, 6 m in diameter, and 30 mm thick. It is to hold approximately 675 tonnes of water at room temperature to condense the steam resulting from adverse water leakage into the Vacuum Vessel chamber. For any vacuum vessel, geometrical imperfection has a significant effect on buckling failure and structural integrity. The major geometrical imperfection in the VVPSS tank depends on form tolerances. To study the effect of geometrical imperfection on buckling failure of the VVPSS tank, finite element analysis (FEA) has been performed in line with the ‘design by analysis’ method of ASME Section VIII, Division 2, Part 5 [1]. Linear buckling analysis has been performed to obtain the buckled shape and displacement. Geometrical imperfection due to form tolerance is incorporated in the FEA model of the VVPSS tank by scaling the resulting buckled shape by a factor of 60. This buckled-shape model is used as the input geometry for plastic collapse and buckling failure assessment. Plastic collapse and buckling failure of the VVPSS tank have been assessed using the elastic-plastic analysis method. This analysis has been performed for different values of form tolerance. The results of the analysis show that the load proportionality factor (LPF) varies inversely with form tolerance: for higher values of form tolerance the LPF reduces significantly, accompanied by high values of displacement.
Fall prevention walker during rehabilitation
NASA Astrophysics Data System (ADS)
Tee, Kian Sek; E, Chun Zhi; Saim, Hashim; Zakaria, Wan Nurshazwani Wan; Khialdin, Safinaz Binti Mohd; Isa, Hazlita; Awad, M. I.; Soon, Chin Fhong
2017-09-01
This paper proposes the design of a walker for the prevention of falls among the elderly or patients who use a walker for assistance during rehabilitation. Falls happen due to impaired balance or gait problems. The assistive device is designed by applying stability concepts, and an accelerometric fall detection system is included. This system acts as an alerting device that acquires body acceleration data and detects falls; the recorded accelerometric data could also be useful for further assessment. The structural strength of the walker was verified through iterations of simulation using finite element analysis before fabrication. Experiments were conducted to identify fall patterns using the accelerometric data. The design process and the fall-pattern detection demonstrate a walker that can support the user reliably and alert a helper, thus protecting users from injuries due to falls and unattended situations.
Xing, Li; McDonald, Joseph J; Kolodziej, Steve A; Kurumbail, Ravi G; Williams, Jennifer M; Warren, Chad J; O'Neal, Janet M; Skepner, Jill E; Roberds, Steven L
2011-03-10
Structure-based virtual screening was applied to design combinatorial libraries to discover novel and potent soluble epoxide hydrolase (sEH) inhibitors. X-ray crystal structures revealed unique interactions for a benzoxazole template in addition to the conserved hydrogen bonds with the catalytic machinery of sEH. By exploiting these favorable binding elements, two iterations of library design based on amide coupling were employed, guided principally by the docking results of the enumerated virtual products. Biological screening of the libraries demonstrated hit rates as high as 90%, of which over two dozen compounds were single-digit nanomolar sEH inhibitors by IC(50) determination. In total, the library design and synthesis produced more than 300 submicromolar sEH inhibitors. Activities in cellular systems were consistent with the biochemical measurements. The SAR understanding of the benzoxazole template provides valuable insights into the discovery of novel sEH inhibitors as therapeutic agents.
Design of a Smart Ultrasonic Transducer for Interconnecting Machine Applications
Yan, Tian-Hong; Wang, Wei; Chen, Xue-Dong; Li, Qing; Xu, Chang
2009-01-01
A high-frequency ultrasonic transducer for copper or gold wire bonding has been designed, analyzed, prototyped and tested. Modeling techniques were used in the design phase and a practical design procedure was established and used. The transducer was decomposed into its elementary components. For each component, an initial design was obtained from simulations using a finite element model (FEM). Simulated ultrasonic modules were built and characterized experimentally through Laser Doppler Vibrometer (LDV) measurements and electrical resonance spectra. By comparison with the experimental data, the FEM could be iteratively adjusted and updated. Having achieved a highly predictive FEM of the whole transducer, the design parameters could be tuned for the desired application, and the behavior of the transducer mounted on the wire bonder with its complete clamping holder was then calculated with the FEM. The approach used to mount ultrasonic transducers on wire bonding machines is also of major importance for wire bonding in modern electronic packaging. The presented method can lead to a nearly completely decoupled clamper design between the transducer and the wire bonder. PMID:22408564
On the safety of ITER accelerators
Li, Ge
2013-01-01
Three 1 MV/40 A accelerators for the heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. The accelerators will generate −1 MV, 1 h long-pulse ion beams to be neutralised for plasma heating. Because vacuum sparking occurs frequently in the accelerators, snubbers are used to limit the fault arc current and improve ITER safety. However, recent analyses of the reference snubber design have raised concerns. A general nonlinear transformer theory is developed for the snubber to unify the different earlier snubber design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind the transformer theory and map out a reliable snubber design for a safer ITER. PMID:24008267
Parallel fast multipole boundary element method applied to computational homogenization
NASA Astrophysics Data System (ADS)
Ptaszny, Jacek
2018-01-01
In the present work, a fast multipole boundary element method (FMBEM) and a parallel computer code for the 3D elasticity problem are developed and applied to the computational homogenization of a solid containing spherical voids. The system of equations is solved using the GMRES iterative solver. The boundary of the body is discretized using quadrilateral serendipity elements with adaptive numerical integration. Operations related to a single GMRES iteration, performed by traversing the corresponding tree structure upwards and downwards, are parallelized using the OpenMP standard. The assignment of tasks to threads is based on the assumption that the tree nodes at which the moment transformations are initialized can be partitioned into disjoint sets of equal or approximately equal size and assigned to the threads. The achieved speedup as a function of the number of threads is examined.
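The solver structure (GMRES driven by a matrix-free product) can be indicated with a short SciPy sketch. The fast multipole evaluation and the OpenMP parallelization are not reproduced; the dense stand-in matrix and the names below are illustrative assumptions.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    rng = np.random.default_rng(1)
    n = 500
    A = np.eye(n) + 0.01 * rng.standard_normal((n, n))  # stand-in for the dense BEM matrix
    b = rng.standard_normal(n)

    def fast_matvec(x):
        # In an FMBEM code this product would be evaluated approximately by
        # upward/downward passes over the cluster tree instead of a dense product.
        return A @ x

    op = LinearOperator((n, n), matvec=fast_matvec)
    x, info = gmres(op, b)                               # default GMRES tolerance
    print("converged" if info == 0 else "not converged", np.linalg.norm(A @ x - b))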
A constrained modulus reconstruction technique for breast cancer assessment.
Samani, A; Bishop, J; Plewes, D B
2001-09-01
A reconstruction technique for the breast tissue elasticity modulus is described. The technique assumes that the geometry of normal and suspicious tissues is available from a contrast-enhanced magnetic resonance image and that the modulus is constant throughout each tissue volume. The technique, which uses quasi-static strain data, is iterative, with each iteration involving modulus updating followed by stress calculation. Breast mechanical stimulation is assumed to be applied by two rigid compression plates. Stress is therefore calculated using the finite element method based on the well-controlled boundary conditions of the compression plates. Using the calculated stress and the measured strain, modulus updating is done element by element based on Hooke's law. Breast tissue modulus reconstruction using simulated data and phantom modulus reconstruction using experimental data indicate that the technique is robust.
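A deliberately simplified one-dimensional analog of the iteration (layers in series between rigid compression plates) is sketched below; with displacement-only boundary data the modulus is recovered up to a scale factor, so relative values are compared. The layer thicknesses, moduli, and synthetic "measured" strains are illustrative assumptions, and the real method operates on full 3D finite element models.

    import numpy as np

    # Simplified 1D analog: tissue layers in series between rigid compression
    # plates with a prescribed plate displacement.
    L = np.array([10.0, 5.0, 10.0, 15.0])        # layer thicknesses (mm)
    E_true = np.array([20.0, 80.0, 20.0, 30.0])  # true moduli (kPa); the stiff layer mimics a lesion
    d_total = 2.0                                 # prescribed plate displacement (mm)

    # Synthetic "measured" strains: in series the stress is uniform.
    sigma_true = d_total / np.sum(L / E_true)
    eps_meas = sigma_true / E_true

    E = np.full_like(E_true, 50.0)                # initial uniform modulus guess
    for it in range(10):
        sigma = d_total / np.sum(L / E)           # stress from current moduli and the plate boundary condition
        E = sigma / eps_meas                      # element-by-element Hooke's-law update

    # With displacement-only boundary data the modulus is recovered up to a
    # scale factor, so compare relative distributions.
    print(np.round(E / E[0], 3), np.round(E_true / E_true[0], 3))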
Active control for stabilization of neoclassical tearing modes
NASA Astrophysics Data System (ADS)
Humphreys, D. A.; Ferron, J. R.; La Haye, R. J.; Luce, T. C.; Petty, C. C.; Prater, R.; Welander, A. S.
2006-05-01
This work describes active control algorithms used by DIII-D [J. L. Luxon, Nucl. Fusion 42, 614 (2002)] to stabilize and maintain suppression of 3/2 or 2/1 neoclassical tearing modes (NTMs) by application of electron cyclotron current drive (ECCD) at the rational q surface. The DIII-D NTM control system can determine the correct q-surface/ECCD alignment and stabilize existing modes within 100-500 ms of activation, or prevent mode growth with preemptive application of ECCD, in both cases enabling stable operation at normalized beta values above 3.5. Because NTMs can limit performance or cause plasma-terminating disruptions in tokamaks, their stabilization is essential to the high performance operation of ITER [R. Aymar et al., ITER Joint Central Team, ITER Home Teams, Nucl. Fusion 41, 1301 (2001)]. The DIII-D NTM control system has demonstrated many elements of an eventual ITER solution, including general algorithms for robust detection of q-surface/ECCD alignment and for real-time maintenance of alignment following the disappearance of the mode. This latter capability, unique to DIII-D, is based on real-time reconstruction of q-surface geometry by a Grad-Shafranov solver using external magnetics and internal motional Stark effect measurements. Alignment is achieved by varying either the plasma major radius (and the rational q surface) or the toroidal field (and the deposition location). The requirement to achieve and maintain q-surface/ECCD alignment with accuracy on the order of 1 cm is routinely met by the DIII-D Plasma Control System and these algorithms. We discuss the integrated plasma control design process used for developing these and other general control algorithms, which includes physics-based modeling and testing of the algorithm implementation against simulations of actuator and plasma responses. This systematic design/test method and modeling environment enabled successful mode suppression by the NTM control system upon first-time use in an experimental discharge.
Gaussian beam and physical optics iteration technique for wideband beam waveguide feed design
NASA Technical Reports Server (NTRS)
Veruttipong, W.; Chen, J. C.; Bathker, D. A.
1991-01-01
The Gaussian beam technique has become increasingly popular for wideband beam waveguide (BWG) design. However, it is observed that the Gaussian solution is less accurate for smaller mirrors (approximately less than 30 lambda in diameter). Therefore, a high-performance wideband BWG design cannot be achieved by using the Gaussian beam technique alone. This article demonstrates a new design approach by iterating Gaussian beam and BWG parameters simultaneously at various frequencies to obtain a wideband BWG. The result is further improved by comparing it with physical optics results and repeating the iteration.
Development and Evaluation of an Intuitive Operations Planning Process
2006-03-01
…designed to be iterative and also prescribes the way in which iterations should occur. On the other hand, participants’ perceived level of trust and…
Iterative optimization method for design of quantitative magnetization transfer imaging experiments.
Levesque, Ives R; Sled, John G; Pike, G Bruce
2011-09-01
Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.
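The iterative-reduction idea can be illustrated with a generic sketch that greedily removes, from a dense initial sampling, the point whose deletion least degrades the determinant of the Fisher information matrix (D-optimality). A two-parameter exponential decay stands in for the pulsed-MT Z-spectrum model, and all names and values are illustrative assumptions.

    import numpy as np

    def model_jacobian(x, a, r):
        # Stand-in signal model S(x) = a * exp(-r * x); the real application would
        # use the pulsed-MT Z-spectrum model instead.
        s = a * np.exp(-r * x)
        return np.column_stack([s / a, -x * s])   # dS/da, dS/dr

    def d_optimality(x, a, r):
        J = model_jacobian(x, a, r)
        return np.linalg.det(J.T @ J)             # determinant of the Fisher information (unit noise)

    a, r = 1.0, 0.5
    design = list(np.linspace(0.1, 10.0, 40))     # dense initial sampling of the "spectrum"
    target = 8                                    # desired number of measurements

    while len(design) > target:
        # Remove the point whose deletion costs the least D-optimality.
        scores = [d_optimality(np.delete(np.array(design), i), a, r)
                  for i in range(len(design))]
        design.pop(int(np.argmax(scores)))

    print(np.round(design, 2))                    # reduced, near-optimal sampling scheme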
Lessons from a Space Analog on Adaptation for Long-Duration Exploration Missions.
Anglin, Katlin M; Kring, Jason P
2016-04-01
Exploration missions to asteroids and Mars will bring new challenges associated with communication delays and more autonomy for crews. Mission safety and success will rely on how well the entire system, from technology to the human elements, is adaptable and resilient to disruptive, novel, or potentially catastrophic events. The recent NASA Extreme Environment Missions Operations (NEEMO) 20 mission highlighted this need and produced valuable "lessons learned" that will inform future research on team adaptation and resilience. A team of NASA, industry, and academic members used an iterative process to design a tripod shaped structure, called the CORAL Tower, for two astronauts to assemble underwater with minimal tools. The team also developed assembly procedures, administered training to the crew, and provided support during the mission. During the design, training, and assembly of the Tower, the team learned first-hand how adaptation in extreme environments depends on incremental testing, thorough procedures and contingency plans that predict possible failure scenarios, and effective team adaptation and resiliency for the crew and support personnel. Findings from NEEMO 20 provide direction on the design and testing process for future space systems and crews to maximize adaptation. This experience also underscored the need for more research on team adaptation, particularly how input and process factors affect adaption outcomes, the team adaptation iterative process, and new ways to measure the adaptation process.
Experimental validation of prototype high voltage bushing
NASA Astrophysics Data System (ADS)
Shah, Sejal; Tyagi, H.; Sharma, D.; Parmar, D.; M. N., Vishnudev; Joshi, K.; Patel, K.; Yadav, A.; Patel, R.; Bandyopadhyay, M.; Rotti, C.; Chakraborty, A.
2017-08-01
The prototype high voltage bushing (PHVB) is a scaled-down configuration of the DNB high voltage bushing (HVB) of ITER. It is designed for operation at 50 kV DC to verify operational performance and thereby confirm the design configuration of the DNB HVB. Two concentric insulators, namely ceramic and fiber-reinforced polymer (FRP) rings, are used as a double-layered vacuum boundary providing 50 kV isolation between the grounded and high voltage flanges. Stress shields are designed for a smooth electric field distribution. During ceramic-to-Kovar brazing, spilling cannot be controlled, which may lead to high localized electrostatic stress. To understand the spilling phenomenon and enable precise stress calculation, a quantitative analysis was performed using scanning electron microscopy (SEM) of a brazed sample, and a similar configuration was modeled in the finite element (FE) analysis. FE analysis of the PHVB was performed to determine the electrical stresses in different areas of the PHVB, which are maintained similar to those of the DNB HV bushing. With this configuration, the experiment was performed considering ITER-like vacuum and electrical parameters. The initial HV test was performed with temporary vacuum sealing arrangements using gaskets/O-rings at both ends in order to achieve the desired vacuum and keep the system maintainable. During the validation test, a 50 kV voltage withstand was performed for one hour. A voltage withstand test at 60 kV DC (20% above the rated voltage) was also performed without any breakdown. Successful operation of the PHVB confirms the design of the DNB HV bushing. In this paper, the configuration of the PHVB is presented together with the experimental validation data.
NASA Astrophysics Data System (ADS)
Kump, P.; Vogel-Mikuš, K.
2018-05-01
Two fundamental-parameter (FP) based models for the quantification of 2D elemental distribution maps of intermediate-thick biological samples by synchrotron low energy μ-X-ray fluorescence spectrometry (SR-μ-XRF) are presented and applied to elemental analysis in experiments with monochromatic focused photon beam excitation at two low energy X-ray fluorescence beamlines—TwinMic, Elettra Sincrotrone Trieste, Italy, and ID21, ESRF, Grenoble, France. The models assume intermediate-thick biological samples composed of the measured elements, which are the sources of the measurable spectral lines, and of a residual matrix, which affects the measured intensities through absorption. In the first model a fixed residual matrix of the sample is assumed, while in the second model the residual matrix is obtained by iterative refinement of the elemental concentrations and an adjusted residual matrix. The absorption of the incident focused beam in the biological sample at each scanned pixel position, determined from the output of a photodiode or a CCD camera, is applied as a control in the iterative quantification procedure.
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor); Awwal, Abdul A. S. (Inventor); Karim, Mohammad A. (Inventor)
1993-01-01
An inner-product array processor is provided with thresholding of the inner product during each iteration to make more significant the inner product employed in estimating a vector to be used as the input vector for the next iteration. While stored vectors and estimated vectors are represented in bipolar binary (1,-1), only those elements of an initial partial input vector that are believed to be common with those of a stored vector are represented in bipolar binary; the remaining elements of a partial input vector are set to 0. This mode of representation, in which the known elements of a partial input vector are in bipolar binary form and the remaining elements are set equal to 0, is referred to as trinary representation. The initial inner products corresponding to the partial input vector will then be equal to the number of known elements. Inner-product thresholding is applied to accelerate convergence and to avoid convergence to a negative input product.
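The recall iteration described above (inner products, thresholding, bipolar re-estimation from a trinary partial input) can be mimicked in a few lines of NumPy; this is a plain software sketch with illustrative sizes and threshold, not the optical array processor itself.

    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 32, 5
    S = rng.choice([-1, 1], size=(m, n))          # stored bipolar vectors (rows)

    x = np.zeros(n)                               # trinary partial input: known elements bipolar, unknown set to 0
    known = rng.choice(n, size=12, replace=False)
    x[known] = S[0, known]                        # partial, noise-free view of stored vector 0

    for _ in range(5):
        a = S @ x                                 # inner products with every stored vector
        a = np.where(a >= 0.8 * a.max(), a, 0)    # thresholding: keep only strongly matching inner products
        x = np.sign(S.T @ a)                      # bipolar estimate used as the next input vector
        x[x == 0] = 1                             # break ties (arbitrary)

    print(np.array_equal(x, S[0]))                # True: the full stored vector is recalled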
Solution of a tridiagonal system of equations on the finite element machine
NASA Technical Reports Server (NTRS)
Bostic, S. W.
1984-01-01
Two parallel algorithms for the solution of tridiagonal systems of equations were implemented on the Finite Element Machine. The Accelerated Parallel Gauss method, an iterative method, and the Buneman algorithm, a direct method, are discussed and execution statistics are presented.
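Neither of the two algorithms from the report is reproduced here, but the flavor of an iterative, parallel-friendly tridiagonal solve, where every unknown is updated independently in each sweep, can be shown with a simple Jacobi sketch on a diagonally dominant model problem; the coefficients and iteration count are illustrative assumptions.

    import numpy as np

    def jacobi_tridiagonal(a, b, c, d, iters=200):
        # a: sub-diagonal, b: diagonal, c: super-diagonal, d: right-hand side.
        # Each sweep updates every unknown independently, which is the property
        # that makes such schemes attractive on array machines.
        n = len(b)
        x = np.zeros(n)
        for _ in range(iters):
            x_new = d.copy()
            x_new[1:] -= a[1:] * x[:-1]
            x_new[:-1] -= c[:-1] * x[1:]
            x = x_new / b
        return x

    n = 64
    b = np.full(n, 2.0)                  # diagonally dominant model problem (1D Poisson-like)
    a = np.full(n, -0.9); a[0] = 0.0
    c = np.full(n, -0.9); c[-1] = 0.0
    d = np.ones(n)
    x = jacobi_tridiagonal(a, b, c, d)
    # Verify the residual of the tridiagonal system.
    res = b * x + a * np.r_[0.0, x[:-1]] + c * np.r_[x[1:], 0.0] - d
    print(np.max(np.abs(res)))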
ERIC Educational Resources Information Center
Varjo, Janne; Kalalahti, Mira; Silvennoinen, Heikki
2014-01-01
This article analyzes the ways in which the right to education and freedom of education are expressed in local school choice policies in Finland. We aim to discover the elements that form democratic iterations on the right to education and freedom of education by contrasting their manifestations in three local institutional spaces for parental…
Antenna array geometry optimization for a passive coherent localisation system
NASA Astrophysics Data System (ADS)
Knott, Peter; Kuschel, Heiner; O'Hagan, Daniel
2012-11-01
Passive Coherent Localisation (PCL), also known as passive radar, makes use of RF sources of opportunity such as radio or TV broadcasting stations and cellular phone network base stations. It is an advancing technology for covert operation because no active radar transmitter is required, and it is an attractive addition to existing active radar stations because it has the potential to discover low-flying and low-observable targets. The CORA (Covert Radar) experimental passive radar system currently being developed at Fraunhofer-FHR features a multi-channel digital radar receiver and a circular antenna array with separate elements for the VHF and UHF ranges, and is used to exploit either Digital Audio Broadcasting (DAB) or Digital Video Broadcasting (DVB-T) signals. For an extension of the system, a wideband antenna array is being designed, for which a new discone antenna element covering the full DVB-T frequency range has been developed. The present paper outlines the system and the numerical modelling and optimisation methods applied to solve the complex task of antenna array design: electromagnetic full-wave analysis is required for the parametric design of the antenna elements, while combinatorial optimization methods are applied to find the best array positions and excitation coefficients for regular, omni-directional antenna performance. The different steps are combined in an iterative loop until the optimum array layout is found. Simulation and experimental results for the current system will be shown.
Single element ultrasonic imaging of limb geometry: an in-vivo study with comparison to MRI
NASA Astrophysics Data System (ADS)
Zhang, Xiang; Fincke, Jonathan R.; Anthony, Brian W.
2016-04-01
Despite advancements in medical imaging, current prosthetic fitting methods remain subjective, operator dependent, and non-repeatable. The standard plaster casting method relies on the prosthetist's experience and tactile feel of the limb to design the prosthetic socket, and many fitting iterations are often required to achieve an acceptable fit. Improper socket fittings can lead to painful pathologies including neuromas, inflammation, soft tissue calcification, and pressure sores, often forcing the wearer into a wheelchair and reducing mobility and quality of life. Computer software along with MRI/CT imaging has already been explored to aid the socket design process. In this paper, we explore the use of ultrasound instead of MRI/CT to accurately obtain the underlying limb geometry to assist the prosthetic socket design process. Using a single-element ultrasound system, multiple subjects' proximal limbs were imaged with 1, 2.25, and 5 MHz single-element transducers. Each ultrasound transducer was calibrated to ensure acoustic exposure within the limits defined by the FDA. To validate image quality, each patient was also imaged with MRI. Fiducial markers visible in both MRI and ultrasound were used to compare the same limb cross-sectional image for each patient. After applying a migration algorithm, the B-mode ultrasound cross-sections showed sufficiently high image resolution to characterize the skin and bone boundaries along with the underlying tissue structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khaleel, Mohammad A.; Lin, Zijing; Singh, Prabhakar
2004-05-03
A 3D simulation tool for modeling solid oxide fuel cells is described. The tool combines the versatility and efficiency of a commercial finite element analysis code, MARC®, with an in-house developed robust and flexible electrochemical (EC) module. Based upon characteristic parameters obtained experimentally and assigned by the user, the EC module calculates the current density distribution, heat generation, and fuel and oxidant species concentration, taking the temperature profile provided by MARC® and operating conditions such as the fuel and oxidant flow rate and the total stack output voltage or current as the input. MARC® performs flow and thermal analyses based on the initial and boundary thermal and flow conditions and the heat generation calculated by the EC module. The main coupling between MARC® and EC is for MARC® to supply the temperature field to EC and for EC to give the heat generation profile to MARC®. The loosely coupled, iterative scheme is advantageous in terms of memory requirement, numerical stability and computational efficiency. The coupling is iterated to self-consistency for a steady-state solution. Sample results for steady states as well as the startup process for stacks with different flow designs are presented to illustrate the modeling capability and numerical performance characteristics of the simulation tool.
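The loosely coupled iteration can be reduced to a scalar toy model: a "thermal" solve returns a temperature for a given heat source, an "electrochemical" step returns a heat source for a given temperature, and the two are alternated until self-consistent. The lumped models, coefficients, and names below are illustrative assumptions standing in for MARC and the EC module.

    # Minimal sketch of a loosely coupled iteration: a "thermal" solve returns a
    # temperature for a given heat source, and an "electrochemistry" step returns
    # a heat source for a given temperature. Both models are scalar stand-ins.

    def thermal_solve(q, t_inlet=1023.0, resistance=0.05):
        # Lumped thermal model: cell temperature rises with the heat generated.
        return t_inlet + resistance * q

    def ec_module(temperature, v_cell=0.7, i0=2000.0, t_ref=1073.0):
        # Lumped electrochemical model: current (and hence waste heat) grows with
        # temperature through a linearized factor; numbers are illustrative.
        current = i0 * (1.0 + 0.002 * (temperature - t_ref))
        heat = current * (1.1 - v_cell)           # overpotential losses turn into heat
        return heat

    temperature = 1073.0
    for it in range(50):
        heat = ec_module(temperature)             # EC module uses the latest temperature field
        temperature_new = thermal_solve(heat)     # thermal code uses the latest heat generation
        if abs(temperature_new - temperature) < 1e-6:
            break
        temperature = temperature_new
    print(it, round(temperature, 3), round(heat, 1))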
LOW-ENGINE-FRICTION TECHNOLOGY FOR ADVANCED NATURAL-GAS RECIPROCATING ENGINES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Victor Wong; Tian Tian; Luke Moughon
2005-09-30
This program aims at improving the efficiency of advanced natural-gas reciprocating engines (ANGRE) by reducing piston and piston ring assembly friction without major adverse effects on engine performance, such as increased oil consumption and wear. An iterative process of simulation, experimentation and analysis is being followed towards achieving the goal of demonstrating a complete optimized low-friction engine system. To date, a detailed set of piston and piston-ring dynamic and friction models have been developed and applied that illustrate the fundamental relationships between design parameters and friction losses. Low friction ring designs have already been recommended in a previous phase, with full-scale engine validation partially completed. Current accomplishments include the addition of several additional power cylinder design areas to the overall system analysis. These include analyses of lubricant and cylinder surface finish and a parametric study of piston design. The Waukesha engine was found to be already well optimized in the areas of lubricant, surface skewness and honing cross-hatch angle, where friction reductions of 12% for lubricant, and 5% for surface characteristics, are projected. For the piston, a friction reduction of up to 50% may be possible by controlling waviness alone, while additional friction reductions are expected when other parameters are optimized. A total power cylinder friction reduction of 30-50% is expected, translating to an engine efficiency increase of two percentage points from its current baseline towards the goal of 50% efficiency. Key elements of the continuing work include further analysis and optimization of the engine piston design, in-engine testing of recommended lubricant and surface designs, design iteration and optimization of previously recommended technologies, and full-engine testing of a complete, optimized, low-friction power cylinder system.
Two-Level Hierarchical FEM Method for Modeling Passive Microwave Devices
NASA Astrophysics Data System (ADS)
Polstyanko, Sergey V.; Lee, Jin-Fa
1998-03-01
In recent years multigrid methods have been proven to be very efficient for solving large systems of linear equations resulting from the discretization of positive definite differential equations by either the finite difference method or theh-version of the finite element method. In this paper an iterative method of the multiple level type is proposed for solving systems of algebraic equations which arise from thep-version of the finite element analysis applied to indefinite problems. A two-levelV-cycle algorithm has been implemented and studied with a Gauss-Seidel iterative scheme used as a smoother. The convergence of the method has been investigated, and numerical results for a number of numerical examples are presented.
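The two-level cycle itself, pre-smoothing with Gauss-Seidel, restricting the residual, an exact coarse-level correction, prolongation, and post-smoothing, is sketched below for a 1D Poisson model problem. This definite, low-order example only illustrates the cycle structure; it is not the indefinite p-version FEM setting treated in the paper, and grid sizes and smoothing counts are illustrative.

    import numpy as np

    def gauss_seidel(A, x, b, sweeps):
        n = len(b)
        for _ in range(sweeps):
            for i in range(n):
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        return x

    def poisson_matrix(n, h):
        return (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
                + np.diag(np.full(n - 1, -1.0), -1)) / h**2

    def two_level_vcycle(A_f, A_c, R, P, b, x, nu=3):
        x = gauss_seidel(A_f, x, b, nu)            # pre-smoothing on the fine level
        r_c = R @ (b - A_f @ x)                    # restrict the residual to the coarse level
        e_c = np.linalg.solve(A_c, r_c)            # exact coarse-level correction
        x = x + P @ e_c                            # prolong the correction back
        return gauss_seidel(A_f, x, b, nu)         # post-smoothing

    nf = 63                                        # interior fine-grid points, h = 1/(nf+1)
    nc = 31                                        # coarse grid (every other point)
    A_f, A_c = poisson_matrix(nf, 1.0 / (nf + 1)), poisson_matrix(nc, 1.0 / (nc + 1))
    P = np.zeros((nf, nc))                         # linear interpolation prolongation
    for j in range(nc):
        P[2 * j + 1, j] = 1.0
        P[2 * j, j] = 0.5
        P[2 * j + 2, j] = 0.5
    R = 0.5 * P.T                                  # full-weighting restriction

    b = np.ones(nf)
    x = np.zeros(nf)
    for cycle in range(10):
        x = two_level_vcycle(A_f, A_c, R, P, b, x)
        print(cycle, np.linalg.norm(b - A_f @ x))  # residual drops by a roughly constant factor per cycle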
ITER Central Solenoid Module Fabrication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, John
The fabrication of the modules for the ITER Central Solenoid (CS) has started in a dedicated production facility located in Poway, California, USA. The necessary tools have been designed, built, installed, and tested in the facility to enable the start of production. The current schedule has first module fabrication completed in 2017, followed by testing and subsequent shipment to ITER. The Central Solenoid is a key component of the ITER tokamak providing the inductive voltage to initiate and sustain the plasma current and to position and shape the plasma. The design of the CS has been a collaborative effort between the US ITER Project Office (US ITER), the international ITER Organization (IO) and General Atomics (GA). GA’s responsibility includes: completing the fabrication design, developing and qualifying the fabrication processes and tools, and then completing the fabrication of the seven 110 tonne CS modules. The modules will be shipped separately to the ITER site, and then stacked and aligned in the Assembly Hall prior to insertion in the core of the ITER tokamak. A dedicated facility in Poway, California, USA has been established by GA to complete the fabrication of the seven modules. Infrastructure improvements included thick reinforced concrete floors, a diesel generator for backup power, along with cranes for moving the tooling within the facility. The fabrication process for a single module requires approximately 22 months followed by five months of testing, which includes preliminary electrical testing followed by high current (48.5 kA) tests at 4.7K. The production of the seven modules is completed in a parallel fashion through ten process stations. The process stations have been designed and built with most stations having completed testing and qualification for carrying out the required fabrication processes. The final qualification step for each process station is achieved by the successful production of a prototype coil. Fabrication of the first ITER module is in progress. The seven modules will be individually shipped to Cadarache, France upon their completion. This paper describes the processes and status of the fabrication of the CS Modules for ITER.
Iteration in Early-Elementary Engineering Design
ERIC Educational Resources Information Center
McFarland Kendall, Amber Leigh
2017-01-01
K-12 standards and curricula are beginning to include engineering design as a key practice within Science Technology Engineering and Mathematics (STEM) education. However, there is little research on how the youngest students engage in engineering design within the elementary classroom. This dissertation focuses on iteration as an essential aspect…
Study of 3D printing method for GRIN micro-optics devices
NASA Astrophysics Data System (ADS)
Wang, P. J.; Yeh, J. A.; Hsu, W. Y.; Cheng, Y. C.; Lee, W.; Wu, N. H.; Wu, C. Y.
2016-03-01
Conventional optical elements are based on either refractive or reflective optics theory to fulfill design specifications via optical performance data. In refractive lenses, the refractive index of the materials and the radius of curvature of the element surfaces determine the optical power and wavefront aberrations, so that optical performance can be optimized iteratively. Although the gradient index (GRIN) phenomenon in optical materials has been studied for more than half a century, the optics theory of lens design with GRIN materials has yet to be comprehensively investigated before realistic GRIN lenses can be manufactured. In this paper, a 3D printing method for the manufacture of micro-optics devices with special features is studied, based on methods reported in the literature. Due to the additive nature of the method, GRIN lenses in micro-optics devices appear readily achievable if a design methodology is available. First, the derivation of ray-tracing formulae is introduced for all possible structures in GRIN lenses. An optics simulation program is employed for the characterization of GRIN lenses, with performance data given by aberration coefficients in Zernike polynomials. Finally, a proposed structure of the 3D printing machine is described with a conceptual illustration.
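As a small illustration of GRIN ray tracing, the sketch below integrates the paraxial ray equation r'' = -g^2 r for a radially parabolic index profile n(r) = n0(1 - g^2 r^2 / 2) and checks the result against the known sinusoidal ray path; the index value, gradient constant, and rod length are illustrative assumptions, not design data from the paper.

    import numpy as np

    # Paraxial ray trace through a radially parabolic GRIN rod,
    # n(r) = n0 * (1 - 0.5 * g**2 * r**2), by integrating r'' = -g**2 * r along z.
    n0, g = 1.6, 0.2          # illustrative base index and gradient constant (1/mm)
    dz, z_end = 0.01, 40.0    # step (mm) and rod length (mm)

    def trace(r0, slope0):
        r, s = r0, slope0
        zs = np.arange(0.0, z_end, dz)
        out = np.empty_like(zs)
        for k, _ in enumerate(zs):
            out[k] = r
            # simple symplectic (semi-implicit Euler) update of the paraxial ray equation
            s -= g**2 * r * dz
            r += s * dz
        return zs, out

    zs, r_num = trace(r0=0.5, slope0=0.0)
    r_exact = 0.5 * np.cos(g * zs)          # known solution for a parabolic GRIN profile
    print(np.max(np.abs(r_num - r_exact)))  # small discretization error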
Analytical Solution for Optimum Design of Furrow Irrigation Systems
NASA Astrophysics Data System (ADS)
Kiwan, M. E.
1996-05-01
An analytical solution for the optimum design of furrow irrigation systems is derived. A non-linear calculus optimization method is used to formulate a general form for designing the optimum system elements so as to maximize the water application efficiency of the system during irrigation. Different system bases and constraints are considered in the solution. A full irrigation water depth is considered to be achieved at the tail of the furrow line. The solution is based on neglecting the recession and depletion times after irrigation cutoff. This assumption is valid for open-end (free-gradient) furrow systems rather than closed-end (closed-dike) systems. Illustrative examples for different systems are presented and the results are compared with the output obtained using an iterative numerical solution method. The final derived solution is expressed as a function of the furrow length ratio (the ratio of the furrow length to the water travelling distance). The water-travelling function developed by Reddy et al. is used to reach the optimum solution. As a practical result of the study, the optimum furrow elements for free-gradient systems, i.e. the furrow length, water inflow rate, and cutoff irrigation time, can be estimated to achieve the maximum application efficiency.
Joanny, M; Salasca, S; Dapena, M; Cantone, B; Travère, J M; Thellier, C; Fermé, J J; Marot, L; Buravand, O; Perrollaz, G; Zeile, C
2012-10-01
ITER first mirrors (FMs), as the first components of most ITER optical diagnostics, will be exposed to high plasma radiation flux and neutron load. To reduce FM heating and the optical surface deformation induced during ITER operation, the use of suitable materials and a cooling system is foreseen. Calculations carried out for different materials and FM designs and geometries (100 mm and 200 mm) show that the use of CuCrZr and TZM, together with a complex integrated cooling system, can efficiently limit FM heating and reduce the optical surface deformation under plasma radiation flux and neutron load. These investigations were used to evaluate, for the ITER equatorial port visible/infrared wide angle viewing system, the impact of changes in FM properties during operation on the main optical performance of the instrument. The results obtained are presented and discussed.
Campbell, Megan M; Susser, Ezra; Mall, Sumaya; Mqulwana, Sibonile G; Mndini, Michael M; Ntola, Odwa A; Nagdee, Mohamed; Zingela, Zukiswa; Van Wyk, Stephanus; Stein, Dan J
2017-01-01
Obtaining informed consent is a great challenge in global health research. There is a need for tools that can screen for and improve potential research participants' understanding of the research study at the time of recruitment. Limited empirical research has been conducted in low and middle income countries, evaluating informed consent processes in genomics research. We sought to investigate the quality of informed consent obtained in a South African psychiatric genomics study. A Xhosa language version of the University of California, San Diego Brief Assessment of Capacity to Consent Questionnaire (UBACC) was used to screen for capacity to consent and improve understanding through iterative learning in a sample of 528 Xhosa people with schizophrenia and 528 controls. We address two questions: firstly, whether research participants' understanding of the research study improved through iterative learning; and secondly, what were predictors for better understanding of the research study at the initial screening? During screening 290 (55%) cases and 172 (33%) controls scored below the 14.5 cut-off for acceptable understanding of the research study elements, however after iterative learning only 38 (7%) cases and 13 (2.5%) controls continued to score below this cut-off. Significant variables associated with increased understanding of the consent included the psychiatric nurse recruiter conducting the consent screening, higher participant level of education, and being a control. The UBACC proved an effective tool to improve understanding of research study elements during consent, for both cases and controls. The tool holds utility for complex studies such as those involving genomics, where iterative learning can be used to make significant improvements in understanding of research study elements. The UBACC may be particularly important in groups with severe mental illness and lower education levels. Study recruiters play a significant role in managing the quality of the informed consent process.
Varela, P; Belo, J H; Quental, P B
2016-11-01
The design of the in-vessel antennas for the ITER plasma position reflectometry diagnostic is very challenging due to the need to cope both with the space restrictions inside the vacuum vessel and with the high mechanical and thermal loads during ITER operation. Here, we present the work carried out to assess and optimise the design of the antenna. We show that the blanket modules surrounding the antenna strongly modify its characteristics and need to be considered from the early phases of the design. We also show that it is possible to optimise the antenna performance, within the design restrictions.
Method and program product for determining a radiance field in an optical environment
NASA Technical Reports Server (NTRS)
Reinersman, Phillip N. (Inventor); Carder, Kendall L. (Inventor)
2007-01-01
A hybrid method is presented by which Monte Carlo techniques are combined with iterative relaxation techniques to solve the Radiative Transfer Equation in arbitrary one-, two- or three-dimensional optical environments. The optical environments are first divided into contiguous regions, or elements, with Monte Carlo techniques then being employed to determine the optical response function of each type of element. The elements are combined, and the iterative relaxation techniques are used to determine simultaneously the radiance field on the boundary and throughout the interior of the modeled environment. This hybrid model is capable of providing estimates of the underwater light field needed to expedite inspection of ship hulls and port facilities. It is also capable of providing estimates of the subaerial light field for structured, absorbing or non-absorbing environments such as shadows of mountain ranges within and without absorption spectral bands such as water vapor or CO2 bands.
Characterization of the ITER CS conductor and projection to the ITER CS performance
Martovetsky, N.; Isono, T.; Bessette, D.; ...
2017-06-20
The ITER Central Solenoid (CS) is one of the critical elements of the machine. The CS conductor went through an intense optimization and qualification program, which included characterization of the strands, a conductor straight short sample testing in the SULTAN facility at the Swiss Plasma Center (SPC), Villigen, Switzerland, and a single-layer CS Insert coil recently tested in the Central Solenoid Model Coil (CSMC) facility in QST-Naka, Japan. In this paper, we obtained valuable data in a wide range of the parameters (current, magnetic field, temperature, and strain), which allowed a credible characterization of the CS conductor in different conditions. Finally, using this characterization, we will make a projection to the performance of the CS in the ITER reference scenario.
Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers
NASA Technical Reports Server (NTRS)
Guru Prasad, K.; Kane, J. H.
1992-01-01
The computational advantages associated with the utilization of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach, employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis, can be competitive with, if not superior to, approaches involving direct solvers.
Prospects for Advanced Tokamak Operation of ITER
NASA Astrophysics Data System (ADS)
Neilson, George H.
1996-11-01
Previous studies have identified steady-state (or "advanced") modes for ITER, based on reverse-shear profiles and significant bootstrap current. A typical example has 12 MA of plasma current, 1,500 MW of fusion power, and 100 MW of heating and current-drive power. The implementation of these and other steady-state operating scenarios in the ITER device is examined in order to identify key design modifications that can enhance the prospects for successfully achieving advanced tokamak operating modes in ITER compatible with a single null divertor design. In particular, we examine plasma configurations that can be achieved by the ITER poloidal field system with either a monolithic central solenoid (as in the ITER Interim Design), or an alternate "hybrid" central solenoid design which provides for greater flexibility in the plasma shape. The increased control capability and expanded operating space provided by the hybrid central solenoid allow operation at high triangularity (beneficial for improving divertor performance through control of edge-localized modes and for increasing beta limits), and will make it much easier for ITER operators to establish an optimum startup trajectory leading to a high-performance, steady-state scenario. Vertical position control is examined because plasmas made accessible by the hybrid central solenoid can be more elongated and/or less well coupled to the conducting structure. Control of vertical displacements using the external PF coils remains feasible over much of the expanded operating space. Further work is required to define the full spectrum of axisymmetric plasma disturbances requiring active control. In addition to active axisymmetric control, advanced tokamak modes in ITER may require active control of kink modes on the resistive time scale of the conducting structure. This might be accomplished in ITER through the use of active control coils external to the vacuum vessel which are actuated by magnetic sensors near the first wall. The enhanced shaping and positioning flexibility provides a range of options for reducing the ripple-induced losses of fast alpha particles, a major limitation on ITER steady-state modes. An alternate approach that we are pursuing in parallel is the inclusion of ferromagnetic inserts to reduce the toroidal field ripple within the plasma chamber. The inclusion of modest design changes such as the hybrid central solenoid, active control coils for kink modes, and ferromagnetic inserts for TF ripple reduction can greatly increase the flexibility to accommodate advanced tokamak operation in ITER. Increased flexibility is important because the optimum operating scenario for ITER cannot be predicted with certainty. While low-inductance, reverse-shear modes appear attractive for steady-state operation, high-inductance, high-beta modes are also viable candidates, and it is important that ITER have the flexibility to explore both these, and other, operating regimes.
NASA Astrophysics Data System (ADS)
de Almeida, Valmor F.
2017-07-01
A phase-space discontinuous Galerkin (PSDG) method is presented for the solution of stellar radiative transfer problems. It allows for greater adaptivity than competing methods without sacrificing generality. The method is extensively tested on a spherically symmetric, static, inverse-power-law scattering atmosphere. Results for different sizes of atmospheres and intensities of scattering agreed with asymptotic values. The exponentially decaying behavior of the radiative field in the diffusive-transparent transition region, and the forward peaking behavior at the surface of extended atmospheres were accurately captured. The integrodifferential equation of radiation transfer is solved iteratively by alternating between the radiative pressure equation and the original equation with the integral term treated as an energy density source term. In each iteration, the equations are solved via an explicit, flux-conserving, discontinuous Galerkin method. Finite elements are ordered in wave fronts perpendicular to the characteristic curves so that elemental linear algebraic systems are solved quickly by sweeping the phase space element by element. Two implementations of a diffusive boundary condition at the origin are demonstrated wherein the finite discontinuity in the radiation intensity is accurately captured by the proposed method. This allows for a consistent mechanism to preserve photon luminosity. The method was proved to be robust and fast, and a case is made for the adequacy of parallel processing. In addition to classical two-dimensional plots, results of normalized radiation intensity were mapped onto a log-polar surface exhibiting all distinguishing features of the problem studied.
Two-dimensional over-all neutronics analysis of the ITER device
NASA Astrophysics Data System (ADS)
Zimin, S.; Takatsu, Hideyuki; Mori, Seiji; Seki, Yasushi; Satoh, Satoshi; Tada, Eisuke; Maki, Koichi
1993-07-01
The present work attempts to carry out a comprehensive neutronics analysis of the International Thermonuclear Experimental Reactor (ITER) developed during the Conceptual Design Activities (CDA). Two-dimensional cylindrical over-all calculational models of the ITER CDA device, including the first wall, blanket, shield, vacuum vessel, magnets, cryostat and support structures, were developed for this purpose with the help of the DOGII code. The two-dimensional DOT 3.5 code with the FUSION-40 nuclear data library was employed for transport calculations of neutron and gamma-ray fluxes, tritium breeding ratio (TBR), and nuclear heating in reactor components. The induced activity calculational code CINAC was employed for calculations of the exposure dose rate around the ITER CDA device after reactor shutdown. The two-dimensional over-all calculational model includes design specifics such as the pebble-bed Li2O/Be layered blanket, the thin double-wall vacuum vessel, the concrete cryostat integrated with the over-all ITER design, the top maintenance shield plug, and the additional ring biological shield placed under the top cryostat lid around the above-mentioned top maintenance shield plug. All of these design specifics were included in the employed calculational models. Some alternative design options, such as a water-rich shielding blanket instead of the lithium-bearing one and an additional biological shield plug in the top zone between poloidal field (PF) coil No. 5 and the maintenance shield plug, were calculated as well. Much effort has been focused on analysis of the obtained results, aiming to obtain the recommendations necessary for improving the ITER CDA design.
LOFA analysis in helium and Pb-Li circuits of LLCB TBM by FE simulation
NASA Astrophysics Data System (ADS)
Chaudhuri, Paritosh; Ranjithkumar, S.; Sharma, Deepak; Danani, Chandan
2017-04-01
One of the main ITER objectives is to demonstrate the feasibility of the breeding blanket concepts that would lead to tritium self-sufficiency and the extraction of high-grade heat for electricity production. India has developed the LLCB TBM to be tested in ITER for the validation of design concepts for tritium breeding blankets relevant to DEMO and future power reactors. The LLCB concept has the unique feature of combining both a solid breeder (lithium titanate as a packed pebble bed) and a liquid breeder (molten lead-lithium). India-specific IN-RAFMS is the structural material for the TBM. The first wall is actively cooled by high-pressure helium (He) gas [1]. It is important to validate that the design of the TBM can withstand the various loads acting on it, including accident scenarios such as LOCA and LOFA. Detailed thermal-hydraulic simulation studies, including LOFA in the helium and Pb-Li circuits of the LLCB TBM, have been performed using finite element analysis in ANSYS. These analyses provide important information about the temperature distribution in the different materials used in the TBM during steady-state and transient conditions. A thermal-hydraulic safety requirement has also been envisaged for the initiation of the FPPS (Fusion Power Shutdown System) during LOFA. All these analyses are presented in detail in this paper.
Physics and Engineering Design of the ITER Electron Cyclotron Emission Diagnostic
NASA Astrophysics Data System (ADS)
Rowan, W. L.; Austin, M. E.; Houshmandyar, S.; Phillips, P. E.; Beno, J. H.; Ouroua, A.; Weeks, D. A.; Hubbard, A. E.; Stillerman, J. A.; Feder, R. E.; Khodak, A.; Taylor, G.; Pandya, H. K.; Danani, S.; Kumar, R.
2015-11-01
Electron temperature (Te) measurements and the consequent electron thermal transport inferences will be critical to the non-active phases of ITER operation and will take on added importance during the alpha heating phase. Here, we describe our design for the diagnostic that will measure spatial and temporal profiles of Te using electron cyclotron emission (ECE). Other measurement capabilities include high frequency instabilities (e.g. ELMs, NTMs, and TAEs). Since results from TFTR and JET suggest that Thomson scattering and ECE differ at high Te due to driven non-Maxwellian distributions, non-thermal features of the ITER electron distribution must be documented. The ITER environment presents other challenges including space limitations, vacuum requirements, and very high neutron fluence. Plasma control in ITER will require real-time Te. The diagnostic design that evolved from these sometimes-conflicting needs and requirements will be described component by component with special emphasis on the integration to form a single effective diagnostic system. Supported by PPPL/US-DA via subcontract S013464-C to UT Austin.
Shi, Hongli; Yang, Zhi; Luo, Shuqian
2017-01-01
The beam hardening artifact is one of the most important forms of metal artifact in polychromatic X-ray computed tomography (CT) and can seriously impair image quality. An iterative approach is proposed to reduce the beam hardening artifact caused by metallic components in polychromatic X-ray CT. According to the Lambert-Beer law, the (detected) projections can be expressed as monotonic nonlinear functions of the element geometry projections, which are the theoretical projections produced only by the pixel intensities (image grayscale) of a certain element (component). With the help of prior knowledge of the spectrum distribution of the X-ray beam source and the energy-dependent attenuation coefficients, the functions have explicit expressions. The Newton-Raphson algorithm is employed to solve the functions. The solutions are named the synthetical geometry projections, which are nearly linear weighted sums of the element geometry projections with respect to the mean of each attenuation coefficient. In this process, the attenuation coefficients are modified so that the Newton-Raphson iterative functions satisfy the convergence conditions of fixed-point iteration (FPI) and the solutions approach the true synthetical geometry projections stably. The underlying images are obtained from the projections by general reconstruction algorithms such as filtered back projection (FBP). The image gray values are adjusted according to the attenuation coefficient means to obtain proper CT numbers. Several examples demonstrate that the proposed approach is efficient in reducing beam hardening artifacts and performs satisfactorily in terms of several general criteria. In a simulation example, the normalized root mean square difference (NRMSD) is reduced by 17.52% compared to a recent algorithm. Since the element geometry projections are free from the effect of beam hardening, their nearly linear weighted sum, the synthetical geometry projections, is almost free from the effect of beam hardening. By working out the synthetical geometry projections, the proposed approach becomes quite efficient in reducing beam hardening artifacts.
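The core numerical step, inverting a monotonic polychromatic projection for an equivalent path length with Newton-Raphson, can be sketched as follows. This is a minimal single-material simplification rather than the paper's full multi-component scheme; the spectrum, attenuation values, and function names are illustrative assumptions.

```python
import numpy as np

def invert_polychromatic(p_meas, spectrum, mu, t0=0.0, n_iter=20):
    """Newton-Raphson inversion of p(t) = -ln(sum_E S(E) exp(-mu(E) t)) for one material.

    p_meas   : measured polychromatic projection value
    spectrum : relative source spectrum S(E) over discrete energy bins (illustrative)
    mu       : attenuation coefficient of the material in each energy bin (illustrative)
    """
    S = np.asarray(spectrum, float)
    S = S / S.sum()
    mu = np.asarray(mu, float)

    def p(t):                       # forward polychromatic projection model
        return -np.log(np.sum(S * np.exp(-mu * t)))

    def dp(t):                      # its derivative, a spectrum-weighted mean attenuation
        w = S * np.exp(-mu * t)
        return np.sum(w * mu) / np.sum(w)

    t = t0
    for _ in range(n_iter):         # monotonicity of p(t) keeps Newton's method well behaved
        t -= (p(t) - p_meas) / dp(t)
    return t

# Example: a crude two-bin "spectrum" with energy-dependent attenuation
t_est = invert_polychromatic(p_meas=1.2, spectrum=[0.6, 0.4], mu=[0.5, 0.2])
```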
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Y.; Parsons, T.; King, R.
This report summarizes the theory, verification, and validation of a new sizing tool for wind turbine drivetrain components, the Drivetrain Systems Engineering (DriveSE) tool. DriveSE calculates the dimensions and mass properties of the hub, main shaft, main bearing(s), gearbox, bedplate, transformer if up-tower, and yaw system. The level of fidelity for each component varies depending on whether semiempirical parametric or physics-based models are used. The physics-based models have internal iteration schemes based on system constraints and design criteria. Every model is validated against available industry data or finite-element analysis. The verification and validation results show that the models reasonably capture primary drivers for the sizing and design of major drivetrain components.
An overview of ITER diagnostics (invited)
NASA Astrophysics Data System (ADS)
Young, Kenneth M.; Costley, A. E.; ITER-JCT Home Team; ITER Diagnostics Expert Group
1997-01-01
The requirements for plasma measurements for operating and controlling the ITER device have now been determined. Initial criteria for the measurement quality have been set, and the diagnostics that might be expected to achieve these criteria have been chosen. The design of the first set of diagnostics to achieve these goals is now well under way. The design effort is concentrating on the components that interact most strongly with the other ITER systems, particularly the vacuum vessel, blankets, divertor modules, cryostat, and shield wall. The relevant details of the ITER device and facility design and specific examples of diagnostic design to provide the necessary measurements are described. These designs have to take account of the issues associated with very high 14 MeV neutron fluxes and fluences, nuclear heating, high heat loads, and high mechanical forces that can arise during disruptions. The design work is supported by an extensive research and development program, which to date has concentrated on the effects these levels of radiation might cause on diagnostic components. A brief outline of the organization of the diagnostic development program is given.
Design and fabrication of a boron reinforced intertank skirt
NASA Technical Reports Server (NTRS)
Henshaw, J.; Roy, P. A.; Pylypetz, P.
1974-01-01
Analytical and experimental studies were performed to evaluate the structural efficiency of a boron-reinforced shell, where the medium of reinforcement consists of hollow aluminum extrusions infiltrated with boron epoxy. Studies were completed for the design of a one-half scale minimum-weight shell using boron-reinforced stringers and boron-reinforced rings. Parametric and iterative studies were completed for the design of minimum-weight stringers, rings, shells without rings, and shells with rings. Computer studies were completed for the final evaluation of a minimum-weight shell using highly buckled minimum-gage skin. The detailed design of a practical minimum-weight test shell is described, which demonstrates a weight savings of 30% compared to an all-aluminum longitudinally stiffened shell. Sub-element tests were conducted on representative segments of the compression surface at maximum stress and also on segments of the load transfer joint. A 10-foot-long, 77-inch-diameter shell was fabricated from the design and delivered for further testing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cristescu, I.; Cristescu, I. R.; Doerr, L.
2008-07-15
The ITER Isotope Separation System (ISS) and Water Detritiation System (WDS) should be integrated in order to reduce potential chronic tritium emissions from the ISS. This is achieved by routing the top (protium) product from the ISS to a feed point near the bottom end of the WDS Liquid Phase Catalytic Exchange (LPCE) column. This provides an additional barrier against ISS emissions and should mitigate the memory effects due to process parameter fluctuations in the ISS. To support the research activities needed to characterize the performance of various components of the WDS and ISS processes under the various working conditions and configurations needed for the ITER design, an experimental facility called TRENTA, representative of the ITER WDS and ISS protium separation column, has been commissioned and is in operation at TLK. The experimental program on the TRENTA facility is conducted to provide the necessary design data related to the relevant ITER operating modes. The operational availability and performance of the ISS-WDS have an impact on the ITER fuel cycle subsystems, with consequences for the design integration. Preliminary experimental data from the TRENTA facility are presented. (authors)
Refractive and relativistic effects on ITER low field side reflectometer design.
Wang, G; Rhodes, T L; Peebles, W A; Harvey, R W; Budny, R V
2010-10-01
The ITER low field side reflectometer faces some unique design challenges, among which are the effects of relativistic electron temperatures and refraction of the probing waves. This paper utilizes GENRAY, a 3D ray tracing code, to investigate these effects. Using a simulated ITER operating scenario, characteristics of the reflected millimeter waves after return to the launch plane are quantified as a function of a range of design parameters, including antenna height, antenna diameter, and antenna radial position. Results for edge/SOL measurement with both O- and X-mode polarizations using the proposed antennas are reported.
The role of simulation in the design of a neural network chip
NASA Technical Reports Server (NTRS)
Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.
1993-01-01
An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.
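The simulate-then-adjust loop described above can be illustrated with a minimal sketch. Here the SPICE run is replaced by a toy analytic surrogate (`toy_network`), and the finite-difference gradient and learning rate are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def toy_network(wl):
    """Stand-in for a SPICE simulation: outputs as a smooth function of W/L ratios."""
    return np.tanh(wl)            # purely illustrative surrogate model

def design_error(wl, target):
    return float(np.sum((toy_network(wl) - target) ** 2))

def tune_widths(wl, target, lr=0.1, eps=1e-4, n_iter=200):
    """Gradient-descent adjustment of gate width-to-length ratios between 'simulations'."""
    wl = wl.astype(float).copy()
    for _ in range(n_iter):
        base = design_error(wl, target)
        grad = np.zeros_like(wl)
        for i in range(wl.size):            # finite-difference gradient, one perturbed run per ratio
            pert = wl.copy()
            pert[i] += eps
            grad[i] = (design_error(pert, target) - base) / eps
        wl -= lr * grad                      # layout adjustment for the next iteration
    return wl

wl_final = tune_widths(np.array([0.5, 1.0, 1.5]), target=np.array([0.2, 0.6, 0.9]))
```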
A new approach to enforce element-wise mass/species balance using the augmented Lagrangian method
NASA Astrophysics Data System (ADS)
Chang, J.; Nakshatrala, K.
2015-12-01
The least-squares finite element method (LSFEM) is one of many ways in which one can discretize and express a set of first-order partial differential equations as a mixed formulation. However, the standard LSFEM is not locally conservative by design. The absence of this physical property can have serious implications in the numerical simulation of subsurface flow and transport. Two commonly employed ways to circumvent this issue are the Lagrange multiplier method, which explicitly satisfies the element-wise divergence by introducing new unknowns, and appending a penalty factor to the continuity constraint, which reduces the violation in the mass balance. However, these methodologies have some well-known drawbacks. Herein, we propose a new approach to improve the local species/mass balance. The approach augments constraints to a least-squares functional through a novel mathematical construction of the local species/mass balance, which differs from the conventional ways. The resulting constrained optimization problem is solved using the augmented Lagrangian method, which corrects the balance errors in an iterative fashion. The advantages of this methodology are that the problem size is not increased (thus preserving symmetry and positive definiteness) and that one need not provide an accurate guess for the initial penalty to reach a prescribed mass balance tolerance. We derive the least-squares weighting needed to ensure accurate solutions. We also demonstrate the robustness of the weighted LSFEM coupled with the augmented Lagrangian by solving large-scale heterogeneous and variably saturated flow through porous media problems. The performance of the iterative solvers with respect to various user-defined augmented Lagrangian parameters will be documented.
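As a generic illustration of the augmented Lagrangian idea invoked here (iteratively correcting constraint violations without enlarging the system), the sketch below solves a small equality-constrained least-squares problem. The matrices, penalty value, and tolerances are illustrative assumptions, not the paper's discretization.

```python
import numpy as np

def augmented_lagrangian_lsq(A, b, C, d, mu=10.0, tol=1e-10, max_outer=100):
    """Minimize 0.5*||A x - b||^2 subject to C x = d via augmented Lagrangian iterations."""
    lam = np.zeros(C.shape[0])                 # multipliers for the balance constraints
    x = np.zeros(A.shape[1])
    H = A.T @ A + mu * C.T @ C                 # same size as the unconstrained problem
    for _ in range(max_outer):
        g = A.T @ b + C.T @ (mu * d - lam)
        x = np.linalg.solve(H, g)              # inner least-squares solve
        r = C @ x - d                          # constraint (balance) violation
        if np.linalg.norm(r) < tol:
            break
        lam += mu * r                          # multiplier update drives the violation to zero
    return x, lam

A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
C = np.array([[1.0, 1.0]])                     # enforce x0 + x1 = 2 exactly
d = np.array([2.0])
x, lam = augmented_lagrangian_lsq(A, b, C, d)
```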
Joint Calibration of 3D Laser Scanner and Digital Camera Based on DLT Algorithm
NASA Astrophysics Data System (ADS)
Gao, X.; Li, M.; Xing, L.; Liu, Y.
2018-04-01
A calibration target was designed that can be scanned by a 3D laser scanner while being photographed by a digital camera, yielding a point cloud and photos of the same target. A method to jointly calibrate the 3D laser scanner and digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. This method adds a distortion model of the digital camera to the traditional DLT algorithm; after repeated iterations, it solves for the interior and exterior orientation elements of the camera and achieves the joint calibration of the 3D laser scanner and digital camera. The results show that this method is reliable.
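The linear core of such a calibration, estimating the camera projection from 3D-2D correspondences before any distortion refinement, can be sketched with the classical DLT. The synthetic data and the omission of the lens-distortion model are illustrative simplifications of the method described above.

```python
import numpy as np

def dlt_projection(X, uv):
    """Estimate the 3x4 camera projection matrix from >= 6 point correspondences.

    X  : (N, 3) array of 3D points (e.g. from the laser-scanned point cloud)
    uv : (N, 2) array of the corresponding image pixels
    """
    rows = []
    for (x, y, z), (u, v) in zip(X, uv):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z, -u])
        rows.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z, -v])
    A = np.asarray(rows, float)
    _, _, Vt = np.linalg.svd(A)            # homogeneous least squares
    P = Vt[-1].reshape(3, 4)               # right singular vector of the smallest singular value
    return P / P[-1, -1]

# Quick synthetic check against a known projection matrix
P_true = np.array([[800.0, 0, 320, 10], [0, 800, 240, 20], [0, 0, 1, 5]])
X = np.random.default_rng(0).uniform(-1, 1, (8, 3))
h = (P_true @ np.c_[X, np.ones(8)].T).T
uv = h[:, :2] / h[:, 2:3]
P_est = dlt_projection(X, uv)

# A full joint calibration would alternate this linear solve with a lens-distortion
# update (the iterative step in the paper); that refinement loop is omitted here.
```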
Pseudo-time methods for constrained optimization problems governed by PDE
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1995-01-01
In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constrained PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at a cost of solving the analysis problem just a few times, independent of the number of design parameters. The method can be applied using single grid iterations as well as with multigrid solvers.
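The idea of marching on the design variables while only relaxing, rather than fully solving, the state and costate equations can be shown on a deliberately trivial toy problem. The objective, constraint, and step sizes below are illustrative assumptions chosen so that the coupled pseudo-time iteration is stable; they are not taken from the paper.

```python
# Toy problem: minimize J = 0.5*(u - 5)^2 subject to the state equation u = g(a) = 2*a + 1.
# Optimality system: state   u - g(a) = 0
#                    costate lam + (u - 5) = 0
#                    design  -lam * g'(a) = 0
g  = lambda a: 2.0 * a + 1.0
dg = lambda a: 2.0

u, lam, a = 0.0, 0.0, 1.0
for _ in range(500):
    a   -= 0.05 * (-lam * dg(a))      # march on the design equation ...
    u   -= 0.3 * (u - g(a))           # ... while the state residual is merely relaxed
    lam -= 0.3 * (lam + (u - 5.0))    # ... and so is the costate residual

print(round(a, 3), round(u, 3))       # approaches a = 2, u = 5
```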
NASA Astrophysics Data System (ADS)
Tsuru, Daigo; Tanigawa, Hisashi; Hirose, Takanori; Mohri, Kensuke; Seki, Yohji; Enoeda, Mikio; Ezato, Koichiro; Suzuki, Satoshi; Nishi, Hiroshi; Akiba, Masato
2009-06-01
As the primary candidate for the ITER Test Blanket Module (TBM) to be tested under the leadership of Japan, a water cooled solid breeder (WCSB) TBM is being developed. This paper shows the recent achievements towards the milestones of ITER TBMs prior to installation, which consist of design integration in ITER, module qualification and safety assessment. With respect to the design integration, targeting the detailed design final report in 2012, structural designs of the WCSB TBM and the interfacing components (common frame and backside shielding) that are placed in a test port of ITER, as well as the layout of the cooling system, are presented. As for module qualification, a real-scale first wall mock-up fabricated by the hot isostatic pressing method from the structural material F82H, a reduced activation martensitic ferritic steel, and flow and irradiation tests of the mock-up are presented. As for the safety milestones, the contents of the preliminary safety report in 2008, consisting of source term identification, failure mode and effect analysis (FMEA), identification of postulated initiating events (PIEs) and safety analyses, are presented.
Hsu, Wei-Feng; Lin, Shih-Chih
2018-01-01
This paper presents a novel approach to optimizing the design of phase-only computer-generated holograms (CGH) for the creation of binary images in an optical Fourier transform system. Optimization begins by selecting an image pixel with a temporal change in amplitude. The modulated image function undergoes an inverse Fourier transform followed by the imposition of a CGH constraint and the Fourier transform to yield an image function associated with the change in amplitude of the selected pixel. In iterations where the quality of the image is improved, that image function is adopted as the input for the next iteration. In cases where the image quality is not improved, the image function before the pixel changed is used as the input. Thus, the proposed approach is referred to as the pixelwise hybrid input-output (PHIO) algorithm. The PHIO algorithm was shown to achieve image quality far exceeding that of the Gerchberg-Saxton (GS) algorithm. The benefits were particularly evident when the PHIO algorithm was equipped with a dynamic range of image intensities equivalent to the amplitude freedom of the image signal. The signal variation of images reconstructed from the GS algorithm was 1.0223, but only 0.2537 when using PHIO, i.e., a 75% improvement. Nonetheless, the proposed scheme resulted in a 10% degradation in diffraction efficiency and signal-to-noise ratio.
Material migration studies with an ITER first wall panel proxy on EAST
NASA Astrophysics Data System (ADS)
Ding, R.; Pitts, R. A.; Borodin, D.; Carpentier, S.; Ding, F.; Gong, X. Z.; Guo, H. Y.; Kirschner, A.; Kocan, M.; Li, J. G.; Luo, G.-N.; Mao, H. M.; Qian, J. P.; Stangeby, P. C.; Wampler, W. R.; Wang, H. Q.; Wang, W. Z.
2015-02-01
The ITER beryllium (Be) first wall (FW) panels are shaped to protect leading edges between neighbouring panels arising from assembly tolerances. This departure from a perfectly cylindrical surface automatically leads to magnetically shadowed regions where eroded Be can be re-deposited, together with co-deposition of tritium fuel. To provide a benchmark for a series of erosion/re-deposition simulation studies performed for the ITER FW panels, dedicated experiments have been performed on the EAST tokamak using a specially designed, instrumented test limiter acting as a proxy for the FW panel geometry. Carbon coated molybdenum plates forming the limiter front surface were exposed to the outer midplane boundary plasma of helium discharges using the new Material and Plasma Evaluation System (MAPES). Net erosion and deposition patterns are estimated using ion beam analysis to measure the carbon layer thickness variation across the surface after exposure. The highest erosion of about 0.8 µm is found near the midplane, where the surface is closest to the plasma separatrix. No net deposition above the measurement detection limit was found on the proxy wall element, even in shadowed regions. The measured 2D surface erosion distribution has been modelled with the 3D Monte Carlo code ERO, using the local plasma parameter measurements together with a diffusive transport assumption. Excellent agreement between the experimentally observed net erosion and the modelled erosion profile has been obtained.
A Block Iterative Finite Element Model for Nonlinear Leaky Aquifer Systems
NASA Astrophysics Data System (ADS)
Gambolati, Giuseppe; Teatini, Pietro
1996-01-01
A new quasi three-dimensional finite element model of groundwater flow is developed for highly compressible multiaquifer systems where aquitard permeability and elastic storage are dependent on hydraulic drawdown. The model is solved by a block iterative strategy, which is naturally suggested by the geological structure of the porous medium and can be shown to be mathematically equivalent to a block Gauss-Seidel procedure. As such it can be generalized into a block overrelaxation procedure and greatly accelerated by the use of the optimum overrelaxation factor. Results for both linear and nonlinear multiaquifer systems emphasize the excellent computational performance of the model and indicate that convergence in leaky systems can be improved up to as much as one order of magnitude.
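A generic block Gauss-Seidel sweep with overrelaxation, of the kind this model relies on, can be sketched as follows. The block partitioning, overrelaxation factor, and test matrices are illustrative assumptions, not the aquifer system of the paper.

```python
import numpy as np

def block_sor(blocks, rhs, omega=1.3, n_iter=500, tol=1e-12):
    """Block Gauss-Seidel with overrelaxation for a system partitioned into sub-blocks.

    blocks[i][j] : coupling matrix between block i and block j
    rhs[i]       : right-hand side of block i
    """
    n = len(rhs)
    x = [np.zeros_like(b) for b in rhs]
    for _ in range(n_iter):
        change = 0.0
        for i in range(n):                            # sweep block by block (e.g. aquifer by aquifer)
            r = rhs[i] - sum(blocks[i][j] @ x[j] for j in range(n) if j != i)
            dx = omega * (np.linalg.solve(blocks[i][i], r) - x[i])   # overrelaxed update
            x[i] = x[i] + dx
            change = max(change, float(np.max(np.abs(dx))))
        if change < tol:
            break
    return x

# Two coupled blocks of a symmetric positive-definite test system
blocks = [[np.array([[4.0, 1.0], [1.0, 3.0]]), np.array([[0.5, 0.0], [0.0, 0.5]])],
          [np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([[5.0, 1.0], [1.0, 4.0]])]]
rhs = [np.array([1.0, 2.0]), np.array([0.5, 1.5])]
x = block_sor(blocks, rhs)
```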
A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.
De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc
2010-09-01
In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.
ERIC Educational Resources Information Center
Mavrikis, Manolis; Gutierrez-Santos, Sergio
2010-01-01
This paper presents a methodology for the design of intelligent learning environments. We recognise that in the educational technology field, theory development and system-design should be integrated and rely on an iterative process that addresses: (a) the difficulty to elicit precise, concise, and operationalized knowledge from "experts" and (b)…
From Amorphous to Defined: Balancing the Risks of Spiral Development
2007-04-30
[Figure/table residue from a system-dynamics simulation: a time axis in weeks (630-900) and legend entries for "Work started and active" JavelinCalibration work packages in the Requirements, Technology, Design, Manufacturing, and Use phases (Iteration 1).]
NASA Astrophysics Data System (ADS)
Wilson, J. R.; Bonoli, P. T.
2015-02-01
Ion cyclotron range of frequency (ICRF) heating is foreseen as an integral component of the initial ITER operation. The status of ICRF preparations for ITER and supporting research were updated in the 2007 [Gormezano et al., Nucl. Fusion 47, S285 (2007)] report on the ITER physics basis. In this report, we summarize progress made toward the successful application of ICRF power on ITER since that time. Significant advances have been made in support of the technical design by development of new techniques for arc protection, new algorithms for tuning and matching, experimental tests of more ITER-like antennas, and demonstration on mockups that the design assumptions are correct. In addition, new applications of the ICRF system, beyond bulk heating, have been proposed and explored.
Spectral element multigrid. Part 2: Theoretical justification
NASA Technical Reports Server (NTRS)
Maday, Yvon; Munoz, Rafael
1988-01-01
A multigrid algorithm is analyzed which is used for solving iteratively the algebraic system resulting from the approximation of a second order problem by spectral or spectral element methods. The analysis, performed here in the one dimensional case, justifies the good smoothing properties of the Jacobi preconditioner that was presented in Part 1 of this paper.
Extended Kalman filtering for the detection of damage in linear mechanical structures
NASA Astrophysics Data System (ADS)
Liu, X.; Escamilla-Ambrosio, P. J.; Lieven, N. A. J.
2009-09-01
This paper addresses the problem of assessing the location and extent of damage in a vibrating structure by means of vibration measurements. Frequency domain identification methods (e.g. finite element model updating) have been widely used in this area while time domain methods such as the extended Kalman filter (EKF) method, are more sparsely represented. The difficulty of applying EKF in mechanical system damage identification and localisation lies in: the high computational cost, the dependence of estimation results on the initial estimation error covariance matrix P(0), the initial value of parameters to be estimated, and on the statistics of measurement noise R and process noise Q. To resolve these problems in the EKF, a multiple model adaptive estimator consisting of a bank of EKF in modal domain was designed, each filter in the bank is based on different P(0). The algorithm was iterated by using the weighted global iteration method. A fuzzy logic model was incorporated in each filter to estimate the variance of the measurement noise R. The application of the method is illustrated by simulated and real examples.
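A single extended Kalman filter cycle, the building block of the filter bank described above, can be sketched generically. The process/measurement functions and noise covariances below are placeholders, not the paper's modal-domain formulation or its fuzzy-logic noise estimator.

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One extended Kalman filter cycle for an augmented state (e.g. displacements + stiffness)."""
    # Prediction through the (possibly nonlinear) process model
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Correction with the measurement z
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                       # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new

# Example with a (linear) random-walk parameter model and direct observation of one state:
f = lambda x: x
F_jac = lambda x: np.eye(x.size)
h = lambda x: x[:1]
H_jac = lambda x: np.eye(1, x.size)
x, P = ekf_step(np.zeros(2), np.eye(2), np.array([0.3]), f, F_jac, h, H_jac,
                Q=1e-4 * np.eye(2), R=np.array([[1e-2]]))
```

Running a bank of such filters, each started from a different P(0) and combined by weights, gives the multiple-model adaptive estimator used in the study above.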
Conceptual Design of the ITER ECE Diagnostic - An Update
NASA Astrophysics Data System (ADS)
Austin, M. E.; Pandya, H. K. B.; Beno, J.; Bryant, A. D.; Danani, S.; Ellis, R. F.; Feder, R.; Hubbard, A. E.; Kumar, S.; Ouroua, A.; Phillips, P. E.; Rowan, W. L.
2012-09-01
The ITER ECE diagnostic has recently been through a conceptual design review for the entire system including front end optics, transmission line, and back-end instruments. The basic design of two viewing lines, each with a single ellipsoidal mirror focussing into the plasma near the midplane of the typical operating scenarios is agreed upon. The location and design of the hot calibration source and the design of the shutter that directs its radiation to the transmission line are issues that need further investigation. In light of recent measurements and discussion, the design of the broadband transmission line is being revisited and new options contemplated. For the instruments, current systems for millimeter wave radiometers and broad-band spectrometers will be adequate for ITER, but the option for employing new state-of-the-art techniques will be left open.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuznetsov, A. P., E-mail: APKuznetsov@mephi.ru; Buzinskij, O. I.; Gubsky, K. L.
A set of optical diagnostics is expected for measuring the plasma characteristics in ITER. Optical elements located inside discharge chambers are exposed to an intense radiation load, sputtering due to collisions with energetic atoms formed in the charge transfer processes, and contamination due to recondensation of materials sputtered from different parts of the chamber structure. Removing the films of sputtered materials from the mirrors with the aid of pulsed laser radiation is an efficient cleaning method enabling recovery of the optical properties of the mirrors. In this work, we studied the efficiency of removal of metal oxide films by pulsed radiation of a fiber laser. Optimization of the laser cleaning conditions was carried out on samples consisting of metal substrates polished to optical quality, with films deposited on them imitating the chemical composition and conditions expected in ITER. It is shown that, by a proper selection of the modes of radiation exposure of the surface with a deposited film, it is feasible to restore the original high reflection characteristics of optical elements.
NASA Astrophysics Data System (ADS)
Zhang, B.; Sang, Jun; Alam, Mohammad S.
2013-03-01
An image hiding method based on the cascaded iterative Fourier transform and a public-key encryption algorithm was proposed. Firstly, the original secret image was encrypted into two phase-only masks M1 and M2 via the cascaded iterative Fourier transform (CIFT) algorithm. Then, the public-key encryption algorithm RSA was adopted to encrypt M2 into M2'. Finally, a host image was enlarged by extending each pixel into 2×2 pixels, and each element in M1 and M2' was multiplied by a superimposition coefficient and added to or subtracted from two different elements in the 2×2 pixels of the enlarged host image. To recover the secret image from the stego-image, the two masks were extracted from the stego-image without the original host image. By applying the public-key encryption algorithm, the key distribution was facilitated; moreover, compared with the image hiding method based on optical interference, the proposed method may reach higher robustness by employing the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.
Finite element analysis of heat load of tungsten relevant to ITER conditions
NASA Astrophysics Data System (ADS)
Zinovev, A.; Terentyev, D.; Delannay, L.
2017-12-01
A computational procedure is proposed in order to predict the initiation of intergranular cracks in tungsten with the ITER specification microstructure (i.e. characterised by elongated micrometre-sized grains). Damage is caused by a cyclic heat load, which emerges from plasma instabilities during operation of thermonuclear devices. First, a macroscopic thermo-mechanical simulation is performed in order to obtain the temperature and strain fields in the material. The strain path is recorded at a selected point of interest of the macroscopic specimen, and is then applied at the microscopic level to a finite element mesh of a polycrystal. In the microscopic simulation, the stress state at the grain boundaries serves as the marker of crack initiation. The simulated heat load cycle is representative of edge-localized modes, which are anticipated during normal operation of ITER. Normal stresses at the grain boundary interfaces were shown to depend strongly on the direction of grain orientation with respect to the heat flux direction and to attain higher values if the flux is perpendicular to the elongated grains, where it apparently promotes crack initiation.
Conceptual Design of the ITER Plasma Control System
NASA Astrophysics Data System (ADS)
Snipes, J. A.
2013-10-01
The conceptual design of the ITER Plasma Control System (PCS) has been approved and the preliminary design has begun for the 1st plasma PCS. This is a collaboration of many plasma control experts from existing devices to design and test plasma control techniques applicable to ITER on existing machines. The conceptual design considered all phases of plasma operation, ranging from non-active H/He plasmas through high fusion gain inductive DT plasmas to fully non-inductive steady-state operation, to ensure that the PCS control functionality and architecture can satisfy the demands of the ITER Research Plan. The PCS will control plasma equilibrium and density, plasma heat exhaust, a range of MHD instabilities (including disruption mitigation), and the non-inductive current profile required to maintain stable steady-state scenarios. The PCS architecture requires sophisticated shared actuator management and event handling systems to prioritize control goals, algorithms, and actuators according to dynamic control needs and monitor plasma and plant system events to trigger automatic changes in the control algorithms or operational scenario, depending on real-time operating limits and conditions.
Description of the prototype diagnostic residual gas analyzer for ITER.
Younkin, T R; Biewer, T M; Klepper, C C; Marcus, C
2014-11-01
The diagnostic residual gas analyzer (DRGA) system to be used during ITER tokamak operation is being designed at Oak Ridge National Laboratory to measure fuel ratios (deuterium and tritium), fusion ash (helium), and impurities in the plasma. The eventual purpose of this instrument is for machine protection, basic control, and physics on ITER. Prototyping is ongoing to optimize the hardware setup and measurement capabilities. The DRGA prototype is comprised of a vacuum system and measurement technologies that will overlap to meet ITER measurement requirements. Three technologies included in this diagnostic are a quadrupole mass spectrometer, an ion trap mass spectrometer, and an optical penning gauge that are designed to document relative and absolute gas concentrations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devine, K.D.; Hennigan, G.L.; Hutchinson, S.A.
1999-01-01
The theoretical background for the finite element computer program, MPSalsa Version 1.5, is presented in detail. MPSalsa is designed to solve laminar or turbulent low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow (with auxiliary turbulence equations), heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve coupled multiple Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.
Development and verification of local/global analysis techniques for laminated composites
NASA Technical Reports Server (NTRS)
Griffin, O. Hayden, Jr.
1989-01-01
Analysis and design methods for laminated composite materials have been the subject of considerable research over the past 20 years, and are currently well developed. In performing the detailed three-dimensional analyses which are often required in proximity to discontinuities, however, analysts often encounter difficulties due to large models. Even with the current availability of powerful computers, models which are too large to run, either from a resource or time standpoint, are often required. There are several approaches which can permit such analyses, including substructuring, use of superelements or transition elements, and the global/local approach. This effort is based on the so-called zoom technique to global/local analysis, where a global analysis is run, with the results of that analysis applied to a smaller region as boundary conditions, in as many iterations as is required to attain an analysis of the desired region. Before beginning the global/local analyses, it was necessary to evaluate the accuracy of the three-dimensional elements currently implemented in the Computational Structural Mechanics (CSM) Testbed. It was also desired to install, using the Experimental Element Capability, a number of displacement formulation elements which have well known behavior when used for analysis of laminated composites.
NASA Astrophysics Data System (ADS)
Allen, Jeffery M.
This research involves a few First-Order System Least Squares (FOSLS) formulations of a nonlinear-Stokes flow model for ice sheets. In Glen's flow law, a commonly used constitutive equation for ice rheology, the viscosity becomes infinite as the velocity gradients approach zero. This typically occurs near the ice surface or where there is basal sliding. The computational difficulties associated with the infinite viscosity are often overcome by an arbitrary modification of Glen's law that bounds the maximum viscosity. The FOSLS formulations developed in this thesis are designed to overcome this difficulty. The first FOSLS formulation is just the first-order representation of the standard nonlinear full-Stokes equations; it is known as the viscosity formulation and suffers from the problem above. To overcome the problem of infinite viscosity, two new formulations exploit the fact that the deviatoric stress, the product of viscosity and strain rate, approaches zero as the viscosity goes to infinity. Using the deviatoric stress as the basis for a first-order system results in the basic fluidity system. Augmenting the basic fluidity system with a curl-type equation results in the augmented fluidity system, which is more amenable to the iterative solver, Algebraic MultiGrid (AMG). A Nested Iteration (NI) Newton-FOSLS-AMG approach is used to solve the nonlinear-Stokes problems. Several test problems from the ISMIP set of benchmarks are examined to test the effectiveness of the various formulations. These tests show that the viscosity-based method is more expensive and less accurate. The basic fluidity system shows optimal finite-element convergence. However, there is not yet an efficient iterative solver for this type of system, and this is the topic of future research. Alternatively, AMG performs better on the augmented fluidity system when using a specific scaling. Unfortunately, this scaling results in reduced finite-element convergence.
Life Support Systems for Lunar Landers
NASA Technical Reports Server (NTRS)
Anderson, Molly
2008-01-01
Engineers designing life support systems for NASA's next Lunar Landers face unique challenges. As with any vehicle that enables human spaceflight, the needs of the crew drive most of the lander requirements. The lander is also a key element of the architecture NASA will implement in the Constellation program. Many requirements, constraints, or optimization goals will be driven by interfaces with other projects, like the Crew Exploration Vehicle, the Lunar Surface Systems, and the Extravehicular Activity project. Other challenges in the life support system will be driven by the unique location of the vehicle in the environments encountered throughout the mission. This paper examines several topics that may be major design drivers for the lunar lander life support system. There are several functional requirements for the lander that may be different from previous vehicles or programs and recent experience. Some of the requirements or design drivers will change depending on the overall Lander configuration. While the configuration for a lander design is not fixed, designers can examine how these issues would impact their design and be prepared for the quick design iterations required to optimize a spacecraft.
Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Gulshan B., E-mail: gbsharma@ucalgary.ca; University of Pittsburgh, Swanson School of Engineering, Department of Bioengineering, Pittsburgh, Pennsylvania 15213; University of Calgary, Schulich School of Engineering, Department of Mechanical and Manufacturing Engineering, Calgary, Alberta T2N 1N4
Shoulder arthroplasty success has been attributed to many factors including bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design should withstand a lifetime of use. Finite element (FE) analyses have been extensively used to study stresses and strains produced in implants and bone. However, these static analyses only measure a moment in time and not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three-dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally, simulating bone remodeling using an intact human scapula, initially resetting the scapular bone material properties to be uniform, numerically simulating sequential loading, and comparing the bone remodeling simulation results to the actual scapula's material properties. A three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint load and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element's remodeling stimulus was compared to its corresponding reference stimulus and its material properties modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent densities were plotted and compared. The locations of high and low predicted bone density were comparable to the actual specimen. High predicted bone density was greater than the actual specimen. Low predicted bone density was lower than the actual specimen. Differences were probably due to the applied muscle and joint reaction loads, boundary conditions, and values of constants used. Work is underway to study this. Nonetheless, the results demonstrate three-dimensional bone remodeling simulation validity and potential. Such adaptive predictions take physiological bone remodeling simulations one step closer to reality. Computational analyses are needed that integrate biological remodeling rules and predict how bone will respond over time. We expect the combination of computational static stress analyses together with adaptive bone remodeling simulations to become effective tools for regenerative medicine research.
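The element-wise, strain-energy-density driven density update at the heart of such simulations can be sketched as follows. The FE solve is abstracted as a callable, and the remodeling rate, density bounds, and example stimulus function are illustrative assumptions rather than the study's actual model.

```python
import numpy as np

def remodel_density(rho0, strain_energy_density, reference_stimulus,
                    rate=1.0, rho_min=0.05, rho_max=1.8, n_iter=10):
    """Iterative, element-wise bone density adaptation driven by a strain-energy stimulus.

    strain_energy_density : callable returning per-element strain energy density for a
                            given density field (stand-in for the FE solve under the loads)
    """
    rho = np.asarray(rho0, float).copy()            # start from a homogeneous density field
    for _ in range(n_iter):
        U = strain_energy_density(rho)
        stimulus = U / rho                          # remodeling stimulus per unit mass
        rho += rate * (stimulus - reference_stimulus)   # densify where overloaded, resorb where idle
        rho = np.clip(rho, rho_min, rho_max)        # keep apparent density within physical bounds
    return rho

# Toy usage with an invented stimulus law: converges toward the density where U/rho matches the reference
rho = remodel_density(np.full(5, 0.8), lambda r: 0.02 / r, reference_stimulus=0.03)
```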
RF Pulse Design using Nonlinear Gradient Magnetic Fields
Kopanoglu, Emre; Constable, R. Todd
2014-01-01
Purpose An iterative k-space trajectory and radio-frequency (RF) pulse design method is proposed for Excitation using Nonlinear Gradient Magnetic fields (ENiGMa). Theory and Methods The spatial encoding functions (SEFs) generated by nonlinear gradient fields (NLGFs) are linearly dependent in Cartesian coordinates. Left uncorrected, this may lead to flip-angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a Matching-Pursuit algorithm, and the RF pulse is designed using a Conjugate-Gradient algorithm. Three variants of the proposed approach are given: the full algorithm, a computationally cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. Results The method is compared to other iterative (Matching-Pursuit and Conjugate Gradient) and non-iterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity significantly. Conclusion An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest. PMID:25203286
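The greedy selection step, picking spatial encoding functions one at a time by their match to the residual target profile, can be sketched with a generic matching pursuit. The encoding matrix and target are placeholders; the subsequent conjugate-gradient RF design mentioned in the abstract is only noted in a comment.

```python
import numpy as np

def matching_pursuit(A, target, n_terms):
    """Greedily pick the spatial encoding functions (columns of A) that best explain the target.

    A      : (n_voxels, n_candidates) complex matrix of candidate encoding functions
    target : (n_voxels,) desired excitation profile
    """
    residual = target.astype(complex).copy()
    norms = np.linalg.norm(A, axis=0)               # assumed non-zero candidate columns
    selected, coeffs = [], []
    for _ in range(n_terms):
        scores = np.abs(A.conj().T @ residual) / norms      # correlation with the current residual
        k = int(np.argmax(scores))
        c = (A[:, k].conj() @ residual) / (norms[k] ** 2)   # least-squares coefficient for that column
        residual -= c * A[:, k]
        selected.append(k)
        coeffs.append(c)
    return selected, np.asarray(coeffs)

# The selected encoding functions would then be handed to a conjugate-gradient / LSQR-type
# solve (e.g. scipy.sparse.linalg.lsqr) to design the RF waveform itself.
```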
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Almeida, Valmor F.
2017-04-19
In this work, a phase-space discontinuous Galerkin (PSDG) method is presented for the solution of stellar radiative transfer problems. It allows for greater adaptivity than competing methods without sacrificing generality. The method is extensively tested on a spherically symmetric, static, inverse-power-law scattering atmosphere. Results for different sizes of atmospheres and intensities of scattering agreed with asymptotic values. The exponentially decaying behavior of the radiative field in the diffusive-transparent transition region, and the forward peaking behavior at the surface of extended atmospheres, were accurately captured. The integrodifferential equation of radiation transfer is solved iteratively by alternating between the radiative pressure equation and the original equation with the integral term treated as an energy density source term. In each iteration, the equations are solved via an explicit, flux-conserving, discontinuous Galerkin method. Finite elements are ordered in wave fronts perpendicular to the characteristic curves so that elemental linear algebraic systems are solved quickly by sweeping the phase space element by element. Two implementations of a diffusive boundary condition at the origin are demonstrated wherein the finite discontinuity in the radiation intensity is accurately captured by the proposed method. This allows for a consistent mechanism to preserve photon luminosity. The method was proved to be robust and fast, and a case is made for the adequacy of parallel processing. In addition to classical two-dimensional plots, results of normalized radiation intensity were mapped onto a log-polar surface exhibiting all distinguishing features of the problem studied.
Electromagnetic Analysis For The Design Of ITER Diagnostic Port Plugs During Plasma Disruptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Y
2014-03-03
ITER diagnostic port plugs perform many functions, including structural support of diagnostic systems under high electromagnetic loads while allowing diagnostic access to the plasma. The design of the diagnostic equatorial port plugs (EPP) is largely driven by the electromagnetic loads and the associated response of the EPP structure during plasma disruptions and VDEs. This paper summarizes results of transient electromagnetic analysis using Opera 3d in support of the design activities for the ITER diagnostic EPP. A complete distribution of disruption loads on the Diagnostic First Walls (DFWs), Diagnostic Shield Modules (DSMs) and the EPP structure, as well as the impact on the system design integration due to electrical contact among various EPP structural components, are discussed.
From Intent to Action: An Iterative Engineering Process
ERIC Educational Resources Information Center
Mouton, Patrice; Rodet, Jacques; Vacaresse, Sylvain
2015-01-01
Quite by chance, and over the course of a few haphazard meetings, a Master's degree in "E-learning Design" gradually developed in a Faculty of Economics. Its original and evolving design was the result of an iterative process carried out, not by a single Instructional Designer (ID), but by a full ID team. Over the last 10 years it has…
Loads specification and embedded plate definition for the ITER cryoline system
NASA Astrophysics Data System (ADS)
Badgujar, S.; Benkheira, L.; Chalifour, M.; Forgeas, A.; Shah, N.; Vaghela, H.; Sarkar, B.
2015-12-01
ITER cryolines (CLs) are a complex network of vacuum-insulated multi- and single-process pipe lines, distributed in three different areas at the ITER site. The CLs will support different operating loads during the machine lifetime, considered as either nominal, occasional or exceptional. The major loads, which form the design basis, are inertial, pressure, temperature, assembly, magnetic, snow, wind and enforced relative displacement; these are put together in the loads specification. Based on the defined load combinations, conceptual estimation of reaction loads has been carried out for the lines located inside the Tokamak building. Adequate numbers of embedded plates (EPs) per line have been defined and integrated in the building design. The finalization of the building EPs to support the lines before the detailed design is one of the major design challenges, as the usual design logic may be altered. At the ITER project level, it was important to finalize the EPs to allow adequate design and timely availability of the Tokamak building. The paper describes the single loads and load combinations considered in the loads specification, and the approach for conceptual load estimation and selection of EPs for the Toroidal Field (TF) cryoline as an example, by converting the load combinations into two main load categories: pressure and seismic.
Programmable logic construction kits for hyper-real-time neuronal modeling.
Guerrero-Rivera, Ruben; Morrison, Abigail; Diesmann, Markus; Pearce, Tim C
2006-11-01
Programmable logic designs are presented that achieve exact integration of leaky integrate-and-fire soma and dynamical synapse neuronal models and incorporate spike-time dependent plasticity and axonal delays. Highly accurate numerical performance has been achieved by modifying simpler forward-Euler-based circuitry requiring minimal circuit allocation, which, as we show, behaves equivalently to exact integration. These designs have been implemented and simulated at the behavioral and physical device levels, demonstrating close agreement with both numerical and analytical results. By exploiting finely grained parallelism and single clock cycle numerical iteration, these designs achieve simulation speeds at least five orders of magnitude faster than the nervous system, termed here hyper-real-time operation, when deployed on commercially available field-programmable gate array (FPGA) devices. Taken together, our designs form a programmable logic construction kit of commonly used neuronal model elements that supports the building of large and complex architectures of spiking neuron networks for real-time neuromorphic implementation, neurophysiological interfacing, or efficient parameter space investigations.
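In software, the "exact integration" of a leaky integrate-and-fire neuron amounts to advancing the membrane potential with a closed-form exponential propagator each clock cycle, which is the behavior the modified forward-Euler circuitry reproduces. The sketch below uses illustrative neuron parameters and input, not the parameters of the FPGA designs described above.

```python
import numpy as np

def lif_exact(input_current, dt=1e-4, tau=0.02, R=1e7, v_th=0.015, v_reset=0.0):
    """Leaky integrate-and-fire neuron advanced with the exact exponential propagator.

    For piecewise-constant input over each step, v(t+dt) = v*exp(-dt/tau) + R*I*(1 - exp(-dt/tau))
    is the closed-form solution, so no forward-Euler truncation error accumulates.
    """
    decay = np.exp(-dt / tau)
    v, spike_times = 0.0, []
    for k, i_k in enumerate(input_current):
        v = v * decay + R * i_k * (1.0 - decay)
        if v >= v_th:                    # threshold crossing: emit a spike and reset
            spike_times.append(k * dt)
            v = v_reset
    return spike_times

spikes = lif_exact(np.full(2000, 2e-9))   # constant 2 nA drive for 0.2 s
```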
A superlinear interior points algorithm for engineering design optimization
NASA Technical Reports Server (NTRS)
Herskovits, J.; Asquier, J.
1990-01-01
We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization inasmuch as at each iteration a feasible design is obtained. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to achieve superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.
Kushniruk, Andre W; Borycki, Elizabeth M
2015-01-01
The development of more usable and effective healthcare information systems has become a critical issue. In the software industry methodologies such as agile and iterative development processes have emerged to lead to more effective and usable systems. These approaches highlight focusing on user needs and promoting iterative and flexible development practices. Evaluation and testing of iterative agile development cycles is considered an important part of the agile methodology and iterative processes for system design and re-design. However, the issue of how to effectively integrate usability testing methods into rapid and flexible agile design cycles has remained to be fully explored. In this paper we describe our application of an approach known as low-cost rapid usability testing as it has been applied within agile system development in healthcare. The advantages of the integrative approach are described, along with current methodological considerations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Litaudon, X; Bernard, J. M.; Colas, L.
2013-01-01
To support the design of an ITER ion-cyclotron range of frequency heating (ICRH) system and to mitigate risks of operation in ITER, CEA has initiated an ambitious Research & Development program accompanied by experiments on Tore Supra or test-bed facilities together with a significant modelling effort. The paper summarizes the recent results in the following areas: comprehensive characterization (experiments and modelling) of a new Faraday screen concept tested on the Tore Supra antenna. A new model is developed for calculating the ICRH sheath rectification in the antenna vicinity. The model is applied to calculate the local heat flux on Tore Supra and ITER ICRH antennas. Full-wave modelling of ITER ICRH heating and current drive scenarios with the EVE code. With 20 MW of power, a current of 400 kA could be driven on axis in the DT scenario. Comparison between the DT and DT(3He) scenarios is given for heating and current drive efficiencies. First operation of the CW test-bed facility, TITAN, designed for ITER ICRH components testing, which could host up to a quarter of an ITER antenna. R&D of high-permittivity materials to improve the load of test facilities to better simulate ITER plasma antenna loading conditions.
Influence of Primary Gage Sensitivities on the Convergence of Balance Load Iterations
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2012-01-01
The connection between the convergence of wind tunnel balance load iterations and the existence of the primary gage sensitivities of a balance is discussed. First, basic elements of two load iteration equations that the iterative method uses in combination with results of a calibration data analysis for the prediction of balance loads are reviewed. Then, the connection between the primary gage sensitivities, the load format, the gage output format, and the convergence characteristics of the load iteration equation choices is investigated. A new criterion is also introduced that may be used to objectively determine if the primary gage sensitivity of a balance gage exists. Then, it is shown that both load iteration equations will converge as long as a suitable regression model is used for the analysis of the balance calibration data, the combined influence of non linear terms of the regression model is very small, and the primary gage sensitivities of all balance gages exist. The last requirement is fulfilled, e.g., if force balance calibration data is analyzed in force balance format. Finally, it is demonstrated that only one of the two load iteration equation choices, i.e., the iteration equation used by the primary load iteration method, converges if one or more primary gage sensitivities are missing. This situation may occur, e.g., if force balance calibration data is analyzed in direct read format using the original gage outputs. Data from the calibration of a six component force balance is used to illustrate the connection between the convergence of the load iteration equation choices and the existence of the primary gage sensitivities.
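A generic fixed-point load iteration of the kind discussed above, which makes clear why invertible primary gage sensitivities and small nonlinear terms matter, can be sketched as follows. The sensitivity matrix and quadratic interaction term are invented illustrations, not the paper's regression model or either of its two iteration equations.

```python
import numpy as np

def iterate_loads(outputs, S, nonlinear, n_iter=50, tol=1e-12):
    """Fixed-point load iteration for a balance model  outputs = S @ F + nonlinear(F).

    S         : matrix of primary gage sensitivities (must be invertible, i.e. all
                primary sensitivities exist)
    nonlinear : callable giving the combined non-linear regression terms, assumed small
    """
    S_inv = np.linalg.inv(S)
    F = S_inv @ outputs                           # linear first guess
    for _ in range(n_iter):
        F_new = S_inv @ (outputs - nonlinear(F))  # a contraction when the non-linear part is small
        if np.linalg.norm(F_new - F) < tol:
            break
        F = F_new
    return F

S = np.array([[2.0, 0.1], [0.05, 1.5]])           # invented sensitivities for two gages
nl = lambda F: 0.01 * F ** 2                      # invented small quadratic interaction term
loads = iterate_loads(np.array([1.0, 0.8]), S, nl)
```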
Iterative algorithms for large sparse linear systems on parallel computers
NASA Technical Reports Server (NTRS)
Adams, L. M.
1982-01-01
Algorithms are developed for assembling in parallel the sparse systems of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed, and results of this model for the algorithms are given.
A new least-squares transport equation compatible with voids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, J. B.; Morel, J. E.
2013-07-01
We define a new least-squares transport equation that is applicable in voids, can be solved using source iteration with diffusion-synthetic acceleration, and requires only the solution of an independent set of second-order self-adjoint equations for each direction during each source iteration. We derive the equation, discretize it using the S_n method in conjunction with a linear-continuous finite-element method in space, and computationally demonstrate several of its properties. (authors)
Design and optimization of membrane-type acoustic metamaterials
NASA Astrophysics Data System (ADS)
Blevins, Matthew Grant
One of the most common problems in noise control is the attenuation of low frequency noise. Typical solutions require barriers with high density and/or thickness. Membrane-type acoustic metamaterials are a novel type of engineered material capable of high low-frequency transmission loss despite their small thickness and light weight. These materials are ideally suited to applications with strict size and weight limitations such as aircraft, automobiles, and buildings. The transmission loss profile can be manipulated by changing the micro-level substructure, stacking multiple unit cells, or by creating multi-celled arrays. To date, analysis has focused primarily on experimental studies in plane-wave tubes and numerical modeling using finite element methods. These methods are inefficient when used for applications that require iterative changes to the structure of the material. To facilitate design and optimization of membrane-type acoustic metamaterials, computationally efficient dynamic models based on the impedance-mobility approach are proposed. Models of a single unit cell in a waveguide and in a baffle, a double layer of unit cells in a waveguide, and an array of unit cells in a baffle are studied. The accuracy of the models and the validity of assumptions used are verified using a finite element method. The remarkable computational efficiency of the impedance-mobility models compared to finite element methods enables implementation in design tools based on a graphical user interface and in optimization schemes. Genetic algorithms are used to optimize the unit cell design for a variety of noise reduction goals, including maximizing transmission loss for broadband, narrow-band, and tonal noise sources. The tools for design and optimization created in this work will enable rapid implementation of membrane-type acoustic metamaterials to solve real-world noise control problems.
Evolution Of USDOE Performance Assessments Over 20 Years
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seitz, Roger R.; Suttora, Linda C.
2013-02-26
Performance assessments (PAs) have been used for many years for the analysis of post-closure hazards associated with a radioactive waste disposal facility and to provide a reasonable expectation of the ability of the site and facility design to meet objectives for the protection of members of the public and the environment. The use of PA to support decision-making for LLW disposal facilities has been mandated in United States Department of Energy (USDOE) directives governing radioactive waste management since 1988 (currently DOE Order 435.1, Radioactive Waste Management). Prior to that time, PAs were also used in a less formal role. Over the past 20+ years, the USDOE approach to conduct, review and apply PAs has evolved into an efficient, rigorous and mature process that includes specific requirements for continuous improvement and independent reviews. The PA process has evolved through refinement of a graded and iterative approach designed to help focus efforts on those aspects of the problem expected to have the greatest influence on the decision being made. Many of the evolutionary changes to the PA process are linked to the refinement of the PA maintenance concept that has proven to be an important element of USDOE PA requirements in the context of supporting decision-making for safe disposal of LLW. The PA maintenance concept represents the evolution of the graded and iterative philosophy and has helped to drive the evolution of PAs from a deterministic compliance calculation into a systematic approach that helps to focus on critical aspects of the disposal system in a manner designed to provide a more informed basis for decision-making throughout the life of a disposal facility (e.g., monitoring, research and testing, waste acceptance criteria, design improvements, data collection, model refinements). A significant evolution in PA modeling has been associated with improved use of uncertainty and sensitivity analysis techniques to support efficient implementation of the graded and iterative approach. Rather than attempt to exactly predict the migration of radionuclides in a disposal unit, the best PAs have evolved into tools that provide a range of results to guide decision-makers in planning the most efficient, cost effective, and safe disposal of radionuclides.
NASA Astrophysics Data System (ADS)
Melentjev, Vladimir S.; Gvozdev, Alexander S.
2018-01-01
Improving the reliability of modern turbine engines is an important task, achieved in part by preventing vibration damage to the operating blades. The department of structure and design of aircraft engines has accumulated a large body of experimental data on protecting gas turbine engine blades from vibration. In this paper we propose a method for calculating the characteristics of wire rope dampers in the root attachment of a gas turbine engine blade. The method is based on the finite element method and transient analysis. Contact interaction (Lagrange-Euler method) between the compressor blade and the rotor disc has been taken into account, and the contribution of the contact interaction between parts to the damping of the system was quantified. The proposed method provides a convenient way to iteratively select the required parameters of the wire rope elastic-damping element, which is able to provide the necessary vibration protection for the blade of a gas turbine engine.
Finding the Optimal Guidance for Enhancing Anchored Instruction
ERIC Educational Resources Information Center
Zydney, Janet Mannheimer; Bathke, Arne; Hasselbring, Ted S.
2014-01-01
This study investigated the effect of different methods of guidance with anchored instruction on students' mathematical problem-solving performance. The purpose of this research was to iteratively design a learning environment to find the optimal level of guidance. Two iterations of the software were compared. The first iteration used explicit…
Status of the Negative Ion Based Heating and Diagnostic Neutral Beams for ITER
NASA Astrophysics Data System (ADS)
Schunke, B.; Bora, D.; Hemsworth, R.; Tanga, A.
2009-03-01
The current baseline of ITER foresees 2 Heating Neutral Beam (HNB's) systems based on negative ion technology, each accelerating to 1 MeV 40 A of D- and capable of delivering 16.5 MW of D0 to the ITER plasma, with a 3rd HNB injector foreseen as an upgrade option [1]. In addition a dedicated Diagnostic Neutral Beam (DNB) accelerating 60 A of H- to 100 keV will inject ≈15 A equivalent of H0 for charge exchange recombination spectroscopy and other diagnostics. Recently the RF driven negative ion source developed by IPP Garching has replaced the filamented ion source as the reference ITER design. The RF source developed at IPP, which is approximately a quarter scale of the source needed for ITER, is expected to have reduced caesium consumption compared to the filamented arc driven ion source. The RF driven source has demonstrated adequate accelerated D- and H- current densities as well as long-pulse operation [2, 3]. It is foreseen that the HNB's and the DNB will use the same negative ion source. Experiments with a half ITER-size ion source are on-going at IPP and the operation of a full-scale ion source will be demonstrated, at full power and pulse length, in the dedicated Ion Source Test Bed (ISTF), which will be part of the Neutral Beam Test Facility (NBTF), in Padua, Italy. This facility will carry out the necessary R&D for the HNB's for ITER and demonstrate operation of the full-scale HNB beamline. An overview of the current status of the neutral beam (NB) systems and the chosen configuration will be given and the ongoing integration effort into the ITER plant will be highlighted. It will be demonstrated how installation and maintenance logistics have influenced the design, notably the top access scheme facilitating access for maintenance and installation. The impact of the ITER Design Review and recent design change requests (DCRs) will be briefly discussed, including start-up and commissioning issues. The low current hydrogen phase now envisaged for start-up imposed specific requirements for operating the HNB's at full beam power. It has been decided to address the shinethrough issue by installing wall armour protection, which increases the operational space in all scenarios. Other NB related issues identified by the Design Review process will be discussed and the possible changes to the ITER baseline indicated.
Integrated Collaborative Model in Research and Education with Emphasis on Small Satellite Technology
1996-01-01
feedback; the number of iterations in a complete iteration is referred to as loop depth or iteration depth, g (i). A data packet or packet is data...loop depth, g (i)) is either a finite (constant or variable) or an infinite value. 1) Finite loop depth, variable number of iterations Some problems...design time. The time needed for the first packet to leave and a new initial data to be introduced to the iteration is min(R * ( g (k) * (N+I) + k-1
ITER EDA Newsletter. Volume 3, no. 2
NASA Astrophysics Data System (ADS)
1994-02-01
This issue of the ITER EDA (Engineering Design Activities) Newsletter contains reports on the Fifth ITER Council Meeting held in Garching, Germany, January 27-28, 1994, a visit (January 28, 1994) of an international group of Harvard Fellows to the San Diego Joint Work Site, the Inauguration Ceremony of the EC-hosted ITER joint work site in Garching (January 28, 1994), on an ITER Technical Meeting on Assembly and Maintenance held in Garching, Germany, January 19-26, 1994, and a report on a Technical Committee Meeting on radiation effects on in-vessel components held in Garching, Germany, November 15-19, 1993, as well as an ITER Status Report.
BEAMR: An interactive graphic computer program for design of charged particle beam transport systems
NASA Technical Reports Server (NTRS)
Leonard, R. F.; Giamati, C. C.
1973-01-01
A computer program for a PDP-15 is presented which calculates, to first order, the characteristics of a charged-particle beam as it is transported through a sequence of focusing and bending magnets. The maximum dimensions of the beam envelope normal to the transport system axis are continuously plotted on an oscilloscope as a function of distance along the axis. Provision is made to iterate the calculation by changing the types of magnets, their positions, and their field strengths. The program is especially useful for transport system design studies because of the ease and rapidity of altering parameters from panel switches. A typical calculation for a system with eight elements is completed in less than 10 seconds. An IBM 7094 version containing more detailed printed output but no oscilloscope display is also presented.
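The program itself is not reproduced in the abstract; as a rough sketch of the first-order formalism such beam-transport codes are built on, the snippet below propagates a beam ellipse (sigma matrix) through drift spaces and a thin-lens focusing element using 2x2 transfer matrices. The element lengths, focal length, and initial ellipse are made-up values for illustration.

```python
import numpy as np

def drift(L):
    """First-order transfer matrix of a field-free drift of length L (one transverse plane)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_lens(f):
    """Thin-lens approximation of a focusing magnet with focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# First-order envelope transport: sigma -> M sigma M^T after each element
sigma = np.diag([1.0e-6, 4.0e-6])                    # initial beam ellipse (m^2, rad^2), illustrative
beamline = [drift(0.5), thin_lens(0.8), drift(0.5)]  # made-up element sequence

for M in beamline:
    sigma = M @ sigma @ M.T
    print("envelope half-width [mm]:", 1e3 * np.sqrt(sigma[0, 0]))
```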
NASA Astrophysics Data System (ADS)
Zhao, Liang; Ge, Jian-Hua
2012-12-01
Single-carrier (SC) transmission with frequency-domain equalization (FDE) is today recognized as an attractive alternative to orthogonal frequency-division multiplexing (OFDM) for communication applications with inter-symbol interference (ISI) caused by multi-path propagation, especially in shallow water channels. In this paper, we investigate an iterative receiver based on a minimum mean square error (MMSE) decision feedback equalizer (DFE) with symbol-rate and fractional-rate sampling in the frequency domain (FD) and a serially concatenated trellis coded modulation (SCTCM) decoder. Based on sound speed profiles (SSP) measured in the lake and a finite-element ray tracing (Bellhop) method, the shallow water channel is constructed to evaluate the performance of the proposed iterative receiver. Performance results show that the proposed iterative receiver can significantly improve performance and obtain better data transmission than FD linear and adaptive decision feedback equalizers, especially when adopting fractional-rate sampling.
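The full iterative DFE/SCTCM receiver is beyond the scope of a short example; as a minimal sketch of the MMSE criterion in the frequency domain that it builds on, the snippet equalizes a QPSK block sent over a toy multipath channel with the standard one-tap-per-bin MMSE weights W[k] = H*[k] / (|H[k]|^2 + sigma^2). The channel taps, block length, and SNR are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                                     # block/FFT length (illustrative)
h = np.array([1.0, 0.6, 0.3])               # toy multipath channel impulse response
snr_db = 15.0
sigma2 = 10 ** (-snr_db / 10)               # noise variance for unit-energy symbols

# QPSK block; a cyclic prefix is modeled implicitly by using circular convolution
bits = rng.integers(0, 2, (N, 2))
s = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
r = np.fft.ifft(np.fft.fft(s) * np.fft.fft(h, N))
r += np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

H = np.fft.fft(h, N)
W = np.conj(H) / (np.abs(H) ** 2 + sigma2)  # per-bin MMSE equalizer weights
s_hat = np.fft.ifft(W * np.fft.fft(r))

print("mean squared symbol error:", np.mean(np.abs(s_hat - s) ** 2))
```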
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanford, M.
1997-12-31
Most commercially available quasistatic finite element programs assemble element stiffnesses into a global stiffness matrix, then use a direct linear equation solver to obtain nodal displacements. However, for large problems (greater than a few hundred thousand degrees of freedom), the memory size and computation time required for this approach become prohibitive. Moreover, direct solution does not lend itself to the parallel processing needed for today's multiprocessor systems. This talk gives an overview of the iterative solution strategy of JAS3D, the nonlinear large-deformation quasistatic finite element program. Because its architecture is derived from an explicit transient-dynamics code, it never assembles a global stiffness matrix. The author describes the approach he used to implement the solver on multiprocessor computers, and shows examples of problems run on hundreds of processors and more than a million degrees of freedom. Finally, he describes some of the work he is presently doing to address the challenges of iterative convergence for ill-conditioned problems.
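The abstract notes that JAS3D never assembles a global stiffness matrix but does not spell out its solver; as a generic sketch of a matrix-free Krylov iteration in that spirit (not the actual JAS3D algorithm), here is a conjugate gradient routine that only requires a callback applying the operator to a vector, supplied below by a toy element-free Laplacian action.

```python
import numpy as np

def cg_matrix_free(apply_A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradients using only a matrix-vector product callback.

    `apply_A(x)` plays the role of an element-by-element stiffness action,
    so no global matrix is ever assembled or stored.
    """
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for k in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, k + 1
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter

# Toy SPD operator: 1-D Laplacian applied without ever forming the matrix
n = 100
def apply_laplacian(x):
    y = 2.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

x, iters = cg_matrix_free(apply_laplacian, np.ones(n))
print(iters, np.linalg.norm(apply_laplacian(x) - np.ones(n)))
```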
NASA Astrophysics Data System (ADS)
Schanz, Martin; Ye, Wenjing; Xiao, Jinyou
2016-04-01
Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh compared to the convolution quadrature method to obtain the same level of accuracy. If fast methods like the fast multipole method are further used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.
Inelastic strain analogy for piecewise linear computation of creep residues in built-up structures
NASA Technical Reports Server (NTRS)
Jenkins, Jerald M.
1987-01-01
An analogy between inelastic strains caused by temperature and those caused by creep is presented in terms of isotropic elasticity. It is shown how the theoretical aspects can be blended with existing finite-element computer programs to exact a piecewise linear solution. The creep effect is determined by using the thermal stress computational approach, if appropriate alterations are made to the thermal expansion of the individual elements. The overall transient solution is achieved by consecutive piecewise linear iterations. The total residue caused by creep is obtained by accumulating creep residues for each iteration and then resubmitting the total residues for each element as an equivalent input. A typical creep law is tested for incremental time convergence. The results indicate that the approach is practical, with a valid indication of the extent of creep after approximately 20 hr of incremental time. The general analogy between body forces and inelastic strain gradients is discussed with respect to how an inelastic problem can be worked as an elastic problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Jan; Ferrada, Juan J; Curd, Warren
During inductive plasma operation of ITER, fusion power will reach 500 MW with an energy multiplication factor of 10. The heat will be transferred by the Tokamak Cooling Water System (TCWS) to the environment using the secondary cooling system. Plasma operations are inherently safe even under the most severe postulated accident condition, a large in-vessel break that results in a loss-of-coolant accident. A functioning cooling water system is not required to ensure safe shutdown. Even though ITER is inherently safe, TCWS equipment (e.g., heat exchangers, piping, pressurizers) is classified as safety important components. This is because the water is predicted to contain low levels of radionuclides (e.g., activated corrosion products, tritium) with activity levels high enough to require the design of components to be in accordance with French regulations for nuclear pressure equipment, i.e., the French Order dated 12 December 2005 (ESPN). ESPN has extended the practical application of the methodology established by the Pressure Equipment Directive (97/23/EC) to nuclear pressure equipment, under French Decree 99-1046 dated 13 December 1999, and Order dated 21 December 1999 (ESP). ASME codes and supplementary analyses (e.g., Failure Modes and Effects Analysis) will be used to demonstrate that the TCWS equipment meets these essential safety requirements. TCWS is being designed to provide not only cooling, with a capacity of approximately 1 GW energy removal, but also elevated temperature baking of first-wall/blanket, vacuum vessel, and divertor. Additional TCWS functions include chemical control of water, draining and drying for maintenance, and facilitation of leak detection/localization. The TCWS interfaces with the majority of ITER systems, including the secondary cooling system. U.S. ITER is responsible for design, engineering, and procurement of the TCWS with industry support from an Engineering Services Organization (ESO) (AREVA Federal Services, with support from Northrop Grumman, and OneCIS). ITER International Organization (ITER-IO) is responsible for design oversight and equipment installation in Cadarache, France. TCWS equipment will be fabricated using ASME design codes with quality assurance and oversight by an Agreed Notified Body (approved by the French regulator) that will ensure regulatory compliance. This paper describes the TCWS design and how U.S. ITER and fabricators will use ASME codes to comply with EU Directives and French Orders and Decrees.
Modal Test/Analysis Correlation of Space Station Structures Using Nonlinear Sensitivity
NASA Technical Reports Server (NTRS)
Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan
1992-01-01
The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlation. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.
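The abstract mentions updating nonlinear sensitivities (derivatives of eigenvalues and eigenvectors) during optimizer iterations but does not give the expressions; as a hedged illustration, the snippet below checks the standard first-order eigenvalue-sensitivity formula for a generalized problem K(p) phi = lambda M phi with mass-normalized modes, d lambda / dp = phi^T (dK/dp) phi when M does not depend on p, against a central finite difference on a random symmetric pencil. This is the textbook result, not necessarily the exact formulation used with MSC/NASTRAN in the paper.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 6
# Random symmetric positive-definite stiffness/mass pair (illustrative)
K0 = rng.standard_normal((n, n))
K0 = K0 @ K0.T + n * np.eye(n)
M = rng.standard_normal((n, n))
M = M @ M.T + n * np.eye(n)
dK = rng.standard_normal((n, n))
dK = dK + dK.T                                 # dK/dp for a single design parameter p

def lowest_eig(p):
    lam, phi = eigh(K0 + p * dK, M)            # K(p) phi = lam M phi
    return lam[0], phi[:, 0]

lam, phi = lowest_eig(0.0)                     # phi is mass-normalized (eigh convention)
analytic = phi @ dK @ phi                      # d lam / dp = phi^T (dK/dp) phi, since dM/dp = 0 here
eps = 1e-6
numeric = (lowest_eig(eps)[0] - lowest_eig(-eps)[0]) / (2 * eps)
print("analytic:", analytic, " finite difference:", numeric)
```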
Numerical solution of quadratic matrix equations for free vibration analysis of structures
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1975-01-01
This paper is concerned with the efficient and accurate solution of the eigenvalue problem represented by quadratic matrix equations. Such matrix forms are obtained in connection with the free vibration analysis of structures, discretized by finite 'dynamic' elements, resulting in frequency-dependent stiffness and inertia matrices. The paper presents a new numerical solution procedure of the quadratic matrix equations, based on a combined Sturm sequence and inverse iteration technique enabling economical and accurate determination of a few required eigenvalues and associated vectors. An alternative procedure based on a simultaneous iteration procedure is also described when only the first few modes are the usual requirement. The employment of finite dynamic elements in conjunction with the presently developed eigenvalue routines results in a most significant economy in the dynamic analysis of structures.
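The combined Sturm-sequence/inverse-iteration procedure of the paper is not reproduced; as a minimal sketch of the inverse-iteration ingredient for a linear pencil K phi = lambda M phi (the frequency-dependent "dynamic element" matrices are beyond this example), the routine below converges to the eigenpair nearest a chosen shift, which is where a Sturm-sequence count would normally be used to bracket the target. The test matrices and the shift are arbitrary.

```python
import numpy as np

def inverse_iteration(K, M, shift, tol=1e-10, max_iter=200):
    """Shifted inverse iteration for the symmetric pencil (K, M).

    Converges to the eigenpair whose eigenvalue lies closest to `shift`;
    a Sturm-sequence count is typically used first to bracket that value.
    """
    x = np.ones(K.shape[0])
    lam_old = np.inf
    for _ in range(max_iter):
        y = np.linalg.solve(K - shift * M, M @ x)   # one "inverse" application
        x = y / np.linalg.norm(y)
        lam = (x @ K @ x) / (x @ M @ x)             # Rayleigh quotient estimate
        if abs(lam - lam_old) < tol * abs(lam):
            break
        lam_old = lam
    return lam, x

# Small symmetric positive-definite test pencil (illustrative)
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
K = A @ A.T + 5 * np.eye(5)
B = rng.standard_normal((5, 5))
M = B @ B.T + 5 * np.eye(5)
lam, x = inverse_iteration(K, M, shift=0.1)
print("eigenvalue:", lam, " residual:", np.linalg.norm(K @ x - lam * (M @ x)))
```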
Design of ITER divertor VUV spectrometer and prototype test at KSTAR tokamak
NASA Astrophysics Data System (ADS)
Seon, Changrae; Hong, Joohwan; Song, Inwoo; Jang, Juhyeok; Lee, Hyeonyong; An, Younghwa; Kim, Bosung; Jeon, Taemin; Park, Jaesun; Choe, Wonho; Lee, Hyeongon; Pak, Sunil; Cheon, MunSeong; Choi, Jihyeon; Kim, Hyeonseok; Biel, Wolfgang; Bernascolle, Philippe; Barnsley, Robin; O'Mullane, Martin
2017-12-01
Design and development of the ITER divertor VUV spectrometer have been under way since 1998, and installation is planned for 2027. The ITER divertor VUV spectrometer is currently in the detailed design phase. It is optimized for monitoring chord-integrated VUV signals from divertor plasmas, chosen to contain representative line emission from tungsten, the divertor material, and from other impurities. Impurity emission from the divertor plasma is collimated through relay optics onto the entrance slit of a VUV spectrometer with a working wavelength range of 14.6-32 nm. To validate the design of the ITER divertor VUV spectrometer, two sets of VUV spectrometers have been developed and tested at the KSTAR tokamak. One set, without the field mirror, employs a survey spectrometer covering 14.6-32 nm and provides the same optical specification as the spectrometer part of the ITER divertor VUV spectrometer system. The other, covering 5-25 nm, consists of a commercial spectrometer with a concave grating and relay mirrors with the same geometry as those of the ITER divertor VUV spectrometer. From tests of these prototypes, the alignment method using backward laser illumination could be verified. Furthermore, to validate the feasibility of tungsten emission measurement, tungsten powder was injected into KSTAR plasmas, and preliminary results on the evaluation of photon throughput were obtained successfully. Contribution to the Topical Issue "Atomic and Molecular Data and their Applications", edited by Gordon W.F. Drake, Jung-Sik Yoon, Daiji Kato, Grzegorz Karwasz.
Static aeroelastic analysis and tailoring of a single-element racing car wing
NASA Astrophysics Data System (ADS)
Sadd, Christopher James
This thesis presents the research from an Engineering Doctorate research programme in collaboration with Reynard Motorsport Ltd, a manufacturer of racing cars. Racing car wing design has traditionally considered structures to be rigid. However, structures are never perfectly rigid and the interaction between aerodynamic loading and structural flexibility has a direct impact on aerodynamic performance. This interaction is often referred to as static aeroelasticity and the focus of this research has been the development of a computational static aeroelastic analysis method to improve the design of a single-element racing car wing. A static aeroelastic analysis method has been developed by coupling a Reynolds-Averaged Navier-Stokes CFD analysis method with a Finite Element structural analysis method using an iterative scheme. Development of this method has included assessment of CFD and Finite Element analysis methods and development of data transfer and mesh deflection methods. Experimental testing was also completed to further assess the computational analyses. The computational and experimental results show a good correlation and these studies have also shown that a Navier-Stokes static aeroelastic analysis of an isolated wing can be performed at an acceptable computational cost. The static aeroelastic analysis tool was used to assess methods of tailoring the structural flexibility of the wing to increase its aerodynamic performance. These tailoring methods were then used to produce two final wing designs to increase downforce and reduce drag respectively. At the average operating dynamic pressure of the racing car, the computational analysis predicts that the downforce-increasing wing has a downforce of C_l = -1.377 in comparison to C_l = -1.265 for the original wing. The computational analysis predicts that the drag-reducing wing has a drag of C_d = 0.115 in comparison to C_d = 0.143 for the original wing.
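The thesis' RANS/FE coupling is not reproduced in the abstract; the sketch below shows the generic partitioned fixed-point iteration that such static aeroelastic schemes follow, with placeholder functions standing in for the aerodynamic and structural solvers and an under-relaxed deflection update. All numbers and function names are invented for illustration.

```python
import numpy as np

def aero_loads(deflection):
    """Placeholder for the CFD solve: aerodynamic load that softens as the wing deflects."""
    return 100.0 / (1.0 + deflection)

def structural_deflection(load, stiffness=500.0):
    """Placeholder for the FE solve: linear flexible response."""
    return load / stiffness

def coupled_static_aeroelastic(relax=0.7, tol=1e-10, max_iter=100):
    d = 0.0
    for k in range(max_iter):
        L = aero_loads(d)                        # aerodynamic analysis at the current shape
        d_new = structural_deflection(L)         # structural response to those loads
        if abs(d_new - d) < tol:
            return d_new, L, k + 1
        d = (1.0 - relax) * d + relax * d_new    # under-relaxed update for stability
    return d, L, max_iter

d, L, iters = coupled_static_aeroelastic()
print(f"converged deflection {d:.6f} after {iters} iterations, load {L:.3f}")
```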
Reducing Design Cycle Time and Cost Through Process Resequencing
NASA Technical Reports Server (NTRS)
Rogers, James L.
2004-01-01
In today's competitive environment, companies are under enormous pressure to reduce the time and cost of their design cycle. One method for reducing both time and cost is to develop an understanding of the flow of the design processes and the effects of the iterative subcycles that are found in complex design projects. Once these aspects are understood, the design manager can make decisions that take advantage of decomposition, concurrent engineering, and parallel processing techniques to reduce the total time and the total cost of the design cycle. One software tool that can aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). The DeMAID software minimizes the feedback couplings that create iterative subcycles, groups processes into iterative subcycles, and decomposes the subcycles into a hierarchical structure. The real benefits of producing the best design in the least time and at a minimum cost are obtained from sequencing the processes in the subcycles.
TH-AB-BRA-09: Stability Analysis of a Novel Dose Calculation Algorithm for MRI Guided Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelyak, O; Fallone, B; Cross Cancer Institute, Edmonton, AB
2016-06-15
Purpose: To determine the iterative deterministic solution stability of the Linear Boltzmann Transport Equation (LBTE) in the presence of magnetic fields. Methods: The LBTE with magnetic fields under investigation is derived using a discrete ordinates approach. The stability analysis is performed using analytical and numerical methods. Analytically, the spectral Fourier analysis is used to obtain the convergence rate of the source iteration procedures based on finding the largest eigenvalue of the iterative operator. This eigenvalue is a function of relevant physical parameters, such as magnetic field strength and material properties, and provides essential information about the domain of applicability required for clinically optimal parameter selection and maximum speed of convergence. The analytical results are reinforced by numerical simulations performed using the same discrete ordinates method in angle, and a discontinuous finite element spatial approach. Results: The spectral radius for the source iteration technique of the time-independent transport equation with isotropic and anisotropic scattering centers inside an infinite 3D medium is equal to the ratio of differential and total cross sections. The result is confirmed numerically by solving the LBTE and is in full agreement with previously published results. The addition of a magnetic field reveals that the convergence becomes dependent on the strength of the magnetic field, the energy group discretization, and the order of anisotropic expansion. Conclusion: The source iteration technique for solving the LBTE with magnetic fields with the discrete ordinates method leads to divergent solutions in the limiting cases of small energy discretizations and high magnetic field strengths. Future investigations into non-stationary Krylov subspace techniques as an iterative solver will be performed as this has been shown to produce greater stability than source iteration. Furthermore, a stability analysis of a discontinuous finite element space-angle approach (which has been shown to provide the greatest stability) will also be investigated. Dr. B Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta bi-planar linac MR for commercialization).
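The zero-field result quoted in the abstract is that source iteration converges with a spectral radius given by the ratio of cross sections; in the standard infinite-medium, isotropic-scattering limit this is the scattering ratio c = sigma_s / sigma_t, which the toy script below demonstrates by running a space-independent source iteration and measuring the per-iteration error reduction. The cross sections and source are arbitrary illustrative numbers, and this is not the discrete-ordinates finite-element solver of the paper.

```python
import numpy as np

sigma_t, sigma_s, q = 1.0, 0.9, 1.0      # illustrative total/scattering cross sections and source
c = sigma_s / sigma_t                    # scattering ratio = predicted spectral radius

phi_exact = q / (sigma_t - sigma_s)      # infinite-medium analytic solution
phi = 0.0
errors = []
for _ in range(30):
    phi = (sigma_s * phi + q) / sigma_t  # one source iteration (infinite medium, isotropic)
    errors.append(abs(phi - phi_exact))

ratios = np.array(errors[1:]) / np.array(errors[:-1])
print("observed error-reduction ratio:", ratios[-1], "  predicted c:", c)
```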
NASA Astrophysics Data System (ADS)
Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.
2013-07-01
In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), is a code designed using neural net technology. The artificial intelligence approach of the neural net does not solve mathematical equations. By using the knowledge stored in the synaptic weights of a properly trained neural net, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, needing as input data only the count rates measured with a Bonner sphere system. Similarities of the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed in a graphical interface under the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. Differences between these codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum in order to initiate the iterative procedure. In NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meters using the fluence-dose conversion coefficients. The NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, in the neural net approach it is possible to reduce the count rates used to unfold the neutron spectrum. To evaluate these codes, a computer tool called Neutron Spectrometry and dosimetry computer tool was designed. The results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.
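The SPUNIT algorithm itself is not given in the abstract, so the sketch below illustrates the general idea of iterative few-channel unfolding with a multiplicative (MLEM/SAND-II-style) update, which is in the same family but not necessarily identical to SPUNIT: starting from a guess spectrum, the spectrum is repeatedly rescaled so that folding it with the response matrix reproduces the measured Bonner-sphere count rates. The response matrix and "measured" counts are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n_spheres, n_bins = 7, 60                        # Bonner spheres x energy bins (as in the codes above)
R = rng.uniform(0.1, 1.0, (n_spheres, n_bins))   # synthetic response matrix
true_spectrum = np.exp(-0.5 * ((np.arange(n_bins) - 30) / 8.0) ** 2)
counts = R @ true_spectrum                       # simulated measured count rates

phi = np.ones(n_bins)                            # flat initial-guess spectrum
for _ in range(1000):
    folded = R @ phi                             # what the current spectrum would measure
    # Multiplicative correction pushes the folded counts toward the measured ones
    phi *= (R.T @ (counts / folded)) / R.sum(axis=0)

print("max relative residual:", np.max(np.abs(R @ phi / counts - 1.0)))
```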
TAP 1: A Finite Element Program for Steady-State Thermal Analysis of Convectively Cooled Structures
NASA Technical Reports Server (NTRS)
Thornton, E. A.
1976-01-01
The program has a finite element library of six elements: two conduction/convection elements to model heat transfer in a solid, two convection elements to model heat transfer in a fluid, and two integrated conduction/convection elements to represent combined heat transfer in tubular and plate/fin fluid passages. Nonlinear thermal analysis due to temperature dependent thermal parameters is performed using the Newton-Raphson iteration method. Program output includes nodal temperatures and element heat fluxes. Pressure drops in fluid passages may be computed as an option. A companion plotting program for displaying the finite element model and predicted temperature distributions is presented. User instructions and sample problems are presented in appendixes.
Perl Modules for Constructing Iterators
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2009-01-01
The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
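The modules described above are Perl; to keep the examples in this document in a single language, here is a rough Python analog (not the Perl API) of the nested-iteration idea behind Iterator::Hash: a dictionary maps keys to value series, and all permutations are generated lazily, with a small generator standing in for a DateTime-style recurrence.

```python
from itertools import product
from datetime import date, timedelta

def hash_iterator(spec):
    """Yield dicts covering every combination of the per-key iterables in `spec`.

    Rough Python analog of nesting iterators inside a hash: each key maps to
    a series of values, and the permutations are produced lazily.
    """
    keys = list(spec)
    for combo in product(*(spec[k] for k in keys)):
        yield dict(zip(keys, combo))

def date_series(start, count, step_days=7):
    """Tiny stand-in for a recurrence-style date iterator."""
    for i in range(count):
        yield start + timedelta(days=i * step_days)

spec = {
    "run": date_series(date(2009, 1, 1), 3),
    "mode": ["draft", "final"],
}
for item in hash_iterator(spec):
    print(item)
```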
Electromagnetic Analysis of ITER Diagnostic Equatorial Port Plugs During Plasma Disruptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Y. Zhai, R. Feder, A. Brooks, M. Ulrickson, C.S. Pitcher and G.D. Loesser
2012-08-27
ITER diagnostic port plugs perform many functions, including structural support of diagnostic systems under high electromagnetic loads while allowing diagnostic access to the plasma. The design of the diagnostic equatorial port plugs (EPP) is largely driven by the electromagnetic loads and the associated responses of the EPP structure during plasma disruptions and VDEs. This paper summarizes results of transient electromagnetic analysis using Opera 3d in support of the design activities for the ITER diagnostic EPP. A complete distribution of disruption loads on the Diagnostic First Walls (DFWs), Diagnostic Shield Modules (DSMs) and the EPP structure, as well as the impact on system design integration due to electrical contact among various EPP structural components, are discussed.
Observer-based distributed adaptive iterative learning control for linear multi-agent systems
NASA Astrophysics Data System (ADS)
Li, Jinsha; Liu, Sanyang; Li, Junmin
2017-10-01
This paper investigates the consensus problem for linear multi-agent systems from the viewpoint of two-dimensional systems when the state information of each agent is not available. An observer-based, fully distributed adaptive iterative learning protocol is designed in this paper. A local observer is designed for each agent, and it is shown that, without using any global information about the communication graph, all agents achieve consensus perfectly for any undirected connected communication graph as the number of iterations tends to infinity. A Lyapunov-like energy function is employed to facilitate the learning protocol design and property analysis. Finally, a simulation example is given to illustrate the theoretical analysis.
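The observer-based distributed protocol of the paper is not reproduced here; as a minimal single-agent illustration of the iterative learning idea it builds on, the sketch below applies a P-type update u_{k+1}(t) = u_k(t) + gamma * e_k(t+1) to a toy first-order plant over repeated trials, so the tracking error shrinks from one trial to the next. The plant, gain, and reference are made up.

```python
import numpy as np

# Toy scalar plant x(t+1) = a x(t) + b u(t), output y(t) = x(t), run repeatedly over trials
a, b, gamma, T = 0.8, 1.0, 0.5, 50
ref = np.sin(np.linspace(0.0, 2.0 * np.pi, T))   # desired output trajectory

u = np.zeros(T)                                   # control input over one trial
for trial in range(50):
    x = 0.0
    y = np.zeros(T)
    for t in range(T):
        y[t] = x
        x = a * x + b * u[t]
    e = ref - y                                   # trial-wise tracking error
    u[:-1] += gamma * e[1:]                       # P-type ILC update (one-step-ahead error)

print("max tracking error after learning:", np.max(np.abs(e)))
```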
NASA Astrophysics Data System (ADS)
Paul, Prakash
2009-12-01
The finite element method (FEM) is used to solve three-dimensional electromagnetic scattering and radiation problems. Finite element (FE) solutions of this kind contain two main types of error: discretization error and boundary error. Discretization error depends on the number of free parameters used to model the problem, and on how effectively these parameters are distributed throughout the problem space. To reduce the discretization error, the polynomial order of the finite elements is increased, either uniformly over the problem domain or selectively in those areas with the poorest solution quality. Boundary error arises from the condition applied to the boundary that is used to truncate the computational domain. To reduce the boundary error, an iterative absorbing boundary condition (IABC) is implemented. The IABC starts with an inexpensive boundary condition and gradually improves the quality of the boundary condition as the iteration continues. An automatic error control (AEC) is implemented to balance the two types of error. With the AEC, the boundary condition is improved when the discretization error has fallen to a low enough level to make this worth doing. The AEC has these characteristics: (i) it uses a very inexpensive truncation method initially; (ii) it allows the truncation boundary to be very close to the scatterer/radiator; (iii) it puts more computational effort on the parts of the problem domain where it is most needed; and (iv) it can provide as accurate a solution as needed depending on the computational price one is willing to pay. To further reduce the computational cost, disjoint scatterers and radiators that are relatively far from each other are bounded separately and solved using a multi-region method (MRM), which leads to savings in computational cost. A simple analytical way to decide whether the MRM or the single region method will be computationally cheaper is also described. To validate the accuracy and savings in computation time, different shaped metallic and dielectric obstacles (spheres, ogives, cube, flat plate, multi-layer slab etc.) are used for the scattering problems. For the radiation problems, waveguide excited antennas (horn antenna, waveguide with flange, microstrip patch antenna) are used. Using the AEC the peak reduction in computation time during the iteration is typically a factor of 2, compared to the IABC using the same element orders throughout. In some cases, it can be as high as a factor of 4.
Micromagnetic Simulation of Thermal Effects in Magnetic Nanostructures
2003-01-01
NiFe magnetic nano-elements are calculated. INTRODUCTION: With decreasing size of magnetic nanostructures, thermal effects become increasingly important... thermal field. The thermal field is assumed to be a Gaussian random process with the statistical properties ⟨H_th(t)⟩ = 0 and ⟨H_th,i(t) H_th,j(t')⟩ ... following property D^(k) = ∇E(M^(k)) - [∇E(M^(k)) · t] t = 0, for k = 1, ..., m. The optimal path can be found using an iterative scheme. In each iteration step the...
Kouri, Donald J [Houston, TX; Vijay, Amrendra [Houston, TX; Zhang, Haiyan [Houston, TX; Zhang, Jingfeng [Houston, TX; Hoffman, David K [Ames, IA
2007-05-01
A method and system for solving the inverse acoustic scattering problem using an iterative approach that takes account of half-off-shell transition matrix element (near-field) information. The Volterra inverse series correctly predicts the first two moments of the interaction, while the Fredholm inverse series is correct only for the first moment; the Volterra approach also provides a method for exactly obtaining interactions that can be written as a sum of delta functions.
A new model for graduate education and innovation in medical technology.
Yazdi, Youseph; Acharya, Soumyadipta
2013-09-01
We describe a new model of graduate education in bioengineering innovation and design: a year-long Master's degree program that educates engineers in the process of healthcare technology innovation for both advanced and low-resource global markets. Students are trained in an iterative "Spiral Innovation" approach that ensures early, staged, and repeated examination of all key elements of a successful medical device. This includes clinical immersion-based problem identification and assessment (at Johns Hopkins Medicine and abroad), team-based concept and business model development, and project planning based on iterative technical and business plan de-risking. The experiential, project-based learning process is closely supported by several core courses in business, design, and engineering. Students in the program work on two team-based projects, one focused on addressing healthcare needs in advanced markets and a second focused on low-resource settings. The program recently completed its fourth year of existence, and has graduated 61 students, who have continued on to industry or startups (one half), additional graduate education or medical school (one third), or our own Global Health Innovation Fellowships. Over the 4 years, the program has sponsored 10 global health teams and 14 domestic/advanced-market medtech teams, and launched 5 startups, of which 4 are still active. Projects have attracted over US$2.5M in follow-on awards and grants, which are supporting the continued development of over a dozen projects.
NASA Astrophysics Data System (ADS)
Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo
2017-08-01
The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period by updating the iterative period after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the order of shift, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which is convenient for subsequent frequency-spectrum and envelope-spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated with a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
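IMCKD itself is not reproduced in the abstract; the sketch below illustrates the two ingredients it highlights, estimating the fault period from the autocorrelation of the Hilbert envelope and using kurtosis as the selection criterion, on a synthetic impulsive signal. The sampling rate, impulse spacing, and the assumed range of plausible fault periods are made-up values.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis

fs = 10_000
n = 3000
rng = np.random.default_rng(4)
true_period = 100                                  # impulses every 100 samples (10 ms), illustrative
x = 0.2 * rng.standard_normal(n)
x[::true_period] += 3.0                            # periodic fault impulses buried in noise

# (1) Iterative-period estimate from the autocorrelation of the envelope signal
env = np.abs(hilbert(x))
env = env - env.mean()
ac = np.correlate(env, env, mode="full")[n - 1:]
lag_min, lag_max = 40, 160                         # assumed plausible fault-period range (samples)
period_est = lag_min + int(np.argmax(ac[lag_min:lag_max]))

# (2) Kurtosis as the criterion for picking the best candidate (e.g. filtered) signal
print("estimated period:", period_est, "samples;  kurtosis:", kurtosis(x, fisher=False))
```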
Conjecture Mapping to Optimize the Educational Design Research Process
ERIC Educational Resources Information Center
Wozniak, Helen
2015-01-01
While educational design research promotes closer links between practice and theory, reporting its outcomes from iterations across multiple contexts is often constrained by the volumes of data generated, and the context bound nature of the research outcomes. Reports tend to focus on a single iteration of implementation without further research to…
OVERVIEW OF NEUTRON MEASUREMENTS IN JET FUSION DEVICE.
Batistoni, P; Villari, R; Obryk, B; Packer, L W; Stamatelatos, I E; Popovichev, S; Colangeli, A; Colling, B; Fonnesu, N; Loreti, S; Klix, A; Klosowski, M; Malik, K; Naish, J; Pillon, M; Vasilopoulou, T; De Felice, P; Pimpinella, M; Quintieri, L
2017-10-05
The design and operation of the ITER experimental fusion reactor require the development of neutron measurement techniques and numerical tools to derive the fusion power and the radiation field in the device and in the surrounding areas. Nuclear analyses provide essential input to the conceptual design, optimisation, engineering and safety case in ITER and power plant studies. The required radiation transport calculations are extremely challenging because of the large physical extent of the reactor plant, the complexity of the geometry, and the combination of deep penetration and streaming paths. This article reports the experimental activities which are carried out at JET to validate the neutronics measurement methods and numerical tools used in ITER and power plant design. A new deuterium-tritium campaign is proposed in 2019 at JET: the unique 14 MeV neutron yields produced will be exploited as much as possible to validate measurement techniques, codes, procedures and data currently used in ITER design, thus reducing the related uncertainties and the associated risks in machine operation.
Design optimization of first wall and breeder unit module size for the Indian HCCB blanket module
NASA Astrophysics Data System (ADS)
Deepak, SHARMA; Paritosh, CHAUDHURI
2018-04-01
The Indian test blanket module (TBM) program in ITER is one of the major steps in the Indian fusion reactor program for carrying out R&D activities in critical areas such as the design of tritium breeding blankets relevant to future Indian fusion devices (ITER-relevant and DEMO). The Indian Lead–Lithium Cooled Ceramic Breeder (LLCB) blanket concept is one of the Indian DEMO-relevant TBMs, to be tested in ITER as part of the TBM program. Helium-Cooled Ceramic Breeder (HCCB) is an alternative blanket concept that consists of lithium titanate (Li2TiO3) as the ceramic breeder (CB) material in the form of packed pebble beds and beryllium as the neutron multiplier. Specifically, attention is given to the optimization of the first wall coolant channel design and the breeder unit module size, considering coolant pressure and thermal loads, for the proposed Indian HCCB blanket based on ITER-relevant TBM and loading conditions. These analyses will help in proceeding further with the design of blankets for loads relevant to future fusion devices.
Improved electron probe microanalysis of trace elements in quartz
Donovan, John J.; Lowers, Heather; Rusk, Brian G.
2011-01-01
Quartz occurs in a wide range of geologic environments throughout the Earth's crust. The concentration and distribution of trace elements in quartz provide information such as temperature and other physical conditions of formation. Trace element analyses with modern electron-probe microanalysis (EPMA) instruments can achieve 99% confidence detection of ~100 ppm with fairly minimal effort for many elements in samples of low to moderate average atomic number such as many common oxides and silicates. However, trace element measurements below 100 ppm in many materials are limited, not only by the precision of the background measurement, but also by the accuracy with which background levels are determined. A new "blank" correction algorithm has been developed and tested on both Cameca and JEOL instruments, which applies a quantitative correction to the emitted X-ray intensities during the iteration of the sample matrix correction based on a zero level (or known trace) abundance calibration standard. This iterated blank correction, when combined with improved background fit models, and an "aggregate" intensity calculation utilizing multiple spectrometer intensities in software for greater geometric efficiency, yields a detection limit of 2 to 3 ppm for Ti and 6 to 7 ppm for Al in quartz at 99% t-test confidence with similar levels for absolute accuracy.
In-vessel tritium retention and removal in ITER
NASA Astrophysics Data System (ADS)
Federici, G.; Anderl, R. A.; Andrew, P.; Brooks, J. N.; Causey, R. A.; Coad, J. P.; Cowgill, D.; Doerner, R. P.; Haasz, A. A.; Janeschitz, G.; Jacob, W.; Longhurst, G. R.; Nygren, R.; Peacock, A.; Pick, M. A.; Philipps, V.; Roth, J.; Skinner, C. H.; Wampler, W. R.
Tritium retention inside the vacuum vessel has emerged as a potentially serious constraint in the operation of the International Thermonuclear Experimental Reactor (ITER). In this paper we review recent tokamak and laboratory data on hydrogen, deuterium and tritium retention for materials and conditions which are of direct relevance to the design of ITER. These data, together with significant advances in understanding the underlying physics, provide the basis for modelling predictions of the tritium inventory in ITER. We present the derivation, and discuss the results, of current predictions both in terms of implantation and codeposition rates, and critically discuss their uncertainties and sensitivity to important design and operation parameters such as the plasma edge conditions, the surface temperature, the presence of mixed-materials, etc. These analyses are consistent with recent tokamak findings and show that codeposition of tritium occurs on the divertor surfaces primarily with carbon eroded from a limited area of the divertor near the strike zones. This issue remains an area of serious concern for ITER. The calculated codeposition rates for ITER are relatively high and the in-vessel tritium inventory limit could be reached, under worst assumptions, in approximately a week of continuous operation. We discuss the implications of these estimates on the design, operation and safety of ITER and present a strategy for resolving the issues. We conclude that as long as carbon is used in ITER - and more generically in any other next-step experimental fusion facility fuelled with tritium - the efficient control and removal of the codeposited tritium is essential. There is a critical need to develop and test in situ cleaning techniques and procedures that are beyond the current experience of present-day tokamaks. We review some of the principal methods that are being investigated and tested, in conjunction with the R&D work still required to extrapolate their applicability to ITER. Finally, unresolved issues are identified and recommendations are made on potential R&D avenues for their resolution.
NASA Astrophysics Data System (ADS)
An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu
2012-11-01
Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient (on an individual iteration basis). However, it converges more slowly than the Newton method. On the other hand, although the Newton method converges faster, it is more complex to implement and consumes more CPU resources per iteration than the Picard method. The comparison of the two methods in a finite-element model (FEM) for saturated-unsaturated flow has been well evaluated in previous studies. However, the two iteration methods might exhibit different behavior in the coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM because it requires the evaluation of a 19-point stencil matrix. The formation of a 19-point stencil is quite a complex and laborious procedure. Instead, the Newton-Krylov method calculates the matrix-vector product, which can be easily approximated by calculating the differences of the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for the coordinate-transformed FDM. However, this method involves the additional cost of taking an approximation at each Krylov iteration. In this paper, we evaluated the efficiency and robustness of three iteration methods—the Picard, Newton, and Newton-Krylov methods—for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
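The Richards'-equation solver itself is not reproduced; as a hedged sketch of the matrix-free trick the abstract highlights for Newton-Krylov, the snippet approximates the Jacobian-vector product by differencing the nonlinear residual, J(u) v ~ (F(u + eps v) - F(u)) / eps, so no 19-point stencil matrix is ever formed, and feeds it to SciPy's GMRES inside a Newton loop. The residual function here is a small generic nonlinear system, not the coordinate-transformed flow equations.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    """Toy nonlinear residual F(u) = 0 standing in for the discretized flow equations:
    a periodic diffusion term plus linear and cubic reaction terms."""
    return (2.0 * u - np.roll(u, 1) - np.roll(u, -1)) + u + u ** 3 - 1.0

def jacobian_vector(u, v, eps=1e-7):
    """Matrix-free J(u) v  ~  (F(u + eps*v) - F(u)) / eps, so no Jacobian is assembled."""
    return (residual(u + eps * v) - residual(u)) / eps

def newton_krylov(u0, tol=1e-10, max_newton=30):
    u = u0.copy()
    for k in range(max_newton):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            return u, k
        J = LinearOperator((u.size, u.size), matvec=lambda v: jacobian_vector(u, v))
        du, _ = gmres(J, -F)                 # inner Krylov solve uses only matvecs
        u = u + du
    return u, max_newton

u, iters = newton_krylov(np.zeros(20))
print("Newton iterations:", iters, " final residual norm:", np.linalg.norm(residual(u)))
```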
Status of the ITER Electron Cyclotron Heating and Current Drive System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darbos, Caroline; Albajar, Ferran; Bonicelli, Tullio
2015-10-07
The electron cyclotron (EC) heating and current drive (H&CD) system developed for ITER is made up of 12 sets of high-voltage power supplies feeding 24 gyrotrons, connected through 24 transmission lines (TL) to five launchers, four located in upper ports and one at the equatorial level. Nearly all procurements are in-kind, following the general ITER philosophy, and will come from Europe, India, Japan, Russia and the USA. The full system is designed to couple 20 MW of the 24 MW of generated power to the plasma, at a frequency of 170 GHz, for various physics applications such as plasma start-up, central H&CD and magnetohydrodynamic (MHD) activity control. The design takes present-day technology and extends it toward high-power continuous operation, which represents a large step forward compared to the present state of the art. The ITER EC system will be a stepping stone to future EC systems for DEMO and beyond. The development of the EC system faces significant challenges, which include not only an advanced microwave system but also compliance with stringent nuclear safety requirements, ITER having become the first fusion device licensed as a basic nuclear installation on 9 November 2012. Finally, since the conceptual design of the EC system was established in 2007, the EC system has progressed to a preliminary design stage in 2012 and is now moving forward toward a final design.
Fast Acting Eddy Current Driven Valve for Massive Gas Injection on ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyttle, Mark S; Baylor, Larry R; Carmichael, Justin R
2015-01-01
Tokamak plasma disruptions present a significant challenge to ITER as they can result in intense heat flux, large forces from halo and eddy currents, and potential first-wall damage from the generation of multi-MeV runaway electrons. Massive gas injection (MGI) of high-Z material using fast acting valves is being explored on existing tokamaks and is planned for ITER as a method to evenly distribute the thermal load of the plasma to prevent melting, to control the rate of the current decay to minimize mechanical loads, and to suppress the generation of runaway electrons. A fast acting valve and accompanying power supply have been designed and first test articles produced to meet the requirements for a disruption mitigation system on ITER. The test valve incorporates a flyer plate actuator similar to designs deployed on TEXTOR, ASDEX Upgrade, and JET [1-3], of a size useful for ITER, with special considerations to mitigate the high mechanical forces developed during actuation due to the high background magnetic fields. The valve includes a tip design and all-metal valve stem sealing for compatibility with tritium and high neutron and gamma fluxes.
Zhang, Xiao C; Bermudez, Ana M; Reddy, Pranav M; Sarpatwari, Ravi R; Chheng, Darin B; Mezoian, Taylor J; Schwartz, Victoria R; Simmons, Quinneil J; Jay, Gregory D; Kobayashi, Leo
2017-03-01
A stable and readily accessible work surface for bedside medical procedures represents a valuable tool for acute care providers. In emergency department (ED) settings, the design and implementation of traditional Mayo stands and related surface devices often limit their availability, portability, and usability, which can lead to suboptimal clinical practice conditions that may affect the safe and effective performance of medical procedures and delivery of patient care. We designed and built a novel, open-source, portable, bedside procedural surface through an iterative development process with use testing in simulated and live clinical environments. The procedural surface development project was conducted between October 2014 and June 2016 at an academic referral hospital and its affiliated simulation facility. An interdisciplinary team of emergency physicians, mechanical engineers, medical students, and design students sought to construct a prototype bedside procedural surface out of off-the-shelf hardware during a collaborative university course on health care design. After determination of end-user needs and core design requirements, multiple prototypes were fabricated and iteratively modified, with early variants featuring undermattress stabilizing supports or ratcheting clamp mechanisms. Versions 1 through 4 underwent 2 hands-on usability-testing simulation sessions; version 5 was presented at a design critique held jointly by a panel of clinical and industrial design faculty for expert feedback. Responding to select feedback elements over several surface versions, investigators arrived at a near-final prototype design for fabrication and use testing in a live clinical setting. This experimental procedural surface (version 8) was constructed and then deployed for controlled usability testing against the standard Mayo stands in use at the study site ED. Clinical providers working in the ED who opted to participate in the study were provided with the prototype surface and just-in-time training on its use when performing bedside procedures. Subjects completed the validated 10-point System Usability Scale post-shift for the surface that they had used. The study protocol was approved by the institutional review board. Multiple prototypes and recursive design revisions resulted in a fully functional, portable, and durable bedside procedural surface that featured a stainless steel tray and intuitive hook-and-lock mechanisms for attachment to ED stretcher bed rails. Forty-two control and 40 experimental group subjects participated and completed questionnaires. The median System Usability Scale score (out of 100; higher scores associated with better usability) was 72.5 (interquartile range [IQR] 51.3 to 86.3) for the Mayo stand; the experimental surface was scored at 93.8 (IQR 84.4 to 97.5), for a difference in medians of 17.5 (95% confidence interval 10 to 27.5). Subjects reported several usability challenges with the Mayo stand; the experimental surface was reviewed as easy to use, simple, and functional. In accordance with experimental live environment deployment, questionnaire responses, and end-user suggestions, the project team finalized the design specification for the experimental procedural surface for open dissemination. An iterative, interdisciplinary approach was used to generate, evaluate, revise, and finalize the design specification for a new procedural surface that met all core end-user requirements.
The final surface design was evaluated favorably on a validated usability tool against Mayo stands when use-tested in simulated and live clinical settings. Copyright © 2016 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
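For reference, the System Usability Scale scores quoted above follow the standard SUS scoring rule: ten items rated 1 to 5, odd items scored as (response - 1), even items as (5 - response), and the sum scaled by 2.5 to a 0-100 range. The snippet below implements that rule; the example responses are invented, not study data.

```python
def sus_score(responses):
    """Standard SUS scoring: responses is a list of 10 integers in 1..5, item 1 first."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)   # odd items positive, even items reversed
    return total * 2.5                                 # scale to 0-100

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 5, 2]))       # hypothetical respondent -> 92.5
```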
ERIC Educational Resources Information Center
Siko, Jason Paul
2012-01-01
This design-based research study examined the effects of a game design project on student test performance, with refinements made to the implementation after each of the three iterations of the study. The changes to the implementation over the three iterations were based on the literature for the three justifications for the use of homemade…
Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes
NASA Technical Reports Server (NTRS)
Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.
1996-01-01
The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
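As a rough sketch of the idea (not DeMAID's actual implementation), the following genetic algorithm searches over orderings of a small, hypothetical set of coupled design processes, scoring each ordering by the number of feedback couplings it leaves, i.e. dependencies that point backwards in the execution sequence.

```python
# Toy GA over process orderings; the coupling table is invented for illustration.
import random

# couplings[i] = set of processes whose output process i needs
couplings = {0: {3}, 1: {0}, 2: {1, 4}, 3: {1}, 4: {0}, 5: {2}}

def feedbacks(order):
    pos = {p: k for k, p in enumerate(order)}
    return sum(1 for p, deps in couplings.items() for d in deps if pos[d] > pos[p])

def crossover(a, b):                         # order crossover (OX) for permutations
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    rest = [g for g in b if g not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(order, rate=0.2):                 # occasional swap of two positions
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

random.seed(1)
pop = [random.sample(range(6), 6) for _ in range(30)]
for _ in range(100):                         # generations
    pop.sort(key=feedbacks)
    parents = pop[:10]                       # simple truncation selection
    pop = parents + [mutate(crossover(*random.sample(parents, 2))) for _ in range(20)]
best = min(pop, key=feedbacks)
print("best ordering:", best, "remaining feedbacks:", feedbacks(best))
```

Because the example couplings contain a cycle, at least one feedback always remains, which is exactly the situation where an iterative subcycle must be formed and its internal ordering optimized.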
Material migration studies with an ITER first wall panel proxy on EAST
Ding, R.; Pitts, R. A.; Borodin, D.; ...
2015-01-23
The ITER beryllium (Be) first wall (FW) panels are shaped to protect leading edges between neighbouring panels arising from assembly tolerances. This departure from a perfectly cylindrical surface automatically leads to magnetically shadowed regions where eroded Be can be re-deposited, together with co-deposition of tritium fuel. To provide a benchmark for a series of erosion/re-deposition simulation studies performed for the ITER FW panels, dedicated experiments have been performed on the EAST tokamak using a specially designed, instrumented test limiter acting as a proxy for the FW panel geometry. Carbon-coated molybdenum plates forming the limiter front surface were exposed to the outer midplane boundary plasma of helium discharges using the new Material and Plasma Evaluation System (MAPES). Net erosion and deposition patterns are estimated using ion beam analysis to measure the carbon layer thickness variation across the surface after exposure. The highest erosion of about 0.8 µm is found near the midplane, where the surface is closest to the plasma separatrix. No net deposition above the measurement detection limit was found on the proxy wall element, even in shadowed regions. The measured 2D surface erosion distribution has been modelled with the 3D Monte Carlo code ERO, using the local plasma parameter measurements together with a diffusive transport assumption. In conclusion, excellent agreement between the experimentally observed net erosion and the modelled erosion profile has been obtained.
Final Report on ITER Task Agreement 81-08
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard L. Moore
As part of an ITER Implementing Task Agreement (ITA) between the ITER US Participant Team (PT) and the ITER International Team (IT), the INL Fusion Safety Program was tasked to provide the ITER IT with upgrades to the fusion version of the MELCOR 1.8.5 code including a beryllium dust oxidation model. The purpose of this model is to allow the ITER IT to investigate hydrogen production from beryllium dust layers on hot surfaces inside the ITER vacuum vessel (VV) during in-vessel loss-of-cooling accidents (LOCAs). Also included in the ITER ITA was a task to construct a RELAP5/ATHENA model of the ITER divertor cooling loop to model the draining of the loop during a large ex-vessel pipe break followed by an in-vessel divertor break and compare the results to a similar MELCOR model developed by the ITER IT. This report, which is the final report for this agreement, documents the completion of the work scope under this ITER TA, designated as TA 81-08.
Lecture Notes on Multigrid Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vassilevski, P S
The Lecture Notes are primarily based on a sequence of lectures given by the author while he was a Fulbright scholar at 'St. Kliment Ohridski' University of Sofia, Sofia, Bulgaria during the winter semester of the 2009-2010 academic year. The notes are a somewhat expanded version of the actual one-semester class he taught there. The material covered is a slightly modified and adapted version of similar topics covered in the author's monograph 'Multilevel Block-Factorization Preconditioners', published in 2008 by Springer. The author tried to keep the notes as self-contained as possible. That is why the lecture notes begin with some basic introductory matrix-vector linear algebra and numerical PDE (finite element) facts, emphasizing the relations between functions in finite-dimensional spaces and their coefficient vectors and respective norms. Then, some additional facts on the implementation of finite elements based on relation tables using the popular compressed sparse row (CSR) format are given. Typical condition number estimates of stiffness and mass matrices and the global matrix assembly from local element matrices are given as well. Finally, some basic introductory facts about stationary iterative methods, such as Gauss-Seidel and its symmetrized version, are presented. The introductory material ends with the smoothing property of the classical iterative methods and the main definition of two-grid iterative methods. From here on, the second part of the notes begins, which deals with the various aspects of the principal two-grid (TG) method and the numerous versions of the multigrid (MG) cycles. At the end, in part III, we briefly introduce algebraic versions of MG, referred to as AMG, focusing on classes of AMG specialized for finite element matrices.
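The following is a compact, textbook-style two-grid iteration of the kind the notes build toward: Gauss-Seidel smoothing combined with a Galerkin coarse-grid correction for the 1D Poisson problem. It is a generic sketch for orientation, not code taken from the lecture notes.

```python
# Two-grid cycle for -u'' = f on (0,1): smooth, restrict the residual, solve the
# Galerkin coarse problem, prolong the correction, smooth again.
import numpy as np

def poisson(n):                               # tridiagonal stiffness matrix, h = 1/(n+1)
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def gauss_seidel(A, u, f, sweeps=2):
    n = len(u)
    for _ in range(sweeps):
        for i in range(n):
            u[i] = (f[i] - A[i, :i] @ u[:i] - A[i, i+1:] @ u[i+1:]) / A[i, i]
    return u

def two_grid(A, u, f, P):
    u = gauss_seidel(A, u, f)                 # pre-smoothing
    r = f - A @ u                             # fine-grid residual
    Ac = P.T @ A @ P                          # Galerkin coarse operator
    u += P @ np.linalg.solve(Ac, P.T @ r)     # coarse-grid correction
    return gauss_seidel(A, u, f)              # post-smoothing

n = 31                                        # fine grid; coarse grid has (n-1)//2 points
P = np.zeros((n, (n - 1) // 2))               # linear-interpolation prolongation
for j in range((n - 1) // 2):
    P[2*j:2*j+3, j] = [0.5, 1.0, 0.5]

A, f, u = poisson(n), np.ones(n), np.zeros(n)
for k in range(10):
    u = two_grid(A, u, f, P)
    print(k, np.linalg.norm(f - A @ u))       # residual norm drops rapidly per cycle
```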
An iterative synthetic approach to engineer a high-performing PhoB-specific reporter.
Stoudenmire, Julie L; Essock-Burns, Tara; Weathers, Erena N; Solaimanpour, Sina; Mrázek, Jan; Stabb, Eric V
2018-05-11
Transcriptional reporters are common tools for analyzing either the transcription of a gene of interest or the activity of a specific transcriptional regulator. Unfortunately, the latter application has the shortcoming that native promoters did not evolve as optimal readouts for the activity of a particular regulator. We sought to synthesize an optimized transcriptional reporter for assessing PhoB activity, aiming for maximal "on" expression when PhoB is active, minimal background in the "off" state, and no control elements for other regulators. We designed specific sequences for promoter elements with appropriately spaced PhoB-binding sites, and the bases were randomized at nineteen additional intervening nucleotide positions for which we did not predict sequence-specific effects. Eighty-three such constructs were screened in Vibrio fischeri, enabling us to identify bases at particular randomized positions that significantly correlated with high "on" or low "off" expression. A second round of promoter design rationally constrained thirteen additional positions, leading to a reporter with high PhoB-dependent expression, essentially no background, and no other known regulatory elements. As expressed reporters, we used both stable and destabilized GFP, the latter with a half-life of eighty-one minutes in V. fischeri. In culture, PhoB induced the reporter when phosphate was depleted below 10 μM. During symbiotic colonization of its host squid Euprymna scolopes, the reporter indicated heterogeneous phosphate availability in different light-organ microenvironments. Finally, testing this construct in other Proteobacteria demonstrated its broader utility. The results illustrate how a limited ability to predict synthetic promoter-reporter performance can be overcome through iterative screening and re-engineering. IMPORTANCE Transcriptional reporters can be powerful tools for assessing when a particular regulator is active; however, native promoters may not be ideal for this purpose. Optimal reporters should be specific to the regulator being examined and should maximize the difference between "on" and "off" states; however, these properties are distinct from the selective pressures driving the evolution of natural promoters. Synthetic promoters offer a promising alternative, but our understanding often does not enable fully predictive promoter design, and the large number of alternative sequence possibilities can be intractable. In a synthetic promoter region with over thirty-four billion sequence variants, we identified bases correlated with favorable performance by screening only eighty-three candidates, allowing us to rationally constrain our design. We thereby generated an optimized reporter that is induced by PhoB and used it to explore the low-phosphate response of V. fischeri. This promoter-design strategy will facilitate the engineering of other regulator-specific reporters. Copyright © 2018 American Society for Microbiology.
ERIC Educational Resources Information Center
Kim, Rae Young
2009-01-01
This study is an initial analytic attempt to iteratively develop a conceptual framework informed by both theoretical and practical perspectives that may be used to analyze non-textual elements in mathematics textbooks. Despite the importance of visual representations in teaching and learning, little effort has been made to specify in any…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diaz, Aaron A.; Chamberlin, Clyde E.; Edwards, Matthew K.
This section of the Joint summary technical letter report (TLR) describes work conducted at the Pacific Northwest National Laboratory (PNNL) during FY 2016 (FY16) on the under-sodium viewing (USV) PNNL project 58745, work package AT-16PN230102. This section of the TLR satisfies PNNL’s M3AT-16PN2301025 milestone and is focused on summarizing the design, development, and evaluation of two different phased-array ultrasonic testing (PA-UT) probe designs—a two-dimensional (2D) matrix phased-array probe, and two one-dimensional (1D) linear array probes, referred to as serial number 4 (SN4) engineering test units (ETUs). The 2D probe is a pulse-echo (PE), 32×2, 64-element matrix phased-array ETU. The 1D probes are 32×1 element linear array ETUs. This TLR also provides the results from a performance demonstration (PD) of in-sodium target detection trials at 260°C using both probe designs. This effort continues the iterative evolution supporting the longer-term goal of producing and demonstrating a pre-manufacturing prototype ultrasonic probe that possesses the fundamental performance characteristics necessary to enable the development of a high-temperature sodium-cooled fast reactor (SFR) inspection system for in-sodium detection and imaging.
NASA Astrophysics Data System (ADS)
Raj, Prasoon; Angelone, Maurizio; Döring, Toralf; Eberhardt, Klaus; Fischer, Ulrich; Klix, Axel; Schwengner, Ronald
2018-01-01
Neutron and gamma flux measurements in designated positions in the test blanket modules (TBM) of ITER will be important tasks during ITER's campaigns. As part of the ongoing task on development of nuclear instrumentation for application in European ITER TBMs, experimental investigations on self-powered detectors (SPD) are undertaken. This paper reports the findings of neutron and photon irradiation tests performed with a test SPD in a flat, sandwich-like geometry. Whereas both neutrons and gammas can be detected with appropriate optimization of the geometries, materials and sizes of the components, the present sandwich-like design is more sensitive to gammas than to 14 MeV neutrons. The range of SPD current signals achievable under TBM conditions is predicted based on the SPD sensitivities measured in this work.
Stretchable Materials for Robust Soft Actuators towards Assistive Wearable Devices
NASA Astrophysics Data System (ADS)
Agarwal, Gunjan; Besuchet, Nicolas; Audergon, Basile; Paik, Jamie
2016-09-01
Soft actuators made from elastomeric active materials can find widespread potential implementation in a variety of applications ranging from assistive wearable technologies targeted at biomedical rehabilitation or assistance with activities of daily living, bioinspired and biomimetic systems, to gripping and manipulating fragile objects, and adaptable locomotion. In this manuscript, we propose a novel two-component soft actuator design and design tool that produces actuators targeted towards these applications with enhanced mechanical performance and manufacturability. Our numerical models developed using the finite element method can predict the actuator behavior at large mechanical strains to allow efficient design iterations for system optimization. Based on two distinctive actuator prototypes’ (linear and bending actuators) experimental results that include free displacement and blocked-forces, we have validated the efficacy of the numerical models. The presented extensive investigation of mechanical performance for soft actuators with varying geometric parameters demonstrates the practical application of the design tool, and the robustness of the actuator hardware design, towards diverse soft robotic systems for a wide set of assistive wearable technologies, including replicating the motion of several parts of the human body.
Development of a Piezoelectric Rotary Hammer Drill
NASA Technical Reports Server (NTRS)
Domm, Lukas N.
2011-01-01
The Piezoelectric Rotary Hammer Drill is designed to core through rock using a combination of rotation and high frequency hammering powered by a single piezoelectric actuator. It is designed as a low axial preload, low mass, and low power device for sample acquisition on future missions to extraterrestrial bodies. The purpose of this internship is to develop and test a prototype of the Piezoelectric Rotary Hammer Drill in order to verify the use of a horn with helical or angled cuts as a hammering and torque inducing mechanism. Through an iterative design process using models in ANSYS Finite Element software and a Mason's Equivalent Circuit model in MATLAB, a horn design was chosen for fabrication based on the predicted horn tip motion, electromechanical coupling, and neutral plane location. The design was then machined and a test bed assembled. The completed prototype has proven that a single piezoelectric actuator can be used to produce both rotation and hammering in a drill string through the use of a torque inducing horn. Final data results include bit rotation produced versus input power, and best drilling rate achieved with the prototype.
The MEOW lunar project for education and science based on concurrent engineering approach
NASA Astrophysics Data System (ADS)
Roibás-Millán, E.; Sorribes-Palmer, F.; Chimeno-Manguán, M.
2018-07-01
The use of concurrent engineering in the design of space missions makes it possible to take into account, in an interrelated methodology, the high level of coupling and iteration between mission subsystems in the preliminary conceptual phase. This work presents the result of applying concurrent engineering over a short time span to design the main elements of the preliminary design for a lunar exploration mission, developed within the ESA Academy Concurrent Engineering Challenge 2017. During this program, students of the Master in Space Systems at the Technical University of Madrid designed a low-cost satellite to find water at the Moon's south pole in view of a prospective future human lunar base. The resulting mission, the Moon Explorer And Observer of Water/Ice (MEOW), comprises a 262 kg spacecraft to be launched into a Geostationary Transfer Orbit as a secondary payload in the 2023/2025 time frame. A three-month Weak Stability Boundary transfer via the Sun-Earth L1 Lagrange point allows for high flexibility in the launch timeframe. The different aspects of the mission (orbit analysis, spacecraft design and payload) and the possibilities of concurrent engineering are described.
Stretchable Materials for Robust Soft Actuators towards Assistive Wearable Devices
Agarwal, Gunjan; Besuchet, Nicolas; Audergon, Basile; Paik, Jamie
2016-01-01
Soft actuators made from elastomeric active materials can find widespread potential implementation in a variety of applications ranging from assistive wearable technologies targeted at biomedical rehabilitation or assistance with activities of daily living, bioinspired and biomimetic systems, to gripping and manipulating fragile objects, and adaptable locomotion. In this manuscript, we propose a novel two-component soft actuator design and design tool that produces actuators targeted towards these applications with enhanced mechanical performance and manufacturability. Our numerical models developed using the finite element method can predict the actuator behavior at large mechanical strains to allow efficient design iterations for system optimization. Based on two distinctive actuator prototypes’ (linear and bending actuators) experimental results that include free displacement and blocked-forces, we have validated the efficacy of the numerical models. The presented extensive investigation of mechanical performance for soft actuators with varying geometric parameters demonstrates the practical application of the design tool, and the robustness of the actuator hardware design, towards diverse soft robotic systems for a wide set of assistive wearable technologies, including replicating the motion of several parts of the human body. PMID:27670953
Design of a -1 MV dc UHV power supply for ITER NBI
NASA Astrophysics Data System (ADS)
Watanabe, K.; Yamamoto, M.; Takemoto, J.; Yamashita, Y.; Dairaku, M.; Kashiwagi, M.; Taniguchi, M.; Tobari, H.; Umeda, N.; Sakamoto, K.; Inoue, T.
2009-05-01
Procurement of a dc -1 MV power supply system for the ITER neutral beam injector (NBI) is shared by Japan and the EU. The Japan Atomic Energy Agency, as the Japan Domestic Agency (JADA) for ITER, contributes to the procurement of dc -1 MV ultra-high voltage (UHV) components such as a dc -1 MV generator, a transmission line and a -1 MV insulating transformer for the ITER NBI power supply. The inverter frequency of 150 Hz in the -1 MV power supply and the major circuit parameters have been proposed and adopted for the ITER NBI. The dc UHV insulation has been carefully designed, since dc long-pulse insulation is quite different from conventional ac insulation or dc short-pulse systems. A multi-layer insulation structure of the transformer for long pulses up to 3600 s has been designed with electric field simulation, and based on the simulation the overall dimensions of the dc UHV components have been finalized. A surge energy suppression system is also essential to protect the accelerator from electric breakdowns. The JADA provides an effective surge suppression system composed of core snubbers and resistors. The input energy into the accelerator from the power supply can be reduced to about 20 J, which satisfies the design criterion of 50 J in total in the case of breakdown at -1 MV.
NASA Technical Reports Server (NTRS)
Rizk, Magdi H.
1988-01-01
A scheme is developed for solving constrained optimization problems in which the objective function and the constraint function are dependent on the solution of the nonlinear flow equations. The scheme updates the design parameter iterative solutions and the flow variable iterative solutions simultaneously. It is applied to an advanced propeller design problem with the Euler equations used as the flow governing equations. The scheme's accuracy, efficiency and sensitivity to the computational parameters are tested.
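The essential idea, updating the design parameters and the flow variables in the same iteration rather than fully converging the flow solve for every design step, can be illustrated on a toy problem. The scalar "state equation" and objective below are invented stand-ins for the Euler equations and the propeller objective; the design sensitivity is obtained from the implicit function theorem.

```python
# Simultaneous (one-shot style) update sketch: one relaxation sweep on the state and
# one gradient step on the design parameter per iteration. Everything here is a toy.
import numpy as np

def flow_residual(w, p):            # hypothetical state equation R(w, p) = 0
    return w**3 + w - p

w, p = 0.0, 0.5                     # state and design-parameter iterates
tau_w, tau_p, eps = 0.2, 0.5, 1e-6
for k in range(500):
    w -= tau_w * flow_residual(w, p)                 # one relaxation sweep on the state
    # sensitivity dw/dp = -R_p / R_w from the implicit state equation
    R_w = (flow_residual(w + eps, p) - flow_residual(w, p)) / eps
    R_p = (flow_residual(w, p + eps) - flow_residual(w, p)) / eps
    dJdp = (w - 1.2) * (-R_p / R_w)                  # objective J = 0.5 * (w - 1.2)**2
    p -= tau_p * dJdp                                # one design step, same iteration
print("state w =", round(w, 4), "design parameter p =", round(p, 4))
```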
Iterative design of one- and two-dimensional FIR digital filters. [Finite duration Impulse Response
NASA Technical Reports Server (NTRS)
Suk, M.; Choi, K.; Algazi, V. R.
1976-01-01
The paper describes a new iterative technique for designing FIR (finite duration impulse response) digital filters using a frequency weighted least squares approximation. The technique is as easy to implement (via FFT) and as effective in two dimensions as in one dimension, and there are virtually no limitations on the class of filter frequency spectra approximated. An adaptive adjustment of the frequency weight to achieve other types of design approximation such as Chebyshev type design is discussed.
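A small illustration in the spirit of that approach (not the authors' exact algorithm): each pass solves a frequency-weighted least-squares fit of a linear-phase FIR lowpass filter on a dense frequency grid, then increases the weights where the error is largest, nudging the design toward a Chebyshev-type (equiripple) behaviour. The filter length, band edges and reweighting rule are arbitrary choices made for the demonstration.

```python
# Iteratively reweighted least-squares design of a 31-tap symmetric FIR lowpass.
import numpy as np

N = 31                                                 # odd tap count, type-I linear phase
M = (N - 1) // 2
w_grid = np.linspace(0, np.pi, 512)
ideal = (w_grid <= 0.4 * np.pi).astype(float)          # lowpass target
band = (w_grid <= 0.4 * np.pi) | (w_grid >= 0.5 * np.pi)   # leave transition band free

# amplitude of a symmetric FIR: A(w) = c0 + 2 * sum_k c_k cos(k w)
basis = np.cos(np.outer(w_grid, np.arange(M + 1)))
basis[:, 1:] *= 2.0

weights = band.astype(float)
for it in range(20):
    sw = np.sqrt(weights)
    c, *_ = np.linalg.lstsq(sw[:, None] * basis, sw * ideal, rcond=None)
    err = np.abs(basis @ c - ideal) * band
    weights = weights * (1.0 + err)                    # emphasise large-error frequencies

h = np.concatenate([c[:0:-1], [c[0]], c[1:]])          # symmetric impulse response, length N
print("max in-band error:", err.max(), "taps:", len(h))
```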
Design and implementation of an affordable, public sector electronic medical record in rural Nepal.
Raut, Anant; Yarbrough, Chase; Singh, Vivek; Gauchan, Bikash; Citrin, David; Verma, Varun; Hawley, Jessica; Schwarz, Dan; Harsha Bangura, Alex; Shrestha, Biplav; Schwarz, Ryan; Adhikari, Mukesh; Maru, Duncan
2017-06-23
Globally, electronic medical records are central to the infrastructure of modern healthcare systems. Yet the vast majority of electronic medical records have been designed for resource-rich environments and are not feasible in settings of poverty. Here we describe the design and implementation of an electronic medical record at a public sector district hospital in rural Nepal, and its subsequent expansion to an additional public sector facility. Development: The electronic medical record was designed to solve for the following elements of public sector healthcare delivery: 1) integration of the systems across inpatient, surgical, outpatient, emergency, laboratory, radiology, and pharmacy sites of care; 2) effective data extraction for impact evaluation and government regulation; 3) optimization for longitudinal care provision and patient tracking; and 4) effectiveness for quality improvement initiatives. For these purposes, we adapted Bahmni, a product built with open-source components for patient tracking, clinical protocols, pharmacy, laboratory, imaging, financial management, and supply logistics. In close partnership with government officials, we deployed the system in February of 2015, added on additional functionality, and iteratively improved the system over the following year. This experience enabled us then to deploy the system at an additional district-level hospital in a different part of the country in under four weeks. We discuss the implementation challenges and the strategies we pursued to build an electronic medical record for the public sector in rural Nepal. Discussion: Over the course of 18 months, we were able to develop, deploy and iterate upon the electronic medical record, and then deploy the refined product at an additional facility within only four weeks. Our experience suggests the feasibility of an integrated electronic medical record for public sector care delivery even in settings of rural poverty.
Design and implementation of an affordable, public sector electronic medical record in rural Nepal
Raut, Anant; Yarbrough, Chase; Singh, Vivek; Gauchan, Bikash; Citrin, David; Verma, Varun; Hawley, Jessica; Schwarz, Dan; Harsha, Alex; Shrestha, Biplav; Schwarz, Ryan; Adhikari, Mukesh; Maru, Duncan
2018-01-01
Introduction Globally, electronic medical records are central to the infrastructure of modern healthcare systems. Yet the vast majority of electronic medical records have been designed for resource-rich environments and are not feasible in settings of poverty. Here we describe the design and implementation of an electronic medical record at a public sector district hospital in rural Nepal, and its subsequent expansion to an additional public sector facility. Development The electronic medical record was designed to solve for the following elements of public sector healthcare delivery: 1) integration of the systems across inpatient, surgical, outpatient, emergency, laboratory, radiology, and pharmacy sites of care; 2) effective data extraction for impact evaluation and government regulation; 3) optimization for longitudinal care provision and patient tracking; and 4) effectiveness for quality improvement initiatives. Application For these purposes, we adapted Bahmni, a product built with open-source components for patient tracking, clinical protocols, pharmacy, laboratory, imaging, financial management, and supply logistics. In close partnership with government officials, we deployed the system in February of 2015, added on additional functionality, and iteratively improved the system over the following year. This experience enabled us then to deploy the system at an additional district-level hospital in a different part of the country in under four weeks. We discuss the implementation challenges and the strategies we pursued to build an electronic medical record for the public sector in rural Nepal. Discussion Over the course of 18 months, we were able to develop, deploy and iterate upon the electronic medical record, and then deploy the refined product at an additional facility within only four weeks. Our experience suggests the feasibility of an integrated electronic medical record for public sector care delivery even in settings of rural poverty. PMID:28749321
Inversion of potential field data using the finite element method on parallel computers
NASA Astrophysics Data System (ADS)
Gross, L.; Altinay, C.; Shaw, S.
2015-11-01
In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method with the Hessian operator of the regularization and cross-gradient component of the cost function as preconditioner. We will show that each iterative step requires the solution of several PDEs namely for the potential fields, for the adjoint defects and for the application of the preconditioner. In extension to the traditional discrete formulation the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by weighting regularization and cross-gradient but is independent of the resolution of PDE discretization and that as a consequence the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.
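The structure described above can be mimicked on a toy problem: each quasi-Newton iteration evaluates the cost through a "forward solve" and its gradient through an "adjoint solve". In the sketch below both solves are small dense linear systems standing in for the PDE discretizations, and scipy's L-BFGS-B plays the role of the BFGS driver; none of this is the authors' FEM code, and the operators and data are invented.

```python
# PDE-constrained inversion skeleton: forward solve, adjoint solve, Tikhonov term.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 40
L = np.tril(rng.normal(size=(n, n))) + 5 * np.eye(n)   # hypothetical "PDE" operator
G = rng.normal(size=(8, n))                             # observation operator
m_true = np.sin(np.linspace(0, 3, n))
d_obs = G @ np.linalg.solve(L, m_true)                  # synthetic anomaly data
alpha = 1e-2                                            # regularization weight

def cost_and_grad(m):
    u = np.linalg.solve(L, m)                 # forward solve for the potential field
    r = G @ u - d_obs
    lam = np.linalg.solve(L.T, G.T @ r)       # adjoint solve for the defect
    J = 0.5 * r @ r + 0.5 * alpha * m @ m     # misfit + Tikhonov regularization
    g = lam + alpha * m                       # gradient assembled from the adjoint state
    return J, g

res = minimize(cost_and_grad, np.zeros(n), jac=True, method="L-BFGS-B")
print("iterations:", res.nit, "final cost:", res.fun)
```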
The ITER bolometer diagnostic: Status and plans
NASA Astrophysics Data System (ADS)
Meister, H.; Giannone, L.; Horton, L. D.; Raupp, G.; Zeidner, W.; Grunda, G.; Kalvin, S.; Fischer, U.; Serikov, A.; Stickel, S.; Reichle, R.
2008-10-01
A consortium consisting of four EURATOM Associations has been set up to develop the project plan for the full development of the ITER bolometer diagnostic and to continue urgent R&D activities. An overview of the current status is given, including detector development, line-of-sight optimization and performance analysis, as well as the design of the diagnostic components and their integration in ITER. This is complemented by plans for the future activities required to successfully implement the bolometer diagnostic, ranging from detector development through diagnostic design and prototype testing to remote handling (RH) tools for calibration.
SUMMARY REPORT-FY2006 ITER WORK ACCOMPLISHED
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martovetsky, N N
2006-04-11
Six parties (EU, Japan, Russia, US, Korea, China) will build ITER. The US proposed to deliver at least 4 out of 7 modules of the Central Solenoid. Phillip Michael (MIT) and I were tasked by DoE to assist ITER in development of the ITER CS and other magnet systems. We work to help Magnets and Structure division headed by Neil Mitchell. During this visit I worked on the selected items of the CS design and carried out other small tasks, like PF temperature margin assessment.
Improved Convergence and Robustness of USM3D Solutions on Mixed Element Grids (Invited)
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.
2015-01-01
Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Scheme (HANIS), has been developed and implemented. It provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier Stokes (RANS) equations and a nonlinear control of the solution update. Two variants of the new methodology are assessed on four benchmark cases, namely, a zero-pressure gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the baseline solver technology.
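A generic matrix-free Newton-Krylov loop conveys the flavour of such a scheme: the Krylov solver sees the exact linearization only through finite-difference Jacobian-vector products, a cheap approximate preconditioner is wrapped around it, and the nonlinear update is damped for robustness. The discrete equations below are a made-up nonlinear reaction-diffusion system, not the RANS equations or USM3D/HANIS itself.

```python
# Jacobian-free Newton-Krylov sketch with a crude diagonal preconditioner.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):                      # hypothetical nonlinear discrete equations
    r = np.empty_like(u)
    r[0], r[-1] = u[0], u[-1]         # boundary conditions
    r[1:-1] = -u[:-2] + 2 * u[1:-1] - u[2:] + 0.5 * np.exp(u[1:-1]) - 1.0
    return r

def solve(u, newton_iters=20, tol=1e-10, relax=0.8):
    n = len(u)
    for k in range(newton_iters):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        eps = 1e-7 * (1.0 + np.linalg.norm(u))
        Jv = LinearOperator((n, n),                   # exact linearization, matrix-free
                            matvec=lambda v: (residual(u + eps * v) - r) / eps)
        M = LinearOperator((n, n),                    # crude diagonal preconditioner
                           matvec=lambda v: v / (2.0 + 0.5 * np.exp(u)))
        du, _ = gmres(Jv, -r, M=M)
        u = u + relax * du                            # controlled nonlinear update
        print(k, np.linalg.norm(r))
    return u

u = solve(np.zeros(101))
```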
Iterative LQG Controller Design Through Closed-Loop Identification
NASA Technical Reports Server (NTRS)
Hsiao, Min-Hung; Huang, Jen-Kuang; Cox, David E.
1996-01-01
This paper presents an iterative Linear Quadratic Gaussian (LQG) controller design approach for a linear stochastic system with an uncertain open-loop model and unknown noise statistics. This approach consists of closed-loop identification and controller redesign cycles. In each cycle, the closed-loop identification method is used to identify an open-loop model and a steady-state Kalman filter gain from closed-loop input/output test data obtained by using a feedback LQG controller designed from the previous cycle. Then the identified open-loop model is used to redesign the state feedback. The state feedback and the identified Kalman filter gain are used to form an updated LQG controller for the next cycle. This iterative process continues until the updated controller converges. The proposed controller design is demonstrated by numerical simulations and experiments on a highly unstable large-gap magnetic suspension system.
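The controller-redesign step of each cycle can be sketched as follows: given an identified discrete-time model (A, B, C) and a Kalman filter gain from the closed-loop identification, a new LQR state-feedback gain is computed from the discrete Riccati equation and combined with the filter into the LQG compensator used in the next test. The matrices below are made-up examples, and the identification step itself is omitted.

```python
# One LQG redesign step from an (assumed) identified model and Kalman gain.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.1, 0.2], [0.0, 0.95]])        # hypothetical identified open-loop model
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Kf = np.array([[0.4], [0.1]])                  # Kalman gain from closed-loop identification
Q, R = np.eye(2), 0.1 * np.eye(1)              # LQR weights

P = solve_discrete_are(A, B, Q, R)             # Riccati solution for the state feedback
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def lqg_step(xhat, y):
    """LQG compensator: xhat' = A xhat + B u + Kf (y - C xhat), u = -K xhat."""
    u = -K @ xhat
    return u, A @ xhat + B @ u + Kf @ (y - C @ xhat)

# closed-loop check: spectral radius of the combined plant/compensator dynamics
Acl = np.block([[A, -B @ K], [Kf @ C, A - B @ K - Kf @ C]])
print("closed-loop spectral radius:", max(abs(np.linalg.eigvals(Acl))))
```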
ERIC Educational Resources Information Center
Nguyen, Huy; Xiong, Wenting; Litman, Diane
2017-01-01
A peer-review system that automatically evaluates and provides formative feedback on free-text feedback comments of students was iteratively designed and evaluated in college and high-school classrooms. Classroom assignments required students to write paper drafts and submit them to a peer-review system. When student peers later submitted feedback…
ERIC Educational Resources Information Center
De Lisle, Jerome; Seunarinesingh, Krishna; Mohammed, Rhoda; Lee-Piggott, Rinnelle
2017-01-01
In this study, methodology and theory were linked to explicate the nature of education practice within schools facing exceptionally challenging circumstances (SFECC) in Trinidad and Tobago. The research design was an iterative quan>QUAL-quan>qual multi-method research programme, consisting of 3 independent projects linked together by overall…
Adapting an in-person patient-caregiver communication intervention to a tailored web-based format.
Zulman, Donna M; Schafenacker, Ann; Barr, Kathryn L C; Moore, Ian T; Fisher, Jake; McCurdy, Kathryn; Derry, Holly A; Saunders, Edward W; An, Lawrence C; Northouse, Laurel
2012-03-01
Interventions that target cancer patients and their caregivers have been shown to improve patient-caregiver communication, support, and emotional well-being. To adapt an in-person communication intervention for cancer patients and caregivers to a web-based format, and to examine the usability and acceptability of the web-based program among representative users. A tailored, interactive web-based communication program for cancer patients and their family caregivers was developed based on an existing in-person, nurse-delivered intervention. The development process involved: (1) building a multidisciplinary team of content and web design experts, (2) combining key components of the in-person intervention with the unique tailoring and interactive features of a web-based platform, and (3) conducting focus groups and usability testing to obtain feedback from representative program users at multiple time points. Four focus groups with 2-3 patient-caregiver pairs per group (n = 22 total participants) and two iterations of usability testing with four patient-caregiver pairs per session (n = 16 total participants) were conducted. Response to the program's structure, design, and content was favorable, even among users who were older or had limited computer and Internet experience. The program received high ratings for ease of use and overall usability (mean System Usability Score of 89.5 out of 100). Many elements of a nurse-delivered patient-caregiver intervention can be successfully adapted to a web-based format. A multidisciplinary design team and an iterative evaluation process with representative users were instrumental in the development of a usable and well-received web-based program. Copyright © 2011 John Wiley & Sons, Ltd.
Improvements on a non-invasive, parameter-free approach to inverse form finding
NASA Astrophysics Data System (ADS)
Landkammer, P.; Caspari, M.; Steinmann, P.
2017-08-01
Our objective is to determine the optimal undeformed workpiece geometry (material configuration) within forming processes when the prescribed deformed geometry (spatial configuration) is given. For solving the resulting shape optimization problem—also denoted as inverse form finding—we use a novel parameter-free approach, which relocates in each iteration the material nodal positions as design variables. The spatial nodal positions computed by an elasto-plastic finite element (FE) forming simulation are compared with their prescribed values. The objective function expresses a least-squares summation of the differences between the computed and the prescribed nodal positions. Here, a recently developed shape optimization approach (Landkammer and Steinmann in Comput Mech 57(2):169-191, 2016) is investigated with a view to enhance its stability and efficiency. Motivated by nonlinear optimization theory a detailed justification of the algorithm is given. Furthermore, a classification according to shape changing design, fixed and controlled nodal coordinates is introduced. Two examples with large elasto-plastic strains demonstrate that using a superconvergent patch recovery technique instead of a least-squares (L2)-smoothing improves the efficiency. Updating the interior discretization nodes by solving a fictitious elastic problem also reduces the number of required FE iterations and avoids severe mesh distortions. Furthermore, the impact of the inclusion of the second deformation gradient in the Hessian of the Quasi-Newton approach is analyzed. Inverse form finding is a crucial issue in metal forming applications. As a special feature, the approach is designed to be coupled in a non-invasive fashion to arbitrary FE software.
Improvements on a non-invasive, parameter-free approach to inverse form finding
NASA Astrophysics Data System (ADS)
Landkammer, P.; Caspari, M.; Steinmann, P.
2018-04-01
Our objective is to determine the optimal undeformed workpiece geometry (material configuration) within forming processes when the prescribed deformed geometry (spatial configuration) is given. For solving the resulting shape optimization problem—also denoted as inverse form finding—we use a novel parameter-free approach, which relocates in each iteration the material nodal positions as design variables. The spatial nodal positions computed by an elasto-plastic finite element (FE) forming simulation are compared with their prescribed values. The objective function expresses a least-squares summation of the differences between the computed and the prescribed nodal positions. Here, a recently developed shape optimization approach (Landkammer and Steinmann in Comput Mech 57(2):169-191, 2016) is investigated with a view to enhance its stability and efficiency. Motivated by nonlinear optimization theory a detailed justification of the algorithm is given. Furthermore, a classification according to shape changing design, fixed and controlled nodal coordinates is introduced. Two examples with large elasto-plastic strains demonstrate that using a superconvergent patch recovery technique instead of a least-squares (L2)-smoothing improves the efficiency. Updating the interior discretization nodes by solving a fictitious elastic problem also reduces the number of required FE iterations and avoids severe mesh distortions. Furthermore, the impact of the inclusion of the second deformation gradient in the Hessian of the Quasi-Newton approach is analyzed. Inverse form finding is a crucial issue in metal forming applications. As a special feature, the approach is designed to be coupled in a non-invasive fashion to arbitrary FE software.
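A highly simplified picture of the inverse form-finding loop is given below, with a smooth toy mapping standing in for the elasto-plastic FE forming simulation: the material nodal positions (the design variables) are relocated by the mismatch between computed and prescribed spatial positions until the least-squares objective vanishes. The step size, mapping and mesh are all invented for the illustration.

```python
# Parameter-free node-relocation loop with a toy forward map.
import numpy as np

def forward_sim(X):                     # hypothetical forming map (material -> spatial)
    x = X.copy()
    x[:, 0] = X[:, 0] * (1.0 + 0.3 * X[:, 1])     # shear-like deformation
    x[:, 1] = X[:, 1] + 0.1 * X[:, 0]**2          # bulging
    return x

# prescribed deformed geometry: a regular grid of spatial nodal positions
gx, gy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
x_target = np.column_stack([gx.ravel(), gy.ravel()])

X = x_target.copy()                     # initial guess: undeformed = deformed shape
for k in range(50):
    x = forward_sim(X)
    d = x - x_target                    # nodal mismatch in the spatial configuration
    objective = 0.5 * np.sum(d**2)      # least-squares summation of the differences
    if objective < 1e-12:
        break
    X = X - 0.8 * d                     # relocate the material nodes (design variables)
print("iterations:", k, "objective:", objective)
```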
Hakone, Anzu; Harrison, Lane; Ottley, Alvitta; Winters, Nathan; Gutheil, Caitlin; Han, Paul K J; Chang, Remco
2017-01-01
Prostate cancer is the most common cancer among men in the US, and yet most cases represent localized cancer for which the optimal treatment is unclear. Accumulating evidence suggests that the available treatment options, including surgery and conservative treatment, result in a similar prognosis for most men with localized prostate cancer. However, approximately 90% of patients choose surgery over conservative treatment, despite the risk of severe side effects like erectile dysfunction and incontinence. Recent medical research suggests that a key reason is the lack of patient-centered tools that can effectively communicate personalized risk information and enable them to make better health decisions. In this paper, we report the iterative design process and results of developing the PROgnosis Assessment for Conservative Treatment (PROACT) tool, a personalized health risk communication tool for localized prostate cancer patients. PROACT utilizes two published clinical prediction models to communicate the patients' personalized risk estimates and compare treatment options. In collaboration with the Maine Medical Center, we conducted two rounds of evaluations with prostate cancer survivors and urologists to identify the design elements and narrative structure that effectively facilitate patient comprehension under emotional distress. Our results indicate that visualization can be an effective means to communicate complex risk information to patients with low numeracy and visual literacy. However, the visualizations need to be carefully chosen to balance readability with ease of comprehension. In addition, due to patients' charged emotional state, an intuitive narrative structure that considers the patients' information need is critical to aid the patients' comprehension of their risk information.
Zulman, Donna M.; Schafenacker, Ann; Barr, Kathryn L.C.; Moore, Ian T.; Fisher, Jake; McCurdy, Kathryn; Derry, Holly A.; Saunders, Edward W.; An, Lawrence C.; Northouse, Laurel
2011-01-01
Background Interventions that target cancer patients and their caregivers have been shown to improve communication, support, and emotional well-being. Objective To adapt an in-person communication intervention for cancer patients and caregivers to a web-based format, and to examine the usability and acceptability of the web-based program among representative users. Methods A tailored, interactive web-based communication program for cancer patients and their family caregivers was developed based on an existing in-person, nurse-delivered intervention. The development process involved: 1) building a multidisciplinary team of content and web design experts, 2) combining key components of the in-person intervention with the unique tailoring and interactive features of a web-based platform, and 3) conducting focus groups and usability testing to obtain feedback from representative program users at multiple time points. Results Four focus groups with 2 to 3 patient-caregiver pairs per group (n = 22 total participants) and two iterations of usability testing with 4 patient-caregiver pairs per session (n = 16 total participants) were conducted. Response to the program's structure, design, and content was favorable, even among users who were older or had limited computer and internet experience. The program received high ratings for ease of use and overall usability (mean System Usability Score of 89.5 out of 100). Conclusions Many elements of a nurse-delivered patient-caregiver intervention can be successfully adapted to a web-based format. A multidisciplinary design team and an iterative evaluation process with representative users were instrumental in the development of a usable and well-received web-based program. PMID:21830255
Energy and technology review: Engineering modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cabayan, H.S.; Goudreau, G.L.; Ziolkowski, R.W.
1986-10-01
This report presents information concerning: Modeling Canonical Problems in Electromagnetic Coupling Through Apertures; Finite-Element Codes for Computing Electrostatic Fields; Finite-Element Modeling of Electromagnetic Phenomena; Modeling Microwave-Pulse Compression in a Resonant Cavity; Lagrangian Finite-Element Analysis of Penetration Mechanics; Crashworthiness Engineering; Computer Modeling of Metal-Forming Processes; Thermal-Mechanical Modeling of Tungsten Arc Welding; Modeling Air Breakdown Induced by Electromagnetic Fields; Iterative Techniques for Solving Boltzmann's Equations for p-Type Semiconductors; Semiconductor Modeling; and Improved Numerical-Solution Techniques in Large-Scale Stress Analysis.
Prediction of overall and blade-element performance for axial-flow pump configurations
NASA Technical Reports Server (NTRS)
Serovy, G. K.; Kavanagh, P.; Okiishi, T. H.; Miller, M. J.
1973-01-01
A method and a digital computer program for prediction of the distributions of fluid velocity and properties in axial flow pump configurations are described and evaluated. The method uses the blade-element flow model and an iterative numerical solution of the radial equilibrium and continuity conditions. Correlated experimental results are used to generate alternative methods for estimating blade-element turning and loss characteristics. Detailed descriptions of the computer program are included, with example input and typical computed results.
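As a stand-alone illustration of the flow model (not the NASA program itself), the sketch below iterates a simple radial equilibrium integration for the axial velocity profile downstream of a blade row with a prescribed swirl distribution, rescaling the profile each pass so that continuity over the annulus matches the design mass flow. Incompressible flow, constant total enthalpy and all numerical values are assumptions of the example.

```python
# Iterative simple radial equilibrium + continuity for an axial-flow annulus.
import numpy as np

rho, mdot = 1000.0, 80.0                  # water density (kg/m^3), design mass flow (kg/s)
r = np.linspace(0.05, 0.15, 101)          # hub-to-tip radius (m)
Vt = 1.0 * r / r[-1]                      # prescribed swirl: forced vortex, 1 m/s at the tip

Vx = np.full_like(r, 2.0)                 # initial guess for the axial velocity profile
for it in range(50):
    # simple radial equilibrium: d(Vx^2)/dr = -(1/r^2) d(r*Vt)^2/dr
    rhs = -np.gradient((r * Vt)**2, r) / r**2
    Vx2 = Vx[0]**2 + np.concatenate(
        [[0.0], np.cumsum(0.5 * (rhs[1:] + rhs[:-1]) * np.diff(r))])
    Vx = np.sqrt(np.maximum(Vx2, 1e-6))   # guard against negative values early on
    # continuity: rescale the profile level so the annulus passes the design mass flow
    mdot_now = 2.0 * np.pi * rho * np.sum(0.5 * (Vx[1:] * r[1:] + Vx[:-1] * r[:-1]) * np.diff(r))
    Vx *= mdot / mdot_now
    if abs(mdot_now - mdot) < 1e-9 * mdot:
        break
print("hub/tip axial velocity (m/s):", round(Vx[0], 3), round(Vx[-1], 3), "iterations:", it + 1)
```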
Designing tools for oil exploration using nuclear modeling
NASA Astrophysics Data System (ADS)
Mauborgne, Marie-Laure; Allioli, Françoise; Manclossi, Mauro; Nicoletti, Luisa; Stoller, Chris; Evans, Mike
2017-09-01
When designing nuclear tools for oil exploration, one of the first steps is typically nuclear modeling for concept evaluation and initial characterization. Having an accurate model, including the availability of accurate cross sections, is essential to reduce or avoid time consuming and costly design iterations. During tool response characterization, modeling is benchmarked with experimental data and then used to complement and to expand the database to make it more detailed and inclusive of more measurement environments which are difficult or impossible to reproduce in the laboratory. We present comparisons of our modeling results obtained using the ENDF/B-VI and ENDF/B-VII cross section data bases, focusing on the response to a few elements found in the tool, borehole and subsurface formation. For neutron-induced inelastic and capture gamma ray spectroscopy, major obstacles may be caused by missing or inaccurate cross sections for essential materials. We show examples of the benchmarking of modeling results against experimental data obtained during tool characterization and discuss observed discrepancies.
Integrated Targeting and Guidance for Powered Planetary Descent
NASA Astrophysics Data System (ADS)
Azimov, Dilmurat M.; Bishop, Robert H.
2018-02-01
This paper presents an on-board guidance and targeting design that enables explicit state and thrust vector control and on-board targeting for planetary descent and landing. These capabilities are developed utilizing a new closed-form solution for the constant thrust arc of the braking phase of the powered descent trajectory. The key elements of proven targeting and guidance architectures, including braking and approach phase quartics, are employed. It is demonstrated that implementation of the proposed solution avoids numerical simulation iterations, thereby facilitating on-board execution of targeting procedures during the descent. It is shown that the shape of the braking phase constant thrust arc is highly dependent on initial mass and propulsion system parameters. The analytic solution process is explicit in terms of targeting and guidance parameters, while remaining generic with respect to planetary body and descent trajectory design. These features increase the feasibility of extending the proposed integrated targeting and guidance design to future cargo and robotic landing missions.
User engineering: A new look at system engineering
NASA Technical Reports Server (NTRS)
Mclaughlin, Larry L.
1987-01-01
User Engineering is a new System Engineering perspective responsible for defining and maintaining the user view of the system. Its elements are a process to guide the project and customer, a multidisciplinary team including hard and soft sciences, rapid prototyping tools to build user interfaces quickly and modify them frequently at low cost, and a prototyping center for involving users and designers in an iterative way. The main consideration is reducing the risk that the end user will not or cannot effectively use the system. The process begins with user analysis to produce cognitive and work style models, and task analysis to produce user work functions and scenarios. These become major drivers of the human computer interface design which is presented and reviewed as an interactive prototype by users. Feedback is rapid and productive, and user effectiveness can be measured and observed before the system is built and fielded. Requirements are derived via the prototype and baselined early to serve as an input to the architecture and software design.
Integrated Targeting and Guidance for Powered Planetary Descent
NASA Astrophysics Data System (ADS)
Azimov, Dilmurat M.; Bishop, Robert H.
2018-06-01
This paper presents an on-board guidance and targeting design that enables explicit state and thrust vector control and on-board targeting for planetary descent and landing. These capabilities are developed utilizing a new closed-form solution for the constant thrust arc of the braking phase of the powered descent trajectory. The key elements of proven targeting and guidance architectures, including braking and approach phase quartics, are employed. It is demonstrated that implementation of the proposed solution avoids numerical simulation iterations, thereby facilitating on-board execution of targeting procedures during the descent. It is shown that the shape of the braking phase constant thrust arc is highly dependent on initial mass and propulsion system parameters. The analytic solution process is explicit in terms of targeting and guidance parameters, while remaining generic with respect to planetary body and descent trajectory design. These features increase the feasibility of extending the proposed integrated targeting and guidance design to future cargo and robotic landing missions.
Generalisation of radiator design techniques for personal neutron dosemeters by unfolding method.
Oda, K; Nakayama, T; Umetani, K; Kajihara, M; Yamauchi, T
2016-09-01
A novel technique for designing a radiator suitable for a personal neutron dosemeter based on a plastic track detector is discussed. A multi-layer structure was proposed in a previous report, where the thicknesses of several polyethylene (PE) layers and insensitive ones were determined by iterative calculations of a double integral. To make this procedure more systematic, an unfolding calculation has been employed to estimate an ideal radiator containing an arbitrary hydrogen concentration. In the second step, this ideal radiator was replaced with realistic materials, with consideration of minimising the number of layers and of commercial availability. A radiator consisting of three layers of PE, Upilex and Kapton sheets was finally designed, for which the deviation in the energy dependence between 0.1 and 20 MeV could be controlled within 18%. The applicability of a fluorescent nuclear track detector element is also discussed. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
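Schematically, the unfolding step can be pictured as a non-negative least-squares problem: given per-layer energy response curves for candidate radiator layers, find layer weights whose combined response is as flat (dose-like) as possible over 0.1-20 MeV. The response curves below are invented for illustration and are not the detector data of the paper.

```python
# Non-negative least-squares "unfolding" of layer weights from hypothetical responses.
import numpy as np
from scipy.optimize import nnls

E = np.logspace(-1, np.log10(20.0), 40)                 # neutron energy grid, MeV
# hypothetical normalized responses of three candidate layer types
R = np.vstack([
    np.exp(-((np.log(E) - np.log(0.5))**2) / 0.8),      # thin PE layer, low-energy peaked
    np.exp(-((np.log(E) - np.log(3.0))**2) / 1.0),      # thick PE layer, mid-energy peaked
    np.exp(-((np.log(E) - np.log(12.0))**2) / 1.2),     # polyimide-like layer, high-energy peaked
])
target = np.ones_like(E)                                 # ideal flat energy dependence

weights, resid = nnls(R.T, target)                       # unfold non-negative layer weights
combined = R.T @ weights
dev = (combined.max() - combined.min()) / combined.mean()
print("layer weights:", np.round(weights, 3), "peak-to-peak deviation:", round(dev, 3))
```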
Final Report on ITER Task Agreement 81-10
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brad J. Merrill
An International Thermonuclear Experimental Reactor (ITER) Implementing Task Agreement (ITA) on Magnet Safety was established between the ITER International Organization (IO) and the Idaho National Laboratory (INL) Fusion Safety Program (FSP) during calendar year 2004. The objectives of this ITA were to add new capabilities to the MAGARC code and to use this updated version of MAGARC to analyze unmitigated superconductor quench events for both poloidal field (PF) and toroidal field (TF) coils of the ITER design. This report documents the completion of the work scope for this ITA. Based on the results obtained for this ITA, an unmitigated quench event in a larger ITER PF coil does not appear to be as severe an accident as one in an ITER TF coil.
Evolution of penile prosthetic devices.
Le, Brian; Burnett, Arthur L
2015-03-01
Penile implant usage dates to the 16th century yet penile implants to treat erectile dysfunction did not occur until nearly four centuries later. The modern era of penile implants has progressed rapidly over the past 50 years as physicians' knowledge of effective materials for penile prostheses and surgical techniques has improved. Herein, we describe the history of penile prosthetics and the constant quest to improve the technology. Elements of the design from the first inflatable penile prosthesis by Scott and colleagues and the Small-Carrion malleable penile prosthesis are still found in present iterations of these devices. While there have been significant improvements in penile prosthesis design, the promise of an ideal prosthetic device remains elusive. As other erectile dysfunction therapies emerge, penile prostheses will have to continue to demonstrate a competitive advantage. A particular strength of penile prostheses is their efficacy regardless of etiology, thus allowing treatment of even the most refractory cases.
Final design of thermal diagnostic system in SPIDER ion source
NASA Astrophysics Data System (ADS)
Brombin, M.; Dalla Palma, M.; Pasqualotto, R.; Pomaro, N.
2016-11-01
The prototype radio frequency source of the ITER heating neutral beams will be first tested in SPIDER test facility to optimize H- production, cesium dynamics, and overall plasma characteristics. Several diagnostics will allow to fully characterise the beam in terms of uniformity and divergence and the source, besides supporting a safe and controlled operation. In particular, thermal measurements will be used for beam monitoring and system protection. SPIDER will be instrumented with mineral insulated cable thermocouples, both on the grids, on other components of the beam source, and on the rear side of the beam dump water cooled elements. This paper deals with the final design and the technical specification of the thermal sensor diagnostic for SPIDER. In particular the layout of the diagnostic, together with the sensors distribution in the different components, the cables routing and the conditioning and acquisition cubicles are described.
Experimental Evidence on Iterated Reasoning in Games
Grehl, Sascha; Tutić, Andreas
2015-01-01
We present experimental evidence on two forms of iterated reasoning in games, i.e. backward induction and interactive knowledge. Besides reliable estimates of the cognitive skills of the subjects, our design allows us to disentangle two possible explanations for the observed limits in performed iterated reasoning: Restrictions in subjects’ cognitive abilities and their beliefs concerning the rationality of co-players. In comparison to previous literature, our estimates regarding subjects’ skills in iterated reasoning are quite pessimistic. Also, we find that beliefs concerning the rationality of co-players are completely irrelevant in explaining the observed limited amount of iterated reasoning in the dirty faces game. In addition, it is demonstrated that skills in backward induction are a solid predictor for skills in iterated knowledge, which points to some generalized ability of the subjects in iterated reasoning. PMID:26312486
NASA Astrophysics Data System (ADS)
Elhag, Mohamed; Boteva, Silvena
2017-12-01
Quantification of geomorphometric features is the central concern of the current study. The quantification is based on a statistical approach, namely multivariate analysis of local topographic features. The implemented algorithm uses a Digital Elevation Model (DEM) to categorize and extract the geomorphometric features embedded in the topographic dataset. Morphological settings were evaluated at the central pixel of a pre-defined 3x3 convolution kernel, relating it to the surrounding pixels under the eight-direction pour point model (D8) of azimuth viewpoints. An unsupervised classification algorithm, the Iterative Self-Organizing Data Analysis Technique (ISODATA), was applied to the ASTER GDEM within the boundary of the designated study area to distinguish 10 morphometric classes. The morphometric classes show variation in their spatial distribution across the study area. The adopted methodology successfully captures the spatial distribution of the geomorphometric features under investigation, and the results support superimposing the delineated geomorphometric elements on remote sensing imagery for further analysis. A robust relationship between Land Cover types and geomorphological elements was established in the context of the study area, and the dominance and relative association of the different Land Cover types with their corresponding geomorphological elements were demonstrated.
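A minimal sketch of the kind of workflow described above, assuming a NumPy/SciPy environment: local relief and a slope proxy are computed over a 3x3 kernel, and the resulting per-pixel feature vectors are grouped by a simple ISODATA-style iterative reassignment loop (the split/merge rules of full ISODATA are omitted). The synthetic DEM, feature choices and class count are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import generic_filter

# Synthetic DEM as a stand-in for ASTER GDEM (values in metres).
rng = np.random.default_rng(0)
dem = np.cumsum(rng.normal(size=(120, 120)), axis=0) + np.cumsum(rng.normal(size=(120, 120)), axis=1)

# Local 3x3 terrain attributes: relief (max - min) and a mean-slope proxy.
relief = generic_filter(dem, lambda w: w.max() - w.min(), size=3)
slope = generic_filter(dem, lambda w: np.abs(w - w[4]).mean(), size=3)  # w[4] is the centre pixel

features = np.column_stack([relief.ravel(), slope.ravel()])
features = (features - features.mean(0)) / features.std(0)   # standardise

# ISODATA-style iterative reassignment into k morphometric classes.
k = 10
centres = features[rng.choice(len(features), k, replace=False)]
for _ in range(20):
    d = np.linalg.norm(features[:, None, :] - centres[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    new_centres = np.array([features[labels == j].mean(0) if np.any(labels == j) else centres[j]
                            for j in range(k)])
    if np.allclose(new_centres, centres):
        break
    centres = new_centres

class_map = labels.reshape(dem.shape)   # per-pixel morphometric class
print(np.bincount(labels, minlength=k))
```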
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.
1991-01-01
Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
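As a rough illustration of the unconstrained-optimization side of this work, the following sketch implements a plain BFGS iteration with a backtracking line search on the Rosenbrock function, in NumPy. It is a generic textbook BFGS, not the parallel-vector PV-SAP/pvsolve code described above; the test function and tolerances are illustrative.

```python
import numpy as np

def rosenbrock(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

def bfgs(x, tol=1e-8, max_iter=200):
    n = len(x)
    H = np.eye(n)                       # inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                      # search direction
        t, f0 = 1.0, rosenbrock(x)      # backtracking line search (Armijo condition)
        while rosenbrock(x + t * p) > f0 + 1e-4 * t * (g @ p):
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                  # standard BFGS inverse-Hessian update
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

print(bfgs(np.array([-1.2, 1.0])))      # converges to [1, 1]
```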
Managing Science Operations During Planetary Surface: The 2010 Desert RATS Test
NASA Technical Reports Server (NTRS)
Eppler, Dean B.; Ming, D. W.
2011-01-01
Desert Research and Technology Studies (Desert RATS) is a multi-year series of hardware and operations tests carried out annually in the high desert of Arizona on the San Francisco Volcanic Field. Conducted since 1997, these activities are designed to exercise planetary surface hardware and operations in conditions where long-distance, multi-day roving is achievable. Such activities not only test vehicle subsystems through extended rough-terrain driving, they also stress communications and operations systems and allow testing of science operations approaches to advance human and robotic surface capabilities. Desert RATS is a venue where new ideas can be tested, both individually and as part of an operation with multiple elements. By conducting operations over multiple yearly cycles, ideas that make the cut can be iterated and tested during follow-on years. This ultimately gives both the hardware and the personnel experience in the kind of multi-element integrated operations that will be necessary in future human planetary exploration.
Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu
2014-12-01
High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.
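The abstract's point about sparse-matrix storage can be illustrated with SciPy's block compressed row (BSR) format, which stores one dense block per nonzero block position, a convenient layout when per-element contributions change as the material state changes. This is a generic SciPy illustration, not the simulator's code; the 2x2 block size and values are arbitrary.

```python
import numpy as np
from scipy.sparse import bsr_matrix

# Block CSR: indptr/indices address 2x2 blocks rather than individual scalars.
indptr  = np.array([0, 2, 3, 4])                         # block-row pointers
indices = np.array([0, 2, 2, 1])                         # block-column indices
data    = np.arange(16, dtype=float).reshape(4, 2, 2)    # one dense 2x2 block per stored entry

A = bsr_matrix((data, indices, indptr), shape=(6, 6))
print(A.toarray())

# Updating a stored block in place (e.g. a per-element contribution changing
# as tissue is vaporized) touches only that block's dense data.
A.data[1] = 0.0
print(A.toarray())
```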
ITER activities and fusion technology
NASA Astrophysics Data System (ADS)
Seki, M.
2007-10-01
At the 21st IAEA Fusion Energy Conference, 68 and 67 papers were presented in the categories of ITER activities and fusion technology, respectively. ITER performance predictions, technology R&D results and construction preparations provide good confidence in the realization of ITER. The superconducting tokamak EAST achieved first plasma just before the conference. Construction of other new experimental machines has also shown steady progress. Future reactor studies stress the importance of downsizing and of a steady-state approach. Reactor technology in the field of blankets, including the ITER TBM programme, and materials for the demonstration power plant showed sound progress in both R&D and design activities.
Fast time- and frequency-domain finite-element methods for electromagnetic analysis
NASA Astrophysics Data System (ADS)
Lee, Woochan
Fast electromagnetic analysis in time and frequency domain is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute a very large scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of existing most powerful computational resources. Different from many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of the structure specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step for ensuring the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important to accelerate the computation. In addition to time-domain methods, frequency-domain methods have suffered from an indefinite system that makes an iterative solution difficult to converge fast. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits such as Manhattan geometry and layered permittivity is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time. The second contribution is a new method for making an explicit time-domain finite-element method (TDFEM) unconditionally stable for general electromagnetic analysis. In this method, for a given time step, we find the unstable modes that are the root cause of instability, and deduct them directly from the system matrix resulting from a TDFEM based analysis. As a result, an explicit TDFEM simulation is made stable for an arbitrarily large time step irrespective of the space step. The third contribution is a new method for full-wave applications from low to very high frequencies in a TDFEM based on matrix exponential. In this method, we directly deduct the eigenmodes having large eigenvalues from the system matrix, thus achieving a significantly increased time step in the matrix exponential based TDFEM. The fourth contribution is a new method for transforming the indefinite system matrix of a frequency-domain FEM to a symmetric positive definite one. We deduct non-positive definite component directly from the system matrix resulting from a frequency-domain FEM-based analysis. The resulting new representation of the finite-element operator ensures an iterative solution to converge in a small number of iterations. We then add back the non-positive definite component to synthesize the original solution with negligible cost.
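A toy illustration of the "deduct the unstable modes from the system matrix" idea in the second contribution, assuming a generic undamped system M x'' + K x = f and the central-difference stability limit omega*dt <= 2. The matrices, sizes and time step below are invented; this is a sketch of the concept, not the thesis code.

```python
import numpy as np
from scipy.linalg import eigh

# Small mass-spring chain as a stand-in finite-element system (hypothetical sizes).
n = 50
K = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) * 1e4
M = np.eye(n) * 1e-3

lam, Phi = eigh(K, M)            # K Phi = M Phi diag(lam), with Phi^T M Phi = I
omega = np.sqrt(lam)

dt = 5e-4                        # desired (large) explicit time step
stable = omega <= 2.0 / dt       # central-difference stability: omega * dt <= 2
Phi_u = Phi[:, ~stable]          # modes that would be unstable at this dt
lam_u = lam[~stable]

# Deduct the unstable modes from K so they no longer limit the explicit step.
K_stab = K - ((M @ Phi_u) * lam_u) @ (Phi_u.T @ M)

lam2 = eigh(K_stab, M, eigvals_only=True)
print(np.sqrt(lam.max()) * dt,                  # omega_max * dt before deflation (> 2)
      np.sqrt(max(lam2.max(), 0.0)) * dt)       # omega_max * dt after deflation (<= 2)
```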
Multi-objective/loading optimization for rotating composite flexbeams
NASA Technical Reports Server (NTRS)
Hamilton, Brian K.; Peters, James R.
1989-01-01
With the evolution of advanced composites, the feasibility of designing bearingless rotor systems for high speed, demanding maneuver envelopes, and high aircraft gross weights has become a reality. These systems eliminate the need for hinges and heavily loaded bearings by incorporating a composite flexbeam structure which accommodates flapping, lead-lag, and feathering motions by bending and twisting while reacting full blade centrifugal force. The flight characteristics of a bearingless rotor system are largely dependent on hub design, and the principal element in this type of system is the composite flexbeam. As in any hub design, trade off studies must be performed in order to optimize performance, dynamics (stability), handling qualities, and stresses. However, since the flexbeam structure is the primary component which will determine the balance of these characteristics, its design and fabrication are not straightforward. It was concluded that: pitchcase and snubber damper representations are required in the flexbeam model for proper sizing resulting from dynamic requirements; optimization is necessary for flexbeam design, since it reduces the design iteration time and results in an improved design; and inclusion of multiple flight conditions and their corresponding fatigue allowables is necessary for the optimization procedure.
Project Development Model | Integrated Energy Solutions | NREL
The two-phase iterative model includes elements in project fundamentals and project development. The five elements of project fundamentals begin with Baseline: analyze the current situation for the site, using the State and Local Energy Data (SLED) tool developed by NREL for the U.S. Department of Energy.
ITER Disruption Mitigation System Design
NASA Astrophysics Data System (ADS)
Rasmussen, David; Lyttle, M. S.; Baylor, L. R.; Carmichael, J. R.; Caughman, J. B. O.; Combs, S. K.; Ericson, N. M.; Bull-Ezell, N. D.; Fehling, D. T.; Fisher, P. W.; Foust, C. R.; Ha, T.; Meitner, S. J.; Nycz, A.; Shoulders, J. M.; Smith, S. F.; Warmack, R. J.; Coburn, J. D.; Gebhart, T. E.; Fisher, J. T.; Reed, J. R.; Younkin, T. R.
2015-11-01
The disruption mitigation system for ITER is under design and will require injection of up to 10 kPa·m³ of deuterium, helium, neon, or argon material for thermal mitigation and up to 100 kPa·m³ of material for suppression of runaway electrons. A hybrid unit compatible with the ITER nuclear, thermal and magnetic field environment is being developed. The unit incorporates a fast gas valve for massive gas injection (MGI) and a shattered pellet injector (SPI) to inject a massive spray of small particles, and can be operated as an SPI with a frozen pellet or an MGI without a pellet. Three ITER upper port locations will have three SPI/MGI units with a common delivery tube. One equatorial port location has space for sixteen similar SPI/MGI units. Supported by US DOE under DE-AC05-00OR22725.
Cooley, Richard L.
1992-01-01
MODFE, a modular finite-element model for simulating steady- or unsteady-state, areal or axisymmetric flow of ground water in a heterogeneous anisotropic aquifer is documented in a three-part series of reports. In this report, part 2, the finite-element equations are derived by minimizing a functional of the difference between the true and approximate hydraulic head, which produces equations that are equivalent to those obtained by either classical variational or Galerkin techniques. Spatial finite elements are triangular with linear basis functions, and temporal finite elements are one dimensional with linear basis functions. Physical processes that can be represented by the model include (1) confined flow, unconfined flow (using the Dupuit approximation), or a combination of both; (2) leakage through either rigid or elastic confining units; (3) specified recharge or discharge at points, along lines, or areally; (4) flow across specified-flow, specified-head, or head-dependent boundaries; (5) decrease of aquifer thickness to zero under extreme water-table decline and increase of aquifer thickness from zero as the water table rises; and (6) head-dependent fluxes from springs, drainage wells, leakage across riverbeds or confining units combined with aquifer dewatering, and evapotranspiration. The matrix equations produced by the finite-element method are solved by the direct symmetric-Doolittle method or the iterative modified incomplete-Cholesky conjugate-gradient method. The direct method can be efficient for small- to medium-sized problems (less than about 500 nodes), and the iterative method is generally more efficient for larger-sized problems. Comparison of finite-element solutions with analytical solutions for five example problems demonstrates that the finite-element model can yield accurate solutions to ground-water flow problems.
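The iterative solution option mentioned here (a preconditioned conjugate-gradient solver for the symmetric finite-element system) can be illustrated with a minimal NumPy sketch. For simplicity a Jacobi (diagonal) preconditioner is used instead of MODFE's modified incomplete-Cholesky factorization, and the matrix is a generic symmetric positive definite test system rather than a ground-water model.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients with a diagonal (Jacobi) preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Generic SPD system standing in for the assembled finite-element equations.
n = 200
A = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
b = np.ones(n)
x, iters = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
print(iters, np.linalg.norm(A @ x - b))
```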
Scanning sequences after Gibbs sampling to find multiple occurrences of functional elements
Tharakaraman, Kannan; Mariño-Ramírez, Leonardo; Sheetlin, Sergey L; Landsman, David; Spouge, John L
2006-01-01
Background Many DNA regulatory elements occur as multiple instances within a target promoter. Gibbs sampling programs for finding DNA regulatory elements de novo can be prohibitively slow in locating all instances of such an element in a sequence set. Results We describe an improvement to the A-GLAM computer program, which predicts regulatory elements within DNA sequences with Gibbs sampling. The improvement adds an optional "scanning step" after Gibbs sampling. Gibbs sampling produces a position specific scoring matrix (PSSM). The new scanning step resembles an iterative PSI-BLAST search based on the PSSM. First, it assigns an "individual score" to each subsequence of appropriate length within the input sequences using the initial PSSM. Second, it computes an E-value from each individual score, to assess the agreement between the corresponding subsequence and the PSSM. Third, it permits subsequences with E-values falling below a threshold to contribute to the underlying PSSM, which is then updated using the Bayesian calculus. A-GLAM iterates its scanning step to convergence, at which point no new subsequences contribute to the PSSM. After convergence, A-GLAM reports predicted regulatory elements within each sequence in order of increasing E-values, so users have a statistical evaluation of the predicted elements in a convenient presentation. Thus, although the Gibbs sampling step in A-GLAM finds at most one regulatory element per input sequence, the scanning step can now rapidly locate further instances of the element in each sequence. Conclusion Datasets from experiments determining the binding sites of transcription factors were used to evaluate the improvement to A-GLAM. Typically, the datasets included several sequences containing multiple instances of a regulatory motif. The improvements to A-GLAM permitted it to predict the multiple instances. PMID:16961919
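A stripped-down sketch of the scanning idea, assuming short DNA strings: a position-specific scoring matrix (PSSM) built from seed sites scores every window in the input sequences, windows above a score cutoff are added to the alignment, the PSSM is re-estimated, and the loop repeats until no new windows are added. A fixed log-odds threshold stands in for the E-value test described above; the sequences, seed motif and pseudocounts are illustrative.

```python
import numpy as np

BASES = "ACGT"
IDX = {b: i for i, b in enumerate(BASES)}

def build_pssm(sites, pseudo=0.5, background=0.25):
    """Log-odds PSSM from aligned, equal-length sites."""
    L = len(sites[0])
    counts = np.full((L, 4), pseudo)
    for s in sites:
        for j, base in enumerate(s):
            counts[j, IDX[base]] += 1
    freqs = counts / counts.sum(axis=1, keepdims=True)
    return np.log2(freqs / background)

def score(pssm, window):
    return sum(pssm[j, IDX[base]] for j, base in enumerate(window))

def scan_iterate(sequences, seed_sites, threshold=4.0, max_rounds=10):
    L = len(seed_sites[0])
    found = set()                        # (sequence index, offset) pairs already included
    sites = list(seed_sites)
    for _ in range(max_rounds):
        pssm = build_pssm(sites)
        new = []
        for si, seq in enumerate(sequences):
            for i in range(len(seq) - L + 1):
                if (si, i) not in found and score(pssm, seq[i:i + L]) >= threshold:
                    new.append((si, i))
        if not new:                      # convergence: no new windows pass the cutoff
            return pssm, sites
        for si, i in new:
            found.add((si, i))
            sites.append(sequences[si][i:i + L])
    return build_pssm(sites), sites

seqs = ["TTTTACGTATTTTTACGTATT", "GGGACGTAGGGGG", "CCCCCCACGTACC"]
pssm, sites = scan_iterate(seqs, seed_sites=["ACGTA"])
print(len(sites), sites)                 # multiple instances per sequence are recovered
```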
Novel Framework for Reduced Order Modeling of Aero-engine Components
NASA Astrophysics Data System (ADS)
Safi, Ali
The present study focuses on the popular dynamic reduction methods used in the design of complex assemblies (millions of degrees of freedom), where numerous iterations are involved to achieve the final design. Aerospace manufacturers such as Rolls Royce and Pratt & Whitney are actively seeking techniques that reduce computational time while maintaining the accuracy of the models. This involves modal analysis of components with complex geometries to determine the dynamic behavior due to non-linearity and complicated loading conditions. In such cases, sub-structuring and dynamic reduction techniques prove to be efficient tools for reducing design cycle time. Components whose designs are finalized can be dynamically reduced to mass and stiffness matrices at the boundary nodes in the assembly. These matrices conserve the dynamics of the component in the assembly and thus avoid repeated calculations during the analysis runs for design modification of other components. This thesis presents a novel framework for modeling and meshing any complex structure, in this case an aero-engine casing. The study highlights the effect of meshing techniques on run time. The modal analysis is carried out using an extremely fine mesh to ensure all minor details in the structure are captured correctly in the Finite Element (FE) model; this is used as the reference model against which the results of the reduced model are compared. The study also shows the conditions and criteria under which dynamic reduction can be implemented effectively, demonstrating the accuracy of the Craig-Bampton (C.B.) method and the limitations of static condensation. The study highlights the longer runtime needed to produce the reduced matrices of components compared to the overall runtime of the complete unreduced model, although once the components are reduced, the assembly run time is significantly shorter. Hence the decision to use Component Mode Synthesis (CMS) is to be taken judiciously, considering the number of iterations that may be required during the design cycle.
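A compact numerical sketch of the Craig-Bampton (C.B.) reduction mentioned above, for a generic system with stiffness K and mass M partitioned into boundary (b) and interior (i) degrees of freedom: constraint modes plus a truncated set of fixed-interface modes form the reduction basis. The spring-mass chain and the number of retained modes are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    """Reduce (K, M) to boundary DOFs plus n_modes fixed-interface modal DOFs."""
    all_dofs = np.arange(K.shape[0])
    interior = np.setdiff1d(all_dofs, boundary)
    ii = np.ix_(interior, interior)
    ib = np.ix_(interior, boundary)

    # Constraint (static) modes: interior response to unit boundary displacements.
    Psi = -np.linalg.solve(K[ii], K[ib])
    # Fixed-interface normal modes of the interior partition, truncated to n_modes.
    lam, Phi = eigh(K[ii], M[ii])
    Phi = Phi[:, :n_modes]

    nb = len(boundary)
    T = np.zeros((K.shape[0], nb + n_modes))
    T[boundary, :nb] = np.eye(nb)
    T[interior, :nb] = Psi
    T[interior, nb:] = Phi
    return T.T @ K @ T, T.T @ M @ T, T

# Example: 20-DOF spring-mass chain, first and last DOFs kept as boundary.
n = 20
K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
Kr, Mr, T = craig_bampton(K, M, boundary=np.array([0, n - 1]), n_modes=6)

# The reduced model reproduces the lowest full-model frequencies closely.
w_full = np.sqrt(eigh(K, M, eigvals_only=True)[:5])
w_red = np.sqrt(eigh(Kr, Mr, eigvals_only=True)[:5])
print(np.round(w_full, 4))
print(np.round(w_red, 4))
```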
Architectural Specialization for Inter-Iteration Loop Dependence Patterns
2015-10-01
Architectural Specialization for Inter-Iteration Loop Dependence Patterns. Christopher Batten, Computer Systems Laboratory, School of Electrical and ... [Figure residue: trends in computer architecture, showing transistors (thousands), frequency (MHz) and typical power (W) from the MIPS R2K through the DEC Alpha 21264 to the Intel P4, and energy efficiency (tasks per joule) versus design performance for simple, embedded, and high-performance processor designs under a power constraint.]
VIMOS Instrument Control Software Design: an Object Oriented Approach
NASA Astrophysics Data System (ADS)
Brau-Nogué, Sylvie; Lucuix, Christian
2002-12-01
The Franco-Italian VIMOS instrument is a VIsible imaging Multi-Object Spectrograph with outstanding multiplex capabilities, allowing spectra of more than 800 objects to be taken simultaneously, or integral-field spectroscopy over a 54x54 arcsec area. VIMOS is being installed at the Nasmyth focus of the third Unit Telescope of the European Southern Observatory Very Large Telescope (VLT) at Mount Paranal in Chile. This paper describes the analysis, design and implementation of the VIMOS Instrument Control System, using UML notation. Our control group followed an object-oriented software process while keeping in mind the ESO VLT standard control concepts; at ESO VLT a complete software library is available. Rather than applying a waterfall lifecycle, the ICS project used iterative development, a lifecycle consisting of several iterations. Each iteration consisted of capturing and evaluating the requirements, visual modeling for analysis and design, implementation, test, and deployment. Depending on the project phase, iterations focused more or less on specific activities. The result is an object model (the design model), including use-case realizations; an implementation view and a deployment view complement this product. An extract of the VIMOS ICS UML model will be presented and some implementation, integration and test issues will be discussed.
Implementation on a nonlinear concrete cracking algorithm in NASTRAN
NASA Technical Reports Server (NTRS)
Herting, D. N.; Herendeen, D. L.; Hoesly, R. L.; Chang, H.
1976-01-01
A computer code for the analysis of reinforced concrete structures was developed using NASTRAN as a basis. Nonlinear iteration procedures were developed for obtaining solutions with a wide variety of loading sequences. A direct access file system was used to save results at each load step to restart within the solution module for further analysis. A multi-nested looping capability was implemented to control the iterations and change the loads. The basis for the analysis is a set of multi-layer plate elements which allow local definition of materials and cracking properties.
Noniterative Multireference Coupled Cluster Methods on Heterogeneous CPU-GPU Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhaskaran-Nair, Kiran; Ma, Wenjing; Krishnamoorthy, Sriram
2013-04-09
A novel parallel algorithm for non-iterative multireference coupled cluster (MRCC) theories, which merges recently introduced reference-level parallelism (RLP) [K. Bhaskaran-Nair, J. Brabec, E. Aprà, H.J.J. van Dam, J. Pittner, K. Kowalski, J. Chem. Phys. 137, 094112 (2012)] with the possibility of accelerating numerical calculations using graphics processing units (GPU) is presented. We discuss the performance of this algorithm on the example of the MRCCSD(T) method (iterative singles and doubles and perturbative triples), where the corrections due to triples are added to the diagonal elements of the MRCCSD (iterative singles and doubles) effective Hamiltonian matrix. The performance of the combined RLP/GPU algorithm is illustrated on the example of the Brillouin-Wigner (BW) and Mukherjee (Mk) state-specific MRCCSD(T) formulations.
Layout compliance for triple patterning lithography: an iterative approach
NASA Astrophysics Data System (ADS)
Yu, Bei; Garreton, Gilda; Pan, David Z.
2014-10-01
As the semiconductor process scales further down, the industry encounters many lithography-related issues. At the 14 nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. Layout decomposition, one of the most challenging problems in TPL, has recently received more attention from both industry and academia. Ideally the decomposer should point out locations in the layout that are not triple-patterning decomposable and therefore require manual intervention by designers. A traditional decomposition flow is an iterative process in which each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational times, and design closure issues therefore linger in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts but also provides suggestions to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer provides a quick solution for a selected portion of the layout. We believe this framework is efficient in terms of performance and designer friendly.
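At its core, TPL layout decomposition is a 3-coloring problem on a conflict graph whose nodes are layout features and whose edges connect features closer than the same-mask spacing limit. The sketch below, on a made-up conflict graph, backtracks over color assignments and reports the nodes for which no legal color exists, the kind of location a decomposer would flag for designer intervention. It is a generic illustration, not the incremental algorithm proposed in the paper.

```python
def three_color(conflicts, n_nodes):
    """Backtracking 3-coloring; returns (coloring, flagged_nodes)."""
    adj = {v: set() for v in range(n_nodes)}
    for u, v in conflicts:
        adj[u].add(v)
        adj[v].add(u)

    colors = {}

    def assign(v):
        if v == n_nodes:
            return True
        for c in (0, 1, 2):                      # the three masks
            if all(colors.get(u) != c for u in adj[v]):
                colors[v] = c
                if assign(v + 1):
                    return True
                del colors[v]
        return False

    if assign(0):
        return colors, []
    # Not 3-colorable: greedily color what we can and flag the rest.
    colors.clear()
    flagged = []
    for v in range(n_nodes):
        free = [c for c in (0, 1, 2) if all(colors.get(u) != c for u in adj[v])]
        if free:
            colors[v] = free[0]
        else:
            flagged.append(v)                    # needs designer intervention / a stitch
    return colors, flagged

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (0, 4)]   # K4 plus a pendant node
coloring, flagged = three_color(edges, n_nodes=5)
print(coloring, flagged)                          # node 3 cannot receive any of the three masks
```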
Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.
NASA Astrophysics Data System (ADS)
Liu, Y.; Li, Y.
2016-12-01
We present a 2D inversion algorithm for frequency-domain marine controlled-source electromagnetic (CSEM) data, based on the regularized Gauss-Newton approach. As the forward solver, our parallel adaptive finite-element forward modeling program is employed. It uses a self-adaptive, goal-oriented grid refinement algorithm in which a finite-element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting that biases refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver MUMPS, we can efficiently compute the electromagnetic fields for multiple sources as well as the parametric sensitivities. We also implement the parallel data-domain decomposition approach of Key and Ovall (2011), with the goal of computing accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out using the Gauss-Newton algorithm, and model perturbations at each iteration step are obtained using the inexact conjugate gradient iteration method. Synthetic test inversions are presented.
The Mixed Finite Element Multigrid Method for Stokes Equations
Muzhinji, K.; Shateyi, S.; Motsa, S. S.
2015-01-01
The stable finite element discretization of the Stokes problem produces a symmetric indefinite system of linear algebraic equations. A variety of iterative solvers have been proposed for such systems in an attempt to construct efficient, fast, and robust solution techniques. This paper investigates one of such iterative solvers, the geometric multigrid solver, to find the approximate solution of the indefinite systems. The main ingredient of the multigrid method is the choice of an appropriate smoothing strategy. This study considers the application of different smoothers and compares their effects in the overall performance of the multigrid solver. We study the multigrid method with the following smoothers: distributed Gauss Seidel, inexact Uzawa, preconditioned MINRES, and Braess-Sarazin type smoothers. A comparative study of the smoothers shows that the Braess-Sarazin smoothers enhance good performance of the multigrid method. We study the problem in a two-dimensional domain using the stable Hood-Taylor Q2-Q1 pair of rectangular finite elements. We also give the main theoretical convergence results. We present the numerical results to demonstrate the efficiency and robustness of the multigrid method and confirm the theoretical results. PMID:25945361
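To make the smoother-plus-coarse-grid-correction idea concrete, here is a minimal two-grid cycle for a 1D Poisson problem with a weighted-Jacobi smoother, in NumPy. It illustrates the generic geometric-multigrid machinery rather than the Stokes-specific Braess-Sarazin or Uzawa smoothers; grid size and parameters are arbitrary.

```python
import numpy as np

def poisson_matrix(n):
    """1D Poisson with Dirichlet BCs on n interior points, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0*np.ones(n)) - np.diag(np.ones(n-1), 1) - np.diag(np.ones(n-1), -1)) / h**2

def jacobi(A, x, b, sweeps=3, omega=2.0/3.0):
    D = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / D
    return x

def two_grid(A, b, x, Ac, P):
    x = jacobi(A, x, b)                 # pre-smoothing
    r = b - A @ x
    rc = P.T @ r                        # restrict residual
    ec = np.linalg.solve(Ac, rc)        # coarse-grid correction (exact on coarse level)
    x = x + P @ ec                      # prolongate and correct
    return jacobi(A, x, b)              # post-smoothing

n = 63                                  # fine grid; coarse grid has (n - 1) // 2 = 31 points
A = poisson_matrix(n)
nc = (n - 1) // 2
P = np.zeros((n, nc))                   # linear-interpolation prolongation
for j in range(nc):
    i = 2*j + 1
    P[i-1, j], P[i, j], P[i+1, j] = 0.5, 1.0, 0.5
Ac = P.T @ A @ P                        # Galerkin coarse operator

b = np.ones(n)
x = np.zeros(n)
for k in range(10):
    x = two_grid(A, b, x, Ac, P)
    print(k, np.linalg.norm(b - A @ x))  # residual drops by a roughly constant factor per cycle
```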
Clay, Zanna; Pople, Sally; Hood, Bruce; Kita, Sotaro
2014-08-01
Research on Nicaraguan Sign Language, created by deaf children, has suggested that young children use gestures to segment the semantic elements of events and linearize them in ways similar to those used in signed and spoken languages. However, it is unclear whether this is due to children's learning processes or to a more general effect of iterative learning. We investigated whether typically developing children, without iterative learning, segment and linearize information. Gestures produced in the absence of speech to express a motion event were examined in 4-year-olds, 12-year-olds, and adults (all native English speakers). We compared the proportions of gestural expressions that segmented semantic elements into linear sequences and that encoded them simultaneously. Compared with adolescents and adults, children reshaped the holistic stimuli by segmenting and recombining their semantic features into linearized sequences. A control task on recognition memory ruled out the possibility that this was due to different event perception or memory. Young children spontaneously bring fundamental properties of language into their communication system. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Soni, Jigensh; Yadav, R. K.; Patel, A.; Gahlaut, A.; Mistry, H.; Parmar, K. G.; Mahesh, V.; Parmar, D.; Prajapati, B.; Singh, M. J.; Bandyopadhyay, M.; Bansal, G.; Pandya, K.; Chakraborty, A.
2013-02-01
Twin Source (TS), an inductively coupled, two-RF-driver-based 180 kW, 1 MHz negative ion source experimental setup, has been initiated at IPR, Gandhinagar, under the Indian program, with the objective of understanding the physics and technology of multi-driver coupling. Twin Source [1] also provides an intermediate platform between the operational ROBIN [2] [5] and the eight-RF-driver-based Indian test facility INTF [3]. The Twin Source experiment requires a central system to provide control, data acquisition and communication interfaces, referred to as TS-CODAC, for which a software architecture similar to the ITER CODAC Core System has been chosen for implementation. The Core System is a software suite for ITER plant system manufacturers to use as a template for the development of their interface with CODAC. The ITER approach, in terms of technology, has been adopted for TS-CODAC so as to develop the expertise needed to build and operate a control system based on the ITER guidelines, since a similar configuration will be implemented for the INTF. This cost-effective approach provides an opportunity to evaluate and learn ITER CODAC technology, documentation, information technology and control-system processes on an operational machine. The conceptual design of the TS-CODAC system has been completed. For complete control of the system, approximately 200 control signals and 152 acquisition signals are needed. In TS-CODAC, the required control loop time is in the range of 5-10 ms; therefore, a PLC (Siemens S7-400) has been chosen for the control system, as suggested in the ITER slow controller catalog. For data acquisition, the maximum sampling interval required is 100 microseconds, and therefore a National Instruments (NI) PXIe system and NI 6259 digitizer cards have been selected, as suggested in the ITER fast controller catalog. This paper presents the conceptual design of the TS-CODAC system, based on the ITER CODAC Core software and the applicable plant-system integration processes.
Torak, L.J.
1993-01-01
A MODular, Finite-Element digital-computer program (MODFE) was developed to simulate steady or unsteady-state, two-dimensional or axisymmetric ground-water flow. Geometric- and hydrologic-aquifer characteristics in two spatial dimensions are represented by triangular finite elements and linear basis functions; one-dimensional finite elements and linear basis functions represent time. Finite-element matrix equations are solved by the direct symmetric-Doolittle method or the iterative modified, incomplete-Cholesky, conjugate-gradient method. Physical processes that can be represented by the model include (1) confined flow, unconfined flow (using the Dupuit approximation), or a combination of both; (2) leakage through either rigid or elastic confining beds; (3) specified recharge or discharge at points, along lines, and over areas; (4) flow across specified-flow, specified-head, or head-dependent boundaries; (5) decrease of aquifer thickness to zero under extreme water-table decline and increase of aquifer thickness from zero as the water table rises; and (6) head-dependent fluxes from springs, drainage wells, leakage across riverbeds or confining beds combined with aquifer dewatering, and evapotranspiration. The report describes procedures for applying MODFE to ground-water-flow problems, simulation capabilities, and data preparation. Guidelines for designing the finite-element mesh and for node numbering and determining band widths are given. Tables are given that reference simulation capabilities to specific versions of MODFE. Examples of data input and model output for different versions of MODFE are provided.
Torak, Lynn J.
1992-01-01
A MODular, Finite-Element digital-computer program (MODFE) was developed to simulate steady or unsteady-state, two-dimensional or axisymmetric ground-water flow. Geometric- and hydrologic-aquifer characteristics in two spatial dimensions are represented by triangular finite elements and linear basis functions; one-dimensional finite elements and linear basis functions represent time. Finite-element matrix equations are solved by the direct symmetric-Doolittle method or the iterative modified, incomplete-Cholesky, conjugate-gradient method. Physical processes that can be represented by the model include (1) confined flow, unconfined flow (using the Dupuit approximation), or a combination of both; (2) leakage through either rigid or elastic confining beds; (3) specified recharge or discharge at points, along lines, and over areas; (4) flow across specified-flow, specified-head, or head-dependent boundaries; (5) decrease of aquifer thickness to zero under extreme water-table decline and increase of aquifer thickness from zero as the water table rises; and (6) head-dependent fluxes from springs, drainage wells, leakage across riverbeds or confining beds combined with aquifer dewatering, and evapotranspiration. The report describes procedures for applying MODFE to ground-water-flow problems, simulation capabilities, and data preparation. Guidelines for designing the finite-element mesh and for node numbering and determining band widths are given. Tables are given that reference simulation capabilities to specific versions of MODFE. Examples of data input and model output for different versions of MODFE are provided.
Increasing reconstruction quality of diffractive optical elements displayed with LC SLM
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Sergey N.
2015-03-01
Phase liquid crystal (LC) spatial light modulators (SLMs) are actively used in various applications. However, the majority of scientific applications require stable phase modulation, which can be hard to achieve with commercially available SLMs because of their consumer origin. The use of a digital voltage addressing scheme leads to temporal phase fluctuations, which result in lower diffraction efficiency and lower reconstruction quality of displayed diffractive optical elements (DOEs). Because these fluctuations are highly periodic, knowledge of them can be used during DOE synthesis to minimize their negative effect. We synthesized DOEs using accurately measured phase fluctuations of the phase LC SLM "HoloEye PLUTO VIS" to minimize their negative impact on the reconstruction of displayed DOEs. Synthesis was conducted with the versatile direct search with random trajectory (DSRT) method as follows. Before DOE synthesis begins, the two-dimensional dependence of the SLM phase shift on the addressed signal level and on the time from frame start is obtained. Then synthesis begins. First, an initial phase distribution is created. Second, a random trajectory for consecutive processing of all DOE elements is generated. Then an iterative process begins: each DOE element in turn has its value changed to the one that provides a better value of the objective criterion, e.g. a lower deviation of the reconstructed image from the original one; if the current element value already provides the best objective criterion value, it is left unchanged. After all elements are processed, the iteration repeats until stagnation is reached. It is demonstrated that applying knowledge of the SLM phase fluctuations in DOE synthesis with the DSRT method leads to a noticeable increase in DOE reconstruction quality.
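A toy version of the direct-search-with-random-trajectory loop described above, for a binary-phase DOE whose far-field reconstruction is modelled by an FFT (a common idealization; the measured SLM fluctuation model from the paper is not included). The target pattern, element count and number of sweeps are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32
levels = np.array([0.0, np.pi])                  # binary phase levels

# Illustrative target: a bright off-axis square in the far field.
target = np.zeros((N, N))
target[8:14, 20:26] = 1.0
target /= np.linalg.norm(target)

def cost(phase):
    recon = np.abs(np.fft.fft2(np.exp(1j * phase)) / N)   # idealized far-field amplitude
    recon /= np.linalg.norm(recon)
    return np.linalg.norm(recon - target)                  # deviation from the target image

phase = rng.choice(levels, size=(N, N))          # initial phase distribution
best = cost(phase)

for sweep in range(5):
    improved = False
    trajectory = rng.permutation(N * N)          # random processing order of DOE elements
    for idx in trajectory:
        r, c = divmod(idx, N)
        old = phase[r, c]
        for lvl in levels:
            if lvl == old:
                continue
            phase[r, c] = lvl
            trial = cost(phase)
            if trial < best:                     # keep the change only if the criterion improves
                best, old, improved = trial, lvl, True
            else:
                phase[r, c] = old
    print(sweep, best)
    if not improved:                             # stagnation: no element changed this sweep
        break
```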
Gaussian-Beam/Physical-Optics Design Of Beam Waveguide
NASA Technical Reports Server (NTRS)
Veruttipong, Watt; Chen, Jacqueline C.; Bathker, Dan A.
1993-01-01
In iterative method of designing wideband beam-waveguide feed for paraboloidal-reflector antenna, Gaussian-beam approximation alternated with more nearly exact physical-optics analysis of diffraction. Includes curved and straight reflectors guiding radiation from feed horn to subreflector. For iterative design calculations, curved mirrors mathematically modeled as thin lenses. Each distance Li is combined length of two straight-line segments intersecting at one of flat mirrors. Method useful for designing beam-waveguide reflectors or mirrors required to have diameters approximately less than 30 wavelengths at one or more intended operating frequencies.
Probabilistic distance-based quantizer design for distributed estimation
NASA Astrophysics Data System (ADS)
Kim, Yoon Hak
2016-12-01
We consider the iterative design of independently operating local quantizers at nodes that must cooperate without interaction to achieve application objectives in distributed estimation systems. As a new cost function we suggest a probabilistic distance between the posterior distribution and its quantized counterpart, expressed as the Kullback-Leibler (KL) divergence. We first show that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the logarithm of the quantized posterior distribution on average, which can be further simplified computationally in our iterative design. We propose an iterative design algorithm that seeks to maximize this simplified version of the quantized posterior distribution, and we discuss how the algorithm converges to a global optimum due to the convexity of the cost function and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use in power-constrained nodes. We finally demonstrate through extensive experiments a clear advantage in estimation performance compared with typical designs and with novel design techniques previously published.
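For reference, a plain generalized Lloyd (Lloyd-Max) iteration for a scalar quantizer designed from training samples is sketched below, using the classical mean-squared-error cost rather than the KL-divergence cost proposed in the paper, so it only illustrates the cyclic design framework being modified. The sample distribution, codebook size and stopping rule are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=20000)          # training measurements at a node
K = 8                                     # number of quantization levels

# Initialize the codebook from sample quantiles.
codebook = np.quantile(samples, (np.arange(K) + 0.5) / K)

for it in range(50):
    # Nearest-codeword partition (minimum-distance encoder).
    idx = np.abs(samples[:, None] - codebook[None, :]).argmin(axis=1)
    # Centroid condition: each codeword moves to the mean of its cell.
    new_codebook = np.array([samples[idx == k].mean() if np.any(idx == k) else codebook[k]
                             for k in range(K)])
    if np.max(np.abs(new_codebook - codebook)) < 1e-6:
        break
    codebook = new_codebook

mse = np.mean((samples - codebook[idx])**2)
print(np.round(codebook, 3), mse)
```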
Too Little Too Soon? Modeling the Risks of Spiral Development
2007-04-30
[Simulation output residue: a plot of "Work started and active" work packages over time (weeks 270-900) for the Requirements, Technology, Design, and Manufacturing phases of Iteration 1 in the JavelinCalibration model.]
NASA Astrophysics Data System (ADS)
Boski, Marcin; Paszke, Wojciech
2015-11-01
This paper deals with the problem of designing an iterative learning control (ILC) algorithm for discrete linear systems using repetitive process stability theory. The resulting design produces a stabilizing output-feedback controller in the time domain and a feedforward controller that guarantees monotonic convergence in the trial-to-trial domain. The results are also extended to a limited-frequency-range design specification. The new design procedure is expressed in terms of linear matrix inequality (LMI) representations, which guarantee the prescribed performance of the ILC scheme. A simulation example is given to illustrate the theoretical developments.
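To illustrate the trial-to-trial convergence such a design targets, here is a minimal simulation of a simple P-type ILC update u_{k+1}(t) = u_k(t) + L*e_k(t+1) applied to a first-order discrete linear plant, in NumPy. The plant, learning gain and trial length are invented for the example; the LMI-based synthesis from the paper is not reproduced.

```python
import numpy as np

# First-order plant: x(t+1) = a*x(t) + b*u(t), y(t+1) = c*x(t+1).
a, b, c = 0.3, 0.5, 1.0
T = 50                                            # trial length
r = np.sin(2 * np.pi * np.arange(1, T + 1) / T)   # reference for y(1..T)
L = 1.0                                           # learning gain; here |1 - L*c*b| = 0.5 < 1

def run_trial(u):
    x, y = 0.0, np.zeros(T)
    for t in range(T):
        x = a * x + b * u[t]
        y[t] = c * x                              # y[t] holds the output at time t+1
    return y

u = np.zeros(T)
for k in range(15):                               # trials
    y = run_trial(u)
    e = r - y
    print(k, np.linalg.norm(e))                   # error norm shrinks from trial to trial
    u = u + L * e                                 # P-type update: u_{k+1}(t) = u_k(t) + L*e_k(t+1)
```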
Strategies to improve electrode positioning and safety in cochlear implants.
Rebscher, S J; Heilmann, M; Bruszewski, W; Talbot, N H; Snyder, R L; Merzenich, M M
1999-03-01
An injection-molded internal supporting rib has been produced to control the flexibility of silicone rubber encapsulated electrodes designed to electrically stimulate the auditory nerve in human subjects with severe to profound hearing loss. The rib molding dies, and molds for silicone rubber encapsulation of the electrode, were designed and machined using AutoCad and MasterCam software packages in a PC environment. After molding, the prototype plastic ribs were iteratively modified based on observations of the performance of the rib/silicone composite insert in a clear plastic model of the human scala tympani cavity. The rib-based electrodes were reliably inserted farther into these models, required less insertion force and were positioned closer to the target auditory neural elements than currently available cochlear implant electrodes. With further design improvements the injection-molded rib may also function to accurately support metal stimulating contacts and wire leads during assembly to significantly increase the manufacturing efficiency of these devices. This method to reliably control the mechanical properties of miniature implantable devices with multiple electrical leads may be valuable in other areas of biomedical device design.
Optimized bio-inspired stiffening design for an engine nacelle.
Lazo, Neil; Vodenitcharova, Tania; Hoffman, Mark
2015-11-04
Structural efficiency is a common engineering goal in which an ideal solution provides a structure with optimized performance at minimized weight, with consideration of material mechanical properties, structural geometry, and manufacturability. This study aims to address this goal by developing high-performance, lightweight, stiff mechanical components through an optimized design created from a biologically inspired template. The approach is implemented in the optimization of rib stiffeners along an aircraft engine nacelle. The helical and angled arrangements of cellulose fibres in plants were chosen as the bio-inspired template. Optimization of total displacement and weight was carried out using a genetic algorithm (GA) coupled with finite element analysis. Iterations showed a gradual convergence in normalized fitness. Displacement was given higher emphasis in the optimization, so the GA tended towards designs with weights near the mass constraint. Dominant features of the resulting designs were helical ribs with rectangular cross-sections having a large height-to-width ratio. Displacement was reduced by 73% compared with an unreinforced nacelle, which is attributed to the geometric features and layout of the stiffeners, while mass was maintained within the constraint.
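A bare-bones genetic algorithm of the kind described, with candidate stiffener layouts encoded as parameter vectors and a fitness dominated by a displacement term under a hard mass constraint, is sketched below using a synthetic surrogate objective in place of the finite element analysis. The encoding, penalty weight and surrogate functions are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Candidate = [helix angle (deg), rib height (mm), rib width (mm)]
LOW, HIGH = np.array([10.0, 5.0, 1.0]), np.array([80.0, 60.0, 10.0])
MASS_LIMIT = 40.0

def surrogate_displacement(x):
    angle, h, w = x                        # stands in for an FE displacement result
    return 100.0 / (1.0 + 0.02 * h**1.5 * w * np.sin(np.radians(angle)))

def surrogate_mass(x):
    _, h, w = x
    return 0.05 * h * w + 5.0

def fitness(x):
    penalty = 1e3 * max(0.0, surrogate_mass(x) - MASS_LIMIT)   # hard mass constraint
    return surrogate_displacement(x) + penalty                  # displacement dominates

pop = rng.uniform(LOW, HIGH, size=(40, 3))
for gen in range(60):
    scores = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(scores)[:20]]                      # truncation selection
    children = []
    for _ in range(20):
        pa, pb = parents[rng.integers(20, size=2)]
        child = np.where(rng.random(3) < 0.5, pa, pb)           # uniform crossover
        child += rng.normal(0.0, 0.05, 3) * (HIGH - LOW)        # Gaussian mutation
        children.append(np.clip(child, LOW, HIGH))
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(x) for x in pop])]
print(best, surrogate_displacement(best), surrogate_mass(best))
```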
Hesford, Andrew J.; Chew, Weng C.
2010-01-01
The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438
Saver, Jeffrey L.; Warach, Steven; Janis, Scott; Odenkirchen, Joanne; Becker, Kyra; Benavente, Oscar; Broderick, Joseph; Dromerick, Alexander W.; Duncan, Pamela; Elkind, Mitchell S. V.; Johnston, Karen; Kidwell, Chelsea S.; Meschia, James F.; Schwamm, Lee
2012-01-01
Background and Purpose The National Institute of Neurological Disorders and Stroke initiated development of stroke-specific Common Data Elements (CDEs) as part of a project to develop data standards for funded clinical research in all fields of neuroscience. Standardizing data elements in translational, clinical and population research in cerebrovascular disease could decrease study start-up time, facilitate data sharing, and promote well-informed clinical practice guidelines. Methods A Working Group of diverse experts in cerebrovascular clinical trials, epidemiology, and biostatistics met regularly to develop a set of Stroke CDEs, selecting among, refining, and adding to existing, field-tested data elements from national registries and funded trials and studies. Candidate elements were revised based on comments from leading national and international neurovascular research organizations and the public. Results The first iteration of the NINDS stroke-specific CDEs comprises 980 data elements spanning nine content areas: 1) Biospecimens and Biomarkers; 2) Hospital Course and Acute Therapies; 3) Imaging; 4) Laboratory Tests and Vital Signs; 5) Long Term Therapies; 6) Medical History and Prior Health Status; 7) Outcomes and Endpoints; 8) Stroke Presentation; 9) Stroke Types and Subtypes. A CDE website provides uniform names and structures for each element, a data dictionary, and template case report forms (CRFs) using the CDEs. Conclusion Stroke-specific CDEs are now available as standardized, scientifically-vetted variable structures to facilitate data collection and data sharing in cerebrovascular patient-oriented research. The CDEs are an evolving resource that will be iteratively improved based on investigator use, new technologies, and emerging concepts and research findings. PMID:22308239
On nonlinear finite element analysis in single-, multi- and parallel-processors
NASA Technical Reports Server (NTRS)
Utku, S.; Melosh, R.; Islam, M.; Salama, M.
1982-01-01
Numerical solution of nonlinear equilibrium problems of structures by means of Newton-Raphson type iterations is reviewed. Each step of the iteration is shown to correspond to the solution of a linear problem, therefore the feasibility of the finite element method for nonlinear analysis is established. Organization and flow of data for various types of digital computers, such as single-processor/single-level memory, single-processor/two-level-memory, vector-processor/two-level-memory, and parallel-processors, with and without sub-structuring (i.e. partitioning) are given. The effect of the relative costs of computation, memory and data transfer on substructuring is shown. The idea of assigning comparable size substructures to parallel processors is exploited. Under Cholesky type factorization schemes, the efficiency of parallel processing is shown to decrease due to the occasional shared data, just as that due to the shared facilities.
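The observation that each Newton-Raphson step reduces to a linear solve can be shown on a tiny nonlinear equilibrium problem: a two-DOF chain of cubic-hardening springs, with the tangent stiffness assembled at every iteration (NumPy). The spring law and loads are invented; no substructuring or parallel processing is attempted.

```python
import numpy as np

k, g = 100.0, 50.0                       # linear stiffness and cubic hardening coefficient
P = np.array([5.0, 15.0])                # external loads at the two DOFs

def spring_force(d):
    return k * d + g * d**3

def spring_tangent(d):
    return k + 3.0 * g * d**2

def residual(u):
    d1, d2 = u[0], u[1] - u[0]           # spring elongations
    return np.array([spring_force(d1) - spring_force(d2) - P[0],
                     spring_force(d2) - P[1]])

def tangent(u):
    d1, d2 = u[0], u[1] - u[0]
    t1, t2 = spring_tangent(d1), spring_tangent(d2)
    return np.array([[t1 + t2, -t2],
                     [-t2,      t2]])

u = np.zeros(2)
for it in range(20):                      # Newton-Raphson: one linear solve per iteration
    R = residual(u)
    if np.linalg.norm(R) < 1e-10:
        break
    u -= np.linalg.solve(tangent(u), R)   # the linear problem of this iteration
print(it, u, residual(u))
```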
NASA Astrophysics Data System (ADS)
Peng, Heng; Liu, Yinghua; Chen, Haofeng
2018-05-01
In this paper, a novel direct method called the stress compensation method (SCM) is proposed for limit and shakedown analysis of large-scale elastoplastic structures. Without needing to solve the specific mathematical programming problem, the SCM is a two-level iterative procedure based on a sequence of linear elastic finite element solutions where the global stiffness matrix is decomposed only once. In the inner loop, the static admissible residual stress field for shakedown analysis is constructed. In the outer loop, a series of decreasing load multipliers are updated to approach to the shakedown limit multiplier by using an efficient and robust iteration control technique, where the static shakedown theorem is adopted. Three numerical examples up to about 140,000 finite element nodes confirm the applicability and efficiency of this method for two-dimensional and three-dimensional elastoplastic structures, with detailed discussions on the convergence and the accuracy of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Koldan, Jelena; Puzyrev, Vladimir; de la Puente, Josep; Houzeaux, Guillaume; Cela, José María
2014-06-01
We present an elaborate preconditioning scheme for Krylov subspace methods which has been developed to improve the performance and reduce the execution time of parallel node-based finite-element (FE) solvers for 3-D electromagnetic (EM) numerical modelling in exploration geophysics. This new preconditioner is based on algebraic multigrid (AMG); it uses different basic relaxation methods, such as Jacobi, symmetric successive over-relaxation (SSOR) and Gauss-Seidel, as smoothers, and the wave front algorithm to create groups, which are used for coarse-level generation. We have implemented and tested this new preconditioner within our parallel nodal FE solver for 3-D forward problems in EM induction geophysics. We have performed a series of experiments for several models with different conductivity structures and characteristics to test the performance of our AMG preconditioning technique when combined with the biconjugate gradient stabilized method. The results show that the more challenging the problem is in terms of conductivity contrasts, ratio between the sizes of grid elements and/or frequency, the more benefit is obtained by using this preconditioner. Compared to other preconditioning schemes, such as diagonal, SSOR and truncated approximate inverse, the AMG preconditioner greatly improves the convergence of the iterative solver for all tested models. Also, in cases where other preconditioners succeed in converging to the desired precision, AMG is able to considerably reduce the total execution time of the forward-problem code, by up to an order of magnitude. Furthermore, the tests have confirmed that our AMG scheme ensures a grid-independent rate of convergence, as well as improved convergence regardless of how large local mesh refinements are. In addition, AMG is designed to be a black-box preconditioner, which makes it easy to use and to combine with different iterative methods. Finally, it has proved to be very practical and efficient in the parallel context.
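The effect a good preconditioner has on a Krylov solver can be reproduced in miniature with SciPy: below, BiCGSTAB is applied to a sparse 2D Laplacian-like system with and without an incomplete-LU preconditioner (ILU is used as a readily available stand-in, since SciPy itself does not ship an algebraic multigrid; the pyamg package would be the closer analogue). Matrix size and ILU parameters are arbitrary.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2D Laplacian on an n-by-n grid (5-point stencil) as a generic test system.
n = 60
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
b = np.ones(A.shape[0])

def solve(A, b, M=None):
    count = [0]
    def cb(xk):
        count[0] += 1
    x, info = spla.bicgstab(A, b, M=M, callback=cb, maxiter=5000)
    return count[0], np.linalg.norm(A @ x - b)

# Unpreconditioned vs ILU-preconditioned BiCGSTAB.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

print("no preconditioner :", solve(A, b))
print("ILU preconditioner:", solve(A, b, M=M))
```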
Designed electromagnetic pulsed therapy: clinical applications.
Gordon, Glen A
2007-09-01
First reduced to science by Maxwell in 1865, electromagnetic technology as therapy received little interest from basic scientists or clinicians until the 1980s. It now promises applications that include mitigation of inflammation (electrochemistry) and stimulation of classes of genes following onset of illness and injury (electrogenomics). The use of electromagnetism to stop inflammation and restore tissue seems a logical phenomenology, that is, stop the inflammation, then upregulate classes of restorative gene loci to initiate healing. Studies in the fields of MRI and NMR have aided the understanding of cell response to low energy EMF inputs via electromagnetically responsive elements. Understanding protein iterations, that is, how they process information to direct energy, we can maximize technology to aid restorative intervention, a promising step forward over current paradigms of therapy.
NASA Astrophysics Data System (ADS)
Perov, N. I.
1985-02-01
A physical-geometrical method was developed for computing the orbits of earth satellites from an inadequate number of angular observations (N ≤ 3). Specifically, a new method has been developed for calculating the elements of Keplerian orbits of unidentified artificial satellites using two angular observations (α_k, S_k, k = 1, 2). The first section gives procedures for determining the topocentric distance to an AES on the basis of one optical observation. This is followed by a description of a very simple method for determining unperturbed orbits using two satellite position vectors and a time interval, which is applicable even in the case of antiparallel AES position vectors; this method is designated the R₂ iterations method.
On the Implementation of Iterative Detection in Real-World MIMO Wireless Systems
2003-12-01
Multiple-input, multiple-output (MIMO) systems permit a remarkable exploitation of the spectrum compared with traditional single-antenna systems... known pilot symbol vectors causes a negligible loss of throughput compared with the hypothetical case of perfect channel knowledge... useful design guidelines for iterative systems, it does not provide any fundamental understanding as to how the design of the detector can improve the
Status of the ITER Electron Cyclotron Heating and Current Drive System
NASA Astrophysics Data System (ADS)
Darbos, Caroline; Albajar, Ferran; Bonicelli, Tullio; Carannante, Giuseppe; Cavinato, Mario; Cismondi, Fabio; Denisov, Grigory; Farina, Daniela; Gagliardi, Mario; Gandini, Franco; Gassmann, Thibault; Goodman, Timothy; Hanson, Gregory; Henderson, Mark A.; Kajiwara, Ken; McElhaney, Karen; Nousiainen, Risto; Oda, Yasuhisa; Omori, Toshimichi; Oustinov, Alexander; Parmar, Darshankumar; Popov, Vladimir L.; Purohit, Dharmesh; Rao, Shambhu Laxmikanth; Rasmussen, David; Rathod, Vipal; Ronden, Dennis M. S.; Saibene, Gabriella; Sakamoto, Keishi; Sartori, Filippo; Scherer, Theo; Singh, Narinder Pal; Strauß, Dirk; Takahashi, Koji
2016-01-01
The electron cyclotron (EC) heating and current drive (H&CD) system developed for ITER consists of 12 sets of high-voltage power supplies feeding 24 gyrotrons connected through 24 transmission lines (TLs) to five launchers, four located in upper ports and one at the equatorial level. Nearly all procurements are in-kind, following the general ITER philosophy, and will come from Europe, India, Japan, Russia and the USA. The full system is designed to couple 20 MW of the 24 MW of generated power to the plasma, at a frequency of 170 GHz, for various physics applications such as plasma start-up, central H&CD and magnetohydrodynamic (MHD) activity control. The design extends present-day technology toward high-power continuous operation, which represents a large step forward compared to the present state of the art. The ITER EC system will be a stepping stone to future EC systems for DEMO and beyond.
NASA Astrophysics Data System (ADS)
Bittner-Rohrhofer, K.; Humer, K.; Weber, H. W.
The windings of the superconducting magnet coils for the ITER-FEAT fusion device are affected by high mechanical stresses at cryogenic temperatures and by a radiation environment, which impose certain constraints especially on the insulating materials. A glass fiber reinforced plastic (GFRP) laminate, which consists of Kapton/R-glass-fiber reinforcement tapes, vacuum-impregnated in a DGEBA epoxy system, was used for the European toroidal field model coil turn insulation of ITER. In order to assess its mechanical properties under the actual operating conditions of ITER-FEAT, cryogenic (77 K) static tensile tests and tension-tension fatigue measurements were done before and after irradiation to a fast neutron fluence of 1×10²² m⁻² (E > 0.1 MeV), i.e. the ITER-FEAT design fluence level. We find that the mechanical strength and the fracture behavior of this GFRP are strongly influenced by the winding direction of the tape and by the radiation induced delamination process. In addition, the composite swells by 3%, forming bubbles inside the laminate, and loses weight (1.4%) at the design fluence.
Hospital influenza pandemic stockpiling needs: A computer simulation.
Abramovich, Mark N; Hershey, John C; Callies, Byron; Adalja, Amesh A; Tosh, Pritish K; Toner, Eric S
2017-03-01
A severe influenza pandemic could overwhelm hospitals, but planning guidance that accounts for the dynamic interrelationships between planning elements is lacking. We developed a methodology to calculate pandemic supply needs based on operational considerations in hospitals and then tested the methodology at Mayo Clinic in Rochester, MN. We upgraded a previously designed computer modeling tool and input carefully researched resource data from the hospital to run 10,000 Monte Carlo simulations using various combinations of variables to determine resource needs across a spectrum of scenarios. Of the 10,000 iterations, 1,315 fell within the parameters defined by our simulation design and logical constraints. From these valid iterations, we projected supply requirements by percentile for key supplies, pharmaceuticals, and personal protective equipment needed in a severe pandemic. We projected supply needs for a range of scenarios that use up to 100% of Mayo Clinic-Rochester's surge capacity of beds and ventilators. The results indicate that there are diminishing patient care benefits from stockpiling at the high end of the range, but that having some stockpile of critical resources, even a relatively modest one, is most important. We were able to display the probabilities of needing various supply levels across a spectrum of scenarios. The tool could be used to model many other hospital preparedness issues, but validation in other settings is needed. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
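To make the Monte Carlo workflow concrete, the toy sketch below draws scenario parameters at random, filters iterations by a logical constraint, and reports supply percentiles; every number in it is a hypothetical placeholder rather than the study's researched hospital data.

```python
import numpy as np

rng = np.random.default_rng(42)
N_ITER = 10_000
SURGE_BEDS = 200              # assumed surge bed capacity (hypothetical)
masks_per_patient_day = 12    # assumed burn rate (hypothetical)

valid_totals = []
for _ in range(N_ITER):
    attack_rate = rng.uniform(0.15, 0.35)          # fraction of population infected
    admissions = rng.poisson(attack_rate * 2000)   # hypothetical catchment scaling
    mean_los = rng.uniform(4.0, 10.0)              # length of stay in days
    peak_census = admissions * mean_los / 60.0     # assumed 60-day pandemic wave
    # logical constraint analogous to the study's "valid iteration" filter
    if peak_census <= SURGE_BEDS:
        patient_days = peak_census * 60.0
        valid_totals.append(patient_days * masks_per_patient_day)

valid_totals = np.array(valid_totals)
for p in (50, 75, 90, 95):
    print(f"{p}th percentile mask demand: {np.percentile(valid_totals, p):,.0f}")
```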
Finite element procedures for coupled linear analysis of heat transfer, fluid and solid mechanics
NASA Technical Reports Server (NTRS)
Sutjahjo, Edhi; Chamis, Christos C.
1993-01-01
Coupled finite element formulations for fluid mechanics, heat transfer, and solid mechanics are derived from the conservation laws for energy, mass, and momentum. To model the physics of interactions among the participating disciplines, the linearized equations are coupled by combining domain and boundary coupling procedures. An iterative numerical solution strategy, with partitioning of the temporal discretization, is presented to solve the coupled equations.
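As a rough illustration of an iterative, partitioned solution of coupled linearized equations, the sketch below alternates between two small synthetic subsystems in a Gauss-Seidel (staggered) fashion; the matrices and the coupling operator are assumptions for illustration, not the formulation derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# Small synthetic subsystem operators (kept symmetric positive definite).
K_thermal = np.eye(n) * 4.0 + 0.1 * rng.standard_normal((n, n))
K_thermal = 0.5 * (K_thermal + K_thermal.T)
K_struct = np.eye(n) * 4.0 + 0.1 * rng.standard_normal((n, n))
K_struct = 0.5 * (K_struct + K_struct.T)
C = 0.2 * np.eye(n)                      # assumed weak coupling operator
f_T, f_u = np.ones(n), np.ones(n)        # thermal and structural load vectors

T = np.zeros(n)                          # temperature-like unknowns
u = np.zeros(n)                          # displacement-like unknowns
for it in range(100):
    # Staggered (Gauss-Seidel) pass: each field sees the other's latest values.
    T_new = np.linalg.solve(K_thermal, f_T - C @ u)
    u_new = np.linalg.solve(K_struct, f_u - C @ T_new)
    change = max(np.linalg.norm(T_new - T), np.linalg.norm(u_new - u))
    T, u = T_new, u_new
    if change < 1e-12:
        print(f"staggered iteration converged after {it + 1} passes")
        break
```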
Finite element analysis of wrinkling membranes
NASA Technical Reports Server (NTRS)
Miller, R. K.; Hedgepeth, J. M.; Weingarten, V. I.; Das, P.; Kahyai, S.
1984-01-01
The development of a nonlinear numerical algorithm for the analysis of stresses and displacements in partly wrinkled flat membranes, and its implementation on the SAP VII finite-element code are described. A comparison of numerical results with exact solutions of two benchmark problems reveals excellent agreement, with good convergence of the required iterative procedure. An exact solution of a problem involving axisymmetric deformations of a partly wrinkled shallow curved membrane is also reported.
Eight channel transmit array volume coil using on-coil radiofrequency current sources
Kurpad, Krishna N.; Boskamp, Eddy B.
2014-01-01
Background At imaging frequencies associated with high-field MRI, the combined effects of increased load-coil interaction and shortened wavelength result in degradation of circular polarization and B1 field homogeneity in the imaging volume. Radio frequency (RF) shimming is known to mitigate the problem of B1 field inhomogeneity. Transmit arrays with well decoupled transmitting elements enable accurate B1 field pattern control using simple, non-iterative algorithms. Methods An eight channel transmit array was constructed. Each channel consisted of a transmitting element driven by a dedicated on-coil RF current source. The coil current distributions of characteristic transverse electromagnetic (TEM) coil resonant modes were non-iteratively set up on each transmitting element, and 3T MRI images of a mineral oil phantom were obtained. Results B1 field patterns of several linear and quadrature TEM coil resonant modes that typically occur at different resonant frequencies were replicated at 128 MHz without having to retune the transmit array. The generated B1 field patterns agreed well with simulation in most cases. Conclusions Independent control of current amplitude and phase on each transmitting element was demonstrated. The transmit array with on-coil RF current sources enables B1 field shimming in a simple and predictable manner. PMID:24834418
Evolution of US DOE Performance Assessments Over 20 Years - 13597
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suttora, Linda C.; Seitz, Roger R.
2013-07-01
Performance assessments (PAs) have been used for many years for the analysis of post-closure hazards associated with a radioactive waste disposal facility and to provide a reasonable expectation of the ability of the site and facility design to meet objectives for the protection of members of the public and the environment. The use of PA to support decision-making for LLW disposal facilities has been mandated in United States Department of Energy (US DOE) directives governing radioactive waste management since 1988 (currently DOE Order 435.1, Radioactive Waste Management). Prior to that time, PAs were also used in a less formal role. Over the past 20+ years, the US DOE approach to conduct, review and apply PAs has evolved into an efficient, rigorous and mature process that includes specific requirements for continuous improvement and independent reviews. The PA process has evolved through refinement of a graded and iterative approach designed to help focus efforts on those aspects of the problem expected to have the greatest influence on the decision being made. Many of the evolutionary changes to the PA process are linked to the refinement of the PA maintenance concept that has proven to be an important element of US DOE PA requirements in the context of supporting decision-making for safe disposal of LLW. The PA maintenance concept is central to the evolution of the graded and iterative philosophy and has helped to drive the evolution of PAs from a deterministic compliance calculation into a systematic approach that helps to focus on critical aspects of the disposal system in a manner designed to provide a more informed basis for decision-making throughout the life of a disposal facility (e.g., monitoring, research and testing, waste acceptance criteria, design improvements, data collection, model refinements). A significant evolution in PA modeling has been associated with improved use of uncertainty and sensitivity analysis techniques to support efficient implementation of the graded and iterative approach. Rather than attempt to exactly predict the migration of radionuclides in a disposal unit, the best PAs have evolved into tools that provide a range of results to guide decision-makers in planning the most efficient, cost effective, and safe disposal of radionuclides. (authors)
Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry
NASA Technical Reports Server (NTRS)
Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.
2004-01-01
Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial and error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either gives a new local optimum and/or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with increasing dimensionality. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.
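The surrogate-driven design loop sketched below mirrors the database approach described above on a one-dimensional toy problem: fit a surrogate to sampled designs, optimize the surrogate, evaluate the candidate with the "high-fidelity" model, and grow the database. The objective, bounds, and cubic RBF surrogate are illustrative assumptions, not any specific tool used in the work.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize_scalar

def high_fidelity(x):
    """Placeholder for an expensive aerodynamic evaluation."""
    return np.sin(3.0 * x) + 0.5 * (x - 0.6) ** 2

lo, hi = 0.0, 2.0
X = np.linspace(lo, hi, 5)               # initial database of designs
Y = high_fidelity(X)

for it in range(10):
    # Fit a surrogate to the database (points must be 2-D for RBFInterpolator).
    surrogate = RBFInterpolator(X[:, None], Y, kernel='cubic')
    # Search the cheap surrogate for a candidate optimum.
    res = minimize_scalar(lambda x: surrogate(np.atleast_2d(x))[0],
                          bounds=(lo, hi), method='bounded')
    x_new = res.x
    if np.min(np.abs(X - x_new)) < 1e-6:
        break                            # candidate already in the database; stop
    # Evaluate with the "high-fidelity" model and grow the database, which either
    # improves the optimum or refines the surrogate near it for the next iteration.
    X = np.append(X, x_new)
    Y = np.append(Y, high_fidelity(x_new))

best = X[np.argmin(Y)]
print(f"best design after {it + 1} iterations: x = {best:.4f}, J = {Y.min():.4f}")
```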
NASA Astrophysics Data System (ADS)
Huang, Rong; Limburg, Karin; Rohtla, Mehis
2017-05-01
X-ray fluorescence computed tomography is often used to measure trace element distributions within low-Z samples, using algorithms capable of X-ray absorption correction when sample self-absorption is not negligible. Its reconstruction is more complicated than that of transmission tomography, and it is therefore not widely used. We describe in this paper a very practical iterative method that uses widely available transmission tomography reconstruction software for fluorescence tomography. With this method, sample self-absorption can be corrected not only for the absorption within the measured layer but also for the absorption by material beyond that layer. By combining tomography with analysis for scanning X-ray fluorescence microscopy, absolute concentrations of trace elements can be obtained. By using widely shared software, we not only minimized coding and took advantage of the computational efficiency of the fast Fourier transform in transmission tomography software, but also gained access to the well-developed data processing tools that come with well-known and reliable software packages. The convergence of the iterations was also carefully studied for fluorescence of different attenuation lengths. As an example, fish eye lenses can provide valuable information about fish life history and the environmental conditions the fish endured. Given the lens's spherical shape, and sometimes the short distance from sample to detector needed to detect low-concentration trace elements, its tomography data are affected by absorption related to material beyond the measured layer but can be reconstructed well with our method. Fish eye lens tomography results are compared with sliced-lens 2D fluorescence mapping, with good agreement and with tomography providing better spatial resolution.
Huber, A; Brezinsek, S; Mertens, Ph; Schweer, B; Sergienko, G; Terra, A; Arnoux, G; Balshaw, N; Clever, M; Edlingdon, T; Egner, S; Farthing, J; Hartl, M; Horton, L; Kampf, D; Klammer, J; Lambertz, H T; Matthews, G F; Morlock, C; Murari, A; Reindl, M; Riccardo, V; Samm, U; Sanders, S; Stamp, M; Williams, J; Zastrow, K D; Zauner, C
2012-10-01
A new endoscope with optimised divertor view has been developed in order to survey and monitor the emission of specific impurities such as tungsten and the remaining carbon as well as beryllium in the tungsten divertor of JET after the implementation of the ITER-like wall in 2011. The endoscope is a prototype for testing an ITER relevant design concept based on reflective optics only. It may be subject to high neutron fluxes as expected in ITER. The operating wavelength range, from 390 nm to 2500 nm, allows the measurements of the emission of all expected impurities (W I, Be II, C I, C II, C III) with high optical transmittance (≥ 30% in the designed wavelength range) as well as high spatial resolution that is ≤ 2 mm at the object plane and ≤ 3 mm for the full depth of field (± 0.7 m). The new optical design includes options for in situ calibration of the endoscope transmittance during the experimental campaign, which allows the continuous tracing of possible transmittance degradation with time due to impurity deposition and erosion by fast neutral particles. In parallel to the new optical design, a new type of possibly ITER relevant shutter system based on pneumatic techniques has been developed and integrated into the endoscope head. The endoscope is equipped with four digital CCD cameras, each combined with two filter wheels for narrow band interference and neutral density filters. Additionally, two protection cameras in the λ > 0.95 μm range have been integrated in the optical design for the real time wall protection during the plasma operation of JET.
Numerical analysis of modified Central Solenoid insert design
Khodak, Andrei; Martovetsky, Nicolai; Smirnov, Aleksandre; ...
2015-06-21
The United States ITER Project Office (USIPO) is responsible for fabrication of the Central Solenoid (CS) for the ITER project. The ITER machine is currently under construction by seven parties in Cadarache, France. The CS Insert (CSI) project should provide a verification of the conductor performance under relevant conditions of temperature, field, current and mechanical strain. The USIPO designed the CSI that will be tested at the Central Solenoid Model Coil (CSMC) Test Facility at JAEA, Naka. To validate the modified design we performed three-dimensional numerical simulations using a coupled solver for simultaneous structural, thermal and electromagnetic analysis. The thermal and electromagnetic simulations supported the structural calculations by providing the necessary loads and strains. According to the current analysis, the design of the modified coil satisfies the ITER magnet structural design criteria for the following conditions: (1) room temperature, no current; (2) temperature 4 K, no current; (3) temperature 4 K, current 60 kA direct charge; and (4) temperature 4 K, current 60 kA reverse charge. A fatigue life assessment analysis is performed for the alternating conditions of temperature 4 K, no current, and temperature 4 K, current 45 kA direct charge. Results of the fatigue analysis show that parts of the coil assembly can be qualified for up to 1 million cycles. Distributions of the current sharing temperature (TCS) in the superconductor were obtained from the numerical results using a parameterization of the critical surface in a form similar to that proposed for ITER. Lastly, special APDL scripts were developed for ANSYS allowing a one-dimensional representation of TCS along the cable, as well as three-dimensional fields of TCS in the superconductor material. Published by Elsevier B.V.
Status of the 1 MeV Accelerator Design for ITER NBI
NASA Astrophysics Data System (ADS)
Kuriyama, M.; Boilson, D.; Hemsworth, R.; Svensson, L.; Graceffa, J.; Schunke, B.; Decamps, H.; Tanaka, M.; Bonicelli, T.; Masiello, A.; Bigi, M.; Chitarin, G.; Luchetta, A.; Marcuzzi, D.; Pasqualotto, R.; Pomaro, N.; Serianni, G.; Sonato, P.; Toigo, V.; Zaccaria, P.; Kraus, W.; Franzen, P.; Heinemann, B.; Inoue, T.; Watanabe, K.; Kashiwagi, M.; Taniguchi, M.; Tobari, H.; De Esch, H.
2011-09-01
The beam source of the neutral beam heating/current drive system for ITER must accelerate a negative ion beam of 40 A of D⁻ to 1 MeV for 3600 s. In order to realize this beam source, design and R&D work is being carried out in many institutions under the coordination of the ITER Organization. Development of the key issues of the ion source, including source plasma uniformity and suppression of co-extracted electrons in D beam operation, also after long beam durations of over a few hundred seconds, is progressing mainly at IPP with the BATMAN, MANITU and RADI facilities. In the near future, ELISE, which will test a half-size version of the ITER ion source, will start operation in 2011, and then SPIDER, which will demonstrate negative ion production and extraction with the same size and structure as the ITER ion source, will start operation in 2014 as part of the NBTF. Development of the accelerator is progressing mainly at JAEA with the MeV test facility, and computer simulation of the beam optics is also being developed at JAEA, CEA and RFX. The full ITER heating and current drive beam performance will be demonstrated in MITICA, which will start operation in 2016 as part of the NBTF.
Iterative Neighbour-Information Gathering for Ranking Nodes in Complex Networks
NASA Astrophysics Data System (ADS)
Xu, Shuang; Wang, Pei; Lü, Jinhu
2017-01-01
Designing node influence ranking algorithms can provide insights into network dynamics, functions and structures. Increasing evidence reveals that a node's spreading ability largely depends on its neighbours. We introduce an iterative neighbour-information gathering (Ing) process with three parameters, including a transformation matrix, a priori information and an iteration time. The Ing process iteratively combines a priori information from neighbours via the transformation matrix, and iteratively assigns an Ing score to each node to evaluate its influence. The algorithm is applicable to any type of network, and includes some traditional centralities as special cases, such as degree, semi-local and LeaderRank. The Ing process converges in strongly connected networks, with a speed governed by the two largest eigenvalues of the transformation matrix. Interestingly, the eigenvector centrality corresponds to a limit case of the algorithm. By comparing with eight renowned centralities, simulations of the susceptible-infected-removed (SIR) model on real-world networks reveal that the Ing can offer more exact rankings, even without a priori information. We also observe that an optimal iteration time always exists that best characterizes node influence. The proposed algorithms bridge the gaps among some existing measures, and may have potential applications in infectious disease control and the design of optimal information spreading strategies.
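A bare-bones version of such an iterative neighbour-information gathering score is sketched below on a toy directed graph; the choice of the adjacency matrix as the transformation matrix, the priors, and the iteration count are assumptions for illustration, not the authors' exact parameterization.

```python
import numpy as np

# Adjacency matrix of a small directed network (hypothetical example).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 0]], dtype=float)

def ing_scores(A, prior=None, n_iter=3):
    """Iteratively gather neighbour information: s <- A^T s, seeded by a prior."""
    n = A.shape[0]
    s = np.ones(n) if prior is None else np.asarray(prior, dtype=float)
    for _ in range(n_iter):
        s = A.T @ s            # each node accumulates its in-neighbours' scores
        s = s / s.sum()        # normalize so scores stay comparable across iterations
    return s

print(ing_scores(A))                       # uniform prior
print(ing_scores(A, prior=A.sum(axis=0)))  # in-degree used as the prior
```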
Finite elements and the method of conjugate gradients on a concurrent processor
NASA Technical Reports Server (NTRS)
Lyzenga, G. A.; Raefsky, A.; Hager, G. H.
1985-01-01
An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90 percent for sufficiently large problems.
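The sketch below illustrates the core idea of conjugate gradients with an element-based decomposition: the matrix-vector product is accumulated element by element without assembling a global matrix, which is what makes distributing the elements across processors natural. It is a serial 1-D toy under assumed element stiffnesses and boundary conditions, not the Caltech Hypercube implementation.

```python
import numpy as np

n_elem = 50
n_node = n_elem + 1
k_e = np.array([[1.0, -1.0], [-1.0, 1.0]])        # local stiffness of each bar element
elems = [(e, e + 1) for e in range(n_elem)]        # element-to-node connectivity

def matvec(x):
    """y = K x computed without ever assembling the global stiffness matrix."""
    y = np.zeros_like(x)
    for (i, j) in elems:                            # on a MIMD machine, these elements
        y[[i, j]] += k_e @ x[[i, j]]                # would be split across processors
    return y

f = np.zeros(n_node); f[-1] = 1.0                   # unit end load, node 0 fixed
x = np.zeros(n_node)
r = f - matvec(x); r[0] = 0.0                       # enforce the fixed boundary node
p = r.copy()
for _ in range(200):
    Ap = matvec(p); Ap[0] = 0.0
    alpha = (r @ r) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10:
        break
    p = r_new + (r_new @ r_new) / (r @ r) * p
    r = r_new
print("tip displacement:", x[-1])                   # analytic answer for this toy: 50
```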
Finite elements and the method of conjugate gradients on a concurrent processor
NASA Technical Reports Server (NTRS)
Lyzenga, G. A.; Raefsky, A.; Hager, B. H.
1984-01-01
An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90% for sufficiently large problems.
Fourier analysis of finite element preconditioned collocation schemes
NASA Technical Reports Server (NTRS)
Deville, Michel O.; Mund, Ernest H.
1990-01-01
The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.
Radiofrequency pulse design using nonlinear gradient magnetic fields.
Kopanoglu, Emre; Constable, R Todd
2015-09-01
An iterative k-space trajectory and radiofrequency (RF) pulse design method is proposed for excitation using nonlinear gradient magnetic fields. The spatial encoding functions (SEFs) generated by nonlinear gradient fields are linearly dependent in Cartesian coordinates. Left uncorrected, this may lead to flip angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a matching pursuit algorithm, and the RF pulse is designed using a conjugate gradient algorithm. Three variants of the proposed approach are given: the full algorithm, a computationally cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. The method is compared with other iterative (matching pursuit and conjugate gradient) and noniterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity. An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest. © 2014 Wiley Periodicals, Inc.
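The sketch below illustrates the two ingredients named above, greedy (matching-pursuit) selection of spatial encoding functions followed by a least-squares fit of the excitation weights, on a one-dimensional toy problem; the SEF dictionary and target profile are assumptions, not the nonlinear-gradient encoding model of the paper.

```python
import numpy as np

n_vox, n_sef = 256, 60
x = np.linspace(-1, 1, n_vox)
# Toy dictionary of candidate SEFs (here: low-order polynomials and sinusoids).
dictionary = np.column_stack(
    [x ** p for p in range(6)] + [np.sin(k * np.pi * x) for k in range(1, n_sef - 5)]
)
target = (np.abs(x) < 0.3).astype(float)           # desired flip-angle profile (a.u.)

selected, residual = [], target.copy()
for _ in range(12):                                 # pick 12 SEFs greedily
    corr = np.abs(dictionary.T @ residual) / np.linalg.norm(dictionary, axis=0)
    k = int(np.argmax(corr))
    if k not in selected:
        selected.append(k)
    A = dictionary[:, selected]
    w, *_ = np.linalg.lstsq(A, target, rcond=None)  # a CG solver could replace lstsq here
    residual = target - A @ w

print("selected SEF indices:", selected)
print("residual norm:", np.linalg.norm(residual))
```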
Long-pulse stability limits of the ITER baseline scenario
Jackson, G. L.; Luce, T. C.; Solomon, W. M.; ...
2015-01-14
DIII-D has made significant progress in developing the techniques required to operate ITER, and in understanding their impact on performance when integrated into operational scenarios at ITER relevant parameters. We demonstrated long duration plasmas in DIII-D, stable to m/n = 2/1 tearing modes (TMs), with an ITER-similar shape and I_p/aB_T, that evolve to stationary conditions. The operating region most likely to reach stable conditions has normalized pressure β_N ≈ 1.9–2.1 (compared to the ITER baseline design of 1.6–1.8), and a Greenwald normalized density fraction f_GW = 0.42–0.70 (the ITER design is f_GW ≈ 0.8). The evolution of the current profile, using internal inductance (l_i) as an indicator, is found to produce a smaller fraction of stable pulses when l_i is increased above ≈ 1.1 at the beginning of the β_N flattop. Stable discharges with co-neutral beam injection (NBI) are generally accompanied by a benign n=2 MHD mode. However, if this mode exceeds ≈ 10 G, the onset of an m/n=2/1 tearing mode occurs with a loss of confinement. In addition, stable operation with low applied external torque, at or below the extrapolated value expected for ITER, has also been demonstrated. With electron cyclotron (EC) injection, the operating region of stable discharges has been further extended at ITER-equivalent levels of torque, and to ELM-free discharges at higher torque but with the addition of an n=3 magnetic perturbation from the DIII-D internal coil set. Lastly, the characterization of the ITER baseline scenario evolution for long pulse duration, its extension to more ITER-relevant values of torque and electron heating, and the suppression of ELMs have significantly advanced the physics basis of this scenario, although significant effort remains in the simultaneous integration of all these requirements.
NASA Technical Reports Server (NTRS)
Liou, J.; Tezduyar, T. E.
1990-01-01
Adaptive implicit-explicit (AIE), grouped element-by-element (GEBE), and generalized minimum residual (GMRES) solution techniques for incompressible flows are combined. In this approach, the GEBE and GMRES iteration methods are employed to solve the equation systems resulting from the implicitly treated elements, and therefore no direct solution effort is involved. The benchmarking results demonstrate that this approach can substantially reduce the CPU time and memory requirements in large-scale flow problems. Although the description of the concepts and the numerical demonstration are based on incompressible flows, the approach presented here is applicable to a larger class of problems in computational mechanics.
MILAMIN 2 - Fast MATLAB FEM solver
NASA Astrophysics Data System (ADS)
Dabrowski, Marcin; Krotkiewski, Marcin; Schmid, Daniel W.
2013-04-01
MILAMIN is a free and efficient MATLAB-based two-dimensional FEM solver utilizing unstructured meshes [Dabrowski et al., G-cubed (2008)]. The code consists of steady-state thermal diffusion and incompressible Stokes flow solvers implemented in approximately 200 lines of native MATLAB code. The brevity makes the code easily customizable. An important quality of MILAMIN is speed - it can handle millions of nodes within minutes on one CPU core of a standard desktop computer, and is faster than many commercial solutions. The new MILAMIN 2 allows three-dimensional modeling. It is designed as a set of functional modules that can be used as building blocks for efficient FEM simulations using MATLAB. The utilities are largely implemented as native MATLAB functions. For performance critical parts we use MUTILS - a suite of compiled MEX functions optimized for shared memory multi-core computers. The most important features of MILAMIN 2 are: (1) a modular approach to defining, tracking, and discretizing the geometry of the model; (2) interfaces to external mesh generators (e.g., Triangle, Fade2d, T3D) and mesh utilities (e.g., element type conversion, fast point location, boundary extraction); (3) efficient computation of the stiffness matrix for a wide range of element types, anisotropic materials and three-dimensional problems; (4) fast global matrix assembly using a dedicated MEX function; (5) automatic integration rules; (6) flexible prescription (spatial, temporal, and field functions) and efficient application of Dirichlet, Neumann, and periodic boundary conditions; (7) treatment of transient and non-linear problems; (8) various iterative and multi-level solution strategies; (9) post-processing tools (e.g., numerical integration); and (10) visualization primitives using MATLAB, and VTK export functions. We provide a large number of examples that show how to implement a custom FEM solver using the MILAMIN 2 framework. The examples are MATLAB scripts of increasing complexity that address a given technical topic (e.g., creating meshes, reordering nodes, applying boundary conditions), a given numerical topic (e.g., using various solution strategies, non-linear iterations), or that present a fully-developed solver designed to address a scientific topic (e.g., performing Stokes flow simulations in a synthetic porous medium). References: Dabrowski, M., M. Krotkiewski, and D. W. Schmid (2008), MILAMIN: MATLAB-based finite element method solver for large problems, Geochem. Geophys. Geosyst., 9, Q04030.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henning, C.
This report contains papers on the following topics: conceptual design; radiation damage of ITER magnet systems; insulation system of the magnets; critical current density and strain sensitivity; toroidal field coil structural analysis; stress analysis for the ITER central solenoid; and volt-second capabilities and PF magnet configurations.
Transmission of electrons inside the cryogenic pumps of ITER injector.
Veltri, P; Sartori, E
2016-02-01
Large cryogenic pumps are installed in the vessels of large neutral beam injectors (NBIs) used to heat the plasma in nuclear fusion experiments. The operation of such pumps can be compromised by the presence of stray secondary electrons that are generated along the beam path. In this paper, we present a numerical model to analyze the propagation of the electrons inside the pump. The aim of the study is to quantify the power load on the active pump elements, via evaluation of the transmission probabilities across the domain of the pump. These are obtained starting from large datasets of particle trajectories obtained by numerical means. The transmission probability of the electrons across the domain is calculated for the NBI of ITER and for its prototype, the Megavolt ITER Injector and Concept Advancement (MITICA), and the results are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Yunfeng; Bai, Zhaojun
2013-12-15
The iterative diagonalization of a sequence of large ill-conditioned generalized eigenvalue problems is a computational bottleneck in quantum mechanical methods employing a nonorthogonal basis for ab initio electronic structure calculations. We propose a hybrid preconditioning scheme to effectively combine global and locally accelerated preconditioners for rapid iterative diagonalization of such eigenvalue problems. In partition-of-unity finite-element (PUFE) pseudopotential density-functional calculations, employing a nonorthogonal basis, we show that the hybrid preconditioned block steepest descent method is a cost-effective eigensolver, outperforming current state-of-the-art global preconditioning schemes, and comparably efficient for the ill-conditioned generalized eigenvalue problems produced by PUFE as the locally optimal block preconditioned conjugate-gradient method for the well-conditioned standard eigenvalue problems produced by planewave methods.
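For a generalized eigenproblem A x = λ B x of the kind described, a preconditioned iterative diagonalization can be sketched with SciPy's LOBPCG as below; the synthetic matrices and the simple Jacobi preconditioner are stand-ins for illustration, not the hybrid scheme proposed in the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg, LinearOperator

n, k = 400, 6
# Synthetic stiffness-like A and overlap-like B (assumed forms).
A = sp.diags([np.arange(1, n + 1, dtype=float)], [0], format='csr')
B = sp.diags([np.full(n, 2.0), np.full(n - 1, 0.5), np.full(n - 1, 0.5)],
             [0, -1, 1], format='csr')

# A simple Jacobi preconditioner standing in for the hybrid scheme.
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)

X0 = np.random.default_rng(0).standard_normal((n, k))
vals, vecs = lobpcg(A, X0, B=B, M=M, tol=1e-6, maxiter=200, largest=False)
print("lowest generalized eigenvalues:", np.sort(vals))
```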
Status of the ITER Cryodistribution
NASA Astrophysics Data System (ADS)
Chang, H.-S.; Vaghela, H.; Patel, P.; Rizzato, A.; Cursan, M.; Henry, D.; Forgeas, A.; Grillot, D.; Sarkar, B.; Muralidhara, S.; Das, J.; Shukla, V.; Adler, E.
2017-12-01
Since the conceptual design of the ITER Cryodistribution, many modifications have been applied, owing both to system optimization and to improved knowledge of the clients' requirements. Process optimizations in the Cryoplant resulted in component simplifications, whereas increased heat loads in some of the superconducting magnet systems required a more complicated process configuration; standardization of the component arrangement also made it possible to remove one cold box. Another cold box, planned for redundancy, has been removed owing to the modification of the Tokamak in-Cryostat piping layout. In this paper we summarize the present design status and component configuration of the ITER Cryodistribution, with all implemented changes aimed at process optimization and simplification as well as operational reliability, stability and flexibility.
Summary of ECE presentations at EC-18
Taylor, G.
2015-03-12
There were nine ECE and one EBE presentation at EC-18. Four of the presentations were on various aspects of ECE on ITER. The ITER ECE diagnostic has entered an important detailed preliminary design phase and faces several design challenges in the next 2-3 years. Most of the other ECE presentations at the workshop were focused on applications of ECE diagnostics to plasma measurements, rather than improvements in technology, although it was apparent that heterodyne receiver technology continues to improve. CECE, ECE imaging and EBE imaging are increasingly providing valuable insights into plasma behavior that is important to understand if future burning plasma devices, such as ITER, FNSF and DEMO, are to be successful.
Conceptual design of ACB-CP for ITER cryogenic system
NASA Astrophysics Data System (ADS)
Jiang, Yongcheng; Xiong, Lianyou; Peng, Nan; Tang, Jiancheng; Liu, Liqiang; Zhang, Liang
2012-06-01
ACB-CP (Auxiliary Cold Box for Cryopumps) is used to supply the cryopump system with the necessary cryogens in the ITER (International Thermonuclear Experimental Reactor) cryogenic distribution system. The conceptual design of the ACB-CP covers thermo-hydraulic analysis, 3D structural design and strength checking. Through the thermo-hydraulic analysis, the main specifications of the process valves, pressure safety valves, pipes and heat exchangers can be determined. During the 3D structural design process, vacuum requirements, adiabatic requirements, assembly constraints and maintenance requirements have been considered in arranging the pipes, valves and other components. Strength checking has been performed to verify that the 3D design meets the strength requirements for the ACB-CP.
Overview of ASDEX Upgrade results
Aguiam, D.
2017-06-28
Here, the ASDEX Upgrade (AUG) programme is directed towards physics input to critical elements of the ITER design and the preparation of ITER operation, as well as addressing physics issues for a future DEMO design. Since 2015, AUG has been equipped with a new pair of 3-strap ICRF antennas, which were designed for a reduction of tungsten release during ICRF operation. As predicted, a factor of two reduction in the ICRF-induced W plasma content could be achieved by the reduction of the sheath voltage at the antenna limiters via the compensation of the image currents of the central and side straps in the antenna frame. There are two main operational scenario lines in AUG. Experiments with low collisionality, which comprise current drive, ELM mitigation/suppression and fast ion physics, are mainly done with freshly boronized walls to reduce the tungsten influx at these high edge temperature conditions. Full ELM suppression and non-inductive operation up to a plasma current of I_p = 0.8 MA could be obtained at low plasma density. Plasma exhaust is studied under conditions of high neutral divertor pressure and separatrix electron density, where a fresh boronization is not required. Substantial progress could be achieved in understanding the confinement degradation by strong D puffing and the improvement with nitrogen or carbon seeding. Inward/outward shifts of the electron density profile relative to the temperature profile affect the edge stability via the pressure profile changes and lead to improved/decreased pedestal performance. Seeding and D gas puffing are found to affect the core fueling via changes in a region of high density on the high field side (HFSHD).
Finite elements: Theory and application
NASA Technical Reports Server (NTRS)
Dwoyer, D. L. (Editor); Hussaini, M. Y. (Editor); Voigt, R. G. (Editor)
1988-01-01
Recent advances in FEM techniques and applications are discussed in reviews and reports presented at the ICASE/LaRC workshop held in Hampton, VA in July 1986. Topics addressed include FEM approaches for partial differential equations, mixed FEMs, singular FEMs, FEMs for hyperbolic systems, iterative methods for elliptic finite-element equations on general meshes, mathematical aspects of FEMs for incompressible viscous flows, and gradient weighted moving finite elements in two dimensions. Consideration is given to adaptive flux-corrected FEM transport techniques for CFD, mixed and singular finite elements and the field BEM, p and h-p versions of the FEM, transient analysis methods in computational dynamics, and FEMs for integrated flow/thermal/structural analysis.
NASA Technical Reports Server (NTRS)
Crasner, Aaron I.; Scola, Salvatore; Beyon, Jeffrey Y.; Petway, Larry B.
2014-01-01
Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration was able to reduce the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components with excessive performance with smaller components custom-designed for the power system. Thermal modeling software was used to run steady state thermal analyses, which were used to both validate the designs and recommend further changes. Analyses were run on each redesign, as well as the original system. Thermal Desktop was used to run trade studies to account for uncertainty and assumptions about fan performance and boundary conditions. The studies suggested that, even if the assumptions were significantly wrong, the redesigned systems would remain within operating temperature limits.
Note: Readout of a micromechanical magnetometer for the ITER fusion reactor.
Rimminen, H; Kyynäräinen, J
2013-05-01
We present readout instrumentation for a MEMS magnetometer, placed 30 m away from the MEMS element. This is particularly useful when sensing is performed in a high-radiation environment, where the semiconductors in the readout cannot survive. High-bandwidth transimpedance amplifiers are used to cancel cable capacitances of several nanofarads. A frequency-doubling readout scheme is used for crosstalk elimination. A signal-to-noise ratio of about 60 dB was achieved, with sub-percent nonlinearity. The presented instrument is intended for the steady-state magnetic field measurements in the ITER fusion reactor.
Brown, James; Carrington, Tucker
2015-07-28
Although phase-space localized Gaussians are themselves poor basis functions, they can be used to effectively contract a discrete variable representation basis [A. Shimshovitz and D. J. Tannor, Phys. Rev. Lett. 109, 070402 (2012)]. This works despite the fact that elements of the Hamiltonian and overlap matrices labelled by discarded Gaussians are not small. By formulating the matrix problem as a regular (i.e., not a generalized) matrix eigenvalue problem, we show that it is possible to use an iterative eigensolver to compute vibrational energy levels in the Gaussian basis.
Finite elements numerical codes as primary tool to improve beam optics in NIO1
NASA Astrophysics Data System (ADS)
Baltador, C.; Cavenago, M.; Veltri, P.; Serianni, G.
2017-08-01
The RF negative ion source NIO1, built at Consorzio RFX in Padua (Italy), is aimed at investigating general issues of ion source physics in view of the full-size ITER injector MITICA, as well as DEMO-relevant solutions, such as energy recovery and alternative neutralization systems, crucial for neutral beam injectors in future fusion experiments. NIO1 has been designed to produce nine H⁻ beamlets (in a 3×3 pattern) of 15 mA each at 60 keV, using a three-electrode system downstream of the plasma source. At the moment the source is at an early operational stage and only operation at low power and low beam energy is possible. In particular, NIO1 presently has a too-strong set of SmCo co-extracted electron suppression magnets (CESM) in the extraction grid (EG), which will be replaced by a weaker set of ferrite magnets. A completely new set of magnets will also be designed and mounted on the new EG that will be installed next year, replacing the present one. In this paper, the finite element code OPERA 3D is used to investigate the effects of the three sets of magnets on beamlet optics. A comparison of numerical results with measurements will be provided where possible.
NASA Astrophysics Data System (ADS)
Brown, R. A.; Pasternack, G. B.; Wallender, W. W.
2014-06-01
The synthesis of artificial landforms is complementary to geomorphic analysis because it affords a reflection on both the characteristics and the intrinsic formative processes of real-world conditions. Moreover, the applied terminus of geomorphic theory is commonly manifested in the engineering and rehabilitation of riverine landforms, where the goal is to create specific processes associated with specific morphology. To date, the synthesis of river topography has been explored outside of geomorphology through artistic renderings, computer science applications, and river rehabilitation design, while within geomorphology it has been explored using morphodynamic modeling, such as one-dimensional simulation of river reach profiles, two-dimensional simulation of river networks, and three-dimensional simulation of subreach-scale river morphology. To date, no approach allows geomorphologists, engineers, or river rehabilitation practitioners to create landforms of prescribed conditions. In this paper a method for creating the topography of synthetic river valleys is introduced that utilizes a theoretical framework drawing from fluvial geomorphology, computer science, and geometric modeling. Such a method would be valuable to geomorphologists in understanding form-process linkages as well as to engineers and river rehabilitation practitioners in developing design surfaces that can be rapidly iterated. The method introduced herein relies on the discretization of river valley topography into geometric elements associated with overlapping and orthogonal two-dimensional planes, such as the planform, profile, and cross section, that are represented by mathematical functions termed geometric element equations. Topographic surfaces can be parameterized independently or dependently using a geomorphic covariance structure between the spatial series of geometric element equations. To illustrate the approach and overall model flexibility, examples are provided for mountain, lowland, and hybrid synthetic river valleys. To conclude, recommended advances such as multithread channels are discussed along with potential applications.
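A highly simplified sketch of composing a surface from "geometric element equations" is given below: an assumed linear longitudinal profile, sinusoidal planform, and parabolic valley and channel cross sections are combined on a grid. The specific functions and parameters are illustrative assumptions, not those of the paper, and the geomorphic covariance structure is omitted.

```python
import numpy as np

n_s, n_t = 400, 81
s = np.linspace(0.0, 2000.0, n_s)          # streamwise coordinate (m)
t = np.linspace(-60.0, 60.0, n_t)          # transverse coordinate across the valley (m)
S, T = np.meshgrid(s, t, indexing="ij")

# Geometric element equations (assumed forms):
z_profile = 100.0 - 0.002 * S                         # linear longitudinal profile
y_center = 25.0 * np.sin(2 * np.pi * S / 500.0)       # sinusoidal planform (meander)
half_w = 20.0 + 5.0 * np.cos(2 * np.pi * S / 500.0)   # streamwise width variation
valley = 10.0 * (T / 60.0) ** 2                        # parabolic valley cross section

# Parabolic channel carved where |T - y_center| < half_w, 3 m deep at its center.
u = (T - y_center) / half_w
channel = 3.0 * np.clip(1.0 - u ** 2, 0.0, None)

Z = z_profile + valley - channel                       # composed synthetic surface
print("surface shape:", Z.shape,
      "elevation range:", round(Z.min(), 1), "to", round(Z.max(), 1))
```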
Quiet Clean Short-haul Experimental Engine (QCSEE) composite fan frame design report
NASA Technical Reports Server (NTRS)
Mitchell, S. C.
1978-01-01
An advanced composite frame, which is flight-weight and integrates the functions of several structures, was developed for the over-the-wing (OTW) engine and for the under-the-wing (UTW) engine. The composite material system selected as the basic material for the frame is Type AS graphite fiber in a Hercules 3501 epoxy resin matrix. The frame was analyzed using a finite element digital computer program. This program was used in an iterative fashion to arrive at practical thicknesses and ply orientations and to achieve a final design that met all strength and stiffness requirements for the critical conditions. Using this information, the detail design of each of the individual parts of the frame was completed and released. On the basis of these designs, the tooling required to fabricate the various component parts of the frame was designed. To verify the structural integrity of the critical joint areas, a full-scale test was conducted on the frame before engine testing. The testing of the frame established critical spring constants and subjected the frame to three critical load cases. The successful static load test was followed by 153 and 58 hours of successful running on the UTW and OTW engines, respectively.
NASA Technical Reports Server (NTRS)
Hyer, M. W.; Charette, R. F.
1987-01-01
To increase the effectiveness and efficiency of fiber-reinforced materials, the use of fibers in a curvilinear rather than the traditional straight-line format is explored. The load capacity of a laminated square plate with a central circular hole loaded in tension is investigated. The orientation of the fibers is chosen so that the fibers in a particular layer are aligned with the principal stress directions in that layer. Finite elements and an iteration scheme are used to find the fiber orientation. A noninteracting maximum strain criterion is used to predict load capacity. The load capacities of several plates with different curvilinear fiber formats are compared with the capacities of more conventional straight-line format designs. It is found that the most practical curvilinear design sandwiches a group of fibers in a curvilinear format between a pair of +/-45 degree layers. This design has a 60% greater load capacity than a conventional quasi-isotropic design with the same number of layers. The +/-45 degree layers are necessary to prevent matrix cracking in the curvilinear layers due to stresses perpendicular to the fibers in those layers. Greater efficiencies are achievable with composite structures than are now realized.
TAP 2: A finite element program for thermal analysis of convectively cooled structures
NASA Technical Reports Server (NTRS)
Thornton, E. A.
1980-01-01
A finite element computer program (TAP 2) for steady-state and transient thermal analyses of convectively cooled structures is presented. The program has a finite element library of six elements: two conduction/convection elements to model heat transfer in a solid, two convection elements to model heat transfer in a fluid, and two integrated conduction/convection elements to represent combined heat transfer in tubular and plate/fin fluid passages. Nonlinear thermal analysis due to temperature-dependent thermal parameters is performed using the Newton-Raphson iteration method. Transient analyses are performed using an implicit Crank-Nicolson time integration scheme with consistent or lumped capacitance matrices as an option. Program output includes nodal temperatures and element heat fluxes. Pressure drops in fluid passages may be computed as an option. User instructions and sample problems are presented in appendixes.
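The sketch below shows the kind of Newton-Raphson iteration used for nonlinear steady-state conduction when conductivity depends on temperature; the 1-D geometry, the assumed k(T), and the finite-difference Jacobian are illustrative choices, not TAP 2 itself.

```python
import numpy as np

n = 21                                   # nodes along a 1-D rod of unit length
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
T = np.full(n, 300.0)                    # initial temperature guess (K)
T_left, T_right = 300.0, 600.0           # fixed end temperatures

def k_of_T(T):                           # temperature-dependent conductivity (assumed)
    return 1.0 + 0.002 * (T - 300.0)

def residual(T):
    """Discrete heat balance R_i = flux_in - flux_out at each interior node."""
    k_face = k_of_T(0.5 * (T[:-1] + T[1:]))          # conductivity at element midpoints
    q = k_face * (T[1:] - T[:-1]) / h                # element heat fluxes
    R = np.zeros(n)
    R[1:-1] = q[1:] - q[:-1]
    R[0], R[-1] = T[0] - T_left, T[-1] - T_right     # Dirichlet boundary conditions
    return R

for it in range(20):                     # Newton-Raphson with a finite-difference Jacobian
    R = residual(T)
    if np.linalg.norm(R) < 1e-10:
        break
    J = np.zeros((n, n))
    for j in range(n):
        dT = np.zeros(n); dT[j] = 1e-6
        J[:, j] = (residual(T + dT) - R) / 1e-6
    T -= np.linalg.solve(J, R)

print(f"converged after {it} iterations; midpoint temperature = {T[n // 2]:.2f} K")
```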
On Dynamics of Spinning Structures
NASA Technical Reports Server (NTRS)
Gupta, K. K.; Ibrahim, A.
2012-01-01
This paper provides details of developments pertaining to the vibration analysis of gyroscopic systems, involving a finite element structural discretization followed by the solution of the resulting matrix eigenvalue problem by a progressive, accelerated simultaneous iteration technique. Coriolis, centrifugal and geometrical stiffness matrices are derived for shell and line elements, followed by the eigensolution details as well as the solution of representative problems that demonstrate the efficacy of the currently developed numerical procedures and tools.
Progress in Development of the ITER Plasma Control System Simulation Platform
NASA Astrophysics Data System (ADS)
Walker, Michael; Humphreys, David; Sammuli, Brian; Ambrosino, Giuseppe; de Tommasi, Gianmaria; Mattei, Massimiliano; Raupp, Gerhard; Treutterer, Wolfgang; Winter, Axel
2017-10-01
We report on progress made and expected uses of the Plasma Control System Simulation Platform (PCSSP), the primary test environment for development of the ITER Plasma Control System (PCS). PCSSP will be used for verification and validation of the ITER PCS Final Design for First Plasma, to be completed in 2020. We discuss the objectives of PCSSP, its overall structure, selected features, application to existing devices, and expected evolution over the lifetime of the ITER PCS. We describe an archiving solution for simulation results, methods for incorporating physics models of the plasma and physical plant (tokamak, actuator, and diagnostic systems) into PCSSP, and defining characteristics of models suitable for a plasma control development environment such as PCSSP. Applications of PCSSP simulation models including resistive plasma equilibrium evolution are demonstrated. PCSSP development supported by ITER Organization under ITER/CTS/6000000037. Resistive evolution code developed under General Atomics' Internal funding. The views and opinions expressed herein do not necessarily reflect those of the ITER Organization.
Iterative inversion of deformation vector fields with feedback control.
Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei
2018-05-14
Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both the accuracy and efficiency of iterative algorithms for DVF inversion, and at advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, to be used as feedback into the next iterate. The control is designed adaptively to the input DVF with the objective of enlarging the convergence area and expediting convergence. Three particular settings of feedback control are introduced: a constant value over the domain throughout the iteration; alternating values between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair, and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations are shown to attain a larger convergence region at a faster pace compared to previous nonadaptive DVF inversion iteration algorithms. In our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control. Spatially variant control renders smaller residuals and errors by at least an order of magnitude, compared to other schemes, in no more than 10 steps. Inversion results also show remarkable quantitative agreement with analysis-based predictions. Our analysis captures properties of DVF data associated with clinical CT images, and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of non-small NTDCs. The adaptive iterations, or the spectral measures, or both, may potentially be incorporated into deformable image registration methods. © 2018 American Association of Physicists in Medicine.
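A minimal one-dimensional sketch of fixed-point DVF inversion with residual feedback is given below; the constant control value is an assumption used for illustration, whereas the controls in the paper are designed adaptively to the input DVF.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)                 # voxel coordinates
u = 0.05 * np.sin(2 * np.pi * x)               # forward DVF: y = x + u(x)

def apply_dvf(field, disp, grid):
    """Evaluate 'field' at grid + disp by linear interpolation."""
    return np.interp(grid + disp, grid, field)

mu = 0.5                                        # constant feedback control value (assumed)
v = np.zeros_like(x)                            # inverse DVF estimate
for k in range(100):
    # Inverse-consistency residual r(x) = v(x) + u(x + v(x))
    r = v + apply_dvf(u, v, x)
    v = v - mu * r                              # feed the residual back into the next iterate
    if np.max(np.abs(r)) < 1e-10:
        break

# Check: composing forward and inverse displacements should return (near) zero.
print(f"converged in {k + 1} steps, max |IC residual| = {np.max(np.abs(r)):.2e}")
```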
Modern Design of Resonant Edge-Slot Array Antennas
NASA Technical Reports Server (NTRS)
Gosselin, R. B.
2006-01-01
Resonant edge-slot (slotted-waveguide) array antennas can now be designed very accurately following a modern computational approach like that followed for some other microwave components. This modern approach makes it possible to design superior antennas at lower cost than was previously possible. Heretofore, the physical and engineering knowledge of resonant edge-slot array antennas had remained immature since they were introduced during World War II. This is because despite their mechanical simplicity, high reliability, and potential for operation with high efficiency, the electromagnetic behavior of resonant edge-slot antennas is very complex. Because engineering design formulas and curves for such antennas are not available in the open literature, designers have been forced to implement iterative processes of fabricating and testing multiple prototypes to derive design databases, each unique for a specific combination of operating frequency and set of waveguide tube dimensions. The expensive, time-consuming nature of these processes has inhibited the use of resonant edge-slot antennas. The present modern approach reduces costs by making it unnecessary to build and test multiple prototypes. As an additional benefit, this approach affords a capability to design an array of slots having different dimensions to taper the antenna illumination to reduce the amplitudes of unwanted side lobes. The heart of the modern approach is the use of the latest commercially available microwave-design software, which implements finite-element models of electromagnetic fields in and around waveguides, antenna elements, and similar components. Instead of building and testing prototypes, one builds a database and constructs design curves from the results of computational simulations for sets of design parameters. The figure shows a resonant edge-slot antenna designed following this approach. Intended for use as part of a radiometer operating at a frequency of 10.7 GHz, this antenna was fabricated from dimensions defined exclusively by results of computational simulations. The final design was found to be well optimized and to yield performance exceeding that initially required.
Analysis and Design of ITER 1 MV Core Snubber
NASA Astrophysics Data System (ADS)
Wang, Haitian; Li, Ge
2012-11-01
The core snubber, as a passive protection device, can suppress arc current and absorb the energy stored in stray capacitance during electrical breakdown in the accelerating electrodes of the ITER NBI. In order to design the core snubber of ITER, the control parameters of the arc peak current were first analyzed by the Fink-Baker-Owren (FBO) method, which was used to design the DIIID 100 kV snubber. The B-H curve can be derived from the measured voltage and current waveforms, and the hysteresis loss of the core snubber can be derived using the revised parallelogram method. The core snubber can be represented in simplified form as an equivalent parallel resistance and inductance, which are neglected by the FBO method. A simulation code including the parallel equivalent resistance and inductance has been set up. The simulations and experiments show dramatically large arc shorting currents due to the parallel inductance effect. The case shows that a core snubber designed with the FBO method gives a more compact design.
Analysis of helium-ion scattering with a desktop computer
NASA Astrophysics Data System (ADS)
Butler, J. W.
1986-04-01
This paper describes a program, written in an enhanced BASIC language for a desktop computer, that simulates the energy spectra of high-energy helium ions scattered into two concurrent detectors (backward and glancing). The program is designed for 512-channel spectra from samples containing up to 8 elements and 55 user-defined layers. The program is intended to meet the needs of analyses in materials sciences, such as metallurgy, where more than a few elements may be present, where several elements may be near each other in the periodic table, and where relatively deep structure may be important. These conditions preclude the use of completely automatic procedures for obtaining the sample composition directly from the scattered ion spectrum. Therefore, efficient methods are needed for entering and editing large amounts of composition data, with many iterations and with much feedback of information from the computer to the user. The internal video screen is used exclusively for verbal and numeric communications between user and computer. The composition matrix is edited on screen with a two-dimensional forms-fill-in text editor and with many automatic procedures, such as doubling the number of layers with appropriate interpolations and extrapolations. The control center of the program is a bank of 10 keys that initiate on-event branching of program flow. The experimental and calculated spectra, including those of individual elements if desired, are displayed on an external color monitor, with an optional inset plot of the depth concentration profiles of the elements in the sample.
DEM Calibration Approach: design of experiment
NASA Astrophysics Data System (ADS)
Boikov, A. V.; Savelev, R. V.; Payor, V. A.
2018-05-01
The problem of calibrating DEM models is considered in the article. It is proposed to divide the model's input parameters into those that require iterative calibration and those that are recommended to be measured directly. A new method for model calibration, based on a design of experiment over the iteratively calibrated parameters, is proposed. The experiment is conducted using a specially designed stand, and the results are processed with machine vision algorithms. Approximating functions are obtained and the error of the implemented software and hardware complex is estimated. The prospects of the obtained results are discussed.
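As a rough illustration of the design-of-experiment idea (not the authors' software), the sketch below runs a placeholder "DEM simulation" on a small factorial grid of two hypothetical iteratively calibrated parameters, fits a linear approximating function by least squares, and selects the parameter pair whose predicted response is closest to a measured target.

```python
# Hedged sketch of design-of-experiment calibration: factorial grid, least-squares
# approximating function, and selection against a measured target. Parameter
# names, levels, and the response model are illustrative assumptions only.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def simulate_dem(friction, restitution):
    """Placeholder for a real DEM run; returns a scalar response (e.g. angle of repose)."""
    return 30.0 + 25.0 * friction - 8.0 * restitution + rng.normal(0.0, 0.2)

levels_f = [0.1, 0.3, 0.5]                 # factorial levels, illustrative only
levels_r = [0.2, 0.5, 0.8]
design = list(itertools.product(levels_f, levels_r))
response = np.array([simulate_dem(f, r) for f, r in design])

# approximating function  y ~ a0 + a1*friction + a2*restitution
A = np.array([[1.0, f, r] for f, r in design])
coeffs, *_ = np.linalg.lstsq(A, response, rcond=None)

target = 38.0                              # measured response from the stand (assumed)
pred = A @ coeffs
best = design[int(np.argmin(np.abs(pred - target)))]
print("fitted coefficients:", coeffs)
print("grid point best matching the measurement:", best)
```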
Gao, Tia; Kim, Matthew I.; White, David; Alm, Alexander M.
2006-01-01
We have developed a system for real-time patient monitoring during large-scale disasters. Our system is designed with scalable algorithms to monitor large numbers of patients, an intuitive interface to support the overwhelmed responders, and ad-hoc mesh networking capabilities to maintain connectivity to patients in the chaotic settings. This paper describes an iterative approach to user-centered design adopted to guide development of our system. This system is a part of the Advanced Health and Disaster Aid Network (AID-N) architecture. PMID:17238348
Saver, Jeffrey L; Warach, Steven; Janis, Scott; Odenkirchen, Joanne; Becker, Kyra; Benavente, Oscar; Broderick, Joseph; Dromerick, Alexander W; Duncan, Pamela; Elkind, Mitchell S V; Johnston, Karen; Kidwell, Chelsea S; Meschia, James F; Schwamm, Lee
2012-04-01
The National Institute of Neurological Disorders and Stroke initiated development of stroke-specific Common Data Elements (CDEs) as part of a project to develop data standards for funded clinical research in all fields of neuroscience. Standardizing data elements in translational, clinical, and population research in cerebrovascular disease could decrease study start-up time, facilitate data sharing, and promote well-informed clinical practice guidelines. A working group of diverse experts in cerebrovascular clinical trials, epidemiology, and biostatistics met regularly to develop a set of stroke CDEs, selecting among, refining, and adding to existing, field-tested data elements from national registries and funded trials and studies. Candidate elements were revised on the basis of comments from leading national and international neurovascular research organizations and the public. The first iteration of the National Institute of Neurological Disorders and Stroke (NINDS) stroke-specific CDEs comprises 980 data elements spanning 9 content areas: (1) biospecimens and biomarkers; (2) hospital course and acute therapies; (3) imaging; (4) laboratory tests and vital signs; (5) long-term therapies; (6) medical history and prior health status; (7) outcomes and end points; (8) stroke presentation; and (9) stroke types and subtypes. A CDE website provides uniform names and structures for each element, a data dictionary, and template case report forms, using the CDEs. Stroke-specific CDEs are now available as standardized, scientifically vetted, variable structures to facilitate data collection and data sharing in cerebrovascular patient-oriented research. The CDEs are an evolving resource that will be iteratively improved based on investigator use, new technologies, and emerging concepts and research findings.
Preliminary consideration of CFETR ITER-like case diagnostic system.
Li, G S; Yang, Y; Wang, Y M; Ming, T F; Han, X; Liu, S C; Wang, E H; Liu, Y K; Yang, W J; Li, G Q; Hu, Q S; Gao, X
2016-11-01
Chinese Fusion Engineering Test Reactor (CFETR) is a new superconducting tokamak device being designed in China, which aims at bridging the gap between ITER and DEMO, where DEMO is a tokamak demonstration fusion reactor. Two diagnostic cases, an ITER-like case and a towards-DEMO case, have been considered for the CFETR early and later operating phases, respectively. In this paper, some preliminary considerations of the ITER-like case will be presented. Based on the ITER diagnostic system, three versions of increased complexity and coverage of the ITER-like case diagnostic system have been developed, with different goals and functions. Version A aims only at machine protection and basic control. Versions B and C are mainly for machine protection, basic and advanced control, but version C has an increased level of redundancy necessary for improved measurement capability. The performance of these versions and the needed R&D work are outlined.
Matte, Guillaume M; Van Neer, Paul L M J; Danilouchkine, Mike G; Huijssen, Jacob; Verweij, Martin D; de Jong, Nico
2011-03-01
Second-harmonic imaging is currently one of the standards in commercial echographic systems for diagnosis, because of its high spatial resolution and low sensitivity to clutter and near-field artifacts. The use of nonlinear phenomena offers a set of solutions to improve echographic image resolution. To further enhance the resolution and image quality, the combination of the 3rd to 5th harmonics--dubbed the superharmonics--could be used. However, this requires a bandwidth exceeding that of conventional transducers. A promising solution features a phased-array design with interleaved low- and high-frequency elements for transmission and reception, respectively. Because the amplitude of the backscattered higher harmonics at the transducer surface is relatively low, it is highly desirable to increase the sensitivity in reception. Therefore, we investigated the optimization of the number of elements in the receiving aperture as well as their arrangement (topology). A variety of configurations was considered, including one transmit element for each receive element (1/2) up to one transmit for 7 receive elements (1/8). The topologies are assessed based on the ratio of the harmonic peak pressures in the main and grating lobes. Further, the higher harmonic level is maximized by optimization of the center frequency of the transmitted pulse. The achievable SNR for a specific application is a compromise between the frequency-dependent attenuation and nonlinearity at a required penetration depth. To calculate the SNR of the complete imaging chain, we use an approach analogous to the sonar equation used in underwater acoustics. The generated harmonic pressure fields caused by nonlinear wave propagation were modeled with the iterative nonlinear contrast source (INCS) method, the KZK equation, or the Burgers equation. The optimal topology for superharmonic imaging was an interleaved design with 1 transmit element per 6 receive elements. It improves the SNR by ~5 dB compared with the interleaved (1/2) design reported in the literature. The optimal transmit frequency for superharmonic echocardiography was found to be 1.0 to 1.2 MHz. For superharmonic abdominal imaging this frequency was found to be 1.7 to 1.9 MHz. For 2nd-harmonic echocardiography, the optimal transmit frequency of 1.8 MHz reported in the literature was corroborated with our simulation results.
Approximate techniques of structural reanalysis
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lowder, H. E.
1974-01-01
A study is made of two approximate techniques for structural reanalysis. These include Taylor series expansions for response variables in terms of design variables and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of the Taylor series approximation in an iterative process. For the reduced basis, a normalized set of vectors is chosen that consists of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as a first (initial) estimate in an iterative process can lead to significant improvements in accuracy, even with one iteration cycle. Therefore, the range of applicability of the reanalysis technique can be extended. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques, for a wide range of variations in the design variables.
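A minimal sketch of the second modification, under the assumption of a linear static problem K(d)u = f, is given below: the first-order Taylor estimate of the response for a modified design serves as the starting point of an iterative refinement that reuses only solves with the original stiffness matrix. The toy matrices are illustrative, not the paper's examples.

```python
# Hedged sketch: Taylor estimate as the initial guess of an iterative
# refinement that only requires solves with the original stiffness K0,
# as if its factorization were already stored from the first analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 20
K0 = np.diag(np.arange(2.0, 2.0 + n))      # original stiffness (toy, SPD)
dK = 0.1 * np.diag(rng.random(n))          # change due to the design modification
K1 = K0 + dK                               # modified stiffness
f = rng.random(n)                          # load vector

u0 = np.linalg.solve(K0, f)                # response of the original analyzed design
u_taylor = u0 - np.linalg.solve(K0, dK @ u0)   # first-order Taylor estimate

u = u_taylor.copy()
for k in range(5):                         # iterative refinement with K0 solves only
    r = f - K1 @ u
    u += np.linalg.solve(K0, r)
    print(f"cycle {k + 1}: residual norm = {np.linalg.norm(r):.3e}")

print("error vs. exact reanalysis:", np.linalg.norm(u - np.linalg.solve(K1, f)))
```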
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lazic, Predrag; Stefancic, Hrvoje; Abraham, Hrvoje
2006-03-20
We introduce a novel numerical method, named the Robin Hood method, of solving electrostatic problems. The approach of the method is closest to the boundary element methods, although significant conceptual differences exist with respect to this class of methods. The method achieves equipotentiality of conducting surfaces by iterative non-local charge transfer. For each of the conducting surfaces, non-local charge transfers are performed between the surface elements which differ the most from the targeted equipotentiality of the surface. The method is tested against analytical solutions and its wide range of application is demonstrated. The method has appealing technical characteristics. For the problem with N surface elements, the computational complexity of the method essentially scales with N^α, where α < 2, the required computer memory scales with N, while the error of the potential decreases exponentially with the number of iterations for many orders of magnitude of the error, without the presence of the Critical Slowing Down. The Robin Hood method could prove useful in other classical or even quantum problems. Some future development ideas for possible applications outside electrostatics are addressed.
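The following toy sketch illustrates the charge-transfer idea on a 1D strip of point-like surface elements; the geometry, the self-interaction cutoff, and the transfer step are heuristic stand-ins, not the authors' algorithm or its stated complexity properties.

```python
# Toy sketch of iterative non-local charge transfer: repeatedly move charge
# from the element with the highest potential to the one with the lowest,
# driving the surface toward equipotentiality. All numbers are heuristic.
import numpy as np

n = 100
pos = np.linspace(0.0, 1.0, n)
q = np.full(n, 1.0 / n)                    # start from a uniform charge distribution
dist = np.abs(pos[:, None] - pos[None, :])
np.fill_diagonal(dist, 0.5 / n)            # crude self-interaction cutoff

def potentials(q):
    """V_i = sum_j q_j / |r_i - r_j| (unit prefactor)."""
    return (q[None, :] / dist).sum(axis=1)

spread = np.inf
for it in range(20000):
    V = potentials(q)
    hi, lo = int(np.argmax(V)), int(np.argmin(V))
    spread = V[hi] - V[lo]
    if spread < 1e-3:
        break
    dq = 0.2 * spread / n                  # heuristic transfer step
    q[hi] -= dq                            # take charge from the "richest" element...
    q[lo] += dq                            # ...and give it to the "poorest" one
print(f"iterations: {it}, potential spread: {spread:.3e}, total charge: {q.sum():.3f}")
```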
NASA Astrophysics Data System (ADS)
Ming, A. B.; Qin, Z. Y.; Zhang, W.; Chu, F. L.
2013-12-01
Bearing failure is one of the most common causes of machine breakdowns and accidents. Therefore, the fault diagnosis of rolling element bearings is of great significance to the safe and efficient operation of machines, owing to its fault indication and accident prevention capability in engineering applications. In this paper, a novel method based on orthogonal projection theory is proposed to extract the fault characteristic frequency for the incipient fault diagnosis of rolling element bearings. With the capability of exposing the oscillation frequency of the signal energy, the proposed method is a generalized form of the squared envelope analysis and is named spectral auto-correlation analysis (SACA). Meanwhile, SACA is also a simplified form of cyclostationary analysis and can be carried out iteratively in applications. Simulations and experiments are used to evaluate the efficiency of the proposed method. Comparing the results of SACA, the traditional envelope analysis, and the squared envelope analysis, it is found that the SACA result is more legible, owing to the more prominent harmonic amplitudes of the fault characteristic frequency, and that SACA with proper iteration further enhances the fault features.
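For orientation, the sketch below computes the squared envelope spectrum, the special case that SACA is said to generalize, on a synthetic bearing-like signal; the fault frequency, resonance, and noise level are arbitrary assumptions, and the code is not the authors' SACA implementation.

```python
# Sketch of the squared envelope spectrum on a synthetic signal: a structural
# resonance excited by periodic impacts at a hypothetical fault frequency.
# The envelope spectrum should show a peak at (or near a harmonic of) that frequency.
import numpy as np
from scipy.signal import hilbert

fs = 20_000                                   # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
fault_freq = 87.0                             # hypothetical fault characteristic frequency
carrier = np.sin(2 * np.pi * 3_000 * t)       # structural resonance
impacts = (np.sin(2 * np.pi * fault_freq * t) > 0.995).astype(float)
signal = carrier * np.convolve(impacts, np.exp(-np.arange(200) / 20.0), mode="same")
signal += 0.1 * np.random.randn(len(t))       # measurement noise

envelope_sq = np.abs(hilbert(signal)) ** 2    # squared envelope
spectrum = np.abs(np.fft.rfft(envelope_sq - envelope_sq.mean()))
freqs = np.fft.rfftfreq(len(envelope_sq), 1.0 / fs)
print("dominant envelope-spectrum frequency (Hz):", freqs[np.argmax(spectrum)])
```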
Evaluating the iterative development of VR/AR human factors tools for manual work.
Liston, Paul M; Kay, Alison; Cromie, Sam; Leva, Chiara; D'Cruz, Mirabelle; Patel, Harshada; Langley, Alyson; Sharples, Sarah; Aromaa, Susanna
2012-01-01
This paper outlines the approach taken to iteratively evaluate a set of VR/AR (virtual reality / augmented reality) tools for five different manual-work applications - terrestrial spacecraft assembly, assembly-line design, remote maintenance of trains, maintenance of nuclear reactors, and large-machine assembly process design - and examines the evaluation data for evidence of the effectiveness of the evaluation framework as well as of the benefits that feedback from iterative evaluation brings to the development process. ManuVAR is an EU-funded research project that is working to develop an innovative technology platform and a framework to support high-value, high-knowledge manual work throughout the product lifecycle. The results of this study demonstrate the iterative improvements achieved throughout the design cycles, observable in the trending of the quantitative results from three successive trials of the applications and in the qualitative interview findings. The paper discusses the limitations of evaluation in complex, multi-disciplinary development projects and finds evidence of the effectiveness of the particular set of complementary evaluation methods, incorporating a common inquiry structure, used for the evaluation - particularly in facilitating triangulation of the data.
PREFACE: Progress in the ITER Physics Basis
NASA Astrophysics Data System (ADS)
Ikeda, K.
2007-06-01
I would firstly like to congratulate all who have contributed to the preparation of the `Progress in the ITER Physics Basis' (PIPB) on its publication and express my deep appreciation of the hard work and commitment of the many scientists involved. With the signing of the ITER Joint Implementing Agreement in November 2006, the ITER Members have now established the framework for construction of the project, and the ITER Organization has begun work at Cadarache. The review of recent progress in the physics basis for burning plasma experiments encompassed by the PIPB will be a valuable resource for the project and, in particular, for the current Design Review. The ITER design has been derived from a physics basis developed through experimental, modelling and theoretical work on the properties of tokamak plasmas and, in particular, on studies of burning plasma physics. The `ITER Physics Basis' (IPB), published in 1999, has been the reference for the projection methodologies for the design of ITER, but the IPB also highlighted several key issues which needed to be resolved to provide a robust basis for ITER operation. In the intervening period scientists of the ITER Participant Teams have addressed these issues intensively. The International Tokamak Physics Activity (ITPA) has provided an excellent forum for scientists involved in these studies, focusing their work on the high priority physics issues for ITER. Significant progress has been made in many of the issues identified in the IPB and this progress is discussed in depth in the PIPB. In this respect, the publication of the PIPB symbolizes the strong interest and enthusiasm of the plasma physics community for the success of the ITER project, which we all recognize as one of the great scientific challenges of the 21st century. I wish to emphasize my appreciation of the work of the ITPA Coordinating Committee members, who are listed below. Their support and encouragement for the preparation of the PIPB were fundamental to its completion. I am pleased to witness the extensive collaborations, the excellent working relationships and the free exchange of views that have been developed among scientists working on magnetic fusion, and I would particularly like to acknowledge the importance which they assign to ITER in their research. This close collaboration and the spirit of free discussion will be essential to the success of ITER. Finally, the PIPB identifies issues which remain in the projection of burning plasma performance to the ITER scale and in the control of burning plasmas. Continued R&D is therefore called for to reduce the uncertainties associated with these issues and to ensure the efficient operation and exploitation of ITER. It is important that the international fusion community maintains a high level of collaboration in the future to address these issues and to prepare the physics basis for ITER operation. ITPA Coordination Committee R. Stambaugh (Chair of ITPA CC, General Atomics, USA) D.J. Campbell (Previous Chair of ITPA CC, European Fusion Development Agreement—Close Support Unit, ITER Organization) M. Shimada (Co-Chair of ITPA CC, ITER Organization) R. Aymar (ITER International Team, CERN) V. Chuyanov (ITER Organization) J.H. Han (Korea Basic Science Institute, Korea) Y. Huo (Zengzhou University, China) Y.S. Hwang (Seoul National University, Korea) N. Ivanov (Kurchatov Institute, Russia) Y. Kamada (Japan Atomic Energy Agency, Naka, Japan) P.K. Kaw (Institute for Plasma Research, India) S. 
Konovalov (Kurchatov Institute, Russia) M. Kwon (National Fusion Research Center, Korea) J. Li (Academy of Science, Institute of Plasma Physics, China) S. Mirnov (TRINITI, Russia) Y. Nakamura (National Institute for Fusion Studies, Japan) H. Ninomiya (Japan Atomic Energy Agency, Naka, Japan) E. Oktay (Department of Energy, USA) J. Pamela (European Fusion Development Agreement—Close Support Unit) C. Pan (Southwestern Institute of Physics, China) F. Romanelli (Ente per le Nuove tecnologie, l'Energia e l'Ambiente, Italy and European Fusion Development Agreement—Close Support Unit) N. Sauthoff (Princeton Plasma Physics Laboratory, USA and Oak Ridge National Laboratories, USA) Y. Saxena (Institute for Plasma Research, India) Y. Shimomura (ITER Organization) R. Singh (Institute for Plasma Research, India) S. Takamura (Nagoya University, Japan) K. Toi (National Institute for Fusion Studies, Japan) M. Wakatani (Kyoto University, Japan (deceased)) H. Zohm (Max-Planck-Institut für Plasmaphysik, Garching, Germany)
Optimization of lightweight structure and supporting bipod flexure for a space mirror.
Chen, Yi-Cheng; Huang, Bo-Kai; You, Zhen-Ting; Chan, Chia-Yen; Huang, Ting-Ming
2016-12-20
This article presents an optimization process for integrated optomechanical design. The proposed process comprises computer-aided drafting, finite element analysis (FEA), optomechanical transfer codes, and an optimization solver. The FEA was conducted to determine mirror surface deformation; the deformed surface nodal data were then transferred into Zernike polynomials through MATLAB optomechanical transfer codes to calculate the resulting optical path difference (OPD) and optical aberrations. To achieve an optimum design, the optimization iterations of the FEA, optomechanical transfer codes, and optimization solver were automatically connected through a self-developed Tcl script. Two examples of design optimization are illustrated in this research, namely, an optimum lightweight design of a Zerodur primary mirror with an outer diameter of 566 mm used in a spaceborne telescope, and an optimum design of the bipod flexures that support the lightweight primary mirror. Optimum designs were successfully accomplished in both examples, achieving a minimum peak-to-valley (PV) value for the OPD of the deformed optical surface. The simulated optimization results showed that (1) the lightweight ratio of the primary mirror increased from 56% to 66%; and (2) the PV value of the mirror supported by the optimum bipod flexures in the horizontal position effectively decreased from 228 to 61 nm.
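A simplified sketch of the FEA-to-optics transfer step is given below: nodal surface deformations are fitted with a few low-order Zernike terms by least squares and converted to a peak-to-valley OPD figure. The synthetic deformation, the chosen Zernike terms, and the factor of two for reflection are illustrative assumptions, not the authors' MATLAB transfer codes.

```python
# Hedged sketch: least-squares fit of low-order Zernike terms to nodal sag
# errors, followed by a peak-to-valley OPD estimate for a reflective surface.
import numpy as np

rng = np.random.default_rng(1)
r = np.sqrt(rng.random(2000))                 # synthetic "FEA" nodes on a unit aperture
theta = 2 * np.pi * rng.random(2000)
dz = (50e-9 * (2 * r**2 - 1) + 10e-9 * r * np.cos(theta)
      + 5e-9 * rng.standard_normal(r.size))   # sag error in meters

basis = np.column_stack([                     # piston, tilts, defocus, astigmatisms
    np.ones_like(r),
    r * np.cos(theta),
    r * np.sin(theta),
    2 * r**2 - 1,
    r**2 * np.cos(2 * theta),
    r**2 * np.sin(2 * theta),
])
coeffs, *_ = np.linalg.lstsq(basis, dz, rcond=None)

opd = 2.0 * dz                                # reflective surface: OPD ~ twice the sag error
residual = 2.0 * (dz - basis @ coeffs)        # OPD after removing the fitted low-order terms
print("Zernike coefficients (m):", coeffs)
print("PV OPD: %.1f nm raw, %.1f nm after low-order removal"
      % ((opd.max() - opd.min()) * 1e9, (residual.max() - residual.min()) * 1e9))
```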
COMPARISON OF NUMERICAL SCHEMES FOR SOLVING A SPHERICAL PARTICLE DIFFUSION EQUATION
A new robust iterative numerical scheme was developed for a nonlinear diffusive model that described sorption dynamics in spherical particle suspensions. The numerical scheme was applied to finite difference and finite element models, which showed rapid convergence and stability...
NASA Astrophysics Data System (ADS)
Sanz, D.; Ruiz, M.; Castro, R.; Vega, J.; Afif, M.; Monroe, M.; Simrock, S.; Debelle, T.; Marawar, R.; Glass, B.
2016-04-01
To aid in assessing the functional performance of ITER, fission chamber (FC) based neutron diagnostics deliver timestamped measurements of neutron source strength and fusion power. To demonstrate the Plant System Instrumentation & Control (I&C) required for such a system, the ITER Organization (IO) has developed a neutron diagnostics use case that fully complies with the guidelines presented in the Plant Control Design Handbook (PCDH). The implementation presented in this paper has been developed on the PXI Express (PXIe) platform using products from the ITER catalog of standard I&C hardware for fast controllers. Using FlexRIO technology, detector signals are acquired at 125 MS/s, while filtering, decimation, and three methods of neutron counting are performed in real time on the onboard Field Programmable Gate Array (FPGA). Measurement results are reported every 1 ms through Experimental Physics and Industrial Control System (EPICS) Channel Access (CA), with real-time timestamps derived from the ITER Timing Communication Network (TCN) based on IEEE 1588-2008. Furthermore, in accordance with ITER specifications for CODAC Core System (CCS) application development, the software responsible for the management, configuration, and monitoring of the system devices has been developed in compliance with a new EPICS module called Nominal Device Support (NDS) and with the RIO/FlexRIO design methodology.
In-vacuum sensors for the beamline components of the ITER neutral beam test facility.
Dalla Palma, M; Pasqualotto, R; Sartori, E; Spagnolo, S; Spolaore, M; Veltri, P
2016-11-01
Embedded sensors have been designed for installation on the components of the MITICA beamline, the prototype ITER neutral beam injector (Megavolt ITER Injector and Concept Advancement), to derive characteristics of the particle beam and to monitor the component conditions during operation for protection and thermal control. Along the beamline, the components interacting with the particle beam are the neutralizer, the residual ion dump, and the calorimeter. The design and the positioning of sensors on each component have been developed considering the expected beam-surface interaction including non-ideal and off-normal conditions. The arrangement of the following instrumentation is presented: thermal sensors, strain gages, electrostatic probes including secondary emission detectors, grounding shunt for electrical currents, and accelerometers.
Stokes-Doppler coherence imaging for ITER boundary tomography.
Howard, J; Kocan, M; Lisgo, S; Reichle, R
2016-11-01
An optical coherence imaging system is presently being designed for impurity transport studies and other applications on ITER. The wide variation in magnetic field strength and pitch angle (assumed known) across the field of view generates additional Zeeman-polarization-weighting information that can improve the reliability of tomographic reconstructions. Because background reflected light will be somewhat depolarized, analysis of only the polarized fraction may be enough to provide a level of background suppression. We present the principles behind these ideas and some simulations that demonstrate how the approach might work on ITER. The views and opinions expressed herein do not necessarily reflect those of the ITER Organization.
Engineering aspects of design and integration of ECE diagnostic in ITER
Udintsev, V. S.; Taylor, G.; Pandya, H. K.B.; ...
2015-03-12
The ITER ECE diagnostic [1] needs not only to meet the measurement requirements, but also to withstand various loads, such as electromagnetic, mechanical, neutronic and thermal loads, and to be protected from stray ECH radiation at 170 GHz and other millimeter-wave emission, such as Collective Thomson scattering, which is planned to operate at 60 GHz. The same or similar loads will be applied to other millimeter-wave diagnostics [2], located both in-vessel and in port plugs. These loads must be taken into account throughout the design phases of the ECE and other microwave diagnostics to ensure their structural integrity and maintainability. The integration of microwave diagnostics with other ITER systems is another challenging activity which is currently ongoing through port integration and in-vessel integration work. Port integration also has to address the maintenance and safety aspects of diagnostics. Engineering solutions which are being developed to support and operate the ITER ECE diagnostic, whilst complying with safety and maintenance requirements, are discussed in this paper.
Transmission of electrons inside the cryogenic pumps of ITER injector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veltri, P., E-mail: pierluigi.veltri@igi.cnr.it; Sartori, E.
2016-02-15
Large cryogenic pumps are installed in the vessel of large neutral beam injectors (NBIs) used to heat the plasma in nuclear fusion experiments. The operation of such pumps can be compromised by the presence of stray secondary electrons that are generated along the beam path. In this paper, we present a numerical model to analyze the propagation of the electrons inside the pump. The aim of the study is to quantify the power load on the active pump elements, via evaluation of the transmission probabilities across the domain of the pump. These are obtained starting from large datasets of particle trajectories, obtained by numerical means. The transmission probability of the electrons across the domain is calculated for the ITER NBI and for its prototype, the Megavolt ITER Injector and Concept Advancement (MITICA), and the results are discussed.
A novel dynamical community detection algorithm based on weighting scheme
NASA Astrophysics Data System (ADS)
Li, Ju; Yu, Kai; Hu, Ke
2015-12-01
Network dynamics plays an important role in analyzing the correlation between the function properties and the topological structure. In this paper, we propose a novel dynamical iteration (DI) algorithm, which incorporates an iterative update of the membership vector with a weighting scheme, i.e., a weighting W and a tightness T. These new elements can be used to adjust the link strength and the node compactness to improve the speed and accuracy of community structure detection. To estimate the optimal stopping time of the iteration, we utilize a new stability measure defined as the Markov random walk auto-covariance. We do not need to specify the number of communities in advance. The algorithm naturally supports overlapping communities by associating each node with a membership vector describing the node's involvement in each community. Theoretical analysis and experiments show that the algorithm can uncover communities effectively and efficiently.
TRUST84. Sat-Unsat Flow in Deformable Media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narasimhan, T.N.
1984-11-01
TRUST84 solves for transient and steady-state flow in variably saturated deformable media in one, two, or three dimensions. It can handle porous media, fractured media, or fractured-porous media. Boundary conditions may be an arbitrary function of time. Sources or sinks may be a function of time or of potential. The theoretical model considers a general three-dimensional field of flow in conjunction with a one-dimensional vertical deformation field. The governing equation expresses the conservation of fluid mass in an elemental volume that has a constant volume of solids. Deformation of the porous medium may be nonelastic. Permeability and the compressibility coefficients may be nonlinearly related to effective stress. Relationships between permeability and saturation with pore water pressure in the unsaturated zone may be characterized by hysteresis. The relation between pore pressure change and effective stress change may be a function of saturation. The basic calculational model of the conductive heat transfer code TRUMP is applied in TRUST84 to the flow of fluids in porous media. The model combines an integrated finite difference algorithm for numerically solving the governing equation with a mixed explicit-implicit iterative scheme in which the explicit changes in potential are first computed for all elements in the system, after which implicit corrections are made only for those elements for which the stable time-step is less than the time-step being used. Time-step sizes are automatically controlled to optimize the number of iterations, to control maximum change to potential during a time-step, and to obtain desired output information. Time derivatives, estimated on the basis of system behavior during the two previous time-steps, are used to start the iteration process and to evaluate nonlinear coefficients. Both heterogeneity and anisotropy can be handled.
Transient analysis of 1D inhomogeneous media by dynamic inhomogeneous finite element method
NASA Astrophysics Data System (ADS)
Yang, Zailin; Wang, Yao; Hei, Baoping
2013-12-01
The dynamic inhomogeneous finite element method is studied for use in the transient analysis of one-dimensional inhomogeneous media. The general formula of the inhomogeneous consistent mass matrix is established based on the shape functions. In order to investigate the advantages of this method, it is compared with the general finite element method. A linear bar element is chosen for discretization tests of material parameters with two fictitious distributions. A numerical example is then solved to observe the differences between the results of the two methods. Some characteristics of the dynamic inhomogeneous finite element method that demonstrate its advantages are obtained through comparison with the general finite element method. It is found that the method can be used to solve elastic wave motion problems with a large element scale and a large number of iteration steps.
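The notion of an inhomogeneous consistent mass matrix can be sketched for a two-node linear bar element with spatially varying density, evaluated by Gauss quadrature; the graded density below is a fictitious example, and the code is only a schematic of the general shape-function-based formula, not the authors' implementation.

```python
# Schematic of a consistent mass matrix for a two-node linear bar element with
# spatially varying density, M_e = integral of rho(x) * A * N(x)^T N(x) dx,
# evaluated by Gauss quadrature on the reference interval [-1, 1].
import numpy as np

def consistent_mass_bar(x1, x2, rho, area=1.0, n_gauss=3):
    """Consistent mass matrix of a linear bar element with density rho(x)."""
    gp, gw = np.polynomial.legendre.leggauss(n_gauss)   # points/weights on [-1, 1]
    L = x2 - x1
    M = np.zeros((2, 2))
    for xi, w in zip(gp, gw):
        N = np.array([(1 - xi) / 2.0, (1 + xi) / 2.0])  # linear shape functions
        x = x1 + (xi + 1.0) / 2.0 * L                   # map to the physical coordinate
        M += w * rho(x) * area * np.outer(N, N) * (L / 2.0)  # Jacobian L/2
    return M

def rho(x):
    return 2700.0 * (1.0 + 0.5 * x)                     # linearly graded density (fictitious)

print(consistent_mass_bar(0.0, 1.0, rho))
```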
The Virtual Solar Observatory: Still a Small Box
NASA Technical Reports Server (NTRS)
Gurman, J. B.; Bogart, R. S.; Davey, A. R.; Dimitoglou, G.; Hill, F.; Hourcle, J. A.; Martens, P. C.; Surez-Sola, I.; Tian, K. Q.; Wampler, S.
2005-01-01
Two and a half years after a design study began, and a year and a half after development commenced, version 1.0 of the Virtual Solar Observatory (VSO) was released at the 2004 Fall AGU meeting. Although internal elements of the VSO have changed, the basic design has remained the same, reflecting the team's belief in the importance of a simple, robust mechanism for registering data provider holdings, initiating queries at the appropriate provider sites, aggregating the responses, allowing the user to iterate before making a final selection, and enabling the delivery of data directly from the providers. In order to make the VSO transparent, lightweight, and portable, the developers employed XML for the registry, SOAP for communication between a VSO instance and data services, and HTML for the graphical user interfaces (GUIs). We discuss the internal data model, the API, and user responses to various trial GUIs as typical design issues for any virtual observatory. We also discuss the role of the "small box" of data search, identification, and delivery services provided by the VSO in the larger, Sun-Solar System Connection virtual observatory (VxO) scheme.
Wilson, Mandy L; Okumoto, Sakiko; Adam, Laura; Peccoud, Jean
2014-01-15
Expression vectors used in different biotechnology applications are designed with domain-specific rules. For instance, promoters, origins of replication or homologous recombination sites are host-specific. Similarly, chromosomal integration or viral delivery of an expression cassette imposes specific structural constraints. As de novo gene synthesis and synthetic biology methods permeate many biotechnology specialties, the design of application-specific expression vectors becomes the new norm. In this context, it is desirable to formalize vector design strategies applicable in different domains. Using the design of constructs to express genes in the chloroplast of Chlamydomonas reinhardtii as an example, we show that a vector design strategy can be formalized as a domain-specific language. We have developed a graphical editor of context-free grammars usable by biologists without prior exposure to language theory. This environment makes it possible for biologists to iteratively improve their design strategies throughout the course of a project. It is also possible to ensure that vectors designed with early iterations of the language are consistent with the latest iteration of the language. The context-free grammar editor is part of the GenoCAD application. A public instance of GenoCAD is available at http://www.genocad.org. GenoCAD source code is available from SourceForge and licensed under the Apache v2.0 open source license.
Iterative Addition of Kinetic Effects to Cold Plasma RF Wave Solvers
NASA Astrophysics Data System (ADS)
Green, David; Berry, Lee; RF-SciDAC Collaboration
2017-10-01
The hot nature of fusion plasmas requires a wave-vector-dependent conductivity tensor for accurate calculation of wave heating and current drive. Traditional methods for calculating the linear, kinetic full-wave plasma response rely on a spectral method such that the wave-vector-dependent conductivity fits naturally within the numerical method. These methods have seen much success for application to the well-confined core plasma of tokamaks. However, quantitative prediction of high-power RF antenna designs for fusion applications has meant a requirement of resolving the geometric details of the antenna and other plasma-facing surfaces, for which the Fourier spectral method is ill-suited. An approach to enabling the addition of kinetic effects to the more versatile finite-difference and finite-element cold-plasma full-wave solvers was presented previously, in which an operator-split iterative method was outlined. Here we expand on this approach, examine convergence, and present a simplified kinetic current estimator for rapidly updating the right-hand side of the wave equation with kinetic corrections. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
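The operator-split idea can be caricatured with a generic linear fixed-point sketch: a stand-in "cold-plasma" matrix is solved repeatedly while a small synthetic "kinetic" correction term, evaluated from the previous field iterate, updates the right-hand side. All matrices below are placeholders, not a plasma physics model.

```python
# Purely schematic sketch of the operator-split iteration: solve the cheap
# operator A_cold repeatedly with a RHS that carries the "kinetic" correction
# K @ E computed from the previous iterate, until the field stops changing.
import numpy as np

rng = np.random.default_rng(2)
n = 50
A_cold = np.diag(4.0 + rng.random(n)) + 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)
K = 0.2 * rng.standard_normal((n, n)) / n      # stand-in for the kinetic response
b = rng.standard_normal(n)                     # antenna source term

E = np.zeros(n)
for it in range(100):
    E_new = np.linalg.solve(A_cold, b + K @ E) # cold solve with kinetic RHS correction
    if np.linalg.norm(E_new - E) < 1e-10:
        break
    E = E_new
print("iterations:", it, " full-operator residual:",
      np.linalg.norm((A_cold - K) @ E - b))
```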
Conductor analysis of the ITER FEAT poloidal field coils during a plasma scenario
NASA Astrophysics Data System (ADS)
Nicollet, S.; Hertout, P.; Duchateau, J. L.; Bleyer, A.; Bessette, D.
2002-05-01
In the framework of the ITER (International Thermonuclear Experimental Reactor) FEAT (Fusion Energy Advanced Tokamak) project, a fully superconducting PF (Poloidal Field) system has been designed in detail. The Central Solenoid and the 6 equilibrium coils constituting the PF system provide the magnetic fields which develop, shape and control the 15 MA plasma during the 1800 s of a typical plasma scenario. The 6 PF coils will be wound two-in-hand from a 45 kA niobium-titanium CICC (Cable-In-Conduit Conductor). These coils will experience severe heat loads, especially during the 400 s of the plasma burn: nuclear heating due to the 400 MW of fusion power, thermal radiation and AC losses (30 to 300 kJ). The AC losses along the PF coil pancakes are deduced from accurate magnetic field computations performed with a 3D magnetostatic code, TRAPS. The nuclear heating and the thermal radiation are assumed to be uniform over a given face of the PF coils. These heat loads are used as input to perform the thermal and hydraulic analysis with a finite element code, GANDALF. The temperature increases (0.1 to 0.4 K) are computed, and the margins and performance of the conductor are evaluated.
To Boldly Go Where No Man has Gone Before: Seeking Gaia's Astrometric Solution with AGIS
NASA Astrophysics Data System (ADS)
Lammers, U.; Lindegren, L.; O'Mullane, W.; Hobbs, D.
2009-09-01
Gaia is ESA's ambitious space astrometry mission with a foreseen launch date in late 2011. Its main objective is to perform a stellar census of the 1,000 million brightest objects in our galaxy (completeness to V=20 mag) from which an astrometric catalog of micro-arcsec (μas) level accuracy will be constructed. A key element in this endeavor is the Astrometric Global Iterative Solution (AGIS) - the mathematical and numerical framework for combining the ≈80 available observations per star obtained during Gaia's 5 yr lifetime into a single global astrometric solution. AGIS consists of four main algorithmic cores which improve the source astrometric parameters, satellite attitude, calibration, and global parameters in a block-iterative manner. We present and discuss this basic scheme, the algorithms themselves and the overarching system architecture. The latter is a data-driven distributed processing framework designed to achieve an overall system performance that is not I/O limited. AGIS is being developed as a pure Java system by a small number of geographically distributed European groups. We present some of the software engineering aspects of the project and describe the methodologies and tools used. Finally we will briefly discuss how AGIS is embedded into the overall Gaia data processing architecture.
NASA Astrophysics Data System (ADS)
Simonetto, A.; Platania, P.; Garavaglia, S.; Gittini, G.; Granucci, G.; Pallotta, F.
2018-02-01
Plasma position reflectometry for ITER requires interfaces between in-vessel and ex-vessel waveguides. An ultra broadband interface (15-75 GHz) was designed between moderately oversized rectangular waveguide (20 × 12 mm), operated in TE01 (i.e., tall waveguide mode), and circular corrugated waveguide, with 88.9-mm internal diameter, propagating HE11. The interface was designed both as a sequence of waveguide components and as a quasi-optical confocal telescope. The design and the simulated performance are described for both concepts. The latter one requires more space but has better performance, and shall be prototyped.
Heinsch, Stephen C.; Das, Siba R.; Smanski, Michael J.
2018-01-01
Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems. PMID:29535690
Fast divide-and-conquer algorithm for evaluating polarization in classical force fields
NASA Astrophysics Data System (ADS)
Nocito, Dominique; Beran, Gregory J. O.
2017-03-01
Evaluation of the self-consistent polarization energy forms a major computational bottleneck in polarizable force fields. In large systems, the linear polarization equations are typically solved iteratively with techniques based on Jacobi iterations (JI) or preconditioned conjugate gradients (PCG). Two new variants of JI are proposed here that exploit domain decomposition to accelerate the convergence of the induced dipoles. The first, divide-and-conquer JI (DC-JI), is a block Jacobi algorithm which solves the polarization equations within non-overlapping sub-clusters of atoms directly via Cholesky decomposition, and iterates to capture interactions between sub-clusters. The second, fuzzy DC-JI, achieves further acceleration by employing overlapping blocks. Fuzzy DC-JI is analogous to an additive Schwarz method, but with distance-based weighting when averaging the fuzzy dipoles from different blocks. Key to the success of these algorithms is the use of K-means clustering to identify natural atomic sub-clusters automatically for both algorithms and to determine the appropriate weights in fuzzy DC-JI. The algorithm employs knowledge of the 3-D spatial interactions to group important elements in the 2-D polarization matrix. When coupled with direct inversion in the iterative subspace (DIIS) extrapolation, fuzzy DC-JI/DIIS in particular converges in a comparable number of iterations as PCG, but with lower computational cost per iteration. In the end, the new algorithms demonstrated here accelerate the evaluation of the polarization energy by 2-3 fold compared to existing implementations of PCG or JI/DIIS.
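A schematic of the block-Jacobi (DC-JI) idea on a generic symmetric positive-definite system standing in for the polarization equations is shown below; contiguous blocks replace the K-means clusters, and the DIIS extrapolation and fuzzy overlapping blocks are omitted. The matrices are synthetic and the sketch is not the authors' force-field code.

```python
# Schematic block-Jacobi iteration: each diagonal block is Cholesky-factorized
# once and solved directly, while couplings between blocks enter through the
# right-hand side and are resolved iteratively.
import numpy as np

rng = np.random.default_rng(0)
n, block = 60, 15
R = rng.standard_normal((n, n))
A = 0.3 * (R + R.T) + n * np.eye(n)        # symmetric, diagonally dominant, hence SPD
b = rng.standard_normal(n)

blocks = [slice(i, i + block) for i in range(0, n, block)]
chol = [np.linalg.cholesky(A[s, s]) for s in blocks]   # factor each block once

x = np.zeros(n)
for it in range(50):
    x_new = np.empty_like(x)
    for s, L in zip(blocks, chol):
        r = b[s] - A[s, :] @ x + A[s, s] @ x[s]        # RHS seen by this block
        y = np.linalg.solve(L, r)                      # forward substitution
        x_new[s] = np.linalg.solve(L.T, y)             # backward substitution
    x = x_new
    if np.linalg.norm(b - A @ x) < 1e-10:
        break
print("iterations:", it + 1, " residual:", np.linalg.norm(b - A @ x))
```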
Design requirements for plasma facing materials in ITER
NASA Astrophysics Data System (ADS)
Matera, R.; Federici, G.; ITER Joint Central Team
1996-10-01
After the official approval of the Interim Design Report, the ITER project enters the final phase of the EDA. With the definition of the design requirements of the high heat flux components, the working domain of the structural and armor materials is better specified, allowing the R&D program to be focused on the most critical issues and the design of divertor and first wall components to be oriented towards those concepts which potentially have a better chance to withstand normal and off-normal operating conditions. Among the latter, slow, high-power, high-recycling transients are at present driving the design of the high heat flux components. Examples of possible design solutions under experimental validation in the R&D program are presented and discussed in this paper.
Roux, Emmanuel; Ramalli, Alessandro; Tortoli, Piero; Cachard, Christian; Robini, Marc C; Liebgott, Herve
2016-12-01
Full matrix arrays are excellent tools for 3-D ultrasound imaging, but the required number of active elements is too high to be individually controlled by an equal number of scanner channels. The number of active elements is significantly reduced by the sparse array techniques, but the position of the remaining elements must be carefully optimized. This issue is faced here by introducing novel energy functions in the simulated annealing (SA) algorithm. At each iteration step of the optimization process, one element is freely translated and the associated radiated pattern is simulated. To control the pressure field behavior at multiple depths, three energy functions inspired by the pressure field radiated by a Blackman-tapered spiral array are introduced. Such energy functions aim at limiting the main lobe width while lowering the side lobe and grating lobe levels at multiple depths. Numerical optimization results illustrate the influence of the number of iterations, pressure measurement points, and depths, as well as the influence of the energy function definition on the optimized layout. It is also shown that performance close to or even better than the one provided by a spiral array, here assumed as reference, may be obtained. The finite-time convergence properties of SA allow the duration of the optimization process to be set in advance.
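The annealing loop itself can be sketched as follows, with a drastically simplified cost: a 1D aperture, a far-field array factor in place of the multi-depth harmonic pressure fields, and the peak sidelobe level as the energy. Element count, step size, main-lobe mask, and cooling schedule are arbitrary assumptions, not the authors' energy functions.

```python
# Hedged sketch of simulated annealing for a sparse layout: translate one
# element per step and accept or reject the move with the Metropolis rule.
import numpy as np

rng = np.random.default_rng(3)
wavelength = 1.0
aperture = 30.0 * wavelength
n_elem = 24
pos = np.sort(rng.uniform(0.0, aperture, n_elem))      # initial sparse layout
angles = np.linspace(-np.pi / 2, np.pi / 2, 1501)
k = 2 * np.pi / wavelength
main = np.abs(angles) < 0.03                           # crude main-lobe mask (rad)

def energy(p):
    af = np.abs(np.exp(1j * k * np.outer(np.sin(angles), p)).sum(axis=1)) / len(p)
    return 20.0 * np.log10(af[~main].max())            # peak sidelobe level, dB

T, cooling = 3.0, 0.995
E = energy(pos)
for step in range(2500):
    trial = pos.copy()
    i = rng.integers(n_elem)
    trial[i] = np.clip(trial[i] + rng.normal(0.0, 0.5 * wavelength), 0.0, aperture)
    dE = energy(trial) - E
    if dE < 0 or rng.random() < np.exp(-dE / T):       # Metropolis acceptance
        pos, E = trial, E + dE
    T *= cooling
print("final peak sidelobe level: %.1f dB" % E)
```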
Engineering Design of ITER Prototype Fast Plant System Controller
NASA Astrophysics Data System (ADS)
Goncalves, B.; Sousa, J.; Carvalho, B.; Rodrigues, A. P.; Correia, M.; Batista, A.; Vega, J.; Ruiz, M.; Lopez, J. M.; Rojo, R. Castro; Wallander, A.; Utzel, N.; Neto, A.; Alves, D.; Valcarcel, D.
2011-08-01
The ITER control, data access and communication (CODAC) design team identified the need for two types of plant systems. A slow control plant system is based on industrial automation technology with maximum sampling rates below 100 Hz, and a fast control plant system is based on embedded technology with higher sampling rates and more stringent real-time requirements than that required for slow controllers. The latter is applicable to diagnostics and plant systems in closed-control loops whose cycle times are below 1 ms. Fast controllers will be dedicated industrial controllers with the ability to supervise other fast and/or slow controllers, interface to actuators and sensors and, if necessary, high performance networks. Two prototypes of a fast plant system controller specialized for data acquisition and constrained by ITER technological choices are being built using two different form factors. This prototyping activity contributes to the Plant Control Design Handbook effort of standardization, specifically regarding fast controller characteristics. Envisaging a general purpose fast controller design, diagnostic use cases with specific requirements were analyzed and will be presented along with the interface with CODAC and sensors. The requirements and constraints that real-time plasma control imposes on the design were also taken into consideration. Functional specifications and technology neutral architecture, together with its implications on the engineering design, were considered. The detailed engineering design compliant with ITER standards was performed and will be discussed in detail. Emphasis will be given to the integration of the controller in the standard CODAC environment. Requirements for the EPICS IOC providing the interface to the outside world, the prototype decisions on form factor, real-time operating system, and high-performance networks will also be discussed, as well as the requirements for data streaming to CODAC for visualization and archiving.
Evaluation of ITER MSE Viewing Optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, S; Lerner, S; Morris, K
2007-03-26
The Motional Stark Effect (MSE) diagnostic on ITER determines the local plasma current density by measuring the polarization angle of light resulting from the interaction of a high energy neutral heating beam and the tokamak plasma. This light signal has to be transmitted from the edge and core of the plasma to a polarization analyzer located in the port plug. The optical system should either preserve the polarization information, or it should be possible to reliably calibrate any changes induced by the optics. This LLNL Work for Others project for the US ITER Project Office (USIPO) is focused on the design of the viewing optics for both the edge and core MSE systems. Several design constraints were considered, including: image quality, lack of polarization aberrations, ease of construction and cost of mirrors, neutron shielding, and geometric layout in the equatorial port plugs. The edge MSE optics are located in ITER equatorial port 3 and view Heating Beam 5, and the core system is located in equatorial port 1 viewing heating beam 4. The current work is an extension of previous preliminary design work completed by the ITER central team (ITER resources were not available to complete a detailed optimization of this system, and then the MSE was assigned to the US). The optimization of the optical systems at this level was done with the ZEMAX optical ray tracing code. The final LLNL designs decreased the "blur" in the optical system by nearly an order of magnitude, and the polarization blur was reduced by a factor of 3. The mirror sizes were reduced with an estimated cost savings of a factor of 3. The throughput of the system was greater than or equal to the previous ITER design. It was found that optical ray tracing was necessary to accurately measure the throughput. Metal mirrors, while they can introduce polarization aberrations, were used close to the plasma because of the anticipated high heat, particle, and neutron loads. These mirrors formed an intermediate image that then was relayed out of the port plug with more ideal (dielectric) mirrors. Engineering models of the optics, port plug, and neutral beam geometry were also created, using the CATIA ITER models. Two video conference calls with the USIPO provided valuable design guidelines, such as the minimum distance of the first optic from the plasma. A second focus of the project was the calibration of the system. Several different techniques are proposed, both before and during plasma operation. Fixed and rotatable polarizers would be used to characterize the system in the no-plasma case. Obtaining the full modulation spectrum from the polarization analyzer allows measurement of polarization effects and also MHD plasma phenomena. Light from neutral beam interaction with deuterium gas (no plasma) has been found useful to determine the wavelength of each spatial channel. The status of the optical design for the edge (upper) and core (lower) systems is included in the following figure. Several issues should be addressed by a follow-on study, including whether the optical labyrinth has sufficient neutron shielding and a detailed polarization characterization of actual mirrors.
NASA Astrophysics Data System (ADS)
Bryson, Dean Edward
A model's level of fidelity may be defined as its accuracy in faithfully reproducing a quantity or behavior of interest of a real system. Increasing the fidelity of a model often goes hand in hand with increasing its cost in terms of time, money, or computing resources. The traditional aircraft design process relies upon low-fidelity models for expedience and resource savings. However, the reduced accuracy and reliability of low-fidelity tools often lead to the discovery of design defects or inadequacies late in the design process. These deficiencies result either in costly changes or the acceptance of a configuration that does not meet expectations. The unknown opportunity cost is the discovery of superior vehicles that leverage phenomena unknown to the designer and not illuminated by low-fidelity tools. Multifidelity methods attempt to blend the increased accuracy and reliability of high-fidelity models with the reduced cost of low-fidelity models. In building surrogate models, where mathematical expressions are used to cheaply approximate the behavior of costly data, low-fidelity models may be sampled extensively to resolve the underlying trend, while high-fidelity data are reserved to correct inaccuracies at key locations. Similarly, in design optimization a low-fidelity model may be queried many times in the search for new, better designs, with a high-fidelity model being exercised only once per iteration to evaluate the candidate design. In this dissertation, a new multifidelity, gradient-based optimization algorithm is proposed. It differs from the standard trust region approach in several ways, stemming from the new method maintaining an approximation of the inverse Hessian, that is the underlying curvature of the design problem. Whereas the typical trust region approach performs a full sub-optimization using the low-fidelity model at every iteration, the new technique finds a suitable descent direction and focuses the search along it, reducing the number of low-fidelity evaluations required. This narrowing of the search domain also alleviates the burden on the surrogate model corrections between the low- and high-fidelity data. Rather than requiring the surrogate to be accurate in a hyper-volume bounded by the trust region, the model needs only to be accurate along the forward-looking search direction. Maintaining the approximate inverse Hessian also allows the multifidelity algorithm to revert to high-fidelity optimization at any time. In contrast, the standard approach has no memory of the previously-computed high-fidelity data. The primary disadvantage of the proposed algorithm is that it may require modifications to the optimization software, whereas standard optimizers may be used as black-box drivers in the typical trust region method. A multifidelity, multidisciplinary simulation of aeroelastic vehicle performance is developed to demonstrate the optimization method. The numerical physics models include body-fitted Euler computational fluid dynamics; linear, panel aerodynamics; linear, finite-element computational structural mechanics; and reduced, modal structural bases. A central element of the multifidelity, multidisciplinary framework is a shared parametric, attributed geometric representation that ensures the analysis inputs are consistent between disciplines and fidelities. The attributed geometry also enables the transfer of data between disciplines. 
The new optimization algorithm, a standard trust region approach, and a single-fidelity quasi-Newton method are compared for a series of analytic test functions, using both polynomial chaos expansions and kriging to correct discrepancies between fidelity levels of data. In the aggregate, the new method requires fewer high-fidelity evaluations than the trust region approach in 51% of cases, and the same number of evaluations in 18%. The new approach also requires fewer low-fidelity evaluations, by up to an order of magnitude, in almost all cases. The efficacy of both multifidelity methods compared to single-fidelity optimization depends significantly on the behavior of the high-fidelity model and the quality of the low-fidelity approximation, though savings are realized in a large number of cases. The multifidelity algorithm is also compared to the single-fidelity quasi-Newton method for complex aeroelastic simulations. The vehicle design problem includes variables for planform shape, structural sizing, and cruise condition with constraints on trim and structural stresses. Considering the objective function reduction versus computational expenditure, the multifidelity process performs better in three of four cases in early iterations. However, the enforcement of a contracting trust region slows the multifidelity progress. Even so, leveraging the approximate inverse Hessian, the optimization can be seamlessly continued using high-fidelity data alone. Ultimately, the proposed new algorithm produced better designs in all four cases. Investigating the return on investment in terms of design improvement per computational hour confirms that the multifidelity advantage is greatest in early iterations, and managing the transition to high-fidelity optimization is critical.
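A toy sketch of the central mechanism, under heavy simplification: a BFGS-style approximate inverse Hessian is updated from high-fidelity gradients, while the one-dimensional search along each descent direction is carried out on an additively corrected low-fidelity model. The test functions, correction, and step logic are illustrative, not the dissertation's algorithm or its trust-region management.

```python
# Toy sketch (not the dissertation's algorithm): quasi-Newton outer loop with
# high-fidelity gradients, line search on a corrected low-fidelity surrogate.
import numpy as np

def f_hi(x):  # high-fidelity objective (Rosenbrock-like, illustrative)
    return (x[0] - 1) ** 2 + 10 * (x[1] - x[0] ** 2) ** 2

def g_hi(x):  # its gradient
    return np.array([2 * (x[0] - 1) - 40 * x[0] * (x[1] - x[0] ** 2),
                     20 * (x[1] - x[0] ** 2)])

def f_lo(x):  # cheap low-fidelity approximation with a systematic offset
    return (x[0] - 1) ** 2 + 8 * (x[1] - x[0] ** 2) ** 2 + 0.3

x, H = np.array([-1.0, 1.0]), np.eye(2)        # start point, approximate inverse Hessian
for it in range(30):
    g = g_hi(x)
    if np.linalg.norm(g) < 1e-6:
        break
    d = -H @ g                                 # quasi-Newton descent direction
    corr = f_hi(x) - f_lo(x)                   # additive correction at the current point
    alphas = np.linspace(0.05, 1.0, 20)
    alpha = alphas[np.argmin([f_lo(x + a * d) + corr for a in alphas])]
    x_new = x + alpha * d
    s, y = x_new - x, g_hi(x_new) - g          # BFGS update from high-fidelity gradients
    if s @ y > 1e-12:
        rho, I = 1.0 / (s @ y), np.eye(2)
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
    x = x_new
print("iterations:", it, " x =", x, " f_hi(x) =", f_hi(x))
```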
NASA Technical Reports Server (NTRS)
Boyer, Charles M.; Jackson, Trevor P.; Beyon, Jeffrey Y.; Petway, Larry B.
2013-01-01
Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via the Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration reduced the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky, over-performing components with smaller components custom-designed for the power system. Collaboration on mechanical placement reduced potential electromagnetic interference (EMI). Through application of the newly selected electrical components and thermal analysis data, a total redesign of the electronic chassis was accomplished. An innovative forced-convection tunnel heat sink was employed to meet and exceed project requirements for cooling, mass reduction, and volume reduction. Functionality was a key concern in making efficient use of airflow, and accessibility was also imperative to allow for servicing of the chassis internals. The collaborative process provided for accelerated design maturation with substantiated function.
An impact analysis of a flexible bat using an iterative solver.
Penrose, J M; Hose, D R
1999-08-01
Although technology has now infiltrated and prompted evolution in most mass participation sports, the advances in bat technology in such sports as baseball and cricket have been relatively minor. In this study, we used a simple finite element modelling approach to try to shed new light upon the underlying mechanics of the bat-ball impact, with a view to the future optimization of bat design. The analysis of a flexible bat showed that the point of impact that produced the maximum post-impact ball velocity was a function of the bat's vibrational properties and was not necessarily at the centre of percussion. The details of the analysis agreed well with traditional Hertzian impact theory, and broadly with empirical data. An inspection of the relative modal contributions to the deformations during impact also showed that the position of the node of the first flexure mode was important. In conclusion, considerable importance should be attached to the bat's vibrational properties in future design and analysis.
Study on the criterion to determine the bottom deployment modes of a coilable mast
NASA Astrophysics Data System (ADS)
Ma, Haibo; Huang, Hai; Han, Jianbin; Zhang, Wei; Wang, Xinsheng
2017-12-01
A practical design criterion that allows the coilable mast bottom to deploy in local coil mode was proposed. The criterion was defined in terms of the initial bottom helical angle and obtained from bottom deformation analyses. Discretizing the longerons into short rods, the analyses were conducted based on the cylinder assumption and Kirchhoff's kinetic analogy theory. Then, iterative calculations for the bottom four rods were carried out. A critical bottom helical angle was obtained where the rate of change of the angle equals zero, and this critical value was defined as the criterion for judging the bottom deployment mode. Subsequently, micro-gravity deployment tests were carried out and bottom deployment simulations based on the finite element method were developed. Through comparisons of the bottom helical angles in the critical state, the proposed criterion was evaluated and modified: an initial bottom helical angle smaller than the critical value, with a design margin of -13.7%, ensures that the mast bottom deploys in local coil mode and, in turn, that the entire coilable mast achieves a successful local coil deployment.
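The rod-deformation model behind the criterion is not reproduced in the abstract; purely to illustrate how a critical bottom helical angle can be located as the zero of an angle-change-rate function, the sketch below applies a bracketing root finder to a placeholder function that stands in for the iterative calculation over the bottom rods. The function, bracket, and design-limit arithmetic are hypothetical apart from the -13.7% margin quoted above.

```python
import numpy as np
from scipy.optimize import brentq

def angle_change_rate(theta0_deg):
    """Placeholder for the rate of change of the bottom helical angle; the real function
    comes from the Kirchhoff-rod deformation analysis, not from this toy expression."""
    theta0 = np.radians(theta0_deg)
    return np.cos(2.2 * theta0) - 0.35 * theta0     # hypothetical, monotone over the bracket

# Critical initial helical angle: where the angle change rate crosses zero
theta_crit = brentq(angle_change_rate, 10.0, 60.0)
design_margin = -0.137                              # -13.7% margin reported in the paper
theta_design_max = theta_crit * (1.0 + design_margin)
print(f"critical angle ~ {theta_crit:.1f} deg, design limit ~ {theta_design_max:.1f} deg")
```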
Finite elements and finite differences for transonic flow calculations
NASA Technical Reports Server (NTRS)
Hafez, M. M.; Murman, E. M.; Wellford, L. C.
1978-01-01
The paper reviews the chief finite difference and finite element techniques used for numerical solution of nonlinear mixed elliptic-hyperbolic equations governing transonic flow. The forms of the governing equations for unsteady two-dimensional transonic flow considered are the Euler equation, the full potential equation in both conservative and nonconservative form, the transonic small-disturbance equation in both conservative and nonconservative form, and the hodograph equations for the small-disturbance case and the full-potential case. Finite difference methods considered include time-dependent methods, relaxation methods, semidirect methods, and hybrid methods. Finite element methods include finite element Lax-Wendroff schemes, implicit Galerkin method, mixed variational principles, dual iterative procedures, optimal control methods and least squares.
Parallel processing in finite element structural analysis
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1987-01-01
A brief review is made of the fundamental concepts and basic issues of parallel processing. Discussion focuses on parallel numerical algorithms, performance evaluation of machines and algorithms, and parallelism in finite element computations. A computational strategy is proposed for maximizing the degree of parallelism at different levels of the finite element analysis process including: 1) formulation level (through the use of mixed finite element models); 2) analysis level (through additive decomposition of the different arrays in the governing equations into the contributions to a symmetrized response plus correction terms); 3) numerical algorithm level (through the use of operator splitting techniques and application of iterative processes); and 4) implementation level (through the effective combination of vectorization, multitasking and microtasking, whenever available).
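As a minimal illustration of parallelism at one of the levels listed above, element-level computation, the sketch below evaluates hypothetical bar-element stiffness matrices concurrently and assembles them serially; it is a generic example, not the strategy proposed in the paper.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def element_stiffness(args):
    """2-node bar element stiffness in global coordinates (hypothetical example element)."""
    node_xy, EA = args
    (x1, y1), (x2, y2) = node_xy
    L = np.hypot(x2 - x1, y2 - y1)
    c, s = (x2 - x1) / L, (y2 - y1) / L
    T = np.array([[c, s, 0, 0], [0, 0, c, s]])        # global-to-local transformation
    k_local = (EA / L) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return T.T @ k_local @ T                          # 4x4 element matrix

def assemble(n_nodes, elements, k_list):
    K = np.zeros((2 * n_nodes, 2 * n_nodes))
    for (i, j), ke in zip(elements, k_list):
        dofs = [2 * i, 2 * i + 1, 2 * j, 2 * j + 1]
        K[np.ix_(dofs, dofs)] += ke
    return K

if __name__ == "__main__":
    nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    elements = [(0, 1), (1, 2), (2, 3), (0, 2)]       # node pairs
    tasks = [((nodes[i], nodes[j]), 210e9 * 1e-4) for i, j in elements]
    with ProcessPoolExecutor() as pool:               # element matrices computed in parallel
        k_list = list(pool.map(element_stiffness, tasks))
    K = assemble(len(nodes), elements, k_list)        # serial assembly of the global matrix
```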
Scientific and technical challenges on the road towards fusion electricity
NASA Astrophysics Data System (ADS)
Donné, A. J. H.; Federici, G.; Litaudon, X.; McDonald, D. C.
2017-10-01
The goal of the European Fusion Roadmap is to deliver fusion electricity to the grid early in the second half of this century. It breaks the quest for fusion energy into eight missions, and for each of them it describes a research and development programme to address all the open technical gaps in physics and technology and estimates the required resources. It points out the need to intensify industrial involvement and to seek all opportunities for collaboration outside Europe. The roadmap covers three periods: the short term, which runs parallel to the European Research Framework Programme Horizon 2020, the medium term and the long term. ITER is the key facility of the roadmap as it is expected to achieve most of the important milestones on the path to fusion power. Thus, the vast majority of present resources are dedicated to ITER and its accompanying experiments. The medium term is focussed on taking ITER into operation and bringing it to full power, as well as on preparing the construction of a demonstration power plant DEMO, which will for the first time demonstrate fusion electricity to the grid around the middle of this century. Building and operating DEMO is the subject of the last roadmap phase: the long term. Clearly, the Fusion Roadmap is tightly connected to the ITER schedule. Three key milestones are the first operation of ITER, the start of DT operation in ITER and reaching the full performance at which the thermal fusion power is 10 times the power put into the plasma. The Engineering Design Activity of DEMO needs to start a few years after the first ITER plasma, while the start of the construction phase will be a few years after ITER reaches full performance. In this way ITER can give viable input to the design and development of DEMO. Because the neutron fluence in DEMO will be much higher than in ITER, it is important to develop and validate materials that can handle these very high neutron loads. For the testing of the materials, a dedicated 14 MeV neutron source is needed. This DEMO Oriented Neutron Source (DONES) is therefore an important facility to support the fusion roadmap.
Genetic Constructor: An Online DNA Design Platform.
Bates, Maxwell; Lachoff, Joe; Meech, Duncan; Zulkower, Valentin; Moisy, Anaïs; Luo, Yisha; Tekotte, Hille; Franziska Scheitz, Cornelia Johanna; Khilari, Rupal; Mazzoldi, Florencio; Chandran, Deepak; Groban, Eli
2017-12-15
Genetic Constructor is a cloud Computer Aided Design (CAD) application developed to support synthetic biologists from design intent through DNA fabrication and experiment iteration. The platform allows users to design, manage, and navigate complex DNA constructs and libraries, using a new visual language that focuses on functional parts abstracted from sequence. Features like combinatorial libraries and automated primer design allow the user to separate design from construction by focusing on functional intent, and design constraints aid iterative refinement of designs. A plugin architecture enables contributions from scientists and coders to leverage existing powerful software and connect to DNA foundries. The software is easily accessible and platform agnostic, free for academics, and available in an open-source community edition. Genetic Constructor seeks to democratize DNA design, manufacture, and access to tools and services from the synthetic biology community.
NASA Astrophysics Data System (ADS)
Li, Zhifu; Hu, Yueming; Li, Di
2016-08-01
For a class of linear discrete-time uncertain systems, a feedback feed-forward iterative learning control (ILC) scheme is proposed, which comprises an iterative learning controller and two current-iteration feedback controllers. The iterative learning controller is used to improve the performance along the iteration direction, and the feedback controllers are used to improve the performance along the time direction. First, the uncertain feedback feed-forward ILC system is represented by an uncertain two-dimensional Roesser model. Then, two robust control schemes are proposed: one ensures that the feedback feed-forward ILC system is bounded-input bounded-output stable along the time direction, and the other ensures that it is asymptotically stable along the time direction. Both schemes guarantee that the system is robustly monotonically convergent along the iteration direction. Third, sufficient conditions for robust convergence are given in the form of a linear matrix inequality (LMI), which can also be used to determine the gain matrix of the feedback feed-forward iterative learning controller. Finally, simulation results are presented to demonstrate the effectiveness of the proposed schemes.
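The control laws and LMI-derived gains are not reproduced in the abstract; as a rough illustration of the feedback feed-forward structure only, the following sketch combines a generic P-type iterative learning update (iteration direction) with a current-iteration proportional feedback (time direction) on a hypothetical scalar plant. The plant, gains, and reference are invented for illustration.

```python
import numpy as np

# Hypothetical scalar plant: x[t+1] = a*x[t] + b*u[t], measured output y[t] = x[t+1]
a, b = 0.9, 0.5
T, n_trials = 50, 30
y_ref = np.sin(np.linspace(0, 2 * np.pi, T))      # desired trajectory

L_gain = 0.8    # learning gain: improves tracking along the iteration direction
K_fb = 0.6      # current-iteration feedback gain: acts along the time direction

u_ff = np.zeros(T)                                # learned feed-forward input
for k in range(n_trials):
    x, y = 0.0, np.zeros(T)
    for t in range(T):
        u_t = u_ff[t] + K_fb * (y_ref[t] - x)     # feedback part (time direction)
        x = a * x + b * u_t
        y[t] = x
    e = y_ref - y
    u_ff += L_gain * e                            # P-type ILC update (iteration direction)

print("final RMS tracking error:", np.sqrt(np.mean(e ** 2)))
```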
The Green House Model of Nursing Home Care in Design and Implementation.
Cohen, Lauren W; Zimmerman, Sheryl; Reed, David; Brown, Patrick; Bowers, Barbara J; Nolet, Kimberly; Hudak, Sandra; Horn, Susan
2016-02-01
To describe the Green House (GH) model of nursing home (NH) care, and examine how GH homes vary from the model, one another, and their founding (or legacy) NH. Data include primary quantitative and qualitative data and secondary quantitative data, derived from 12 GH/legacy NH organizations February 2012-September 2014. This mixed methods, cross-sectional study used structured interviews to obtain information about presence of, and variation in, GH-relevant structures and processes of care. Qualitative questions explored reasons for variation in model implementation. Interview data were analyzed using related-sample tests, and qualitative data were iteratively analyzed using a directed content approach. GH homes showed substantial variation in practices to support resident choice and decision making; neither GH nor legacy homes provided complete choice, and all GH homes excluded residents from some key decisions. GH homes were most consistent with the model and one another in elements to create a real home, such as private rooms and baths and open kitchens, and in staff-related elements, such as self-managed work teams and consistent, universal workers. Although variation in model implementation complicates evaluation, if expansion is to continue, it is essential to examine GH elements and their outcomes. © Health Research and Educational Trust.
NASA Astrophysics Data System (ADS)
Strack, O. D. L.
2018-02-01
We present equations for new limitless analytic line elements. These elements possess a virtually unlimited number of degrees of freedom. We apply these new limitless analytic elements to head-specified boundaries and to problems with inhomogeneities in hydraulic conductivity. Applications of these new analytic elements to practical problems involving head-specified boundaries require the solution of a very large number of equations. To make the new elements useful in practice, an efficient iterative scheme is required. We present an improved version of the scheme presented by Bandilla et al. (2007), based on the application of Cauchy integrals. The limitless analytic elements are useful when modeling strings of elements, rivers for example, where local conditions are difficult to model, e.g., when a well is close to a river. The solution of such problems is facilitated by increasing the order of the elements to obtain a good solution. This makes it unnecessary to resort to dividing the element in question into many smaller elements to obtain a satisfactory solution.
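The Cauchy-integral scheme itself is not reproduced in the abstract; as a structural toy analogue of an iterative analytic-element solution, the sketch below sweeps Gauss-Seidel style over point elements (wells) with specified potentials at control points. It is far simpler than the limitless line elements discussed above, but it shows the element-by-element iteration pattern. Locations, radii, and specified potentials are hypothetical.

```python
import numpy as np

wells = np.array([[0.0, 0.0], [50.0, 10.0], [90.0, -20.0]])   # element locations (m)
phi_spec = np.array([10.0, 12.0, 9.0])                        # specified potential at each element
r_w, R = 0.3, 1000.0                                          # well radius, reference distance
Q = np.zeros(len(wells))                                      # unknown element strengths

def potential(x, y, skip=None):
    """Potential from all elements except `skip`, evaluated at (x, y)."""
    phi = 0.0
    for j, (xj, yj) in enumerate(wells):
        if j == skip:
            continue
        r = max(np.hypot(x - xj, y - yj), r_w)
        phi += Q[j] / (2 * np.pi) * np.log(r / R)
    return phi

for sweep in range(100):                          # Gauss-Seidel iteration over elements
    max_change = 0.0
    for i, (xi, yi) in enumerate(wells):
        phi_others = potential(xi + r_w, yi, skip=i)
        Q_new = (phi_spec[i] - phi_others) * 2 * np.pi / np.log(r_w / R)
        max_change = max(max_change, abs(Q_new - Q[i]))
        Q[i] = Q_new
    if max_change < 1e-10:                        # stop when the strengths have converged
        break
```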
Stochastic Galerkin methods for the steady-state Navier–Stokes equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sousedík, Bedřich, E-mail: sousedik@umbc.edu; Elman, Howard C., E-mail: elman@cs.umd.edu
2016-07-01
We study the steady-state Navier–Stokes equations in the context of stochastic finite element discretizations. Specifically, we assume that the viscosity is a random field given in the form of a generalized polynomial chaos expansion. For the resulting stochastic problem, we formulate the model and linearization schemes using Picard and Newton iterations in the framework of the stochastic Galerkin method, and we explore properties of the resulting stochastic solutions. We also propose a preconditioner for solving the linear systems of equations arising at each step of the stochastic (Galerkin) nonlinear iteration and demonstrate its effectiveness for solving a set of benchmark problems.
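As a minimal, self-contained illustration of the viscosity representation assumed above (and not of the stochastic Galerkin Navier-Stokes solver or its preconditioner), the snippet below builds a one-variable Hermite-chaos expansion of the viscosity and checks its mean and variance by sampling; the coefficient values are hypothetical.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Viscosity as a one-variable generalized polynomial chaos (Hermite) expansion:
#   nu(xi) = sum_k nu_k * He_k(xi),  xi ~ N(0, 1)
nu_coeffs = [1.0e-2, 2.0e-3, 5.0e-4]          # hypothetical gPC coefficients nu_0, nu_1, nu_2

rng = np.random.default_rng(0)
xi = rng.standard_normal(100_000)
nu_samples = hermeval(xi, nu_coeffs)          # evaluate the expansion at the samples

# For probabilists' Hermite polynomials E[He_k^2] = k!, so the mean and variance follow
# directly from the coefficients; the Monte Carlo estimates should agree closely.
mean_exact = nu_coeffs[0]
var_exact = sum(c ** 2 * math.factorial(k) for k, c in enumerate(nu_coeffs) if k > 0)
print(mean_exact, nu_samples.mean())
print(var_exact, nu_samples.var())
```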
Relations between elliptic multiple zeta values and a special derivation algebra
NASA Astrophysics Data System (ADS)
Broedel, Johannes; Matthes, Nils; Schlotterer, Oliver
2016-04-01
We investigate relations between elliptic multiple zeta values (eMZVs) and describe a method to derive the number of indecomposable elements of given weight and length. Our method is based on representing eMZVs as iterated integrals over Eisenstein series and exploiting the connection with a special derivation algebra. Its commutator relations give rise to constraints on the iterated integrals over Eisenstein series relevant for eMZVs and thereby allow the indecomposable representatives to be counted. Conversely, the above connection suggests apparently new relations in the derivation algebra. At https://tools.aei.mpg.de/emzv we provide relations for eMZVs over a wide range of weights and lengths.
Thermal analysis of the in-vessel components of the ITER plasma-position reflectometry.
Quental, P B; Policarpo, H; Luís, R; Varela, P
2016-11-01
The ITER plasma position reflectometry system measures the edge electron density profile of the plasma, providing real-time supplementary contribution to the magnetic measurements of the plasma-wall distance. Some of the system components will be in direct sight of the plasma and therefore subject to plasma and stray radiation, which may cause excessive temperatures and stresses. In this work, thermal finite element analysis of the antenna and adjacent waveguides is conducted with ANSYS V17 (ANSYS® Academic Research, Release 17.0, 2016). Results allow the identification of critical temperature points, and solutions are proposed to improve the thermal behavior of the system.
Light scattering by tenuous particles - A generalization of the Rayleigh-Gans-Rocard approach
NASA Technical Reports Server (NTRS)
Acquista, C.
1976-01-01
We consider scattering by arbitrarily shaped particles that satisfy two conditions: (1) that the polarizability of the particle relative to the ambient medium be small compared to 1 and (2) that the phase shift introduced by the particle be less than 2. We solve the integro-differential equation proposed by Shifrin by using the method of successive iterations and then applying a Fourier transform. For the second iteration, results are presented that accurately describe scattering by a broad class of particles. The phase function and other elements of the scattering matrix are shown to be in excellent agreement with Mie theory for spherical scatterers.
A wave superposition method formulated in digital acoustic space
NASA Astrophysics Data System (ADS)
Hwang, Yong-Sin
In this thesis, a new formulation of the Wave Superposition method is proposed wherein the conventional mesh approach is replaced by a simple 3-D digital work space that easily accommodates shape optimization for minimizing or maximizing radiation efficiency. As sound quality is in demand in almost all product designs, and because of fierce competition between product manufacturers, a faster and more accurate computational method for shape optimization is always desired. Because the conventional Wave Superposition method relies solely on mesh geometry, it cannot accommodate fast shape changes in the design stage of a consumer product or machinery, where many iterations of shape changes are required. Since the use of a mesh hinders easy shape changes, a new approach for representing geometry is introduced by constructing a uniform lattice in a 3-D digital work space. A voxel (a portmanteau of the words volumetric and pixel) is essentially a volume element defined by the uniform lattice, and does not require separate connectivity information as a mesh element does. In the presented method, geometry is represented with voxels that can easily adapt to shape changes; it is therefore more suitable for shape optimization. The new method was validated by computing the radiated sound power of structures of simple and complex geometries and complex mode shapes. It was shown that matching volume velocity is a key component of an accurate analysis. A sensitivity study showed that at least 6 elements per acoustic wavelength were required, and a complexity study showed a minimal reduction in computational time.
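The sensitivity result of at least 6 elements per acoustic wavelength translates directly into a lattice resolution rule; a small sketch of that sizing calculation is given below, with the maximum frequency and bounding box chosen purely for illustration.

```python
import numpy as np

c_air = 343.0                       # speed of sound in air, m/s
elements_per_wavelength = 6         # resolution requirement reported in the study

def lattice_for(f_max_hz, bbox_m):
    """Voxel edge length and lattice shape for a given maximum frequency and bounding box."""
    wavelength = c_air / f_max_hz
    dx = wavelength / elements_per_wavelength
    shape = tuple(int(np.ceil(L / dx)) for L in bbox_m)
    return dx, shape

# Hypothetical 0.5 m x 0.3 m x 0.2 m radiating structure analyzed up to 4 kHz
dx, shape = lattice_for(4000.0, (0.5, 0.3, 0.2))
print(f"voxel size = {dx*1000:.1f} mm, lattice = {shape}, voxels = {np.prod(shape):,}")
```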
ITER Cryoplant Infrastructures
NASA Astrophysics Data System (ADS)
Fauve, E.; Monneret, E.; Voigt, T.; Vincent, G.; Forgeas, A.; Simon, M.
2017-02-01
The ITER Tokamak requires an average of 75 kW of refrigeration power at 4.5 K and 600 kW of refrigeration power at 80 K to maintain the nominal operating conditions of the ITER thermal shields, superconducting magnets and cryopumps. This is produced by the ITER Cryoplant, a complex cluster of refrigeration systems including in particular three identical Liquid Helium Plants and two identical Liquid Nitrogen Plants. Beyond the equipment directly part of the Cryoplant, colossal infrastructures are required. These infrastructures account for a large part of the Cryoplant's layout, budget and engineering efforts. It is the ITER Organization's responsibility to ensure that all infrastructures are adequately sized and designed to interface with the Cryoplant. This paper presents the overall architecture of the Cryoplant and provides orders of magnitude for the Cryoplant buildings and utilities: electricity, cooling water, and heating, ventilation and air conditioning (HVAC).
Conceptual design of the ITER fast-ion loss detector.
Garcia-Munoz, M; Kocan, M; Ayllon-Guerola, J; Bertalot, L; Bonnet, Y; Casal, N; Galdon, J; Garcia Lopez, J; Giacomin, T; Gonzalez-Martin, J; Gunn, J P; Jimenez-Ramos, M C; Kiptily, V; Pinches, S D; Rodriguez-Ramos, M; Reichle, R; Rivero-Rodriguez, J F; Sanchis-Sanchez, L; Snicker, A; Vayakis, G; Veshchev, E; Vorpahl, Ch; Walsh, M; Walton, R
2016-11-01
A conceptual design of a reciprocating fast-ion loss detector for ITER has been developed and is presented here. Fast-ion orbit simulations in a 3D magnetic equilibrium with an up-to-date first wall have been carried out to revise the measurement requirements for the lost alpha monitor in ITER. In agreement with recent observations, the simulations presented here suggest that a pitch-angle resolution of ∼5° might be necessary to identify the loss mechanisms. Synthetic measurements, including realistic lost alpha-particle, neutron and gamma fluxes, predict scintillator signal-to-noise levels measurable with standard light acquisition systems with the detector aperture ∼11 cm outside of the diagnostic first wall. At the measurement position, the heat load on the detector head is comparable to that in present devices.
In-vacuum sensors for the beamline components of the ITER neutral beam test facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalla Palma, M., E-mail: mauro.dallapalma@igi.cnr.it; Pasqualotto, R.; Spagnolo, S.
2016-11-15
Embedded sensors have been designed for installation on the components of the MITICA beamline, the prototype ITER neutral beam injector (Megavolt ITER Injector and Concept Advancement), to derive characteristics of the particle beam and to monitor the component conditions during operation for protection and thermal control. Along the beamline, the components interacting with the particle beam are the neutralizer, the residual ion dump, and the calorimeter. The design and the positioning of sensors on each component have been developed considering the expected beam-surface interaction including non-ideal and off-normal conditions. The arrangement of the following instrumentation is presented: thermal sensors, strain gages, electrostatic probes including secondary emission detectors, grounding shunt for electrical currents, and accelerometers.
Fixed-point Design of the Lattice-reduction-aided Iterative Detection and Decoding Receiver for Coded MIMO Systems
2011-01-01
Error-correcting codes such as Turbo Codes [2] and Low Density Parity Check (LDPC) codes [3] improve reliability; the challenge is to apply both MIMO and ECC in wireless systems. The report illustrates the performance of coded lattice-reduction (LR) aided detectors.
Fast generating Greenberger-Horne-Zeilinger state via iterative interaction pictures
NASA Astrophysics Data System (ADS)
Huang, Bi-Hua; Chen, Ye-Hong; Wu, Qi-Cheng; Song, Jie; Xia, Yan
2016-10-01
We delve a little deeper into the construction of shortcuts to adiabatic passage for three-level systems by iterative interaction pictures (multiple Schrödinger dynamics). As an application example, we use the deduced iteration-based shortcuts to rapidly generate the Greenberger-Horne-Zeilinger (GHZ) state in a three-atom system with the help of quantum Zeno dynamics. Numerical simulation shows that the dynamics designed by the iterative-picture method are physically feasible and that the shortcut scheme performs much better than the conventional adiabatic passage techniques. The influences of various decoherence processes are also discussed by numerical simulation, and the results show that the scheme is fast and robust against decoherence and operational imperfection.
Design, fabrication and test of block 4 design solar cell modules. Part 2: Residential module
NASA Technical Reports Server (NTRS)
Jester, T. L.
1982-01-01
Design, fabrication and test of the Block IV residential load module are reported. Design changes from the proposed module design through three iterations to the discontinuance of testing are outlined.
The PRIMA Test Facility: SPIDER and MITICA test-beds for ITER neutral beam injectors
NASA Astrophysics Data System (ADS)
Toigo, V.; Piovan, R.; Dal Bello, S.; Gaio, E.; Luchetta, A.; Pasqualotto, R.; Zaccaria, P.; Bigi, M.; Chitarin, G.; Marcuzzi, D.; Pomaro, N.; Serianni, G.; Agostinetti, P.; Agostini, M.; Antoni, V.; Aprile, D.; Baltador, C.; Barbisan, M.; Battistella, M.; Boldrin, M.; Brombin, M.; Dalla Palma, M.; De Lorenzi, A.; Delogu, R.; De Muri, M.; Fellin, F.; Ferro, A.; Fiorentin, A.; Gambetta, G.; Gnesotto, F.; Grando, L.; Jain, P.; Maistrello, A.; Manduchi, G.; Marconato, N.; Moresco, M.; Ocello, E.; Pavei, M.; Peruzzo, S.; Pilan, N.; Pimazzoni, A.; Recchia, M.; Rizzolo, A.; Rostagni, G.; Sartori, E.; Siragusa, M.; Sonato, P.; Sottocornola, A.; Spada, E.; Spagnolo, S.; Spolaore, M.; Taliercio, C.; Valente, M.; Veltri, P.; Zamengo, A.; Zaniol, B.; Zanotto, L.; Zaupa, M.; Boilson, D.; Graceffa, J.; Svensson, L.; Schunke, B.; Decamps, H.; Urbani, M.; Kushwah, M.; Chareyre, J.; Singh, M.; Bonicelli, T.; Agarici, G.; Garbuglia, A.; Masiello, A.; Paolucci, F.; Simon, M.; Bailly-Maitre, L.; Bragulat, E.; Gomez, G.; Gutierrez, D.; Mico, G.; Moreno, J.-F.; Pilard, V.; Kashiwagi, M.; Hanada, M.; Tobari, H.; Watanabe, K.; Maejima, T.; Kojima, A.; Umeda, N.; Yamanaka, H.; Chakraborty, A.; Baruah, U.; Rotti, C.; Patel, H.; Nagaraju, M. V.; Singh, N. P.; Patel, A.; Dhola, H.; Raval, B.; Fantz, U.; Heinemann, B.; Kraus, W.; Hanke, S.; Hauer, V.; Ochoa, S.; Blatchford, P.; Chuilon, B.; Xue, Y.; De Esch, H. P. L.; Hemsworth, R.; Croci, G.; Gorini, G.; Rebai, M.; Muraro, A.; Tardocchi, M.; Cavenago, M.; D'Arienzo, M.; Sandri, S.; Tonti, A.
2017-08-01
The ITER Neutral Beam Test Facility (NBTF), called PRIMA (Padova Research on ITER Megavolt Accelerator), is hosted in Padova, Italy and includes two experiments: MITICA, the full-scale prototype of the ITER heating neutral beam injector, and SPIDER, the full-size radio frequency negative-ions source. The NBTF realization and the exploitation of SPIDER and MITICA have been recognized as necessary to make the future operation of the ITER heating neutral beam injectors efficient and reliable, fundamental to the achievement of thermonuclear-relevant plasma parameters in ITER. This paper reports on design and R&D carried out to construct PRIMA, SPIDER and MITICA, and highlights the huge progress made in just a few years, from the signature of the agreement for the NBTF realization in 2011, up to now—when the buildings and relevant infrastructures have been completed, SPIDER is entering the integrated commissioning phase and the procurements of several MITICA components are at a well advanced stage.
Design Approaches to Support Preservice Teachers in Scientific Modeling
NASA Astrophysics Data System (ADS)
Kenyon, Lisa; Davis, Elizabeth A.; Hug, Barbara
2011-02-01
Engaging children in scientific practices is hard for beginning teachers. One such scientific practice with which beginning teachers may have limited experience is scientific modeling. We have iteratively designed preservice teacher learning experiences and materials intended to help teachers achieve learning goals associated with scientific modeling. Our work has taken place across multiple years at three university sites, with preservice teachers focused on early childhood, elementary, and middle school teaching. Based on results from our empirical studies supporting these design decisions, we discuss design features of our modeling instruction in each iteration. Our results suggest some successes in supporting preservice teachers in engaging students in modeling practice. We propose design principles that can guide science teacher educators in incorporating modeling in teacher education.
NASA Technical Reports Server (NTRS)
Marr, W. A., Jr.
1972-01-01
The behavior of finite element models employing different constitutive relations to describe the stress-strain behavior of soils is investigated. Three models, which assume small strain theory is applicable, include a nondilatant, a dilatant and a strain hardening constitutive relation. Two models are formulated using large strain theory and include a hyperbolic and a Tresca elastic perfectly plastic constitutive relation. These finite element models are used to analyze retaining walls and footings. Methods of improving the finite element solutions are investigated. For nonlinear problems better solutions can be obtained by using smaller load increment sizes and more iterations per load increment than by increasing the number of elements. Suitable methods of treating tension stresses and stresses which exceed the yield criteria are discussed.
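The remark about load-increment size and iterations per increment refers to the standard incremental-iterative solution of nonlinear finite element equations; a generic Newton-Raphson sketch of that loop is shown below for a hypothetical one-degree-of-freedom softening spring, not for the soil constitutive models studied in the report.

```python
import numpy as np

def incremental_newton(internal_force, tangent, f_ext, n_increments=10, max_iters=20, tol=1e-8):
    """Generic displacement solution by load increments with Newton iterations per increment."""
    u = np.zeros(f_ext.shape)
    for inc in range(1, n_increments + 1):
        f_target = (inc / n_increments) * f_ext          # apply the load in steps
        for it in range(max_iters):
            r = f_target - internal_force(u)             # out-of-balance (residual) force
            if np.linalg.norm(r) < tol * (1 + np.linalg.norm(f_target)):
                break
            du = np.linalg.solve(tangent(u), r)          # corrector from the tangent stiffness
            u = u + du
    return u

# Hypothetical 1-DOF softening "spring": f_int(u) = k*u / (1 + u)
k = 100.0
f_int = lambda u: k * u / (1.0 + u)
k_tan = lambda u: np.array([[k / (1.0 + u[0]) ** 2]])
u_sol = incremental_newton(f_int, k_tan, f_ext=np.array([40.0]))
print("converged displacement:", u_sol)
```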
Parametric Thermal and Flow Analysis of ITER Diagnostic Shield Module
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khodak, A.; Zhai, Y.; Wang, W.
As part of the diagnostic port plug assembly, the ITER Diagnostic Shield Module (DSM) is designed to provide mechanical support and the plasma shielding while allowing access to plasma diagnostics. Thermal and hydraulic analysis of the DSM was performed using a conjugate heat transfer approach, in which heat transfer was resolved in both solid and liquid parts, and simultaneously, fluid dynamics analysis was performed only in the liquid part. The ITER Diagnostic First Wall (DFW) and cooling tubing were also included in the analysis. This allowed direct modeling of the interface between DSM and DFW, and also direct assessment of the coolant flow distribution between the parts of DSM and DFW to ensure the DSM design meets the DFW cooling requirements. The design of the DSM included voids filled with Boron Carbide pellets, allowing weight reduction while keeping the shielding capability of the DSM. These voids were modeled as a continuous solid with smeared material properties using an analytical relation for thermal conductivity. Results of the analysis led to design modifications improving the heat transfer efficiency of the DSM. Furthermore, the effect of design modifications on thermal performance as well as the effect of Boron Carbide will be presented.
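The analytical relation used to smear the Boron Carbide-filled voids is not stated in the abstract; as a hedged stand-in, the snippet below applies the classical Maxwell-Eucken two-phase mixture formula with hypothetical property values for boron carbide pellets in a helium-filled void.

```python
def maxwell_eucken_k(k_cont, k_disp, v_disp):
    """Classical Maxwell-Eucken effective conductivity of a two-phase mixture.

    k_cont : conductivity of the continuous phase (W/m-K)
    k_disp : conductivity of the dispersed phase (W/m-K)
    v_disp : volume fraction of the dispersed phase (0..1)
    """
    num = 2 * k_cont + k_disp - 2 * v_disp * (k_cont - k_disp)
    den = 2 * k_cont + k_disp + v_disp * (k_cont - k_disp)
    return k_cont * num / den

# Hypothetical values: B4C pellets (~30 W/m-K) dispersed in a helium-filled void (~0.15 W/m-K)
# at a 60% packing fraction; the actual relation and data used for the DSM are not stated.
k_smeared = maxwell_eucken_k(k_cont=0.15, k_disp=30.0, v_disp=0.60)
print(f"smeared conductivity ~ {k_smeared:.2f} W/m-K")
```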
Mapping algorithm for freeform construction using non-ideal light sources
NASA Astrophysics Data System (ADS)
Li, Chen; Michaelis, D.; Schreiber, P.; Dick, L.; Bräuer, A.
2015-09-01
Using conventional mapping algorithms for the construction of illumination freeform optics, arbitrary target patterns can be obtained for idealized sources, e.g. collimated light or point sources. Each freeform surface element generates an image point at the target, and the light intensity of an image point corresponds to the area of the freeform surface element that generates it. For sources with a pronounced extension and ray divergence, e.g. an LED at a small source-to-freeform distance, the image points are blurred, and the blurred patterns may differ from point to point. In addition, due to Fresnel losses and vignetting, the relationship between the light intensity of the image points and the area of the freeform surface elements becomes complicated. These individual light distributions of each freeform element are taken into account in a mapping algorithm. To this end, steepest-descent procedures are used to adapt the mapping goal. A structured target pattern for an optics system with an ideal source is computed by applying the corresponding linear optimization matrices. A weighting factor and a smoothing factor are included in the procedure to achieve certain edge conditions and to ensure the manufacturability of the freeform surface. The corresponding linear optimization matrices, which are the light distribution patterns of the individual freeform surface elements, are obtained by conventional ray tracing with a realistic source. Nontrivial source geometries, such as LED irregularities due to bonding or fine source structures, and complex ray-divergence behavior can easily be considered. Additionally, Fresnel losses, vignetting and even stray light are taken into account. After optimization iterations with a realistic source, the initial mapping goal can be achieved by an optics system that produces a structured target pattern with an ideal source. The algorithm is applied to several design examples. A few simple tasks are presented to discuss the ability and limitations of this method. A homogeneous LED-illumination system design is also presented in which, with a strongly tilted incident direction, a homogeneous distribution is achieved with a rather compact optics system and a short working distance using a relatively large LED source. It is shown that the light distribution patterns from the individual freeform surface elements can differ significantly from one another. The generation of a structured target pattern, applying the weighting factor and smoothing factor, is discussed. Finally, freeform designs for much more complex sources, such as clusters of LEDs, are presented.
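The optimization step described above amounts to a weighted, smoothed linear problem built from the per-element light distribution patterns; the sketch below is a hedged, simplified 1-D stand-in (Gaussian element patterns, a weighting factor on the bright region, and a first-difference smoothing term), not the authors' algorithm. All matrices and factors are hypothetical.

```python
import numpy as np

n_bins, n_elems = 60, 20

# Hypothetical "linear optimization matrix": column j is the blurred target-plane flux
# pattern produced by freeform element j under the realistic (extended) source, as would
# be obtained from ray tracing.
centers = np.linspace(0, 1, n_elems)
x_bins = np.linspace(0, 1, n_bins)
A = np.exp(-0.5 * ((x_bins[:, None] - centers[None, :]) / 0.06) ** 2)
A /= A.sum(axis=0, keepdims=True)                 # each element delivers unit flux

b = np.where((x_bins > 0.25) & (x_bins < 0.75), 1.0, 0.2)   # desired target distribution
b /= b.sum()

w = np.where(b > 0.5 * b.max(), 3.0, 1.0)         # weighting factor: emphasize the bright region
lam = 0.05                                        # smoothing factor between neighboring elements
D = np.diff(np.eye(n_elems), axis=0)              # first-difference (smoothness) operator

# Weighted, smoothed linear least squares for the per-element flux budget; clip and
# renormalize to keep the budget physical.
A_aug = np.vstack([w[:, None] * A, np.sqrt(lam) * D])
b_aug = np.concatenate([w * b, np.zeros(n_elems - 1)])
f, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
f = np.clip(f, 0.0, None)
f *= b.sum() / (A @ f).sum()
print("weighted residual:", np.linalg.norm(w * (A @ f - b)))
```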
Fusion Breeding for Sustainable, Mid Century, Carbon Free Power
NASA Astrophysics Data System (ADS)
Manheimer, Wallace
2015-11-01
If ITER achieves Q ~10, it is still very far from useful fusion. The fusion power and the driver power will allow only a small amount of power to be delivered, <~50 MW for an ITER-scale tokamak. It is unlikely, considering ``conservative design rules,'' that tokamaks can ever be economical pure fusion power producers. Considering the status of other magnetic fusion concepts, it is also very unlikely that any alternate concept will be either. Laser fusion does not seem to be constrained by any conservative design rules, but considering the failure of NIF to achieve ignition, at this point it has many more obstacles to overcome than magnetic fusion. One way out of this dilemma is to use an ITER-size tokamak, or a NIF-size laser, as a fuel breeder for separate nuclear reactors. Hence ITER and NIF become ends in themselves, instead of steps toward a DEMO of uncertain form decades later. Such a tokamak can easily live within the constraints of conservative design rules. This has led the author to propose ``The Energy Park,'' a sustainable, carbon-free, economical, and environmentally viable power source without proliferation risk, in which one fusion breeder fuels five conventional nuclear reactors and one fast-neutron reactor burns the actinide wastes.
Design of the ITER Electron Cyclotron Heating and Current Drive Waveguide Transmission Line
NASA Astrophysics Data System (ADS)
Bigelow, T. S.; Rasmussen, D. A.; Shapiro, M. A.; Sirigiri, J. R.; Temkin, R. J.; Grunloh, H.; Koliner, J.
2007-11-01
The ITER ECH transmission line system is designed to deliver the power from twenty-four 1 MW 170 GHz gyrotrons and three 1 MW 127.5 GHz gyrotrons to the equatorial and upper launchers. Work on the performance requirements, the initial design of components, and the layout between the gyrotrons and the launchers is underway. Similar 63.5 mm ID corrugated waveguide systems have been built and installed on several fusion experiments; however, none have operated at the high frequency and long pulse lengths required for ITER. Prototype components are being tested at low power to estimate ohmic and mode-conversion losses. In order to develop and qualify the ITER components prior to procurement of the full set of 24 transmission lines, a 170 GHz high-power test of a complete prototype transmission line is planned. Testing of the transmission line at 1-2 MW can be performed with a modest-power (˜0.5 MW) tube in a low-loss (10-20%) resonant ring configuration. A 140 GHz long-pulse, 400 kW gyrotron will be used in the initial tests, and a 170 GHz gyrotron will be used when it becomes available. Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Dept. of Energy under contract DE-AC05-00OR22725.
Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua
2011-07-01
In this paper, a digital redesign methodology for the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory, even one not initially represented by the analytic reference model. To overcome the interference between subsystems and simplify the controller design, the proposed model reference decentralized adaptive control scheme first constructs a decoupled, well-designed reference model. Then, according to this model, the paper develops a digital decentralized adaptive tracker based on optimal analog control and the prediction-based digital redesign technique for the sampled-data large-scale coupled system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, iterative learning control (ILC) is applied to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has a robust closed-loop decoupling property but also possesses good tracking performance in both the transient and steady state. In addition, evolutionary programming is applied to search for a good learning gain to speed up the learning process of the ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
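The evolutionary search for the learning gain is only mentioned in passing above; as a self-contained toy (a (1+λ) evolutionary search over a scalar P-type ILC gain on a hypothetical first-order plant, not the authors' decentralized tracker), consider the following sketch. The plant parameters, reference, and mutation settings are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, T = 0.8, 0.4, 40
y_ref = np.linspace(0.0, 1.0, T)              # toy reference for a scalar plant

def ilc_error_after(L, trials=15):
    """RMS tracking error after a fixed number of P-type ILC trials with learning gain L."""
    u = np.zeros(T)
    for _ in range(trials):
        x, y = 0.0, np.zeros(T)
        for t in range(T):
            x = a * x + b * u[t]
            y[t] = x
        e = y_ref - y
        u = u + L * e                          # learning update between trials
    return float(np.sqrt(np.mean(e ** 2)))

# (1 + lambda) evolutionary search over the scalar learning gain
L_best = 0.1
cost_best = ilc_error_after(L_best)
for gen in range(30):
    candidates = L_best + 0.2 * rng.standard_normal(8)        # mutate the parent gain
    costs = [ilc_error_after(max(Lc, 0.0)) for Lc in candidates]
    if min(costs) < cost_best:
        cost_best = min(costs)
        L_best = max(candidates[int(np.argmin(costs))], 0.0)
print(f"best learning gain {L_best:.2f}, RMS error {cost_best:.4f}")
```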
Cold Test and Performance Evaluation of Prototype Cryoline-X
NASA Astrophysics Data System (ADS)
Shah, N.; Choukekar, K.; Kapoor, H.; Muralidhara, S.; Garg, A.; Kumar, U.; Jadon, M.; Dash, B.; Bhattachrya, R.; Badgujar, S.; Billot, V.; Bravais, P.; Cadeau, P.
2017-12-01
The multi-process-pipe, vacuum-jacketed cryolines for the ITER project are probably the world's most complex cryolines in terms of layout, load cases, quality, safety and regulatory requirements. As a risk mitigation plan, design, manufacturing and testing of a prototype cryoline (PTCL) was planned before the approval of the final design of the ITER cryolines. The 29-meter-long PTCL consists of 6 process pipes encased by a thermal shield inside a DN 600 outer vacuum jacket and carries cold helium at 4.5 K and 80 K. The global heat load limit was defined as 1.2 W/m at 4.5 K and 4.5 W/m at 80 K. The PTCL-X (PTCL for Group-X cryolines) was specified in detail by ITER-India and designed as well as manufactured by Air Liquide. PTCL-X was installed and tested at cryogenic temperature at the ITER-India Cryogenic Laboratory in 2016. The heat load, estimated using the enthalpy-difference method, was found to be approximately 0.8 W/m at 4.5 K and 4.2 W/m at 80 K, well within the defined limits. The thermal shield temperature profile was also found to be satisfactory. This paper summarizes the cold test results of PTCL-X.
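A minimal sketch of the enthalpy-difference estimate is shown below; the CoolProp property library, the flow rates, pressures and temperature rises are assumptions introduced for illustration and are not values reported for the PTCL-X test.

```python
from CoolProp.CoolProp import PropsSI

def heat_load_per_meter(m_dot, T_in, T_out, p_pa, length_m, fluid="Helium"):
    """Enthalpy-difference estimate of the distributed heat load on a cryoline, in W/m."""
    h_in = PropsSI("H", "T", T_in, "P", p_pa, fluid)    # specific enthalpy, J/kg
    h_out = PropsSI("H", "T", T_out, "P", p_pa, fluid)
    return m_dot * (h_out - h_in) / length_m

L = 29.0  # PTCL length, m
# Hypothetical operating points chosen only to illustrate the method:
q_45K = heat_load_per_meter(m_dot=0.010, T_in=4.5, T_out=4.9, p_pa=3.5e5, length_m=L)
q_80K = heat_load_per_meter(m_dot=0.020, T_in=80.0, T_out=81.0, p_pa=18e5, length_m=L)
print(f"~{q_45K:.2f} W/m at 4.5 K, ~{q_80K:.2f} W/m at 80 K")
```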
NASA Technical Reports Server (NTRS)
Henke, Luke
2010-01-01
The ICARE method is a flexible, widely applicable method for systems engineers to solve problems and resolve issues in a complete and comprehensive manner. The method can be tailored by diverse users for direct application to their function (e.g. system integrators, design engineers, technical discipline leads, analysts, etc.). The clever acronym, ICARE, instills the attitude of accountability, safety, technical rigor and engagement in the problem resolution: Identify, Communicate, Assess, Report, Execute (ICARE). This method was developed through observation of the Space Shuttle Propulsion Systems Engineering and Integration (PSE&I) office personnel's approach in an attempt to succinctly describe the actions of an effective systems engineer. Additionally it evolved from an effort to make a broadly-defined checklist for a PSE&I worker to perform their responsibilities in an iterative and recursive manner. The National Aeronautics and Space Administration (NASA) Systems Engineering Handbook states that engineering of NASA systems requires a systematic and disciplined set of processes that are applied recursively and iteratively for the design, development, operation, maintenance, and closeout of systems throughout the life cycle of the programs and projects. ICARE is a method that can be applied within the boundaries and requirements of NASA's systems engineering set of processes to provide an elevated sense of duty and responsibility to crew and vehicle safety. The importance of a disciplined set of processes and a safety-conscious mindset increases with the complexity of the system. Moreover, the larger the system and the larger the workforce, the more important it is to encourage the usage of the ICARE method as widely as possible. According to the NASA Systems Engineering Handbook, elements of a system can include people, hardware, software, facilities, policies and documents; all things required to produce system-level results, qualities, properties, characteristics, functions, behavior and performance. The ICARE method can be used to improve all elements of a system and, consequently, the system-level functional, physical and operational performance. Even though ICARE was specifically designed for a systems engineer, any person whose job is to examine another person, product, or process can use the ICARE method to improve effectiveness, implementation, usefulness, value, capability, efficiency, integration, design, and/or marketability. This paper provides the details of the ICARE method, emphasizing the method's application to systems engineering. In addition, a sample of other, non-systems engineering applications is briefly discussed to demonstrate how ICARE can be tailored to a variety of diverse jobs (from project management to parenting).
Cobetto, N; Aubin, C E; Parent, S; Clin, J; Barchi, S; Turgeon, I; Labelle, Hubert
2016-10-01
Clinical assessment of the immediate in-brace effect of braces designed using CAD/CAM and FEM vs. CAD/CAM alone for the conservative treatment of AIS, using a randomized, blinded and controlled study design. Forty AIS patients were prospectively recruited and randomized into two groups. For 19 patients (control group), the brace was designed using a scan of the patient's torso and a conventional CAD/CAM approach (CtrlBrace). For the 21 other patients (test group), the brace was additionally designed using finite element modeling (FEM) and 3D reconstructions of the spine, rib cage and pelvis (NewBrace). The NewBrace design was simulated and iteratively optimized to maximize the correction and minimize the contact surface and material. Both groups had comparable age, sex, weight, height, curve type and severity. Scoliosis Research Society standardized criteria for bracing were followed. The average Cobb angle prior to bracing was 27° and 28° for the main thoracic (MT) and lumbar (L) curves, respectively, for the control group, and 33° and 28° for the test group. CtrlBraces reduced MT and L curves by 8° (29 %) and 10° (40 %), respectively, compared with 14° (43 %) and 13° (46 %) for NewBraces, whose simulated corrections differed from the measured ones by less than 5°. NewBraces were 50 % thinner and had 20 % less covering surface than CtrlBraces. Braces designed with CAD/CAM and 3D FEM simulation were more efficient and lighter than standard CAD/CAM TLSOs at the first immediate in-brace evaluation. These results suggest that the long-term effect of bracing in AIS may be improved using this new platform for brace fabrication. NCT02285621.
Conceptual design of the DEMO neutral beam injectors: main developments and R&D achievements
NASA Astrophysics Data System (ADS)
Sonato, P.; Agostinetti, P.; Bolzonella, T.; Cismondi, F.; Fantz, U.; Fassina, A.; Franke, T.; Furno, I.; Hopf, C.; Jenkins, I.; Sartori, E.; Tran, M. Q.; Varje, J.; Vincenzi, P.; Zanotto, L.
2017-05-01
The objectives of the nuclear fusion power plant DEMO, to be built after the ITER experimental reactor, are usually understood to lie somewhere between those of ITER and a ‘first of a kind’ commercial plant. Hence, in DEMO the issues related to efficiency and RAMI (reliability, availability, maintainability and inspectability) are among the most important drivers for the design, as the cost of the electricity produced by this power plant will strongly depend on these aspects. In the framework of the EUROfusion Work Package Heating and Current Drive within the Power Plant Physics and Development activities, a conceptual design of the neutral beam injector (NBI) for the DEMO fusion reactor has been developed by Consorzio RFX in collaboration with other European research institutes. In order to improve efficiency and RAMI aspects, several innovative solutions have been introduced in comparison to the ITER NBI, mainly regarding the beam source, neutralizer and vacuum pumping systems.