Forward marching procedure for separated boundary-layer flows
NASA Technical Reports Server (NTRS)
Carter, J. E.; Wornom, S. F.
1975-01-01
A forward-marching procedure for separated boundary-layer flows is presented which permits the rapid and accurate solution of flows of limited extent. The streamwise convection of vorticity in the reversed flow region is neglected, and this approximation is incorporated into a previously developed (Carter, 1974) inverse boundary-layer procedure. The equations are solved by the Crank-Nicolson finite-difference scheme, in which column iteration is carried out at each streamwise station. Instabilities encountered in the column iterations are removed by introducing timelike terms into the finite-difference equations. This provides unconditional diagonal dominance and yields a column iterative scheme that is shown to be stable by von Neumann stability analysis.
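As a loose illustration of the stabilization idea this abstract describes (a timelike term added to the finite-difference equations so the column iteration matrix is strongly diagonally dominant), the sketch below applies it to a simple 1D Crank-Nicolson model problem rather than to the boundary-layer equations of Carter and Wornom; the grid sizes, pseudo-time step, and Gauss-Seidel sweep are illustrative assumptions.

```python
import numpy as np

# 1D model problem u_t = nu * u_xx, marched with Crank-Nicolson; each "station"
# is solved by a Gauss-Seidel column sweep whose diagonal is augmented by a
# timelike term 1/dtau, which guarantees strong diagonal dominance.
nx, nu, dx, dt = 51, 0.1, 1.0 / 50, 0.002
r = nu * dt / dx**2
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)               # initial profile, u = 0 at both walls

lower = upper = -0.5 * r            # tridiagonal Crank-Nicolson coefficients
diag = 1.0 + r
dtau = 0.5                          # pseudo-time step (illustrative assumption)

for station in range(100):                                   # marching loop
    rhs = (1.0 - r) * u[1:-1] + 0.5 * r * (u[2:] + u[:-2])   # explicit CN part
    v = u[1:-1].copy()
    for sweep in range(500):                                  # column iteration
        v_old = v.copy()
        for i in range(v.size):
            left = v[i - 1] if i > 0 else 0.0                 # wall values are zero
            right = v_old[i + 1] if i < v.size - 1 else 0.0
            v[i] = (rhs[i] + v_old[i] / dtau
                    - lower * left - upper * right) / (diag + 1.0 / dtau)
        if np.max(np.abs(v - v_old)) < 1e-12:
            break
    u[1:-1] = v
print("peak value after marching:", u.max())
```

Shrinking dtau strengthens the diagonal, and hence the sweep's stability, at the cost of more pseudo-time iterations per station; that trade-off is exactly what the timelike term introduces.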
Blacker, Teddy D.
1994-01-01
An automatic quadrilateral surface discretization method and apparatus is provided for automatically discretizing a geometric region without decomposing the region. The automated quadrilateral surface discretization method and apparatus automatically generates a mesh of all quadrilateral elements which is particularly useful in finite element analysis. The generated mesh of all quadrilateral elements is boundary sensitive, orientation insensitive and has few irregular nodes on the boundary. A permanent boundary of the geometric region is input and rows are iteratively layered toward the interior of the geometric region. Alternatively, an exterior permanent boundary and an interior permanent boundary for a geometric region may be input, and the rows are iteratively layered inward from the exterior boundary in a first counterclockwise direction while the rows are iteratively layered from the interior permanent boundary toward the exterior of the region in a second clockwise direction. As a result, a high quality mesh for an arbitrary geometry may be generated with a technique that is robust and fast for complex geometric regions and extreme mesh gradations.
Installation and Testing of ITER Integrated Modeling and Analysis Suite (IMAS) on DIII-D
NASA Astrophysics Data System (ADS)
Lao, L.; Kostuk, M.; Meneghini, O.; Smith, S.; Staebler, G.; Kalling, R.; Pinches, S.
2017-10-01
A critical objective of the ITER Integrated Modeling Program is the development of IMAS to support ITER plasma operation and research activities. An IMAS framework has been established based on the earlier work carried out within the EU. It consists of a physics data model and a workflow engine. The data model is capable of representing both simulation and experimental data and is applicable to ITER and other devices. IMAS has been successfully installed on a local DIII-D server using a flexible installer capable of managing the core data access tools (Access Layer and Data Dictionary) and optionally the Kepler workflow engine and coupling tools. A general adaptor for OMFIT (a workflow engine) is being built for adaptation of any analysis code to IMAS using a new IMAS universal access layer (UAL) interface developed from an existing OMFIT EU Integrated Tokamak Modeling UAL. Ongoing work includes development of a general adaptor for EFIT and TGLF based on this new UAL that can be readily extended for other physics codes within OMFIT. Work supported by US DOE under DE-FC02-04ER54698.
Implementation of a nonlinear concrete cracking algorithm in NASTRAN
NASA Technical Reports Server (NTRS)
Herting, D. N.; Herendeen, D. L.; Hoesly, R. L.; Chang, H.
1976-01-01
A computer code for the analysis of reinforced concrete structures was developed using NASTRAN as a basis. Nonlinear iteration procedures were developed for obtaining solutions with a wide variety of loading sequences. A direct access file system was used to save results at each load step, permitting restart within the solution module for further analysis. A multi-nested looping capability was implemented to control the iterations and change the loads. The basis for the analysis is a set of multi-layer plate elements which allow local definition of materials and cracking properties.
Generalization of analytical tools for helicopter-rotor airfoils
NASA Technical Reports Server (NTRS)
Gibbs, E. H.
1979-01-01
A state-of-the-art finite difference boundary-layer program incorporated into the NYU Transonic Analysis Program is described. Some possible treatments for the trailing edge region were investigated. Findings indicate that treatment of the trailing edge region, while remaining within the scope of an iterative potential-flow/boundary-layer program, appears feasible.
Low speed airfoil design and analysis
NASA Technical Reports Server (NTRS)
Eppler, R.; Somers, D. M.
1979-01-01
A low speed airfoil design and analysis program was developed which contains several unique features. In the design mode, the velocity distribution is specified not for just one angle of attack but for many different angles of attack. Several iteration options are included which allow the trailing edge angle to be specified while other parameters are iterated. For airfoil analysis, a panel method is available which uses third-order panels having parabolic vorticity distributions. The flow condition is satisfied at the end points of the panels. Both sharp and blunt trailing edges can be analyzed. The integral boundary layer method, with its laminar separation bubble analog, empirical transition criterion, and precise turbulent boundary layer equations, compares very favorably with other methods, both integral and finite difference. Comparisons with experiment for several airfoils over a very wide Reynolds number range are discussed. Applications to high lift airfoil design are also demonstrated.
Thermal release of D2 from new Be-D co-deposits on previously baked co-deposits
NASA Astrophysics Data System (ADS)
Baldwin, M. J.; Doerner, R. P.
2015-12-01
Past experiments and modeling with the TMAP code in [1, 2] indicated that Be-D co-deposited layers are desorbed of retained D less efficiently (time-wise) in a fixed low-temperature bake as the layer grows in thickness. In ITER, beryllium-rich co-deposited layers will grow in thickness over the life of the machine. However, compared with the analyses in [1, 2], ITER presents a slightly different bake-efficiency problem because of instances of prior tritium recovery/control baking. More relevant to ITER is the thermal release from a new and saturated co-deposit layer in contact with a thickness of previously-baked, less-saturated co-deposit. Experiments that examine the desorption of saturated co-deposited over-layers in contact with previously baked under-layers are reported, and a comparison is made to layers of the same combined thickness. Deposition temperatures of ∼323 K and ∼373 K are explored. It is found that an instance of prior bake leads to a subtle effect on the under-layer. The effect causes the thermal desorption of the new saturated over-layer to deviate from the prediction of the validated TMAP model in [2]. Instead of the D thermal release reflecting the combined thickness and levels of D saturation in the over- and under-layers, experiment differs in that the desorption is a fractional superposition of (i) desorption from the saturated over-layer with (ii) that of the combined over- and under-layer thickness. The result is not easily modeled by TMAP without the incorporation of a thin BeO inter-layer, which is confirmed experimentally on baked Be-D co-deposits using X-ray micro-analysis.
Viscous and Interacting Flow Field Effects.
1980-06-01
in the inviscid flow analysis using free vortex sheets whose shapes are determined by iteration. The outer iteration employs boundary layer...Methods, Inc. which replaces the source distribution in the separation zone by a vortex wake model. This model is described in some detail in (2), but...in the potential flow is obtained using linearly varying vortex singularities distributed on planar panels. The wake is represented by sheets of
Liao, Yu-Kai; Tseng, Sheng-Hao
2014-01-01
Accurately determining the optical properties of multi-layer turbid media using a layered diffusion model is often a difficult task and can be an ill-posed problem. In this study, an iterative algorithm was proposed for solving such problems. This algorithm employed a layered diffusion model to calculate the optical properties of a layered sample at several source-detector separations (SDSs). The optical properties determined at the various SDSs were mutually referenced to complete one round of iteration, and the optical properties were gradually revised in further iterations until a set of stable optical properties was obtained. We evaluated the performance of the proposed method using frequency-domain Monte Carlo simulations and found that the method could robustly recover the layered sample properties for various layer thicknesses and optical property settings. It is expected that this algorithm can work with photon transport models in the frequency and time domains for various applications, such as determination of subcutaneous fat or muscle optical properties and monitoring the hemodynamics of muscle. PMID:24688828
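A minimal sketch of the mutual-referencing loop the abstract describes, with a made-up linear two-layer "measurement model" standing in for the layered diffusion model; the sensitivity weights, property values, and SDS labels are assumptions for illustration only.

```python
import numpy as np

# Assumed sensitivity weights of each SDS to (top, bottom) layer properties.
W = np.array([[0.8, 0.2],    # short SDS: mostly the top layer
              [0.4, 0.6]])   # long SDS: senses both layers
mu_true = np.array([0.10, 0.25])       # "true" layer properties (arbitrary units)
y_obs = W @ mu_true                    # synthetic measurements

mu_top, mu_bot = y_obs[0], y_obs[1]    # naive homogeneous starting values
for it in range(100):
    # revise the top-layer value from the short-SDS measurement, then the
    # bottom-layer value from the long-SDS measurement (mutual referencing)
    mu_top_new = (y_obs[0] - W[0, 1] * mu_bot) / W[0, 0]
    mu_bot_new = (y_obs[1] - W[1, 0] * mu_top_new) / W[1, 1]
    converged = max(abs(mu_top_new - mu_top), abs(mu_bot_new - mu_bot)) < 1e-10
    mu_top, mu_bot = mu_top_new, mu_bot_new
    if converged:
        break
print("iterations:", it + 1, " recovered:", round(mu_top, 4), round(mu_bot, 4))
```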
Application of the perturbation iteration method to boundary layer type problems.
Pakdemirli, Mehmet
2016-01-01
The recently developed perturbation iteration method is applied to boundary layer type singular problems for the first time. As a preliminary work on the topic, the simplest algorithm of PIA(1,1) is employed in the calculations. Linear and nonlinear problems are solved to outline the basic ideas of the new solution technique. The inner and outer solutions are determined with the iteration algorithm and matched to construct a composite expansion valid within all parts of the domain. The solutions are contrasted with the available exact or numerical solutions. It is shown that the perturbation-iteration algorithm can be effectively used for solving boundary layer type problems.
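The abstract refers to constructing inner and outer solutions and matching them into a composite expansion. The snippet below works through that structure on the classical textbook problem εy'' + y' + y = 0 with y(0) = 0, y(1) = 1, comparing the leading-order composite expansion with the exact solution; it uses standard matched asymptotics, not the PIA(1,1) algorithm itself.

```python
import numpy as np

eps = 0.05
x = np.linspace(0.0, 1.0, 201)

# Leading-order outer solution (satisfies the boundary condition at x = 1).
y_outer = np.exp(1.0 - x)
# Leading-order inner solution in the stretched variable X = x/eps, matched to
# the outer limit e at the edge of the boundary layer.
y_inner = np.e * (1.0 - np.exp(-x / eps))
# Composite = outer + inner - common part (the common part is e).
y_comp = y_outer + y_inner - np.e

# Exact solution from the characteristic roots of eps*m**2 + m + 1 = 0.
m1 = (-1.0 + np.sqrt(1.0 - 4.0 * eps)) / (2.0 * eps)
m2 = (-1.0 - np.sqrt(1.0 - 4.0 * eps)) / (2.0 * eps)
y_exact = (np.exp(m1 * x) - np.exp(m2 * x)) / (np.exp(m1) - np.exp(m2))

print("max |composite - exact| =", np.max(np.abs(y_comp - y_exact)))
```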
An Approach to the Constrained Design of Natural Laminar Flow Airfoils
NASA Technical Reports Server (NTRS)
Green, Bradford E.
1997-01-01
A design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed, while maintaining other aerodynamic and geometric constraints. After obtaining the initial airfoil's pressure distribution at the design lift coefficient using an Euler solver coupled with an integral turbulent boundary layer method, the calculations from a laminar boundary layer solver are used by a stability analysis code to obtain estimates of the transition location (using N-Factors) for the starting airfoil. A new design method then calculates a target pressure distribution that will increase the laminar flow toward the desired amount. An airfoil design method is then iteratively used to design an airfoil that possesses that target pressure distribution. The new airfoil's boundary layer stability characteristics are determined, and this iterative process continues until an airfoil is designed that meets the laminar flow requirement and as many of the other constraints as possible.
An approach to the constrained design of natural laminar flow airfoils
NASA Technical Reports Server (NTRS)
Green, Bradford Earl
1995-01-01
A design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed, while maintaining other aerodynamic and geometric constraints. After obtaining the initial airfoil's pressure distribution at the design lift coefficient using an Euler solver coupled with an integral turbulent boundary layer method, the calculations from a laminar boundary layer solver are used by a stability analysis code to obtain estimates of the transition location (using N-Factors) for the starting airfoil. A new design method then calculates a target pressure distribution that will increase the laminar flow toward the desired amount. An airfoil design method is then iteratively used to design an airfoil that possesses that target pressure distribution. The new airfoil's boundary layer stability characteristics are determined, and this iterative process continues until an airfoil is designed that meets the laminar flow requirement and as many of the other constraints as possible.
A computer program for the design and analysis of low-speed airfoils, supplement
NASA Technical Reports Server (NTRS)
Eppler, R.; Somers, D. M.
1980-01-01
Three new options were incorporated into an existing computer program for the design and analysis of low speed airfoils. These options permit the analysis of airfoils having variable chord (variable geometry), a boundary layer displacement iteration, and the analysis of the effect of single roughness elements. All three options are described in detail and are included in the FORTRAN IV computer program.
Resistive MHD Stability Analysis in Near Real-time
NASA Astrophysics Data System (ADS)
Glasser, Alexander; Kolemen, Egemen
2017-10-01
We discuss the feasibility of a near real-time calculation of the tokamak Δ' matrix, which summarizes MHD stability to resistive modes, such as tearing and interchange modes. As the operational phase of ITER approaches, solutions for active feedback tokamak stability control are needed. It has been previously demonstrated that an ideal MHD stability analysis is achievable on a sub-O(1 s) timescale, as is required to control phenomena comparable with the MHD-evolution timescale of ITER. In the present work, we broaden this result to incorporate the effects of resistive MHD modes. Such modes satisfy ideal MHD equations in regions outside narrow resistive layers that form at singular surfaces. We demonstrate that the use of asymptotic expansions at the singular surfaces, as well as the application of state transition matrices, enables a fast, parallelized solution to the singular outer-layer boundary value problem, and thereby allows Δ' to be computed rapidly. Sponsored by US DOE under DE-SC0015878 and DE-FC02-04ER54698.
Computer-Aided Design Of Turbine Blades And Vanes
NASA Technical Reports Server (NTRS)
Hsu, Wayne Q.
1988-01-01
Quasi-three-dimensional method for determining aerothermodynamic configuration of turbine uses computer-interactive analysis and design and computer-interactive graphics. Design procedure executed rapidly so designer easily repeats it to arrive at best performance, size, structural integrity, and engine life. Sequence of events in aerothermodynamic analysis and design starts with engine-balance equations and ends with boundary-layer analysis and viscous-flow calculations. Analysis-and-design procedure interactive and iterative throughout.
Deblurring in digital tomosynthesis by iterative self-layer subtraction
NASA Astrophysics Data System (ADS)
Youn, Hanbean; Kim, Jee Young; Jang, SunYoung; Cho, Min Kook; Cho, Seungryong; Kim, Ho Kyung
2010-04-01
Recent developments in large-area flat-panel detectors have revived interest in tomosynthesis technology for multiplanar x-ray imaging. However, the typical shift-and-add (SAA) or backprojection reconstruction method suffers from a notable lack of sharpness in the reconstructed images because of blur artifacts caused by the superposition of out-of-plane objects. In this study, we have devised an intuitive, simple method to reduce the blur artifact based on an iterative approach. This method repeats a forward- and backward-projection procedure to determine the blur artifact affecting the plane-of-interest (POI), and then subtracts it from the POI. The proposed method does not involve any Fourier-domain operations, hence excluding Fourier-domain-originated artifacts. We describe the concept of self-layer subtractive tomosynthesis and demonstrate its performance with numerical simulation and experiments. A comparative analysis with conventional methods, such as the SAA and filtered backprojection methods, is also addressed.
NASA Technical Reports Server (NTRS)
Olson, L. E.; Dvorak, F. A.
1975-01-01
The viscous subsonic flow past two-dimensional and infinite-span swept multi-component airfoils is studied theoretically and experimentally. The computerized analysis is based on iteratively coupled boundary layer and potential flow analyses. The method, which is restricted to flows with only slight separation, gives surface pressure distribution, chordwise and spanwise boundary layer characteristics, lift, drag, and pitching moment for airfoil configurations with up to four elements. Merging confluent boundary layers are treated. Theoretical predictions are compared with an exact theoretical potential flow solution and with experimental measurements made in the Ames 40- by 80-Foot Wind Tunnel for both two-dimensional and infinite-span swept wing configurations. Section lift characteristics are accurately predicted for zero and moderate sweep angles where flow separation effects are negligible.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1996-01-01
An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
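A minimal linear-algebra sketch of the incremental ("delta") form: the exact operator appears only in the residual, while a cheap approximate operator (here a plain diagonal, standing in for the spatially split approximate factorization) drives the correction. The toy matrix, sizes, and tolerances are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = np.eye(n) * 4.0 + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)  # toy "exact" operator
b = rng.standard_normal(n)

M = np.diag(np.diag(A))          # crude approximate operator (illustrative assumption)
x = np.zeros(n)
for k in range(200):
    r = b - A @ x                # residual formed with the *exact* operator
    dx = np.linalg.solve(M, r)   # correction from the *approximate* operator
    x += dx
    if np.linalg.norm(r) < 1e-10 * np.linalg.norm(b):
        break
print("iterations:", k + 1, " residual:", np.linalg.norm(b - A @ x))
```

The point of the form is that the converged answer satisfies the exact equations even though only the approximate operator is ever inverted.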
Two-dimensional over-all neutronics analysis of the ITER device
NASA Astrophysics Data System (ADS)
Zimin, S.; Takatsu, Hideyuki; Mori, Seiji; Seki, Yasushi; Satoh, Satoshi; Tada, Eisuke; Maki, Koichi
1993-07-01
The present work attempts to carry out a comprehensive neutronics analysis of the International Thermonuclear Experimental Reactor (ITER) developed during the Conceptual Design Activities (CDA). Two-dimensional cylindrical over-all calculational models of the ITER CDA device, including the first wall, blanket, shield, vacuum vessel, magnets, cryostat and support structures, were developed for this purpose with the help of the DOGII code. The two-dimensional DOT 3.5 code with the FUSION-40 nuclear data library was employed for transport calculations of neutron and gamma-ray fluxes, the tritium breeding ratio (TBR), and nuclear heating in reactor components. The induced-activity calculational code CINAC was employed for calculations of the exposure dose rate after reactor shutdown around the ITER CDA device. The two-dimensional over-all calculational model includes design specifics such as the pebble-bed Li2O/Be layered blanket, the thin double-wall vacuum vessel, the concrete cryostat integrated with the over-all ITER design, the top maintenance shield plug, the additional ring biological shield placed under the top cryostat lid around the above-mentioned top maintenance shield plug, etc. All of these design specifics were included in the employed calculational models. Some alternative design options, such as a water-rich shielding blanket instead of the lithium-bearing one and an additional biological shield plug in the top zone between poloidal field (PF) coil No. 5 and the maintenance shield plug, were calculated as well. Much effort has been focused on analysis of the obtained results, aimed at providing recommendations for improving the ITER CDA design.
An analysis for high Reynolds number inviscid/viscid interactions in cascades
NASA Technical Reports Server (NTRS)
Barnett, Mark; Verdon, Joseph M.; Ayer, Timothy C.
1993-01-01
An efficient steady analysis for predicting strong inviscid/viscid interaction phenomena such as viscous-layer separation, shock/boundary-layer interaction, and trailing-edge/near-wake interaction in turbomachinery blade passages is needed as part of a comprehensive analytical blade design prediction system. Such an analysis is described. It uses an inviscid/viscid interaction approach, in which the flow in the outer inviscid region is assumed to be potential, and that in the inner or viscous-layer region is governed by Prandtl's equations. The inviscid solution is determined using an implicit, least-squares, finite-difference approximation, the viscous-layer solution using an inverse, finite-difference, space-marching method which is applied along the blade surfaces and wake streamlines. The inviscid and viscid solutions are coupled using a semi-inverse global iteration procedure, which permits the prediction of boundary-layer separation and other strong-interaction phenomena. Results are presented for three cascades, with a range of inlet flow conditions considered for one of them, including conditions leading to large-scale flow separations. Comparisons with Navier-Stokes solutions and experimental data are also given.
A Decomposition Method for Security Constrained Economic Dispatch of a Three-Layer Power System
NASA Astrophysics Data System (ADS)
Yang, Junfeng; Luo, Zhiqiang; Dong, Cheng; Lai, Xiaowen; Wang, Yang
2018-01-01
This paper proposes a new decomposition method for the security-constrained economic dispatch in a three-layer large-scale power system. The decomposition is realized using two main techniques. The first is to use Ward-equivalencing-based network reduction to reduce the number of variables and constraints in the high-layer model without sacrificing accuracy. The second is to develop a price response function to exchange signal information between neighboring layers, which significantly improves the information exchange efficiency of each iteration and results in fewer iterations and less computational time. Case studies based on the duplicated RTS-79 system demonstrate the effectiveness and robustness of the proposed method.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1993-01-01
In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form), together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
NASA Astrophysics Data System (ADS)
Huang, Rong; Limburg, Karin; Rohtla, Mehis
2017-05-01
X-ray fluorescence computed tomography is often used to measure trace element distributions within low-Z samples, using algorithms capable of X-ray absorption correction when sample self-absorption is not negligible. Its reconstruction is more complicated than transmission tomography, and it is therefore not widely used. We describe in this paper a very practical iterative method that uses widely available transmission tomography reconstruction software for fluorescence tomography. With this method, sample self-absorption can be corrected not only for the absorption within the measured layer but also for the absorption by material beyond that layer. By combining tomography with analysis for scanning X-ray fluorescence microscopy, absolute concentrations of trace elements can be obtained. By using widely shared software, we not only minimized the coding effort and took advantage of the computational efficiency of the fast Fourier transform in transmission tomography software, but also gained access to the well-developed data processing tools that come with well-known and reliable software packages. The convergence of the iterations was also carefully studied for fluorescence of different attenuation lengths. As an example, fish eye lenses can provide valuable information about fish life history and endured environmental conditions. Given the lens's spherical shape, and the sometimes short distance from sample to detector needed for detecting low-concentration trace elements, its tomography data are affected by absorption related to material beyond the measured layer, but they can be reconstructed well with our method. Fish eye lens tomography results are compared with sliced-lens 2D fluorescence mapping with good agreement, and tomography provides better spatial resolution.
Hattab, H.; Hupalo, M.; Hershberger, M. T.; ...
2015-08-20
A novel type of very fast nucleation was recently found in Pb/Si(111) with 4- to 7-layer high islands becoming crystalline in an “explosive” way, when the Pb deposited amount in the wetting layer is compressed to θ c ~ 1.22 ML, well above the metallic Pb(111) density. This “explosive” nucleation is very different from classical nucleation when island growth is more gradual and islands grow in size by single adatom aggregation [8]. In order to identify the key parameters that control the nucleation we used scanning tunneling microscopy (STM) and spot profile analysis low energy electron diffraction (SPA-LEED). It wasmore » found that the number and duration of steps in iterative deposition used to approach θc and the flux rate have dramatic effects on the crystallization process. Larger depositions over shorter times induce greater spatial coverage fluctuations, so local areas can reach the critical coverage θ c easier. This can trigger the collective motion of the wetting layer from far away to build the Pb islands “explosively”. Here, the SPA-LEED experiments show that even low flux experiments in iterative deposition experiments can trigger transfer of material to the superstable 7-layer islands, as seen from the stronger satellite rings close to the (00) spot.« less
Analysis of airfoil leading edge separation bubbles
NASA Technical Reports Server (NTRS)
Carter, J. E.; Vatsa, V. N.
1982-01-01
A local inviscid-viscous interaction technique was developed for the analysis of low speed airfoil leading edge transitional separation bubbles. In this analysis, an inverse boundary layer finite difference analysis is solved iteratively with a Cauchy integral representation of the inviscid flow, which is assumed to be a linear perturbation to a known global viscous airfoil analysis. Favorable comparisons with data indicate the overall validity of the present localized interaction approach. In addition, numerical tests were performed to assess the sensitivity of the computed results to the mesh size, the limits on the Cauchy integral, and the location of the transition region.
USDA-ARS?s Scientific Manuscript database
The mean height and standard deviation (SD) of flight is calculated for over 100 insect species from their catches on trap heights reported in the literature. The iterative equations for calculating mean height and SD are presented. The mean flight height for 95% of the studies varied from 0.17 to 5...
A complex guided spectral transform Lanczos method for studying quantum resonance states
Yu, Hua-Gen
2014-12-28
A complex guided spectral transform Lanczos (cGSTL) algorithm is proposed to compute both bound and resonance states, including energies, widths and wavefunctions. The algorithm comprises two layers of complex-symmetric Lanczos iterations. A short inner-layer iteration produces a set of complex formally orthogonal Lanczos (cFOL) polynomials. They are used to span the guided spectral transform function determined by a retarded Green operator. An outer-layer iteration is then carried out with the transform function to compute the eigen-pairs of the system. The guided spectral transform function is designed to have the same wavefunctions as the eigenstates of the original Hamiltonian in the spectral range of interest. Therefore the energies and/or widths of bound or resonance states can be easily computed with their wavefunctions or by using a root-searching method from the guided spectral transform surface. The new cGSTL algorithm is applied to bound and resonance states of HO₂ and compared to previous calculations.
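A bare-bones sketch of the inner building block mentioned above: a complex-symmetric Lanczos recursion that uses the formal (non-conjugated) bilinear form v^T v. A random complex-symmetric matrix with one well-separated eigenvalue stands in for the spectrally transformed Hamiltonian; no guided spectral transform or Green operator is implemented, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 120, 40
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.T) / (2.0 * np.sqrt(n))        # complex symmetric: A = A^T, not Hermitian
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
A = A + (3.0 - 0.2j) * np.outer(u, u)     # plant one well-separated "resonance-like" eigenvalue

v = rng.standard_normal(n) + 0j
v /= np.sqrt(v @ v)                        # formal normalization, v^T v = 1 (no conjugation)
v_prev = np.zeros(n, dtype=complex)
alpha = np.zeros(m, dtype=complex)
beta = np.zeros(m - 1, dtype=complex)

for j in range(m):
    w = A @ v - (beta[j - 1] * v_prev if j > 0 else 0.0)
    alpha[j] = v @ w                       # formal bilinear "inner product"
    w = w - alpha[j] * v
    if j < m - 1:
        beta[j] = np.sqrt(w @ w)           # complex square root; may break down if ~0
        v_prev, v = v, w / beta[j]

T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
ritz = np.linalg.eigvals(T)                # complex Ritz values: Re ~ energy, Im ~ -width/2
exact = np.linalg.eigvals(A)
print("Ritz value with largest Re: ", ritz[np.argmax(ritz.real)])
print("exact value with largest Re:", exact[np.argmax(exact.real)])
```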
Long-term fuel retention in JET ITER-like wall
NASA Astrophysics Data System (ADS)
Heinola, K.; Widdowson, A.; Likonen, J.; Alves, E.; Baron-Wiechec, A.; Barradas, N.; Brezinsek, S.; Catarino, N.; Coad, P.; Koivuranta, S.; Krat, S.; Matthews, G. F.; Mayer, M.; Petersson, P.; Contributors, JET
2016-02-01
Post-mortem studies with ion beam analysis, thermal desorption, and secondary ion mass spectrometry have been applied to investigate the long-term fuel retention in the JET ITER-like wall components. The retention takes place via implantation and co-deposition, and the highest retention values were found to correlate with the thickness of the deposited impurity layers. Over half of the total amount of retained D fuel was detected in the divertor region. The majority of the retained D is on the top surface of the inner divertor, whereas the least retention was measured in the main chamber on the mid-plane of the inner wall limiter. The recessed areas of the inner wall contribute significantly to the main chamber total retention. Thermal desorption spectroscopy analysis revealed that energetic T from DD reactions is implanted in the divertor. The total T inventory was assessed to be > 0.3 mg.
NASA Astrophysics Data System (ADS)
Turner, Peter
2016-05-01
A 2-dimensional radiation analysis has been developed to analyse the radiative efficiency of an arrangement of heat transfer tubes distributed in layers but spaced apart to form a tubed, volumetric receiver. Such an arrangement could be suitable for incorporation into a cavity receiver. Much of the benefit of this volumetric approach is gained after using 5 layers although improvements do continue with further layers. The radiation analysis splits each tube into multiple segments in which each segment surface can absorb, reflect and radiate rays depending on its surface temperature. An iterative technique is used to calculate appropriate temperatures depending on the distribution of the net energy absorbed and assuming that the cool heat transfer fluid (molten salt) starts at the front layer and flows back through successive layers to the rear of the cavity. Modelling the finite diameter of each layer of tubes increases the ability of a layer to block radiation scattered at acute angles and this effect is shown to reduce radiation losses by nearly 25% compared to the earlier 1-d analysis. Optimum efficient designs tend to occur when the blockage factor is 0.2 plus the inverse of the number of tube layers. It is beneficial if the distance between successive layers is ≥ 2 times the diameter of individual tubes and in this situation, if the incoming radiation is spread over a range of angles, the performance is insensitive to the degree of any tube positional offset or stagger between layers.
Estimation of near-surface shear-wave velocity by inversion of Rayleigh waves
Xia, J.; Miller, R.D.; Park, C.B.
1999-01-01
The shear-wave (S-wave) velocity of near-surface materials (soil, rocks, pavement) and its effect on seismic-wave propagation are of fundamental interest in many groundwater, engineering, and environmental studies. Rayleigh-wave phase velocity of a layered-earth model is a function of frequency and four groups of earth properties: P-wave velocity, S-wave velocity, density, and thickness of layers. Analysis of the Jacobian matrix provides a measure of dispersion-curve sensitivity to earth properties. S-wave velocities are the dominant influence on a dispersion curve in a high-frequency range (>5 Hz) followed by layer thickness. An iterative solution technique to the weighted equation proved very effective in the high-frequency range when using the Levenberg-Marquardt and singular-value decomposition techniques. Convergence of the weighted solution is guaranteed through selection of the damping factor using the Levenberg-Marquardt method. Synthetic examples demonstrated calculation efficiency and stability of inverse procedures. We verify our method using borehole S-wave velocity measurements. Iterative solutions to the weighted equation by the Levenberg-Marquardt and singular-value decomposition techniques are derived to estimate near-surface shear-wave velocity. Synthetic and real examples demonstrate the calculation efficiency and stability of the inverse procedure. The inverse results of the real example are verified by borehole S-wave velocity measurements.
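As a generic illustration of the damped (Levenberg-Marquardt) weighted least-squares machinery behind such dispersion-curve inversions, the sketch below fits a toy two-parameter forward model; the model, noise level, and damping schedule are assumptions and carry no Rayleigh-wave physics.

```python
import numpy as np

def forward(m, t):
    # toy "dispersion curve": amplitude m[0], decay m[1] (pure stand-in)
    return m[0] * np.exp(-m[1] * t)

def jacobian(m, t):
    J = np.empty((t.size, 2))
    J[:, 0] = np.exp(-m[1] * t)
    J[:, 1] = -m[0] * t * np.exp(-m[1] * t)
    return J

t = np.linspace(0.0, 2.0, 30)
m_true = np.array([2.0, 1.3])
rng = np.random.default_rng(1)
d_obs = forward(m_true, t) + 0.01 * rng.standard_normal(t.size)

m = np.array([1.0, 0.5])                 # initial guess
lam = 1e-2                               # damping factor
for k in range(50):
    r = d_obs - forward(m, t)
    J = jacobian(m, t)
    # Levenberg-Marquardt normal equations: (J^T J + lam*I) dm = J^T r
    dm = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    m_new = m + dm
    if np.linalg.norm(forward(m_new, t) - d_obs) < np.linalg.norm(r):
        m, lam = m_new, lam * 0.5        # accept the step, relax damping
    else:
        lam *= 10.0                      # reject the step, increase damping
    if np.linalg.norm(dm) < 1e-10:
        break
print("recovered parameters:", m)
```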
Parallel Ellipsoidal Perfectly Matched Layers for Acoustic Helmholtz Problems on Exterior Domains
Bunting, Gregory; Prakash, Arun; Walsh, Timothy; ...
2018-01-26
Exterior acoustic problems occur in a wide range of applications, making the finite element analysis of such problems a common practice in the engineering community. Various methods for truncating infinite exterior domains have been developed, including absorbing boundary conditions, infinite elements, and more recently, perfectly matched layers (PML). PML are gaining popularity due to their generality, ease of implementation, and effectiveness as an absorbing boundary condition. PML formulations have been developed in Cartesian, cylindrical, and spherical geometries, but not ellipsoidal. In addition, the parallel solution of PML formulations with iterative solvers for the solution of the Helmholtz equation, and how this compares with more traditional strategies such as infinite elements, has not been adequately investigated. In this study, we present a parallel, ellipsoidal PML formulation for acoustic Helmholtz problems. To facilitate the meshing process, the ellipsoidal PML layer is generated with an on-the-fly mesh extrusion. Though the complex stretching is defined along ellipsoidal contours, we modify the Jacobian to include an additional mapping back to Cartesian coordinates in the weak formulation of the finite element equations. This allows the equations to be solved in Cartesian coordinates, which is more compatible with existing finite element software, but without the necessity of dealing with corners in the PML formulation. Herein we also compare the conditioning and performance of the PML Helmholtz problem with an infinite element approach based on high-order basis functions. On a set of representative exterior acoustic examples, we show that high-order infinite element basis functions lead to an increasing number of Helmholtz solver iterations, whereas for PML the number of iterations remains constant for the same level of accuracy. This provides an additional advantage of PML over the infinite element approach.
Final Report on ITER Task Agreement 81-08
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard L. Moore
As part of an ITER Implementing Task Agreement (ITA) between the ITER US Participant Team (PT) and the ITER International Team (IT), the INL Fusion Safety Program was tasked to provide the ITER IT with upgrades to the fusion version of the MELCOR 1.8.5 code, including a beryllium dust oxidation model. The purpose of this model is to allow the ITER IT to investigate hydrogen production from beryllium dust layers on hot surfaces inside the ITER vacuum vessel (VV) during in-vessel loss-of-cooling accidents (LOCAs). Also included in the ITER ITA was a task to construct a RELAP5/ATHENA model of the ITER divertor cooling loop to model the draining of the loop during a large ex-vessel pipe break followed by an in-vessel divertor break, and to compare the results to a similar MELCOR model developed by the ITER IT. This report, which is the final report for this agreement, documents the completion of the work scope under this ITER TA, designated as TA 81-08.
NASA Technical Reports Server (NTRS)
Day, Brad A.; Meade, Andrew J., Jr.
1993-01-01
A semi-discrete Galerkin (SDG) method is under development to model attached, turbulent, and compressible boundary layers for transonic airfoil analysis problems. For the boundary-layer formulation the method models the spatial variable normal to the surface with linear finite elements and the time-like variable with finite differences. A Dorodnitsyn transformed system of equations is used to bound the infinite spatial domain thereby providing high resolution near the wall and permitting the use of a uniform finite element grid which automatically follows boundary-layer growth. The second-order accurate Crank-Nicolson scheme is applied along with a linearization method to take advantage of the parabolic nature of the boundary-layer equations and generate a non-iterative marching routine. The SDG code can be applied to any smoothly-connected airfoil shape without modification and can be coupled to any inviscid flow solver. In this analysis, a direct viscous-inviscid interaction is accomplished between the Euler and boundary-layer codes through the application of a transpiration velocity boundary condition. Results are presented for compressible turbulent flow past RAE 2822 and NACA 0012 airfoils at various freestream Mach numbers, Reynolds numbers, and angles of attack.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berco, Dan, E-mail: danny.barkan@gmail.com; Tseng, Tseung-Yuen, E-mail: tseng@cc.nctu.edu.tw
This study presents an evaluation method for resistive random access memory retention reliability based on the Metropolis Monte Carlo algorithm and Gibbs free energy. The method, which does not rely on a time evolution, provides an extremely efficient way to compare the relative retention properties of metal-insulator-metal structures. It requires a small number of iterations and may be used for statistical analysis. The presented approach is used to compare the relative robustness of a single-layer ZrO2 device with a double-layer ZnO/ZrO2 one, and obtain results which are in good agreement with experimental data.
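A toy Metropolis Monte Carlo acceptance loop of the kind such an evaluation is built on; the one-dimensional double-well "energy" and the temperature are invented stand-ins for the Gibbs free energy of a real metal-insulator-metal stack. At this low kT the walker stays trapped in its starting well over the run, a crude analogue of a retained resistance state.

```python
import math
import random

def energy(s):
    # toy free-energy landscape: a tilted double well with minima near s = +/-1
    return (s**2 - 1.0)**2 + 0.1 * s

kT = 0.05                                    # thermal energy (assumption)
s = 1.0                                      # initial (metastable) state
random.seed(0)
for step in range(20000):
    s_trial = s + random.uniform(-0.1, 0.1)  # small random move
    dE = energy(s_trial) - energy(s)
    # Metropolis criterion: always accept downhill moves, accept uphill moves
    # with probability exp(-dE / kT)
    if dE <= 0.0 or random.random() < math.exp(-dE / kT):
        s = s_trial
print("final state:", round(s, 3), " energy:", round(energy(s), 4))
```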
An Iterative Method for Problems with Multiscale Conductivity
Kim, Hyea Hyun; Minhas, Atul S.; Woo, Eung Je
2012-01-01
A model with its conductivity varying highly across a very thin layer will be considered. It is related to a stable phantom model, which is invented to generate a certain apparent conductivity inside a region surrounded by a thin cylinder with holes. The thin cylinder is an insulator, and both the inside and outside of the thin cylinder are filled with the same saline. The injected current can enter only through the holes in the thin cylinder. The model has a high contrast of conductivity discontinuity across the thin cylinder, and the thickness of the layer and the size of the holes are very small compared to the domain of the model problem. Numerical methods for such a model require a very fine mesh near the thin layer to resolve the conductivity discontinuity. In this work, an efficient numerical method for such a model problem is proposed by employing a uniform mesh, which need not resolve the conductivity discontinuity. The discrete problem is then solved by an iterative method, where the solution is improved by solving a simple discrete problem with a uniform conductivity. At each iteration, the right-hand side is updated by integrating the previous iterate over the thin cylinder. This process results in a certain smoothing effect on microscopic structures, and our discrete model can provide a more practical tool for simulating the apparent conductivity. The convergence of the iterative method is analyzed with respect to the contrast in the conductivity and the relative thickness of the layer. In numerical experiments, solutions of our method are compared to reference solutions obtained from COMSOL, where very fine meshes are used to resolve the conductivity discontinuity in the model. Errors of the voltage in the L2 norm follow O(h) asymptotically, and the current density matches quite well that from the reference solution for a sufficiently small mesh size h. The experimental results present a promising feature of our approach for simulating the apparent conductivity related to changes in microscopic cellular structures. PMID:23304238
NASA Astrophysics Data System (ADS)
Chuang, Hsueh-Hua
The purpose of this dissertation is to develop an iterative model for the analysis of the current distribution in vertical-cavity surface-emitting lasers (VCSELs) using a circuit network modeling approach. This iterative model divides the VCSEL structure into numerous annular elements and uses a circuit network consisting of resistors and diodes. The measured sheet resistance of the p-distributed Bragg reflector (DBR), the measured sheet resistance of the layers under the oxide layer, and two empirical adjustable parameters are used as inputs to the iterative model to determine the resistance of each resistor. The two empirical values are related to the anisotropy of the resistivity of the p-DBR structure. The spontaneous current, stimulated current, and surface recombination current are accounted for by the diodes. The lateral carrier transport in the quantum well region is analyzed using drift and diffusion currents. The optical gain is calculated as a function of wavelength and carrier density from fundamental principles. The predicted threshold current densities for these VCSELs match the experimentally measured current densities over the wavelength range of 0.83 μm to 0.86 μm with an error of less than 5%. This model includes the effects of the resistance of the p-DBR mirrors, the oxide current-confining layer, and spatial hole burning. Our model shows that higher sheet resistance under the oxide layer reduces the threshold current, but also reduces the current range over which single transverse mode operation occurs. The spatial hole burning profile depends on the lateral drift and diffusion of carriers in the quantum wells but is dominated by the voltage drop across the p-DBR region. To my knowledge, for the first time, the drift current and the diffusion current are treated separately. Previous work uses an ambipolar approach, which underestimates the total charge transferred in the quantum well region, especially under the oxide region. However, the combined drift and diffusion current is less significant than the Ohmic current, especially in the cavity region. This simple iterative model is applied to commercially available oxide-confined VCSELs. The simulation results show excellent agreement with experimentally measured voltage-current curves (within 3.7% for a 10 μm and within 4% for a 5 μm diameter VCSEL) and light-current curves (within 2% for a 10 μm and within 9% for a 5 μm diameter VCSEL), and the model provides insight into the detailed distributions of current and voltage within a VCSEL. This difference between the theoretically calculated results and the measured results is less than the variation shown in the data sheets for production VCSELs.
2014-02-01
idle waiting for the wavefront to reach it. To overcome this, Reeve et al. (2001) 3 developed a scheme in analogy to the red-black Gauss-Seidel iterative ...understandable procedure calls. Parallelization of the SIMPLE iterative scheme with SIP used a red-black scheme similar to the red-black Gauss-Seidel ...scheme, the SIMPLE method, for pressure-velocity coupling. The result is a slowing convergence of the outer iterations. The red-black scheme excites a 2
NASA Astrophysics Data System (ADS)
Li, Zhanhui; Huang, Qinghua; Xie, Xingbing; Tang, Xingong; Chang, Liao
2016-08-01
We present a generic 1D forward modeling and inversion algorithm for transient electromagnetic (TEM) data with an arbitrary horizontal transmitting loop and receivers at any depth in a layered earth. Both the Hankel and sine transforms required in the forward algorithm are calculated using the filter method. The adjoint-equation method is used to derive the formulation of data sensitivity at any depth in non-permeable media. The inversion algorithm based on this forward modeling algorithm and sensitivity formulation is developed using the Gauss-Newton iteration method combined with Tikhonov regularization. We propose a new data-weighting method to minimize the initial model dependence, which enhances the convergence stability. On a laptop with an i7-5700HQ CPU at 3.5 GHz, an inversion iteration for a 200-layer input model with a single receiver takes only 0.34 s, while it increases to only 0.53 s for data from four receivers at the same depth. For the case of four receivers at different depths, the inversion iteration runtime increases to 1.3 s. Modeling the data with an irregular loop and an equal-area square loop indicates that the effect of the loop geometry is significant at early times and vanishes gradually along the diffusion of the TEM field. For a stratified earth, inversion of data from more than one receiver is useful for noise reduction and yields a more credible layered earth. However, for a resistive layer shielded below a conductive layer, increasing the number of receivers on the ground does not significantly improve the recovery of the resistive layer. Even with a down-hole TEM sounding, the shielded resistive layer cannot be recovered if all receivers are above it. However, our modeling demonstrates remarkable improvement in detecting the resistive layer with receivers in or under this layer.
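A compact sketch of one ingredient named in the abstract: Gauss-Newton updates with Tikhonov (smoothness) regularization on a layered model vector. The forward operator below is a made-up linear mapping, not a TEM kernel, and the regularization weight and layer values are arbitrary.

```python
import numpy as np

def toy_forward(m):
    # stand-in forward operator: cumulative weighted sums of the model values
    W = np.tril(np.ones((m.size, m.size))) / np.arange(1, m.size + 1)[:, None]
    return W @ m, W                           # linear, so the Jacobian is W

n = 8
m_true = np.log(np.array([100, 100, 20, 20, 20, 300, 300, 300], float))
d_obs, _ = toy_forward(m_true)
rng = np.random.default_rng(2)
d_obs = d_obs + 0.01 * rng.standard_normal(n)

D = np.diff(np.eye(n), axis=0)                # first-difference roughening operator
alpha = 0.05                                  # regularization weight (assumption)

m = np.full(n, m_true.mean())                 # homogeneous starting model
for k in range(20):
    d_pred, J = toy_forward(m)
    r = d_obs - d_pred
    # Gauss-Newton step: (J^T J + alpha * D^T D) dm = J^T r - alpha * D^T D m
    lhs = J.T @ J + alpha * D.T @ D
    rhs = J.T @ r - alpha * (D.T @ D) @ m
    dm = np.linalg.solve(lhs, rhs)
    m = m + dm
    if np.linalg.norm(dm) < 1e-8:
        break
print("recovered resistivities:", np.round(np.exp(m), 1))
```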
Saddeek, Ali Mohamed
2017-01-01
Most mathematical models arising in stationary filtration processes as well as in the theory of soft shells can be described by single-valued or generalized multivalued pseudomonotone mixed variational inequalities with proper convex nondifferentiable functionals. Therefore, for finding the minimum norm solution of such inequalities, the current paper attempts to introduce a modified two-layer iteration via a boundary point approach and to prove its strong convergence. The results here improve and extend the corresponding recent results announced by Badriev, Zadvornov and Saddeek (Differ. Equ. 37:934-942, 2001).
A new method for the automatic interpretation of Schlumberger and Wenner sounding curves
Zohdy, A.A.R.
1989-01-01
A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples. -Author
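A toy version of the iterative adjustment idea: layer resistivities are repeatedly rescaled by the ratio of observed to calculated apparent resistivity at each spacing. The "forward model" below is a cumulative geometric mean that merely stands in for the true Schlumberger/Wenner kernel, so the example shows the update rule's behaviour, not the published method's accuracy.

```python
import numpy as np

def toy_forward(rho_layers):
    # Stand-in forward model: each "apparent resistivity" is the running
    # geometric mean of the shallower layer resistivities (assumption).
    log_rho = np.log(rho_layers)
    return np.exp(np.cumsum(log_rho) / np.arange(1, log_rho.size + 1))

rho_true = np.array([50.0, 200.0, 20.0, 100.0, 400.0])
rho_app_obs = toy_forward(rho_true)                     # synthetic "sounding curve"

rho_model = np.full_like(rho_true, rho_app_obs.mean())  # start from the observed level
for it in range(200):
    rho_app_calc = toy_forward(rho_model)
    rho_model *= rho_app_obs / rho_app_calc             # multiplicative adjustment
    if np.max(np.abs(rho_app_calc / rho_app_obs - 1.0)) < 1e-6:
        break
print("iterations:", it + 1)
print("model resistivities:", np.round(rho_model, 2))
```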
Layout compliance for triple patterning lithography: an iterative approach
NASA Astrophysics Data System (ADS)
Yu, Bei; Garreton, Gilda; Pan, David Z.
2014-10-01
As the semiconductor process further scales down, the industry encounters many lithography-related issues. At the 14 nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. As one of the most challenging problems in TPL, layout decomposition has recently received increasing attention from both industry and academia. Ideally the decomposer should point out locations in the layout that are not triple-patterning decomposable and therefore require manual intervention by designers. A traditional decomposition flow would be an iterative process, where each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational time, and therefore design closure issues continue to linger in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts, but also provides suggestions to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer can provide a quick solution for a selected portion of the layout. We believe this framework is efficient in terms of performance and designer friendly.
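Layout decomposition for TPL amounts to assigning each feature to one of three masks so that no two features in spacing conflict share a mask, i.e. 3-coloring a conflict graph. The sketch below is a tiny backtracking 3-coloring on a made-up conflict graph; it only illustrates the decomposability check, not the incremental framework of the paper.

```python
# Nodes are layout features, edges are spacing conflicts; a valid coloring is a
# mask assignment, and failure flags a location needing manual modification.
def three_color(adj, colors, node=0):
    if node == len(adj):
        return True
    for c in range(3):                         # three masks
        if all(colors[nb] != c for nb in adj[node] if colors[nb] is not None):
            colors[node] = c
            if three_color(adj, colors, node + 1):
                return True
            colors[node] = None
    return False                               # no TPL-compliant assignment

# Toy conflict graph: feature 0 conflicts with 1, 2, 3; 1-2 and 2-3 also conflict.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
colors = {n: None for n in adj}
if three_color(adj, colors):
    print("mask assignment:", colors)
else:
    print("not triple-patterning decomposable; layout change needed")
```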
NASA Astrophysics Data System (ADS)
Albert, L.; Rottensteiner, F.; Heipke, C.
2015-08-01
Land cover and land use exhibit strong contextual dependencies. We propose a novel approach for the simultaneous classification of land cover and land use, where semantic and spatial context is considered. The image sites for land cover and land use classification form a hierarchy consisting of two layers: a land cover layer and a land use layer. We apply Conditional Random Fields (CRF) at both layers. The layers differ with respect to the image entities corresponding to the nodes, the employed features and the classes to be distinguished. In the land cover layer, the nodes represent super-pixels; in the land use layer, the nodes correspond to objects from a geospatial database. Both CRFs model spatial dependencies between neighbouring image sites. The complex semantic relations between land cover and land use are integrated in the classification process by using contextual features. We propose a new iterative inference procedure for the simultaneous classification of land cover and land use, in which the two classification tasks mutually influence each other. This helps to improve the classification accuracy for certain classes. The main idea of this approach is that semantic context helps to refine the class predictions, which, in turn, leads to more expressive context information. Thus, potentially wrong decisions can be reversed at later stages. The approach is designed for input data based on aerial images. Experiments are carried out on a test site to evaluate the performance of the proposed method. We show the effectiveness of the iterative inference procedure and demonstrate that a smaller size of the super-pixels has a positive influence on the classification result.
NASA Astrophysics Data System (ADS)
Zheng, Xianwei; Xiong, Hanjiang; Gong, Jianya; Yue, Linwei
2017-07-01
Virtual globes play an important role in representing three-dimensional models of the Earth. To extend the functioning of a virtual globe beyond that of a "geobrowser", the accuracy of the geospatial data in the processing and representation should be of special concern for the scientific analysis and evaluation. In this study, we propose a method for the processing of large-scale terrain data for virtual globe visualization and analysis. The proposed method aims to construct a morphologically preserved multi-resolution triangulated irregular network (TIN) pyramid for virtual globes to accurately represent the landscape surface and simultaneously satisfy the demands of applications at different scales. By introducing cartographic principles, the TIN model in each layer is controlled with a data quality standard to formulize its level of detail generation. A point-additive algorithm is used to iteratively construct the multi-resolution TIN pyramid. The extracted landscape features are also incorporated to constrain the TIN structure, thus preserving the basic morphological shapes of the terrain surface at different levels. During the iterative construction process, the TIN in each layer is seamlessly partitioned based on a virtual node structure, and tiled with a global quadtree structure. Finally, an adaptive tessellation approach is adopted to eliminate terrain cracks in the real-time out-of-core spherical terrain rendering. The experiments undertaken in this study confirmed that the proposed method performs well in multi-resolution terrain representation, and produces high-quality underlying data that satisfy the demands of scientific analysis and evaluation.
Analysis of electric current flow through the HTc multilayered superconductors
NASA Astrophysics Data System (ADS)
Sosnowski, J.
2016-02-01
The issue of transport current flow through multilayered high-temperature superconductors is considered, depending on the direction of the electric current with respect to the surface of the superconducting CuO2 layers. For the configuration with current flowing inside the layers and a perpendicular magnetic field, the current limitations connected with the interaction of pancake-type vortices with nano-sized defects, created among others during fast-neutron irradiation, are considered. This makes the issue relevant to nuclear energy devices such as the ITER tokamak, the LHC, and the Nuclotron-NICA accelerator currently under development, as well as to cryocables. A phenomenological analysis of the formation of the pinning potential barrier, which determines the critical current flow inside the plane, is given in the paper. A comparison of the theoretical model with experimental data is also presented, together with the calculated influence of the fast-neutron irradiation dose on the critical current. For the current direction perpendicular to the superconducting planes, the current-voltage characteristics are calculated based on a model assuming the formation of long intrinsic Josephson junctions in layered HTc superconductors.
Aeromechanics Analysis of a Boundary Layer Ingesting Fan
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.; Reddy, T. S. R.; Herrick, Gregory P.; Shabbir, Aamir; Florea, Razvan V.
2013-01-01
Boundary layer ingesting propulsion systems have the potential to significantly reduce fuel burn, but these systems must overcome the challenges related to aeromechanics: fan flutter stability and forced response dynamic stresses. High-fidelity computational analysis of the fan aeromechanics is integral to the ongoing effort to design a boundary layer ingesting inlet and fan for fabrication and wind-tunnel test. A three-dimensional, time-accurate, Reynolds-averaged Navier-Stokes computational fluid dynamics code is used to study aerothermodynamic and aeromechanical behavior of the fan in response to both clean and distorted inflows. The computational aeromechanics analyses performed in this study show an intermediate design iteration of the fan to be flutter-free at the design conditions analyzed with both clean and distorted inflows. Dynamic stresses from forced response have been calculated for the design rotational speed. Additional work is ongoing to expand the analyses to off-design conditions, and for on-resonance conditions.
NASA Astrophysics Data System (ADS)
Brighenti, A.; Bonifetto, R.; Isono, T.; Kawano, K.; Russo, G.; Savoldi, L.; Zanino, R.
2017-12-01
The ITER Central Solenoid Model Coil (CSMC) is a superconducting magnet, layer-wound two-in-hand using Nb3Sn cable-in-conduit conductors (CICCs) with the central channel typical of ITER magnets, cooled with supercritical He (SHe) at ∼4.5 K and 0.5 MPa, operating for approximately 15 years at the National Institutes for Quantum and Radiological Science and Technology in Naka, Japan. The aim of this work is to give an overview of the issues related to the hydraulic performance of the three different CICCs used in the CSMC based on the extensive experimental database put together during the past 15 years. The measured hydraulic characteristics are compared across the different test campaigns and also, where available, with those from tests of short conductor samples. It is shown that the hydraulic performance of the CSMC conductors did not change significantly over the sequence of test campaigns, with more than 50 cycles up to 46 kA and 8 cooldown/warmup cycles from 300 K to 4.5 K. The capability of the correlations typically used to predict the friction factor of the SHe for the design and analysis of ITER-like CICCs is also shown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Federici, G.; Raffray, A.R.; Chiocchio, S.
1995-12-31
This paper presents the results of an analysis carried out to investigate the thermal response of ITER divertor plasma facing components (PFCs) clad with Be, W, and CFC, to high-recycling, high-power thermal transients (i.e. 10-30 MW/m²) which are anticipated to last up to a few seconds. The armour erosion and surface melting are estimated for the different plasma facing materials (PFMs), together with the maximum heat flux to the coolant and the armour/heat-sink interface temperature. The analysis assumes that intense target evaporation will lead to high radiative power losses in the plasma in front of the target, which self-protects the target. The cases analyzed clarify the influence of several key parameters such as the plasma heat flux to the target, the loss of the melt layer, the duration of the event, and the thickness of the armour, and comparison is made with cases without vapor shielding. Finally, some implications for the performance and lifetime of divertor PFCs clad with different PFMs are discussed.
NASA Technical Reports Server (NTRS)
Forgoston, Eric; Tumin, Anatoli; Ashpis, David E.
2005-01-01
An analysis of the optimal control by blowing and suction in order to generate streamwise velocity streaks is presented. The problem is examined using an iterative process that employs the Parabolized Stability Equations for an incompressible fluid along with its adjoint equations. In particular, distributions of blowing and suction are computed for both the normal and tangential velocity perturbations for various choices of parameters.
Full two-dimensional transient solutions of electrothermal aircraft blade deicing
NASA Technical Reports Server (NTRS)
Masiulaniec, K. C.; Keith, T. G., Jr.; Dewitt, K. J.; Leffel, K. L.
1985-01-01
Two finite difference methods are presented for the analysis of transient, two-dimensional responses of an electrothermal de-icer pad of an aircraft wing or blade with attached variable ice layer thickness. Both models employ a Crank-Nicolson iterative scheme, and use an enthalpy formulation to handle the phase change in the ice layer. The first technique makes use of a 'staircase' approach, fitting the irregular ice boundary with square computational cells. The second technique uses a body fitted coordinate transform, and maps the exact shape of the irregular boundary into a rectangular body, with uniformly square computational cells. The numerical solution takes place in the transformed plane. Initial results accounting for variable ice layer thickness are presented. Details of planned de-icing tests at NASA-Lewis, which will provide empirical verification for the above two methods, are also presented.
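The enthalpy formulation handles the ice/water phase change without explicitly tracking the melt front: the conserved variable is enthalpy, and temperature is recovered from it through the latent-heat plateau. Below is a minimal one-dimensional, explicit-time sketch of that idea; the material properties, grid, and heater flux are illustrative placeholders, and the scheme is explicit rather than the two-dimensional Crank-Nicolson formulation of the paper.

```python
import numpy as np

# Illustrative properties for ice (SI units, rounded).
rho, c, k = 917.0, 2100.0, 2.2      # density, specific heat, conductivity
L = 334e3                           # latent heat of fusion, J/kg
Tm = 0.0                            # melting temperature, deg C

nx, dx, dt = 50, 1e-3, 0.01         # 5 cm slab, explicit-stable time step
q_heater = 5e3                      # W/m^2 applied at the left face (de-icer pad)

T = np.full(nx, -10.0)              # initial temperature field
H = rho * c * (T - Tm)              # volumetric enthalpy, J/m^3 (H = 0 at onset of melting)

def temperature(H):
    """Invert the enthalpy-temperature relation, with a plateau across the latent heat."""
    T = np.empty_like(H)
    solid = H < 0.0
    liquid = H > rho * L
    mushy = ~solid & ~liquid
    T[solid] = Tm + H[solid] / (rho * c)
    T[mushy] = Tm
    T[liquid] = Tm + (H[liquid] - rho * L) / (rho * c)
    return T

for step in range(20000):           # march 200 s
    T = temperature(H)
    flux = -k * np.diff(T) / dx                  # conductive fluxes between cells
    dHdt = np.zeros(nx)
    dHdt[1:-1] = (flux[:-1] - flux[1:]) / dx     # interior energy balance
    dHdt[0] = (q_heater - flux[0]) / dx          # heated boundary
    dHdt[-1] = flux[-1] / dx                     # insulated far boundary
    H = H + dt * dHdt

melted = np.count_nonzero(H > 0.0)
print(f"cells at or above melting after 200 s: {melted} of {nx}")
```

Because temperature is a by-product of the enthalpy field, the same update works whether a cell is frozen, melting, or melted, which is what lets both of the paper's grid treatments share one phase-change model.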
A method for the dynamic and thermal stress analysis of space shuttle surface insulation
NASA Technical Reports Server (NTRS)
Ojalvo, I. U.; Levy, A.; Austin, F.
1975-01-01
The thermal protection system of the space shuttle consists of thousands of separate insulation tiles bonded to the orbiter's surface through a soft strain-isolation layer. The individual tiles are relatively thick and possess nonuniform properties. Therefore, each is idealized by finite-element assemblages containing up to 2500 degrees of freedom. Since the tiles affixed to a given structural panel will, in general, interact with one another, application of the standard direct-stiffness method would require equation systems involving excessive numbers of unknowns. This paper presents a method which overcomes this problem through an efficient iterative procedure which requires treatment of only a single tile at any given time. Results of associated static, dynamic, and thermal stress analyses and sufficient conditions for convergence of the iterative solution method are given.
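The single-tile-at-a-time idea is essentially a block iterative (Gauss-Seidel-type) solve: each tile's equations are solved with the current estimates of its neighbours' unknowns held fixed, and the sweep is repeated until the coupling terms stop changing. The sketch below shows only that generic structure; the stiffness blocks and loads are random stand-ins rather than a tile/strain-isolation model, and diagonal dominance is enforced artificially so the sweep converges.

```python
import numpy as np

rng = np.random.default_rng(2)

n_tiles, dof = 4, 5                 # 4 "tiles", 5 unknowns each (toy sizes)
# Random coupling blocks K[i][j]; the diagonal blocks are made strongly dominant.
K = [[rng.standard_normal((dof, dof)) * 0.1 for _ in range(n_tiles)] for _ in range(n_tiles)]
for i in range(n_tiles):
    K[i][i] = K[i][i] @ K[i][i].T + dof * np.eye(dof)
    for j in range(i):
        K[i][j] = K[j][i].T         # keep the global matrix symmetric
f = [rng.standard_normal(dof) for _ in range(n_tiles)]

x = [np.zeros(dof) for _ in range(n_tiles)]
for sweep in range(100):
    change = 0.0
    for i in range(n_tiles):
        # Treat only tile i: move the coupling to its neighbours to the right-hand side.
        rhs = f[i] - sum(K[i][j] @ x[j] for j in range(n_tiles) if j != i)
        new_xi = np.linalg.solve(K[i][i], rhs)
        change = max(change, np.max(np.abs(new_xi - x[i])))
        x[i] = new_xi
    if change < 1e-10:
        print(f"converged after {sweep + 1} sweeps")
        break
```

Only one tile-sized system is factored at a time, which is the memory advantage the abstract points to relative to assembling the full direct-stiffness system.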
NASA Astrophysics Data System (ADS)
Hatano, Y.; Yumizuru, K.; Koivuranta, S.; Likonen, J.; Hara, M.; Matsuyama, M.; Masuzaki, S.; Tokitani, M.; Asakura, N.; Isobe, K.; Hayashi, T.; Baron-Wiechec, A.; Widdowson, A.; contributors, JET
2017-12-01
Energy spectra of β-ray induced x-rays from divertor tiles used in ITER-like wall campaigns of the Joint European Torus were measured to examine tritium (T) penetration into tungsten (W) layers. The penetration depth of T evaluated from the intensity ratio of W(Lα) x-rays to W(Mα) x-rays showed clear correlation with poloidal position; the penetration depth at the upper divertor region reached several micrometers, while that at the lower divertor region was less than 500 nm. The deep penetration at the upper part was ascribed to the implantation of high energy T produced by DD fusion reactions. The poloidal distribution of total x-ray intensity indicated higher T retention in the inboard side than the outboard side of the divertor region.
Liu, Chen-Yi; Goertzen, Andrew L
2013-07-21
An iterative position-weighted centre-of-gravity algorithm was developed and tested for positioning events in a silicon photomultiplier (SiPM)-based scintillation detector for positron emission tomography. The algorithm used a Gaussian-based weighting function centred at the current estimate of the event location. The algorithm was applied to the signals from a 4 × 4 array of SiPM detectors that used individual channel readout and a LYSO:Ce scintillator array. Three scintillator array configurations were tested: single layer with 3.17 mm crystal pitch, matched to the SiPM size; single layer with 1.5 mm crystal pitch; and dual layer with 1.67 mm crystal pitch and a ½ crystal offset in the X and Y directions between the two layers. The flood histograms generated by this algorithm were shown to be superior to those generated by the standard centre of gravity. The width of the Gaussian weighting function of the algorithm was optimized for different scintillator array setups. The optimal width of the Gaussian curve was found to depend on the amount of light spread. The algorithm required less than 20 iterations to calculate the position of an event. The rapid convergence of this algorithm will readily allow for implementation on a front-end detector processing field programmable gate array for use in improved real-time event positioning and identification.
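The positioning step itself is straightforward to sketch: starting from the ordinary centre of gravity, each iteration re-weights the SiPM signals by a Gaussian centred on the current position estimate and recomputes the centroid. The short NumPy illustration below uses a simulated 4 × 4 light distribution and a made-up Gaussian width; it is not the authors' detector model or calibration.

```python
import numpy as np

pitch = 3.2                                    # mm, roughly the SiPM pitch
xs = (np.arange(4) - 1.5) * pitch              # pixel centre coordinates
X, Y = np.meshgrid(xs, xs)

# Simulated light spread from an event at (1.1, -0.7) mm plus a uniform background.
true_xy = np.array([1.1, -0.7])
spread = 2.0                                   # mm, assumed light-spread scale
signal = np.exp(-((X - true_xy[0])**2 + (Y - true_xy[1])**2) / (2 * spread**2)) + 0.05

def iterative_cog(signal, sigma=2.5, n_iter=20, tol=1e-4):
    """Position-weighted centre of gravity with a Gaussian weighting window."""
    # Iteration 0: standard centre of gravity over all pixels.
    x = np.sum(signal * X) / np.sum(signal)
    y = np.sum(signal * Y) / np.sum(signal)
    for _ in range(n_iter):
        w = signal * np.exp(-((X - x)**2 + (Y - y)**2) / (2 * sigma**2))
        x_new = np.sum(w * X) / np.sum(w)
        y_new = np.sum(w * Y) / np.sum(w)
        if np.hypot(x_new - x, y_new - y) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y

print("standard CoG :", np.sum(signal * X) / np.sum(signal),
      np.sum(signal * Y) / np.sum(signal))
print("iterative CoG:", iterative_cog(signal))
```

The Gaussian width plays the role the abstract highlights: it should roughly match the light spread, so it is the natural parameter to tune per scintillator configuration.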
Numerical study of shock-wave/boundary layer interactions in premixed hydrogen-air hypersonic flows
NASA Technical Reports Server (NTRS)
Yungster, Shaye
1991-01-01
A computational study of shock wave/boundary layer interactions involving premixed combustible gases, and the resulting combustion processes is presented. The analysis is carried out using a new fully implicit, total variation diminishing (TVD) code developed for solving the fully coupled Reynolds-averaged Navier-Stokes equations and species continuity equations in an efficient manner. To accelerate the convergence of the basic iterative procedure, this code is combined with vector extrapolation methods. The chemical nonequilibrium processes are simulated by means of a finite-rate chemistry model for hydrogen-air combustion. Several validation test cases are presented and the results compared with experimental data or with other computational results. The code is then applied to study shock wave/boundary layer interactions in a ram accelerator configuration. Results indicate a new combustion mechanism in which a shock wave induces combustion in the boundary layer, which then propagates outwards and downstream. At higher Mach numbers, spontaneous ignition in part of the boundary layer is observed, which eventually extends along the entire boundary layer at still higher values of the Mach number.
Numerical study of shock-wave/boundary layer interactions in premixed hydrogen-air hypersonic flows
NASA Technical Reports Server (NTRS)
Yungster, Shaye
1990-01-01
A computational study of shock wave/boundary layer interactions involving premixed combustible gases, and the resulting combustion processes is presented. The analysis is carried out using a new fully implicit, total variation diminishing (TVD) code developed for solving the fully coupled Reynolds-averaged Navier-Stokes equations and species continuity equations in an efficient manner. To accelerate the convergence of the basic iterative procedure, this code is combined with vector extrapolation methods. The chemical nonequilibrium processes are simulated by means of a finite-rate chemistry model for hydrogen-air combustion. Several validation test cases are presented and the results compared with experimental data or with other computational results. The code is then applied to study shock wave/boundary layer interactions in a ram accelerator configuration. Results indicate a new combustion mechanism in which a shock wave induces combustion in the boundary layer, which then propagates outwards and downstream. At higher Mach numbers, spontaneous ignition in part of the boundary layer is observed, which eventually extends along the entire boundary layer at still higher values of the Mach number.
Optimization of tritium breeding and shielding analysis to plasma in ITER fusion reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Indah Rosidah, M., E-mail: indah.maymunah@gmail.com; Suud, Zaki, E-mail: szaki@fi.itb.ac.id; Yazid, Putranto Ilham
The development of fusion energy is an important part of international energy strategy, with a major milestone being the ITER (International Thermonuclear Experimental Reactor) project, in which many parties, including the United States, Europe, and Japan, agreed to build a TOKAMAK-type fusion reactor in France. In an ideal fusion reactor the fuel is purely deuterium, but this requires a higher reactor temperature; in the ITER project the fuels are deuterium and tritium, which require a lower reactor temperature. In this study, tritium for the fusion reactor is produced by the reaction of lithium with neutrons in the blanket region, using a tritium breeding blanket in which Li-6 reacts with neutrons coming from the plasma region. The material used in each layer surrounding the plasma is optimized, with the goal of achieving a self-sufficiency condition so that enough tritium is available for consumption over a long operating time. Several strategies for optimizing the Tritium Breeding Ratio (TBR) are considered. The first is varying the Li-6 enrichment to 60%, 70%, and 90%; however, this does not yield a TBR better than the case with no enrichment, because increasing the Li-6 fraction reduces the Li-7 fraction. The second is replacing the neutron multiplier material with Pb; the TBR is found to be better with Be as the neutron multiplier. Besides the TBR, the distributions of neutron flux and neutron dose rate are analyzed to track the change in neutron population across each layer of the reactor. The simulations show that about 97% of the neutrons are absorbed by the reactor materials, which is sufficient. In addition, the neutron energy spectrum is analyzed in several layers of the fusion reactor, such as the blanket, coolant, and divertor, whose materials must withstand high-temperature and high-pressure conditions for more than ten years.
Resin Permeation Through Compressed Glass Insulation for Iter Central Solenoid
NASA Astrophysics Data System (ADS)
Reed, R.; Roundy, F.; Martovetsky, N.; Miller, J.; Mann, T.
2010-04-01
Concern has been expressed about the ability of the resin system to penetrate the compressed dry glass of the turn and layer insulation during vacuum-pressure impregnation of ITER Central Solenoid (CS) modules. The stacked pancake layers of each module result in compression loads up to 9×10⁴ kg (100 tons) on the lowest layers of each segment. The objective of this program was to assess the effects of this compressive load on resin permeation under resin-transfer conditions and with materials identical to those expected to be used in actual coil fabrication [45-50 °C, vacuum of 133 Pa (1 torr), DGEBF/anhydride epoxy resin system, E-glass satin weave, applied pressure of 125 kPa]. The experimental conditions and materials are detailed and the permeation results presented in this paper.
NASA Technical Reports Server (NTRS)
Blumenthal, Brennan T.; Elmiligui, Alaa; Geiselhart, Karl A.; Campbell, Richard L.; Maughmer, Mark D.; Schmitz, Sven
2016-01-01
The present paper examines potential propulsive and aerodynamic benefits of integrating a Boundary-Layer Ingestion (BLI) propulsion system into a typical commercial aircraft using the Common Research Model (CRM) geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment is used to generate engine conditions for CFD analysis. Improvements to the BLI geometry are made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model are shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method, and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2 deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.4% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from Boundary-Layer Ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.
NASA Technical Reports Server (NTRS)
Blumenthal, Brennan
2016-01-01
This thesis will examine potential propulsive and aerodynamic benefits of integrating a boundary-layer ingestion (BLI) propulsion system with a typical commercial aircraft using the Common Research Model geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment will be used to generate engine conditions for CFD analysis. Improvements to the BLI geometry will be made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model are shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.3% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from boundary-layer ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.
Scattering from very rough layers under the geometric optics approximation: further investigation.
Pinel, Nicolas; Bourlier, Christophe
2008-06-01
Scattering from very rough homogeneous layers is studied in the high-frequency limit (under the geometric optics approximation) by taking the shadowing effect into account. To do so, the iterated Kirchhoff approximation, recently developed by Pinel et al. [Waves Random Complex Media 17, 283 (2007)] and reduced to the geometric optics approximation, is used and investigated in more detail. The contributions from the higher orders of scattering inside the rough layer are calculated under the iterated Kirchhoff approximation. The method can be applied to rough layers with either very rough or perfectly flat lower interfaces, separating either lossless or lossy media. The results are compared with the PILE (propagation-inside-layer expansion) method, recently developed by Déchamps et al. [J. Opt. Soc. Am. A 23, 359 (2006)], and accelerated by the forward-backward method with spectral acceleration. They highlight that there is very good agreement between the developed method and the reference numerical method for all scattering orders and that the method can be applied to root-mean-square (RMS) heights at least down to 0.25λ.
Radiograph and passive data analysis using mixed variable optimization
Temple, Brian A.; Armstrong, Jerawan C.; Buescher, Kevin L.; Favorite, Jeffrey A.
2015-06-02
Disclosed herein are representative embodiments of methods, apparatus, and systems for performing radiography analysis. For example, certain embodiments perform radiographic analysis using mixed variable computation techniques. One exemplary system comprises a radiation source, a two-dimensional detector for detecting radiation transmitted through an object between the radiation source and detector, and a computer. In this embodiment, the computer is configured to input the radiographic image data from the two-dimensional detector and to determine one or more materials that form the object by using an iterative analysis technique that selects the one or more materials from hierarchically arranged solution spaces of discrete material possibilities and selects the layer interfaces from the optimization of the continuous interface data.
The semi-discrete Galerkin finite element modelling of compressible viscous flow past an airfoil
NASA Technical Reports Server (NTRS)
Meade, Andrew J., Jr.
1992-01-01
A method is developed to solve the two-dimensional, steady, compressible, turbulent boundary-layer equations and is coupled to an existing Euler solver for attached transonic airfoil analysis problems. The boundary-layer formulation utilizes the semi-discrete Galerkin (SDG) method to model the spatial variable normal to the surface with linear finite elements and the time-like variable with finite differences. A Dorodnitsyn transformed system of equations is used to bound the infinite spatial domain thereby permitting the use of a uniform finite element grid which provides high resolution near the wall and automatically follows boundary-layer growth. The second-order accurate Crank-Nicolson scheme is applied along with a linearization method to take advantage of the parabolic nature of the boundary-layer equations and generate a non-iterative marching routine. The SDG code can be applied to any smoothly-connected airfoil shape without modification and can be coupled to any inviscid flow solver. In this analysis, a direct viscous-inviscid interaction is accomplished between the Euler and boundary-layer codes, through the application of a transpiration velocity boundary condition. Results are presented for compressible turbulent flow past NACA 0012 and RAE 2822 airfoils at various freestream Mach numbers, Reynolds numbers, and angles of attack. All results show good agreement with experiment, and the coupled code proved to be a computationally-efficient and accurate airfoil analysis tool.
Yu, Hua-Gen
2015-01-28
We report a rigorous full dimensional quantum dynamics algorithm, the multi-layer Lanczos method, for computing vibrational energies and dipole transition intensities of polyatomic molecules without any dynamics approximation. The multi-layer Lanczos method is developed by using a few advanced techniques including the guided spectral transform Lanczos method, multi-layer Lanczos iteration approach, recursive residue generation method, and dipole-wavefunction contraction. The quantum molecular Hamiltonian at the total angular momentum J = 0 is represented in a set of orthogonal polyspherical coordinates so that the large amplitude motions of vibrations are naturally described. In particular, the algorithm is general and problem-independent. An application is illustrated by calculating the infrared vibrational dipole transition spectrum of CH₄ based on the ab initio T8 potential energy surface of Schwenke and Partridge and the low-order truncated ab initio dipole moment surfaces of Yurchenko and co-workers. A comparison with experiments is made. The algorithm is also applicable for Raman polarizability active spectra.
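The workhorse underneath such calculations is the Lanczos recursion, which builds a small tridiagonal matrix whose extreme Ritz values approximate the eigenvalues of the full Hamiltonian. The sketch below shows plain symmetric Lanczos with full reorthogonalization on a random symmetric matrix; it illustrates only the basic iteration, not the guided spectral transform or multi-layer machinery of the paper.

```python
import numpy as np

def lanczos(A, m=80, seed=0):
    """Return the Rayleigh-Ritz eigenvalues of the m-step Lanczos tridiagonal matrix."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q = rng.standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(m):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:                    # invariant subspace found
                m = j + 1
                break
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    return np.linalg.eigvalsh(T)

# Toy symmetric "Hamiltonian" standing in for a real vibrational Hamiltonian.
rng = np.random.default_rng(3)
n = 500
H = rng.standard_normal((n, n))
H = (H + H.T) / 2

ritz = lanczos(H, m=80)
exact = np.linalg.eigvalsh(H)
print("lowest Ritz values :", np.round(ritz[:3], 4))
print("lowest exact values:", np.round(exact[:3], 4))
```

Only matrix-vector products with the Hamiltonian are required, which is why Lanczos-based schemes scale to full-dimensional molecular problems where the Hamiltonian is never stored as a dense matrix.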
Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed
NASA Astrophysics Data System (ADS)
Arif, N.; Danoedoro, P.; Hartono
2017-12-01
Erosion modeling is an important measuring tool for both land users and decision makers to evaluate land cultivation, and thus it is necessary to have a model that represents the actual conditions. Erosion models are complex because of the uncertainty of data with different sources and processing procedures. Artificial neural networks can be relied on for complex and non-linear data processing such as erosion data. The main difficulty in artificial neural network training is the determination of the value of each network input parameter, i.e. the number of hidden layers, the learning rate, the momentum, and the RMS error. This study tested the capability of an artificial neural network in the prediction of erosion risk with these input parameters through multiple simulations to obtain good classification results. The model was implemented in the Serang Watershed, Kulonprogo, Yogyakarta, which is one of the potentially critical watersheds in Indonesia. The simulation results showed that the number of iterations had a significant effect on the accuracy compared to the other parameters. A small number of iterations can produce good accuracy if the combination of the other parameters is right. In this case, one hidden layer was sufficient to produce good accuracy. The highest training accuracy achieved in this study was 99.32%, which occurred in the ANN 14 simulation with a combination of network input parameters of 1 HL; LR 0.01; M 0.5; RMS 0.0001, and 15000 iterations. The ANN training accuracy was not influenced by the number of channels, namely the input dataset (erosion factors), or by the data dimensions; rather, it was determined by changes in the network parameters.
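The training setup described — a single hidden layer, a learning rate, a momentum term, and an RMS-error stopping threshold alongside a maximum iteration count — can be sketched with a tiny NumPy network. The data below are random stand-ins for the erosion-factor channels and risk classes; the hyperparameter values echo the ones quoted in the abstract, but nothing else about the authors' model is reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in dataset: 200 samples, 6 "erosion factor" channels, 3 risk classes.
X = rng.standard_normal((200, 6))
y = rng.integers(0, 3, size=200)
Y = np.eye(3)[y]                                   # one-hot targets

n_hidden, lr, momentum = 8, 0.01, 0.5              # 1 hidden layer, LR 0.01, M 0.5
rms_target, max_iter = 1e-4, 15000

W1 = rng.standard_normal((6, n_hidden)) * 0.1
W2 = rng.standard_normal((n_hidden, 3)) * 0.1
vW1, vW2 = np.zeros_like(W1), np.zeros_like(W2)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for it in range(max_iter):
    h = sigmoid(X @ W1)                            # hidden layer activations
    out = sigmoid(h @ W2)                          # output layer activations
    err = Y - out
    rms = np.sqrt(np.mean(err**2))
    if rms < rms_target:                           # RMS stopping criterion
        break
    # Backpropagation for the squared-error loss.
    d_out = err * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    # Gradient step with momentum.
    vW2 = momentum * vW2 + lr * (h.T @ d_out) / len(X)
    vW1 = momentum * vW1 + lr * (X.T @ d_hid) / len(X)
    W2 += vW2
    W1 += vW1

accuracy = np.mean(out.argmax(axis=1) == y)
print(f"stopped at iteration {it}, RMS error {rms:.4f}, training accuracy {accuracy:.2%}")
```

Running such a loop for different iteration counts and parameter combinations is the kind of simulation sweep the study describes when it reports which settings dominate the training accuracy.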
Sound transmission through a poroelastic layered panel
NASA Astrophysics Data System (ADS)
Nagler, Loris; Rong, Ping; Schanz, Martin; von Estorff, Otto
2014-04-01
Multi-layered panels are often used to improve the acoustics in cars, airplanes, rooms, etc. For such an application these panels include porous and/or fibrous layers. The proposed numerical method is an approach to simulate the acoustical behavior of such multi-layered panels. The model assumes plate-like structures and, hence, combines plate theories for the different layers. The poroelastic layer is modelled with a recently developed plate theory. This theory uses a series expansion in the thickness direction with subsequent analytical integration in this direction to reduce the three dimensions to two. The same idea is used to model either air gaps or fibrous layers. The latter are modeled as an equivalent fluid and can be handled like an air gap, i.e., a kind of `air plate' is used. The coupling of the layers is done by using the series expansion to express the continuity conditions on the surfaces of the plates. The final system is solved with finite elements, where domain decomposition techniques in combination with preconditioned iterative solvers are applied to solve the final system of equations. In a large frequency range, the comparison with measurements shows very good agreement. From the numerical solution process it can be concluded that different preconditioners for the different layers are necessary. Reusing the Krylov subspace of the iterative solvers pays off if several excitations have to be computed, but less so in the loop over the frequencies.
Plasma facing materials performance under ITER-relevant mitigated disruption photonic heat loads
NASA Astrophysics Data System (ADS)
Klimov, N. S.; Putrik, A. B.; Linke, J.; Pitts, R. A.; Zhitlukhin, A. M.; Kuprianov, I. B.; Spitsyn, A. V.; Ogorodnikova, O. V.; Podkovyrov, V. L.; Muzichenko, A. D.; Ivanov, B. V.; Sergeecheva, Ya. V.; Lesina, I. G.; Kovalenko, D. V.; Barsuk, V. A.; Danilina, N. A.; Bazylev, B. N.; Giniyatulin, R. N.
2015-08-01
Plasma-facing materials (PFMs; ITER grade stainless steel, beryllium, and ferritic-martensitic steels), as well as deposited erosion products of PFCs (Be-like, tungsten, and carbon based), were tested in QSPA under photonic heat loads relevant to those expected from photon radiation during disruptions mitigated by massive gas injection in ITER. Repeated pulses slightly above the melting threshold on the bulk materials eventually lead to a regular, "corrugated" surface, with hills and valleys spaced by 0.2-2 mm. The results indicate that hill growth (growth rate of ∼1 μm per pulse) and sample thinning in the valleys are a result of melt-layer redistribution. The measurements on the 316L(N)-IG indicate that the amount of tritium absorbed by the sample from the gas phase significantly increases with pulse number as well as the modified layer thickness. Repeated pulses significantly below the melting threshold on the deposited erosion products lead to a decrease of hydrogen isotopes trapped during the deposition of the eroded material.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemmi, T.; Matsui, K.; Koizumi, N.
2014-01-27
The insulation system of the ITER TF coils consists of multi-layer glass/polyimide tapes impregnated with a cyanate-ester/epoxy resin. The ITER TF coils are required to withstand an irradiation of 10 MGy from gamma-rays and neutrons, since the ITER TF coils are exposed to a fast neutron (>0.1 MeV) fluence of 10²² n/m² during ITER operation. Cyanate-ester/epoxy blended resins and bonded glass/polyimide tapes are developed as insulation materials to realize the required radiation-hardness for the insulation of the ITER TF coils. To evaluate the radiation-hardness of the developed insulation materials, the inter-laminar shear strength (ILSS) of glass-fiber reinforced plastics (GFRP) fabricated using the developed insulation materials is measured, as one of the most important mechanical properties, before and after irradiation in the JRR-3M fission reactor. As a result, it is demonstrated that the GFRPs using the developed insulation materials have sufficient performance for application to the ITER TF coil insulation.
Mathematical modeling of static layer crystallization for propellant grade hydrogen peroxide
NASA Astrophysics Data System (ADS)
Hao, Lin; Chen, Xinghua; Sun, Yaozhou; Liu, Yangyang; Li, Shuai; Zhang, Mengqian
2017-07-01
Hydrogen peroxide (H2O2) is an important raw material widely used in many fields. In this work, a mathematical model of heat conduction with a moving boundary is proposed to study the melt crystallization process of hydrogen peroxide, which is carried out on the outside of a cylindrical crystallizer. By considering the effects of the cooling fluid temperature on the thermal conductivity of the crude crystal, the model improves on Guardani's earlier work and can be solved by an analytic iteration method. An experiment was designed to measure the thickness of the crystal layer over time under different conditions. A series of analyses, including the effects of different refrigerant temperatures on the crystal growth rate, the effects of different cooling rates on the crystal layer growth rate, the effects of the crystallization temperature on heat transfer, and the model's range of applicability, were conducted based on a comparison between the experimental results and the simulation results of the model.
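The essence of such a moving-boundary model can be captured by a quasi-steady energy balance at the crystal front: the latent heat released by growth is conducted through the existing layer to the cooled wall, so the layer thickness obeys a simple ODE that can be stepped or iterated in time. The sketch below integrates that simplified planar balance with illustrative property values; it is not the cylindrical, temperature-dependent-conductivity model of the paper, and none of the numbers are fitted to H2O2 data.

```python
import numpy as np

# Illustrative physical parameters (assumed values, not measured H2O2 properties).
k_crystal = 0.6        # W/(m K), thermal conductivity of the crystal layer
rho = 1450.0           # kg/m^3, crystal density
L_f = 330e3            # J/kg, latent heat of fusion
T_interface = -0.4     # deg C, melt/crystal interface temperature (near melting point)

def layer_growth(T_coolant, hours=10.0, dt=1.0, delta0=1e-4):
    """Quasi-steady planar growth: rho*L_f*d(delta)/dt = k*(T_interface - T_coolant)/delta."""
    delta = delta0
    for _ in range(int(hours * 3600 / dt)):
        q = k_crystal * (T_interface - T_coolant) / delta     # heat flux conducted to the wall
        delta += dt * q / (rho * L_f)                         # latent-heat balance at the front
    return delta

for T_cool in (-10.0, -15.0, -20.0):
    print(f"coolant {T_cool:6.1f} degC -> layer thickness after 10 h: "
          f"{layer_growth(T_cool) * 1000:.1f} mm")
```

The loop makes the qualitative trends of the experiment easy to see: a colder refrigerant drives a larger temperature difference across the layer and hence a faster, but gradually slowing, growth of the crystal thickness.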
Parallelization of implicit finite difference schemes in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Decker, Naomi H.; Naik, Vijay K.; Nicoules, Michel
1990-01-01
Implicit finite difference schemes are often the preferred numerical schemes in computational fluid dynamics, requiring less stringent stability bounds than the explicit schemes. Each iteration in an implicit scheme involves global data dependencies in the form of second and higher order recurrences. Efficient parallel implementations of such iterative methods are considerably more difficult and non-intuitive. The parallelization of the implicit schemes that are used for solving the Euler and the thin layer Navier-Stokes equations and that require inversions of large linear systems in the form of block tri-diagonal and/or block penta-diagonal matrices is discussed. Three-dimensional cases are emphasized and schemes that minimize the total execution time are presented. Partitioning and scheduling schemes for alleviating the effects of the global data dependencies are described. An analysis of the communication and the computation aspects of these methods is presented. The effect of the boundary conditions on the parallel schemes is also discussed.
Stability investigations of airfoil flow by global analysis
NASA Technical Reports Server (NTRS)
Morzynski, Marek; Thiele, Frank
1992-01-01
As the result of a global, non-parallel flow stability analysis, a single value of the disturbance growth-rate and its respective frequency is obtained. This complex value characterizes the stability of the whole flow configuration and is not referred to any particular flow pattern. The global analysis assures that all the flow elements (wake, boundary layer and shear layer) are taken into account. The physical phenomena connected with the wake instability are properly reproduced by the global analysis. This enhances the investigation of instability in any 2-D flow, including those in which boundary layer instability effects are known to be of dominating importance. Assuming a fully 2-D disturbance form, the global linear stability problem is formulated. The system of partial differential equations is solved for the eigenvalues and eigenvectors. The equations, written in the pure stream function formulation, are discretized via FDM using a curvilinear coordinate system. The complex eigenvalues and corresponding eigenvectors are evaluated by an iterative method. The investigations performed for various Reynolds numbers emphasize that the wake instability develops into the Karman vortex street. This phenomenon is shown to be connected with the first mode obtained from the non-parallel flow stability analysis. The higher modes reflect different physical phenomena, for example Tollmien-Schlichting waves, which originate in the boundary layer and tend to emerge as instabilities as the Reynolds number grows. The investigations are carried out for a circular cylinder, an oblong ellipse, and an airfoil. It is shown that the onset of the wake instability, the waves in the boundary layer, and the shear layer instability are different solutions of the same eigenvalue problem, formulated using the non-parallel theory. The analysis offers large potential as a generalization of the methods used until now for stability analysis.
Spatial frequency domain spectroscopy of two layer media
NASA Astrophysics Data System (ADS)
Yudovsky, Dmitry; Durkin, Anthony J.
2011-10-01
Monitoring of tissue blood volume and oxygen saturation using biomedical optics techniques has the potential to inform the assessment of tissue health, healing, and dysfunction. These quantities are typically estimated from the contribution of oxyhemoglobin and deoxyhemoglobin to the absorption spectrum of the dermis. However, estimation of blood related absorption in superficial tissue such as the skin can be confounded by the strong absorption of melanin in the epidermis. Furthermore, epidermal thickness and pigmentation varies with anatomic location, race, gender, and degree of disease progression. This study describes a technique for decoupling the effect of melanin absorption in the epidermis from blood absorption in the dermis for a large range of skin types and thicknesses. An artificial neural network was used to map input optical properties to spatial frequency domain diffuse reflectance of two layer media. Then, iterative fitting was used to determine the optical properties from simulated spatial frequency domain diffuse reflectance. Additionally, an artificial neural network was trained to directly map spatial frequency domain reflectance to sets of optical properties of a two layer medium, thus bypassing the need for iteration. In both cases, the optical thickness of the epidermis and absorption and reduced scattering coefficients of the dermis were determined independently. The accuracy and efficiency of the iterative fitting approach was compared with the direct neural network inversion.
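The iterative-fitting route can be sketched generically: given any forward model that maps the two-layer optical parameters to spatial-frequency-domain reflectance, a nonlinear least-squares loop recovers the epidermal optical thickness and the dermal absorption and reduced scattering coefficients from measured reflectance. In the sketch below the forward model is a hypothetical smooth stand-in (not a tissue model and not the paper's trained neural network), and the "measurement" is synthetic, so the example only demonstrates the inversion loop itself.

```python
import numpy as np
from scipy.optimize import least_squares

freqs = np.array([0.0, 0.05, 0.1, 0.15, 0.2, 0.3])    # spatial frequencies, 1/mm

def forward_model(params, fx):
    """Hypothetical stand-in for a two-layer SFD reflectance model.
    params = (epidermal optical thickness, dermal mu_a, dermal reduced scattering mu_s')."""
    tau_epi, mu_a, mu_sp = params
    return (np.exp(-tau_epi) / (1.0 + mu_a * (10.0 + 100.0 * fx))
            + 0.1 * mu_sp * np.exp(-5.0 * fx))

# Synthetic noiseless "measurement" generated from known true parameters.
true_params = np.array([0.35, 0.02, 1.1])
measured = forward_model(true_params, freqs)

fit = least_squares(
    lambda p: forward_model(p, freqs) - measured,      # residual to minimise
    x0=[0.1, 0.05, 0.5],                               # initial guess
    bounds=([0.0, 1e-4, 0.1], [3.0, 1.0, 5.0]),
)
print("true      (tau_epi, mu_a, mu_s'):", true_params)
print("recovered (tau_epi, mu_a, mu_s'):", np.round(fit.x, 3))
```

The direct-inversion alternative discussed in the abstract replaces this iterative loop with a second network trained on (reflectance, parameter) pairs, trading fitting time at inference for training time up front.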
NASA Astrophysics Data System (ADS)
Prokopec, R.; Humer, K.; Fillunger, H.; Maix, R. K.; Weber, H. W.
2010-04-01
Because of the double pancake design of the ITER TF coils the insulation will be applied in several steps. As a consequence, the conductor insulation as well as the pancake insulation will undergo multiple heat cycles in addition to the initial curing cycle. In particular the properties of the organic resin may be influenced, since its heat resistance is limited. Two identical types of sample consisting of wrapped R-glass/Kapton layers and vacuum impregnated with a cyanate ester/epoxy blend were prepared. The build-up of the reinforcement was identical for both insulation systems; however, one system was fabricated in two steps. In the first step only one half of the reinforcing layers was impregnated and cured. Afterwards the remaining layers were wrapped onto the already cured system, before the resulting system was impregnated and cured again. The mechanical properties were characterized prior to and after irradiation to fast neutron fluences of 1 and 2×10²² m⁻² (E > 0.1 MeV) in tension and interlaminar shear at 77 K. In order to simulate the pulsed operation of ITER, tension-tension fatigue measurements were performed in the load controlled mode. The results do not show any evidence for reduced mechanical strength caused by the additional heat cycle.
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong
2013-11-01
An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with a reliability judgment, and can greatly reduce the number of variable nodes involved in the subsequent iteration process and accelerate the convergence rate. Simulation results for the SCG-LDPC(3969,3720) code show that the novel HRBP decoding algorithm can greatly reduce the computational load while maintaining performance compared with the traditional belief propagation (BP) algorithm. The bit error rate (BER) of the HRBP algorithm is comparable to that of BP at a threshold value of 15, but in the subsequent iteration process the number of variable nodes for the HRBP algorithm can be reduced by about 70% at high signal-to-noise ratio (SNR) compared with the BP algorithm. When the threshold value is further increased, the HRBP algorithm gradually degenerates into the layered-BP algorithm, but at a BER of 10⁻⁷ and a maximum of 30 iterations, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB more than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. Therefore, the novel HRBP decoding algorithm is more suitable for optical communication systems.
A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2011-01-01
An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.
Design optimization of natural laminar flow bodies in compressible flow
NASA Technical Reports Server (NTRS)
Dodbele, Simha S.
1992-01-01
An optimization method has been developed to design axisymmetric body shapes such as fuselages, nacelles, and external fuel tanks with increased transition Reynolds numbers in subsonic compressible flow. The new design method involves a constraint minimization procedure coupled with analysis of the inviscid and viscous flow regions and linear stability analysis of the compressible boundary-layer. In order to reduce the computer time, Granville's transition criterion is used to predict boundary-layer transition and to calculate the gradients of the objective function, and linear stability theory coupled with the e^n method is used to calculate the objective function at the end of each design iteration. Use of a method to design an axisymmetric body with extensive natural laminar flow is illustrated through the design of a tiptank of a business jet. For the original tiptank, boundary layer transition is predicted to occur at a transition Reynolds number of 6.04×10⁶. For the designed body shape, a transition Reynolds number of 7.22×10⁶ is predicted using compressible linear stability theory coupled with the e^n method.
AORSA full wave calculations of helicon waves in DIII-D and ITER
NASA Astrophysics Data System (ADS)
Lau, C.; Jaeger, E. F.; Bertelli, N.; Berry, L. A.; Green, D. L.; Murakami, M.; Park, J. M.; Pinsker, R. I.; Prater, R.
2018-06-01
Helicon waves have been recently proposed as an off-axis current drive actuator for DIII-D, FNSF, and DEMO tokamaks. Previous ray tracing modeling using GENRAY predicts strong single pass absorption and current drive in the mid-radius region on DIII-D in high beta tokamak discharges. The full wave code AORSA, which is valid to all order of Larmor radius and can resolve arbitrary ion cyclotron harmonics, has been used to validate the ray tracing technique. If the scrape-off-layer (SOL) is ignored in the modeling, AORSA agrees with GENRAY in both the amplitude and location of driven current for DIII-D and ITER cases. These models also show that helicon current drive can possibly be an efficient current drive actuator for ITER. Previous GENRAY analysis did not include the SOL. AORSA has also been used to extend the simulations to include the SOL and to estimate possible power losses of helicon waves in the SOL. AORSA calculations show that another mode can propagate in the SOL and lead to significant (~10%–20%) SOL losses at high SOL densities. Optimizing the SOL density profile can reduce these SOL losses to a few percent.
AORSA full wave calculations of helicon waves in DIII-D and ITER
Lau, Cornwall; Jaeger, E.F.; Bertelli, Nicola; ...
2018-04-11
Helicon waves have been recently proposed as an off-axis current drive actuator for DIII-D, FNSF, and DEMO tokamaks. Previous ray tracing modeling using GENRAY predicts strong single pass absorption and current drive in the mid-radius region on DIII-D in high beta tokamak discharges. The full wave code AORSA, which is valid to all order of Larmor radius and can resolve arbitrary ion cyclotron harmonics, has been used to validate the ray tracing technique. If the scrape-off-layer (SOL) is ignored in the modeling, AORSA agrees with GENRAY in both the amplitude and location of driven current for DIII-D and ITER cases. These models also show that helicon current drive can possibly be an efficient current drive actuator for ITER. Previous GENRAY analysis did not include the SOL. AORSA has also been used to extend the simulations to include the SOL and to estimate possible power losses of helicon waves in the SOL. AORSA calculations show that another mode can propagate in the SOL and lead to significant (~10-20%) SOL losses at high SOL densities. Optimizing the SOL density profile can reduce these SOL losses to a few percent.
AORSA full wave calculations of helicon waves in DIII-D and ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lau, Cornwall; Jaeger, E.F.; Bertelli, Nicola
Helicon waves have been recently proposed as an off-axis current drive actuator for DIII-D, FNSF, and DEMO tokamaks. Previous ray tracing modeling using GENRAY predicts strong single pass absorption and current drive in the mid-radius region on DIII-D in high beta tokamak discharges. The full wave code AORSA, which is valid to all order of Larmor radius and can resolve arbitrary ion cyclotron harmonics, has been used to validate the ray tracing technique. If the scrape-off-layer (SOL) is ignored in the modeling, AORSA agrees with GENRAY in both the amplitude and location of driven current for DIII-D and ITER cases. These models also show that helicon current drive can possibly be an efficient current drive actuator for ITER. Previous GENRAY analysis did not include the SOL. AORSA has also been used to extend the simulations to include the SOL and to estimate possible power losses of helicon waves in the SOL. AORSA calculations show that another mode can propagate in the SOL and lead to significant (~10-20%) SOL losses at high SOL densities. Optimizing the SOL density profile can reduce these SOL losses to a few percent.
NASA Astrophysics Data System (ADS)
Clayton, N.; Crouchen, M.; Evans, D.; Gung, C.-Y.; Su, M.; Devred, A.; Piccin, R.
2017-12-01
The high voltage (HV) insulation on the ITER magnet feeder superconducting busbars and current leads will be prepared from S-glass fabric, pre-impregnated with an epoxy resin, which is interleaved with polyimide film and wrapped onto the components and cured during feeder manufacture. The insulation architecture consists of nine half-lapped layers of glass/Kapton, which is then enveloped in a ground-screen, and two further half-lapped layers of glass pre-preg for mechanical protection. The integrity of the HV insulation is critical in order to inhibit electrical arcs within the feeders. The insulation over the entire length of the HV components (bus bar, current leads and joints) must provide a level of voltage isolation of 30 kV. In operation, the insulation on ITER busbars will be subjected to high mechanical loads, arising from Lorentz forces, and in addition will be subjected to fretting erosion against stainless steel clamps, as the pulsed nature of some magnets results in longitudinal movement of the busbar. This work was aimed at assessing the wear on, and the changes in, the electrical properties of the insulation when subjected to typical ITER operating conditions. High voltage tests demonstrated that the electrical isolation of the insulation was intact after the fretting test.
NASA Astrophysics Data System (ADS)
Jin, Y.; Liang, Z.
2002-12-01
The vector radiative transfer (VRT) equation is an integro-differential equation describing the multiple scattering, absorption and transmission of the four Stokes parameters in random scattering media. From the formal integral solution of the VRT equation, lower order solutions, such as the first-order scattering for a layered medium or the second-order scattering for a half space, can be obtained. These lower order solutions are usually adequate at low frequency, when high-order scattering is negligible. It is not feasible, however, to continue the iteration to obtain high-order scattering solutions, because too many folds of integration would be involved. In space-borne microwave remote sensing, for example, the DMSP (Defense Meteorological Satellite Program) SSM/I (Special Sensor Microwave/Imager) employs seven channels at 19, 22, 37 and 85 GHz. Multiple scattering from terrain surfaces such as snowpack cannot be neglected at these channels. The discrete ordinate and eigen-analysis method has been studied to account for multiple scattering and has been applied to remote sensing of atmospheric precipitation, snowpack, etc. Snowpack has been modeled as a layer of dense spherical particles, and the VRT for a layer of uniformly dense spherical particles has been studied numerically by the discrete ordinate method. However, due to surface melting and refrozen crusts, the snowpack becomes stratified, forming inhomogeneous profiles of ice grain size, fractional volume, physical temperature, etc. It therefore becomes necessary to study multiple scattering and emission from stratified snowpack of dense ice grains. The discrete ordinate and eigen-analysis method cannot be simply applied to a multi-layer model, because numerically solving the resulting set of coupled VRT equations is difficult. By stratifying the inhomogeneous medium into multiple thin slabs and employing the first-order Mueller matrix of each slab, this paper develops an iterative method to derive high-order scattering solutions for the whole scattering medium. High-order scattering and emission from inhomogeneous stratified media of dense spherical particles are obtained numerically. The brightness temperature is obtained at low frequency, such as 5.3 GHz, without high-order scattering, and at the SSM/I channels with high-order scattering. The approach is also compared with the conventional discrete ordinate method for a uniform layer model. Numerical simulations for inhomogeneous snowpack are also compared with microwave remote sensing measurements.
Experiment of low resistance joints for the ITER correction coil.
Liu, Huajun; Wu, Yu; Wu, Weiyue; Liu, Bo; Shi, Yi; Guo, Shuai
2013-01-01
A test method was designed and performed to measure the joint resistance of the ITER correction coil (CC) at liquid helium (LHe) temperature. A 10 kA superconducting transformer was manufactured to provide the joint current. The transformer consisted of two concentric layer-wound superconducting solenoids. NbTi superconducting wire was wound in the primary coil and the ITER CC conductor was wound in the secondary coil. The primary and the secondary coils were both immersed in liquid helium in a cryostat with a 300 mm useful bore diameter. Two ITER CC joints were assembled in the secondary loop and tested. The current of the secondary loop was ramped to 9 kA in several steps. The two joint resistances were measured to be 1.2 nΩ and 1.65 nΩ, respectively.
Characterization of the ITER CS conductor and projection to the ITER CS performance
Martovetsky, N.; Isono, T.; Bessette, D.; ...
2017-06-20
The ITER Central Solenoid (CS) is one of the critical elements of the machine. The CS conductor went through an intense optimization and qualification program, which included characterization of the strands, a conductor straight short sample testing in the SULTAN facility at the Swiss Plasma Center (SPC), Villigen, Switzerland, and a single-layer CS Insert coil recently tested in the Central Solenoid Model Coil (CSMC) facility in QST-Naka, Japan. In this paper, we obtained valuable data in a wide range of the parameters (current, magnetic field, temperature, and strain), which allowed a credible characterization of the CS conductor in different conditions. Finally, using this characterization, we will make a projection to the performance of the CS in the ITER reference scenario.
Characterization of the ITER CS conductor and projection to the ITER CS performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martovetsky, N.; Isono, T.; Bessette, D.
The ITER Central Solenoid (CS) is one of the critical elements of the machine. The CS conductor went through an intense optimization and qualification program, which included characterization of the strands, a conductor straight short sample testing in the SULTAN facility at the Swiss Plasma Center (SPC), Villigen, Switzerland, and a single-layer CS Insert coil recently tested in the Central Solenoid Model Coil (CSMC) facility in QST-Naka, Japan. In this paper, we obtained valuable data in a wide range of the parameters (current, magnetic field, temperature, and strain), which allowed a credible characterization of the CS conductor in different conditions. Finally, using this characterization, we will make a projection to the performance of the CS in the ITER reference scenario.
Electroless atomic layer deposition
Robinson, David Bruce; Cappillino, Patrick J.; Sheridan, Leah B.; Stickney, John L.; Benson, David M.
2017-10-31
A method of electroless atomic layer deposition is described. The method electrolessly generates a layer of sacrificial material on a surface of a first material. The method adds doses of a solution of a second material to the substrate. The method performs a galvanic exchange reaction to oxidize away the layer of the sacrificial material and deposit a layer of the second material on the surface of the first material. The method can be repeated for a plurality of iterations in order to deposit a desired thickness of the second material on the surface of the first material.
NASA Astrophysics Data System (ADS)
Ji, Hongzhu; Zhang, Yinchao; Chen, Siying; Chen, He; Guo, Pan
2018-06-01
An iterative method, based on a derived inverse relationship between the atmospheric backscatter coefficient and the aerosol lidar ratio, is proposed to invert the lidar ratio profile and the aerosol extinction coefficient. The feasibility of this method is investigated theoretically and experimentally. Simulation results show that the inversion accuracy of the aerosol optical properties for the iterative method can be improved in the near-surface aerosol layer and the optically thick layer. Experimentally, as a result of the reduced insufficiency error and incoherence error, aerosol optical properties with higher accuracy can be obtained in the near-surface region and in the region of numerical derivative distortion. In addition, the particle component can be distinguished roughly based on this improved lidar ratio profile.
NASA Astrophysics Data System (ADS)
Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae
2018-02-01
This article presents an efficient heuristic placement algorithm, namely, a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic demonstrates ways to maximize space utilization by fitting the appropriate rectangle from both sides of the wall of the current residual space layer by layer. The iterative local search along with a shift strategy is developed and applied to the heuristic to balance the exploitation and exploration tasks in the solution space without the tuning of any parameters. The experimental results on many scales of packing problems show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within a reasonable duration of computational time.
Reduction of Free Edge Peeling Stress of Laminated Composites Using Active Piezoelectric Layers
Huang, Bin; Kim, Heung Soo
2014-01-01
An analytical approach is proposed for the reduction of free edge peeling stresses of laminated composites using active piezoelectric layers. The approach is the extended Kantorovich method, which is an iterative method. Multiple terms of the trial function are employed and governing equations are derived by applying the principle of complementary virtual work. The solutions are obtained by solving a generalized eigenvalue problem. By this approach, the stresses automatically satisfy not only the traction-free boundary conditions, but also the free edge boundary conditions. Through the iteration process, the free edge stresses converge very quickly. It is found that the peeling stresses generated by mechanical loadings are significantly reduced by applying a proper electric field to the piezoelectric actuators. PMID:25025088
Triple/quadruple patterning layout decomposition via linear programming and iterative rounding
NASA Astrophysics Data System (ADS)
Lin, Yibo; Xu, Xiaoqing; Yu, Bei; Baldick, Ross; Pan, David Z.
2017-04-01
As the feature size of semiconductor technology scales down to 10 nm and beyond, multiple patterning lithography (MPL) has become one of the most practical candidates for lithography, along with other emerging technologies such as extreme ultraviolet lithography (EUVL), e-beam lithography (EBL), and directed self-assembly. Due to the delay of EUVL and EBL, triple and even quadruple patterning are considered for lower metal and contact layers with tight pitches. In the MPL process, layout decomposition is the key design stage, where a layout is split into various parts and each part is manufactured through a separate mask. For metal layers, stitching may be allowed to resolve conflicts, whereas it is forbidden for contact and via layers. We focus on the application of layout decomposition where stitching is not allowed, such as for contact and via layers. We propose a linear programming (LP) and iterative rounding solving technique to reduce the number of nonintegers in the LP relaxation problem. Experimental results show that the proposed algorithms can provide high-quality decomposition solutions efficiently while introducing as few conflicts as possible.
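The LP-plus-iterative-rounding idea can be illustrated on a toy conflict graph in which each layout feature must be assigned to one of k masks and conflicting features should not share a mask. The sketch below, written against scipy.optimize.linprog, is a minimal illustration only and not the authors' formulation; the slack-variable objective, the rounding threshold, and the example graph are all assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def lp_iterative_rounding(n_nodes, conflicts, k=3, thresh=0.99):
    """Toy LP-relaxation + iterative-rounding mask assignment.

    x[v, c] in [0, 1] means "feature v is printed on mask c"; every feature
    must pick exactly one mask, and one slack variable per (conflict edge,
    mask) is minimized so conflicting features avoid sharing a mask whenever
    the LP can achieve it.
    """
    nv, ns = n_nodes * k, len(conflicts) * k
    cost = np.concatenate([np.zeros(nv), np.ones(ns)])

    A_eq = np.zeros((n_nodes, nv + ns))          # one mask per feature
    for v in range(n_nodes):
        A_eq[v, v * k:(v + 1) * k] = 1.0
    b_eq = np.ones(n_nodes)

    A_ub = np.zeros((ns, nv + ns))               # x[u,c] + x[v,c] - s <= 1
    for i, (u, v) in enumerate(conflicts):
        for m in range(k):
            r = i * k + m
            A_ub[r, u * k + m] = A_ub[r, v * k + m] = 1.0
            A_ub[r, nv + r] = -1.0
    b_ub = np.ones(ns)

    assignment = {}                              # node -> rounded mask
    while len(assignment) < n_nodes:
        bounds = [(0.0, 1.0)] * nv + [(0.0, None)] * ns
        for v, m in assignment.items():          # freeze already-rounded nodes
            for c in range(k):
                bounds[v * k + c] = (1.0, 1.0) if c == m else (0.0, 0.0)
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds)
        x = res.x[:nv].reshape(n_nodes, k)
        free = [v for v in range(n_nodes) if v not in assignment]
        confident = [v for v in free if x[v].max() >= thresh]
        for v in confident or [max(free, key=lambda v: x[v].max())]:
            assignment[v] = int(np.argmax(x[v]))  # round the surest variables
    return assignment

# Tiny example: a 4-cycle conflict graph split across 3 masks.
print(lp_iterative_rounding(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```

Fixing the most confident variables after each relaxation and re-solving the reduced LP is the core of the rounding loop; the real decomposer adds stitch handling, conflict minimization, and graph simplification on top of it.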
General and crevice corrosion study of the in-wall shielding materials for ITER vacuum vessel
NASA Astrophysics Data System (ADS)
Joshi, K. S.; Pathak, H. A.; Dayal, R. K.; Bafna, V. K.; Kimihiro, Ioki; Barabash, V.
2012-11-01
Vacuum vessel In-Wall Shield (IWS) blocks will be inserted between the inner and outer shells of the ITER vacuum vessel. Of particular concern is the susceptibility of the shielding block assemblies to crevice corrosion, which could cause rapid and extensive corrosion attack; galvanic corrosion may also occur because dissimilar metals share the same electrolyte. The IWS blocks will not be accessible for the life of the machine once the vacuum vessel is closed. Hence, it is necessary to study the susceptibility of IWS materials to general and crevice corrosion under ITER vacuum vessel operating conditions. Corrosion properties of IWS materials were studied using (i) immersion and (ii) electrochemical polarization techniques. All sample materials were subjected to a series of examinations before and after the immersion test, including weight loss/gain measurement, SEM analysis, optical stereo microscopy, and measurement of surface profile and hardness. After the immersion test, SS 304B4 and SS 304B7 showed a slight weight gain, which indicates oxide layer formation on the surface of the coupons. The SS 430 material showed negligible weight loss, which indicates a mild general corrosion effect. On visual observation with SEM and metallography, all materials showed pitting corrosion attack. All sample materials were also subjected to a series of measurements, including open-circuit potential, cyclic polarization, pitting potential, protection potential, critical anodic current, and SEM examination. All materials show a pitting loop under the OC2 operating condition; its absence under the OC1 operating condition clearly indicates the ability of chloride ions to penetrate the oxide layer on the sample surface at higher temperature. The critical pitting temperature of all samples lies between 100 °C and 200 °C.
First-Order Hyperbolic System Method for Time-Dependent Advection-Diffusion Problems
2014-03-01
…accuracy, with rapid convergence over each physical time step, typically less than five Newton iterations. …however, we employ the Gauss-Seidel (GS) relaxation, which is also an O(N) method for the discretization arising from the hyperbolic advection-diffusion system… The linear dependency of the iterations on… [Table 1: boundary layer problem; convergence criterion: residuals < 10⁻⁸, reported as a function of log₁₀ Re.]
Electromagnetic scattering of large structures in layered earths using integral equations
NASA Astrophysics Data System (ADS)
Xiong, Zonghou; Tripp, Alan C.
1995-07-01
An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed, based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation using a block iterative method. The block submatrices usually need to be stored on disk in order to save computer core memory; however, this requires a large disk for large structures. If the body is discretized into equal-size cells, it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method. The number of cells does not significantly affect the rate of convergence. Thus the algorithm effectively reduces the solution of the scattering problem to order O(N²), instead of O(N³) as with direct solvers.
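The "system iteration" described above is, in essence, a block iterative solve over substructures. The sketch below shows a plain block Gauss-Seidel loop on a dense matrix partitioned into fixed-size blocks; it illustrates the idea only, and the block size, test matrix, and tolerance are assumptions rather than values from the paper.

```python
import numpy as np

def block_gauss_seidel(A, b, block_size, tol=1e-10, max_iter=200):
    """Solve A x = b by sweeping over sub-blocks (substructures).

    Each sweep solves the diagonal block for its own unknowns while the
    coupling to every other block uses the most recent iterate, mimicking
    the "system iteration" over substructures.
    """
    n = len(b)
    starts = list(range(0, n, block_size))
    x = np.zeros_like(b, dtype=float)
    for it in range(max_iter):
        for s in starts:
            e = min(s + block_size, n)
            # Right-hand side sees the current values of all other blocks.
            rhs = b[s:e] - A[s:e, :] @ x + A[s:e, s:e] @ x[s:e]
            x[s:e] = np.linalg.solve(A[s:e, s:e], rhs)
        residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
        if residual < tol:
            return x, it + 1
    return x, max_iter

# Diagonally dominant test system, partitioned into blocks of 4 unknowns.
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16)) + 16 * np.eye(16)
b = rng.standard_normal(16)
x, iters = block_gauss_seidel(A, b, block_size=4)
print(iters, np.linalg.norm(A @ x - b))
```

In the scattering code the diagonal blocks are substructure impedance matrices and the off-diagonal couplings are regenerated on the fly from the Green's-function symmetry relations instead of being stored.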
Tomographic reconstruction of layered tissue structures
NASA Astrophysics Data System (ADS)
Hielscher, Andreas H.; Azeez-Jan, Mohideen; Bartel, Sebastian
2001-11-01
In recent years, interest in the determination of the optical properties of layered tissue structures has resurfaced. Applications include, for example, studies of layered skin tissue and underlying muscles, imaging of the brain underneath layers of skin, skull, and meninges, and imaging of the fetal head in utero beneath the layered structures of the maternal abdomen. In this work we approach the problem of layered structures in the framework of model-based iterative image reconstruction schemes. These schemes are currently being developed to determine the optical properties inside tissue from measurements on the surface. When applied to layered structures, these techniques yield substantial improvements over currently available semi-analytical approaches.
Avoiding drift related to linear analysis update with Lagrangian coordinate models
NASA Astrophysics Data System (ADS)
Wang, Yiguo; Counillon, Francois; Bertino, Laurent
2015-04-01
When applying data assimilation to Lagrangian coordinate models, it is beneficial to correct the model grid (position, volume). In an isopycnal ocean coordinate model, this information is provided by the layer thickness, which can be massless but must remain positive (a truncated Gaussian distribution). A linear Gaussian analysis does not ensure positivity for such a variable. Existing methods have been proposed to handle this issue - e.g. post-processing, anamorphosis or resampling - but none ensures conservation of the mean, which is imperative in climate applications. Here, a framework is introduced to test a new method, which proceeds as follows. First, layers for which the analysis yields negative values are iteratively grouped with neighboring layers, resulting in a probability density function with a larger mean and a smaller standard deviation, which prevents the appearance of negative values. Second, the analysis increments of the grouped layer are uniformly distributed, which prevents massless layers from becoming filled and vice versa. The new method is proved fully conservative with e.g. OI or 3DVAR, but a small drift remains with ensemble-based methods (e.g. EnKF, DEnKF, …) during the update of the ensemble anomaly. However, the resulting drift is small (an order of magnitude smaller than with post-processing) and the increase in computational cost is moderate. The new method is demonstrated with a realistic application in the Norwegian Climate Prediction Model (NorCPM), which provides climate predictions by assimilating sea surface temperature with the Ensemble Kalman Filter in a fully coupled Earth System model (NorESM) with an isopycnal ocean model (MICOM). Over the 25-year analysis period, the new method does not impair the predictive skill of the system, corrects the artificial steric drift introduced by data assimilation, and provides estimates in good agreement with IPCC AR5.
NASA Technical Reports Server (NTRS)
Harris, J. E.; Blanchard, D. K.
1982-01-01
A numerical algorithm and computer program are presented for solving the laminar, transitional, or turbulent two dimensional or axisymmetric compressible boundary-layer equations for perfect-gas flows. The governing equations are solved by an iterative three-point implicit finite-difference procedure. The software, program VGBLP, is a modification of the approach presented in NASA TR R-368 and NASA TM X-2458, respectively. The major modifications are: (1) replacement of the fourth-order Runge-Kutta integration technique with a finite-difference procedure for numerically solving the equations required to initiate the parabolic marching procedure; (2) introduction of the Blottner variable-grid scheme; (3) implementation of an iteration scheme allowing the coupled system of equations to be converged to a specified accuracy level; and (4) inclusion of an iteration scheme for variable-entropy calculations. These modifications to the approach presented in NASA TR R-368 and NASA TM X-2458 yield a software package with high computational efficiency and flexibility. Turbulence-closure options include either two-layer eddy-viscosity or mixing-length models. Eddy conductivity is modeled as a function of eddy viscosity through a static turbulent Prandtl number formulation. Several options are provided for specifying the static turbulent Prandtl number. The transitional boundary layer is treated through a streamwise intermittency function which modifies the turbulence-closure model. This model is based on the probability distribution of turbulent spots and ranges from zero to unity for laminar and turbulent flow, respectively. Several test cases are presented as guides for potential users of the software.
Global stability analysis of axisymmetric boundary layer over a circular cylinder
NASA Astrophysics Data System (ADS)
Bhoraniya, Ramesh; Vinod, Narayanan
2018-05-01
This paper presents a linear global stability analysis of the incompressible axisymmetric boundary layer on a circular cylinder. The base flow is parallel to the axis of the cylinder at the inflow boundary, and the pressure gradient is zero in the streamwise direction. The base flow velocity profile is fully non-parallel and non-similar in nature, and the boundary layer grows continuously in the spatial directions. Linearized Navier-Stokes (LNS) equations are derived for the disturbance flow quantities in cylindrical polar coordinates. The LNS equations, together with homogeneous boundary conditions, form a generalized eigenvalue problem. Since the base flow is axisymmetric, the disturbances are periodic in the azimuthal direction. The Chebyshev spectral collocation method and Arnoldi's iterative algorithm are used to solve the generalized eigenvalue problem. The global temporal modes are computed for a range of Reynolds numbers and different azimuthal wave numbers. The largest imaginary part of the computed eigenmodes is negative, and hence the flow is temporally stable. The spatial structure of the eigenmodes shows that the disturbance amplitudes grow in size and magnitude as they move downstream. The global modes of the axisymmetric boundary layer are more stable than those of the 2D flat-plate boundary layer at low Reynolds number; however, at higher Reynolds number they approach the 2D flat-plate results. Thus, the damping effect of transverse curvature is significant at low Reynolds number. A wave-like nature of the disturbance amplitudes is found in the streamwise direction for the least stable eigenmodes.
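Global stability solvers of this kind feed the discretized generalized eigenvalue problem A q = ω B q to a shift-invert Arnoldi iteration. The fragment below shows how that step might look with scipy.sparse.linalg.eigs on a small stand-in matrix pair; the matrices, shift, and mode count are placeholders, not the collocation operators assembled in the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

# Stand-in sparse matrix pair for a discretized problem A q = omega * B q.
# (In the paper A and B come from Chebyshev collocation of the linearized
# Navier-Stokes operator; here they are structured placeholders.)
n = 400
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
A = A + sp.random(n, n, density=0.01, random_state=1) * 0.1
B = sp.identity(n, format="csc")            # mass matrix (identity placeholder)

# Shift-invert Arnoldi: return the eigenvalues closest to the chosen shift.
shift = 0.1
vals, vecs = eigs(A, k=8, M=B, sigma=shift, which="LM")

# In the paper's convention the flow is temporally stable when the largest
# imaginary part of omega over the computed spectrum is negative.
print(np.sort_complex(vals))
```

Shift-invert targeting is what makes the Arnoldi iteration practical here: only a handful of modes near the shift are needed, so the full dense spectrum of the large collocation matrices never has to be computed.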
NASA Astrophysics Data System (ADS)
Masuzaki, S.; Tokitani, M.; Otsuka, T.; Oya, Y.; Hatano, Y.; Miyamoto, M.; Sakamoto, R.; Ashikawa, N.; Sakurada, S.; Uemura, Y.; Azuma, K.; Yumizuru, K.; Oyaizu, M.; Suzuki, T.; Kurotaki, H.; Hamaguchi, D.; Isobe, K.; Asakura, N.; Widdowson, A.; Heinola, K.; Jachmich, S.; Rubel, M.; contributors, JET
2017-12-01
Results of comprehensive surface analyses of divertor tiles and dust retrieved from JET after the first ITER-like wall campaign (2011-2012) are presented. Samples cored from the divertor tiles were analyzed. Numerous nano-size bubble-like structures were observed in the deposition layer on the apron of the inner divertor tile, and beryllium dust with the same structures was found in the matter collected from the inner divertor after the campaign. This suggests that the nano-size bubble-like structures can make the deposition layer brittle and may lead to cracking followed by dust generation. X-ray photoelectron spectroscopy analyses of the chemical states of species in the deposition layers identified the formation of beryllium-tungsten intermetallic compounds on an inner vertical tile. Different tritium retention profiles along the divertor tiles were observed at the top surfaces and at deeper regions of the tiles using the imaging plate technique.
A fluid modeling perspective on the tokamak power scrape-off width using SOLPS-ITER
NASA Astrophysics Data System (ADS)
Meier, Eric
2016-10-01
SOLPS-ITER, a 2D fluid code, is used to conduct the first fluid modeling study of the physics behind the power scrape-off width (λq). When drift physics is activated in the code, λq is insensitive to changes in toroidal magnetic field (Bt), as predicted by the 0D heuristic drift (HD) model developed by Goldston. Using the HD model, which quantitatively agrees with regression analysis of a multi-tokamak database, λq in ITER is projected to be 1 mm instead of the previously assumed 4 mm, magnifying the challenge of maintaining the peak divertor target heat flux below the technological limit. These simulations, which use DIII-D H-mode experimental conditions as input and reproduce the observed high-recycling, attached outer target plasma, allow insights into the scrape-off layer (SOL) physics that sets λq. The independence of λq with respect to Bt suggests that SOLPS-ITER captures the basic HD physics: the effect of Bt on the particle dwell time (∝ Bt) cancels with the effect on drift speed (∝ 1/Bt), fixing the SOL plasma density width and dictating λq. Scaling with plasma current (Ip), however, is much weaker than the roughly 1/Ip dependence predicted by the HD model. The simulated net cross-separatrix particle flux due to magnetic drifts exceeds the anomalous particle transport, and a Pfirsch-Schluter-like SOL flow pattern is established. Up-down ion pressure asymmetry enables the net magnetic drift flux. Drifts establish an in-out temperature asymmetry, and an associated thermoelectric current carries significant heat flux to the outer target. The density fall-off length in the SOL is similar to the electron temperature fall-off length, as observed experimentally. Finally, opportunities and challenges foreseen in ongoing work to extrapolate SOLPS-ITER and the HD model to ITER and future machines will be discussed. Supported by U.S. Department of Energy Contract DESC0010434.
Layer-oriented multigrid wavefront reconstruction algorithms for multi-conjugate adaptive optics
NASA Astrophysics Data System (ADS)
Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.
2003-02-01
Multi-conjugate adaptive optics (MCAO) systems with 10⁴-10⁵ degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wavefront control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of AO degrees of freedom. In this paper, we develop an iterative sparse matrix implementation of minimum variance wavefront reconstruction for telescope diameters up to 32 m with more than 10⁴ actuators. The basic approach is the preconditioned conjugate gradient method, using a multigrid preconditioner incorporating a layer-oriented (block) symmetric Gauss-Seidel iterative smoothing operator. We present open-loop numerical simulation results to illustrate algorithm convergence.
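The reconstruction step described above amounts to solving a large sparse symmetric positive-definite system with preconditioned conjugate gradients, with the preconditioner doing most of the work. The sketch below wires a PCG loop to a simple symmetric Gauss-Seidel preconditioner standing in for the layer-oriented multigrid cycle of the paper; the test matrix, tolerance, and preconditioner choice are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve_triangular

def sym_gauss_seidel_preconditioner(A):
    """Return a callable applying M^{-1} r via one forward + one backward
    Gauss-Seidel sweep (the paper uses a layer-oriented block symmetric GS
    smoother inside a multigrid cycle instead)."""
    L = sp.tril(A, format="csr")          # D + strictly lower part
    U = sp.triu(A, format="csr")          # D + strictly upper part
    D = sp.diags(A.diagonal())
    def apply(r):
        y = spsolve_triangular(L, r, lower=True)
        return spsolve_triangular(U, D @ y, lower=False)
    return apply

def pcg(A, b, precond, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradients for a symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# SPD test problem: a 2D Laplacian, a rough stand-in for the wavefront operator.
n = 40
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = sp.kronsum(T, T).tocsr()
b = np.ones(A.shape[0])
x, iters = pcg(A, b, sym_gauss_seidel_preconditioner(A))
print(iters, np.linalg.norm(A @ x - b))
```

Because each iteration only needs matrix-vector products and one preconditioner application, the cost per step stays close to linear in the number of actuators, which is the point of replacing explicit reconstruction matrices.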
Elastic-plastic mixed-iterative finite element analysis: Implementation and performance assessment
NASA Technical Reports Server (NTRS)
Sutjahjo, Edhi; Chamis, Christos C.
1993-01-01
An elastic-plastic algorithm based on the von Mises yield criterion and associative flow rule is implemented in MHOST, a mixed-iterative finite element analysis computer program developed by NASA Lewis Research Center. The performance of the resulting elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors of 4-node quadrilateral shell finite elements are tested for elastic-plastic performance. Generally, the membrane results are excellent, indicating that the implementation of the elastic-plastic mixed-iterative analysis is appropriate.
Movement of the Melt Metal Layer under Conditions Typical of Transient Events in ITER
NASA Astrophysics Data System (ADS)
Poznyak, I. M.; Safronov, V. M.; Zybenko, V. Yu.
2017-12-01
During the operation of ITER, the protective coatings of the divertor and the first wall will be exposed to significant plasma heat loads, which may cause severe erosion. One of the major failure mechanisms of metallic armor is the reduction of its thickness due to melt layer displacement. New experimental data are required in order to develop and validate physical models of melt layer movement. This paper presents experiments in which metal targets were irradiated by a plasma stream at the quasi-stationary high-current plasma accelerator QSPA-T. The obtained data allow one to determine the velocity and acceleration of the melt layer at various distances from the plasma stream axis. The force causing the radial movement of the melt layer is shown to create an acceleration on the order of 1000 g. The pressure gradient is not responsible for creating this large acceleration. To investigate the melt layer movement under a known force, an experiment with a rotating target was carried out. The influence of centrifugal and Coriolis forces led to the appearance of curved, elongated waves on the surface. The surface profile changed: there is no hill in the central part of the erosion crater, in contrast to the stationary target. The experimental data clarify the trends in the melt motion that are required for the development of theoretical models.
Comparing direct and iterative equation solvers in a large structural analysis software system
NASA Technical Reports Server (NTRS)
Poole, E. L.
1991-01-01
Two direct Choleski equation solvers and two iterative preconditioned conjugate gradient (PCG) equation solvers used in a large structural analysis software system are described. The two direct solvers are implementations of the Choleski method for variable-band matrix storage and sparse matrix storage. The two iterative PCG solvers include the Jacobi conjugate gradient method and an incomplete Choleski conjugate gradient method. The performance of the direct and iterative solvers is compared by solving several representative structural analysis problems. Some key factors affecting the performance of the iterative solvers relative to the direct solvers are identified.
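A minimal way to reproduce that kind of comparison is to solve the same sparse symmetric positive-definite stiffness-like system once with a direct factorization and once with a Jacobi-preconditioned conjugate gradient solver, then compare timings and iteration counts. The snippet below does this with SciPy on a stand-in Laplacian system; it is illustrative only, uses an LU factorization in place of the variable-band/sparse Choleski storage schemes of the original code, and the grid size is arbitrary.

```python
import time
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, splu, LinearOperator

# Stand-in SPD "stiffness" matrix: a 2D Laplacian on a 60x60 grid.
n = 60
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
K = sp.kronsum(T, T).tocsc()
f = np.ones(K.shape[0])

# Direct solve (an LU factorization stands in for a sparse Choleski solver).
t0 = time.perf_counter()
u_direct = splu(K).solve(f)
t_direct = time.perf_counter() - t0

# Jacobi-preconditioned conjugate gradient: M^{-1} = diag(K)^{-1}.
inv_diag = 1.0 / K.diagonal()
M = LinearOperator(K.shape, matvec=lambda r: inv_diag * r)
iters = 0
def count(xk):
    global iters
    iters += 1
t0 = time.perf_counter()
u_pcg, info = cg(K, f, M=M, callback=count)
t_pcg = time.perf_counter() - t0

print(f"direct: {t_direct:.3f} s   PCG: {t_pcg:.3f} s in {iters} iterations, "
      f"difference {np.linalg.norm(u_direct - u_pcg):.2e}")
```

Which approach wins depends, as the paper notes, on matrix conditioning, sparsity pattern, and how many right-hand sides share one factorization.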
Drifts, currents, and power scrape-off width in SOLPS-ITER modeling of DIII-D
Meier, E. T.; Goldston, R. J.; Kaveeva, E. G.; ...
2016-12-27
The effects of drifts and associated flows and currents on the width of the parallel heat flux channel (λq) in the tokamak scrape-off layer (SOL) are analyzed using the SOLPS-ITER 2D fluid transport code. Motivation is supplied by Goldston's heuristic drift (HD) model for λq, which yields the same approximately inverse poloidal magnetic field dependence seen in multi-machine regression. The analysis, focusing on a DIII-D H-mode discharge, reveals HD-like features, including comparable density and temperature fall-off lengths in the SOL, and up-down ion pressure asymmetry that allows the net cross-separatrix ion magnetic drift flux to exceed the net anomalous ion flux. In experimentally relevant high-recycling cases, scans of both toroidal and poloidal magnetic field (Btor and Bpol) are conducted, showing minimal λq dependence on either component of the field. Insensitivity to Btor is expected, and suggests that SOLPS-ITER is effectively capturing some aspects of HD physics. The absence of λq dependence on Bpol, however, is inconsistent with both the HD model and experimental results. The inconsistency is attributed to strong variation in the parallel Mach number, which violates one of the premises of the HD model.
Final case for a stainless steel diagnostic first wall on ITER
NASA Astrophysics Data System (ADS)
Pitts, R. A.; Bazylev, B.; Linke, J.; Landman, I.; Lehnen, M.; Loesser, D.; Loewenhoff, Th.; Merola, M.; Roccella, R.; Saibene, G.; Smith, M.; Udintsev, V. S.
2015-08-01
In 2010 the ITER Organization (IO) proposed to eliminate the beryllium armour on the plasma-facing surface of the diagnostic port plugs and instead to use bare stainless steel (SS), simplifying the design and providing significant cost reduction. Transport simulations at the IO confirmed that charge-exchange sputtering of the SS surfaces would not affect burning plasma operation through core impurity contamination, but a second key issue is the potential melt damage/material loss inflicted by the intense photon radiation flashes expected at the thermal quench of disruptions mitigated by massive gas injection. This paper addresses this second issue through a combination of ITER relevant experimental heat load tests and qualitative theoretical arguments of melt layer stability. It demonstrates that SS can be employed as material for the port plug plasma-facing surface and this has now been adopted into the ITER baseline.
Carbon fiber composites application in ITER plasma facing components
NASA Astrophysics Data System (ADS)
Barabash, V.; Akiba, M.; Bonal, J. P.; Federici, G.; Matera, R.; Nakamura, K.; Pacher, H. D.; Rödig, M.; Vieider, G.; Wu, C. H.
1998-10-01
Carbon Fiber Composites (CFCs) are one of the candidate armour materials for the plasma facing components of the International Thermonuclear Experimental Reactor (ITER). For the present reference design, CFC has been selected as the armour for the divertor target near the plasma strike point, mainly because of its unique resistance to high normal and off-normal heat loads. It does not melt under disruptions and may have a longer erosion lifetime than other possible armour materials. Issues related to CFC application in ITER are described in this paper, including erosion lifetime, tritium codeposition with eroded material and possible methods for removal of the codeposited layers, neutron irradiation effects, development of joining technologies with heat sink materials, and thermomechanical performance. The status of the development of new advanced CFCs for ITER application is also described. Finally, the remaining R&D needs are critically discussed.
NASA Astrophysics Data System (ADS)
Cunha-Filho, A. G.; Briend, Y. P. J.; de Lima, A. M. G.; Donadon, M. V.
2018-05-01
The flutter boundary prediction of complex aeroelastic systems is not an easy task. In some cases, these analyses may become prohibitive due to the high computational cost and time associated with the large number of degrees of freedom of the aeroelastic models, particularly when the aeroelastic model incorporates a control strategy with the aim of suppressing the flutter phenomenon, such as the use of viscoelastic treatments. In this situation, the use of a model reduction method is essential. However, the construction of a modal reduction basis for aeroviscoelastic systems is still a challenge, owing to the inherent frequency- and temperature-dependent behavior of the viscoelastic materials. Thus, the main contribution intended for the present study is to propose an efficient and accurate iterative enriched Ritz basis to deal with aeroviscoelastic systems. The main features and capabilities of the proposed model reduction method are illustrated in the prediction of flutter boundary for a thin three-layer sandwich flat panel and a typical aeronautical stiffened panel, both under supersonic flow.
NASA Astrophysics Data System (ADS)
Wang, Ke; Guo, Ping; Luo, A.-Li
2017-03-01
Spectral feature extraction is a crucial procedure in automated spectral analysis. This procedure starts from the spectral data and produces informative and non-redundant features, facilitating subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with applications in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra at different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way, without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior in overall performance, and its computational cost is significantly lower than that of other methods. The proposed method can be regarded as a new, valid, general-purpose feature extraction alternative for various tasks in spectral data analysis.
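Analytical, non-iterative layer-wise training of this general flavor can be illustrated with stacked random-feature layers whose readout weights are obtained in closed form from a regularized least-squares fit. The sketch below is a generic illustration of training without gradient iterations, not the authors' network; the layer sizes, tanh activation, ridge parameter, and synthetic "spectra" are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_feature_layer(Z, hidden):
    """One 'analytically trained' layer: fixed random projection + tanh.
    No gradient-based (iterative) optimization is involved."""
    W = rng.standard_normal((Z.shape[1], hidden)) / np.sqrt(Z.shape[1])
    return W, np.tanh(Z @ W)

def fit(X, y, layer_sizes=(256, 128), ridge=1e-3):
    """Stack feature layers, then solve the readout weights in closed form."""
    weights, Z = [], X
    for h in layer_sizes:
        W, Z = random_feature_layer(Z, h)
        weights.append(W)
    beta = np.linalg.solve(Z.T @ Z + ridge * np.eye(Z.shape[1]), Z.T @ y)
    return weights, beta

def predict(model, X):
    weights, beta = model
    Z = X
    for W in weights:
        Z = np.tanh(Z @ W)
    return Z @ beta

# Toy "spectra": noisy sinusoids whose frequency is the quantity to recover.
freqs = rng.uniform(1.0, 5.0, size=(600, 1))
grid = np.linspace(0.0, 1.0, 64)
X = np.sin(2 * np.pi * freqs * grid[None, :]) + 0.05 * rng.standard_normal((600, 64))
model = fit(X[:500], freqs[:500])
err = np.abs(predict(model, X[500:]) - freqs[500:]).mean()
print(f"held-out mean absolute error: {err:.3f}")
```

The closed-form solve is what keeps the computational cost low relative to backpropagation-trained networks, which is the property the abstract emphasizes.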
NASA Technical Reports Server (NTRS)
Baker, A. J.; Orzechowski, J. A.
1980-01-01
A theoretical analysis is presented yielding sets of partial differential equations for the determination of turbulent aerodynamic flowfields in the vicinity of an airfoil trailing edge. A four-phase interaction algorithm is derived to complete the analysis. Following input, the first computational phase is an elementary viscous-corrected two-dimensional potential flow solution yielding an estimate of the inviscid-flow-induced pressure distribution. Phase C involves solution of the turbulent two-dimensional boundary layer equations over the trailing edge, with transition to a two-dimensional parabolic Navier-Stokes equation system describing the near-wake merging of the upper and lower surface boundary layers. An iteration provides refinement of the potential-flow-induced pressure coupling to the viscous flow solutions. The final phase is a complete two-dimensional Navier-Stokes analysis of the wake flow in the vicinity of a blunt-based airfoil. A finite element numerical algorithm is presented which is applicable to the solution of all partial differential equation sets of the inviscid-viscous aerodynamic interaction algorithm. Numerical results are discussed.
2010-11-01
…number of deposition strategies, including sputtering [10–12] and electrodeposition [13,14]. With all synthesis strategies, control of the film… …to 10% ozone in 400 sccm O2 for 10 min. A 20 Å Al2O3 film was then deposited as a nucleation layer by iterative exposures of trimethylaluminum and…
Triple/quadruple patterning layout decomposition via novel linear programming and iterative rounding
NASA Astrophysics Data System (ADS)
Lin, Yibo; Xu, Xiaoqing; Yu, Bei; Baldick, Ross; Pan, David Z.
2016-03-01
As the feature size of semiconductor technology scales down to 10 nm and beyond, multiple patterning lithography (MPL) has become one of the most practical candidates for lithography, along with other emerging technologies such as extreme ultraviolet lithography (EUVL), e-beam lithography (EBL) and directed self-assembly (DSA). Due to the delay of EUVL and EBL, triple and even quadruple patterning are considered for lower metal and contact layers with tight pitches. In the process of MPL, layout decomposition is the key design stage, where a layout is split into various parts and each part is manufactured through a separate mask. For metal layers, stitching may be allowed to resolve conflicts, while it is forbidden for contact and via layers. In this paper, we focus on the application of layout decomposition where stitching is not allowed, such as for contact and via layers. We propose a linear programming and iterative rounding (LPIR) solving technique to reduce the number of non-integers in the LP relaxation problem. Experimental results show that the proposed algorithms can provide high-quality decomposition solutions efficiently while introducing as few conflicts as possible.
Hierarchical Higher Order Crf for the Classification of Airborne LIDAR Point Clouds in Urban Areas
NASA Astrophysics Data System (ADS)
Niemeyer, J.; Rottensteiner, F.; Soergel, U.; Heipke, C.
2016-06-01
We propose a novel hierarchical approach for the classification of airborne 3D lidar points. Spatial and semantic context is incorporated via a two-layer Conditional Random Field (CRF). The first layer operates on the point level and utilises higher order cliques. Segments are generated from the labelling obtained in this way; they are the entities of the second layer, which incorporates larger-scale context. The classification result of the segments is introduced as an energy term for the next iteration of the point-based layer. This framework iterates and mutually propagates context to improve the classification results, so that potentially wrong decisions can be revised at later stages. The output is a labelled point cloud as well as segments roughly corresponding to object instances. Moreover, we present two new contextual features for the segment classification: the distance and the orientation of a segment with respect to the closest road. It is shown that the classification benefits from these features. In our experiments the hierarchical framework improves the overall accuracies by 2.3% on the point-based level and by 3.0% on the segment-based level, respectively, compared to a purely point-based classification.
Erosion and deposition in the JET divertor during the second ITER-like wall campaign
NASA Astrophysics Data System (ADS)
Mayer, M.; Krat, S.; Baron-Wiechec, A.; Gasparyan, Yu; Heinola, K.; Koivuranta, S.; Likonen, J.; Ruset, C.; de Saint-Aubin, G.; Widdowson, A.; Contributors, JET
2017-12-01
Erosion of plasma-facing materials and successive transport and redeposition of eroded material are crucial processes determining the lifetime of plasma-facing components and the trapped tritium inventory in redeposited material layers. Erosion and deposition in the JET divertor were studied during the second JET ITER-like wall campaign ILW-2 in 2013-2014 by using a poloidal row of specially prepared divertor marker tiles, including the tungsten bulk tile 5. The marker tiles were analyzed using elastic backscattering with 3-4.5 MeV incident protons and nuclear reaction analysis using 0.8-4.5 MeV ³He ions before and after the campaign. The erosion/deposition pattern observed during ILW-2 is qualitatively comparable to that of the first campaign ILW-1 in 2011-2012: deposits consist mainly of beryllium with 5-20 at.% of carbon and oxygen and small amounts of Ni and W. The highest deposition, with deposited layer thicknesses up to 30 μm per campaign, is still observed on the upper and horizontal parts of the inner divertor. Outer divertor tiles 5, 6, 7 and 8 are net W erosion areas. The observed D inventory is roughly comparable to the inventory observed during ILW-1. The results obtained during ILW-2 therefore confirm the positive results observed in ILW-1 with respect to reduced material deposition and hydrogen isotope retention in the divertor.
Packing Optimization of an Intentionally Stratified Sorbent Bed Containing Dissimilar Media Types
NASA Technical Reports Server (NTRS)
Kidd, Jessica; Guttromson, Jayleen; Holland, Nathan
2010-01-01
The Fire Cartridge is a packed bed air filter with two different and separate layers of media designed to provide respiratory protection from combustion products after a fire event on the International Space Station (ISS). The first layer of media is a carbon monoxide catalyst made from gold nanoparticles dispersed on iron oxide. The second layer of media is universal carbon, commonly used in commercial respirator filters. Each layer must be optimally packed to effectively remove contaminants from the air. Optimal packing is achieved by vibratory agitations. However, if post-packing movement of the media within the cartridge occurs, mixing of the bed layers, air voids, and channeling could cause preferential air flow and allow contaminants to pass. Several iterations of prototype fire cartridges were developed to reduce post-packing movement of the media within each layer (settling), and to prevent mixing of the two media types. Both types of movement of the media contribute to decreased fire cartridge performance. Each iteration of the fire cartridge design was tested to demonstrate mechanical loads required to cause detrimental movement within the bed, and resulting level of functionality of the media beds after movement was detected. In order to optimally pack each layer, vertical, horizontal, and orbital agitations were tested and a final packed bulk density was calculated for each method. Packed bulk density must be calculated for each lot of catalyst to accommodate variations in particle size, shape, and density. In addition, a physical divider sheet between each type of media was added within the fire cartridge design to further inhibit intermixing of the bed layers.
Liu, Tao; Thibos, Larry; Marin, Gildas; Hernandez, Martha
2014-01-01
Conventional aberration analysis by a Shack-Hartmann aberrometer is based on the implicit assumption that an injected probe beam reflects from a single fundus layer. In fact, the biological fundus is a thick reflector and therefore conventional analysis may produce errors of unknown magnitude. We developed a novel computational method to investigate this potential failure of conventional analysis. The Shack-Hartmann wavefront sensor was simulated by computer software and used to recover by two methods the known wavefront aberrations expected from a population of normally-aberrated human eyes and bi-layer fundus reflection. The conventional method determines the centroid of each spot in the SH data image, from which wavefront slopes are computed for least-squares fitting with derivatives of Zernike polynomials. The novel 'global' method iteratively adjusted the aberration coefficients derived from conventional centroid analysis until the SH image, when treated as a unitary picture, optimally matched the original data image. Both methods recovered higher order aberrations accurately and precisely, but only the global algorithm correctly recovered the defocus coefficients associated with each layer of fundus reflection. The global algorithm accurately recovered Zernike coefficients for mean defocus and bi-layer separation with maximum error <0.1%. The global algorithm was robust for bi-layer separation up to 2 dioptres for a typical SH wavefront sensor design. For 100 randomly generated test wavefronts with 0.7 D axial separation, the retrieved mean axial separation was 0.70 D with standard deviations (S.D.) of 0.002 D. Sufficient information is contained in SH data images to measure the dioptric thickness of dual-layer fundus reflection. The global algorithm is superior since it successfully recovered the focus value associated with both fundus layers even when their separation was too small to produce clearly separated spots, while the conventional analysis misrepresents the defocus component of the wavefront aberration as the mean defocus for the two reflectors. Our novel global algorithm is a promising method for SH data image analysis in clinical and visual optics research for human and animal eyes. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
Optical implementation of neocognitron and its applications to radar signature discrimination
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin; Stoner, William W.
1991-01-01
A feature-extraction-based optoelectronic neural network is introduced. The system implementation approach applies the principle of the neocognitron paradigm first introduced by Fukushima et al. (1983). A multichannel correlator is used as a building block of a generic single layer of the neocognitron for shift-invariant feature correlation. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator. Successful pattern recognition with intraclass fault tolerance and interclass discrimination is achieved using this optoelectronic neocognitron. Detailed system analysis is described. Experimental demonstration of radar signature processing is also provided.
Mechanical Characterization of the Iter Mock-Up Insulation after Reactor Irradiation
NASA Astrophysics Data System (ADS)
Prokopec, R.; Humer, K.; Fillunger, H.; Maix, R. K.; Weber, H. W.
2010-04-01
The ITER mock-up project was launched in order to demonstrate the feasibility of an industrial impregnation process using the new cyanate ester/epoxy blend. The mock-up simulates the TF winding pack cross section by a stainless steel structure with the same dimensions as the TF winding pack, at a length of 1 m. It consists of 7 plates simulating the double pancakes, each of which is wrapped with glass fiber/Kapton sandwich tapes. After stacking the 7 plates, additional insulation layers are wrapped to simulate the ground insulation. This paper presents the results of mechanical quality tests on the mock-up pancake insulation. Tensile and short beam shear specimens were cut from the plates extracted from the mock-up and tested at 77 K using a servo-hydraulic material testing device. All tests were repeated after reactor irradiation to a fast neutron fluence of 1 × 10²² m⁻² (E > 0.1 MeV). In order to simulate the pulsed operation of ITER, tension-tension fatigue measurements were performed in the load-controlled mode. Initial results show a high mechanical strength, as expected from the high number of thin glass fiber layers, and an excellent homogeneity of the material.
Influence of Iterative Reconstruction Algorithms on PET Image Resolution
NASA Astrophysics Data System (ADS)
Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.
2015-09-01
The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated by a layer of silica gel on an aluminum (Al) foil substrate, immersed in an ¹⁸F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL, the ordered subsets separable paraboloidal surrogate (OSSPS), the median root prior (MRP), and the OSMAPOSL with quadratic prior algorithms. OSMAPOSL reconstruction was assessed by using fixed subsets and various iterations, as well as by using various beta (hyper)parameter values. MTF values were found to increase with increasing iterations. The MTF also improves with lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.
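Iterative PET algorithms of the MLEM/OSEM family referenced above share the same multiplicative update. A minimal MLEM loop on a toy system matrix is sketched below; the geometry, the system matrix, and the iteration count are illustrative stand-ins, not the GATE/STIR configuration of the study.

```python
import numpy as np

def mlem(A, counts, n_iter=50, eps=1e-12):
    """Maximum-likelihood EM reconstruction for emission tomography.

    Update: x <- x / (A^T 1) * A^T ( counts / (A x) ),
    where A is the (n_detectors x n_voxels) system matrix and counts the
    measured data; x stays non-negative by construction.
    """
    x = np.ones(A.shape[1])                # flat non-negative initial image
    sens = A.T @ np.ones(A.shape[0])       # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                       # forward projection
        ratio = counts / np.maximum(proj, eps)
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x

# Toy problem: random "system matrix" and a two-hot-spot phantom.
rng = np.random.default_rng(3)
A = rng.random((200, 64)) * (rng.random((200, 64)) < 0.2)   # sparse-ish rays
x_true = np.zeros(64); x_true[[10, 40]] = [5.0, 2.0]
counts = rng.poisson(A @ x_true)
x_rec = mlem(A, counts.astype(float))
print(np.round(x_rec[[10, 40]], 2))
```

OSEM-type variants apply the same update to ordered subsets of the detector rows per sub-iteration, and the MAP algorithms named in the abstract add a prior term that the beta parameter weights.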
Self-consistent field for fragmented quantum mechanical model of large molecular systems.
Jin, Yingdi; Su, Neil Qiang; Xu, Xin; Hu, Hao
2016-01-30
Fragment-based linear scaling quantum chemistry methods are a promising tool for the accurate simulation of chemical and biomolecular systems. Because of the coupled inter-fragment electrostatic interactions, a dual-layer iterative scheme is often employed to compute the fragment electronic structure and the total energy. In the dual-layer scheme, the self-consistent field (SCF) of the electronic structure of a fragment must be solved first, followed by the updating of the inter-fragment electrostatic interactions. The two steps are carried out sequentially and repeated; as such, a significant total number of fragment SCF iterations is required to converge the total energy, which becomes the computational bottleneck in many fragment quantum chemistry methods. To reduce the number of fragment SCF iterations and speed up the convergence of the total energy, we develop here a new SCF scheme in which the inter-fragment interactions can be updated concurrently without converging the fragment electronic structure. By constructing the global, block-wise Fock matrix and density matrix, we prove that the commutation between the two global matrices guarantees the commutation of the corresponding matrices in each fragment. Therefore, many highly efficient numerical techniques, such as the direct inversion in the iterative subspace (DIIS) method, can be employed to converge simultaneously the electronic structure of all fragments, reducing the computational cost significantly. Numerical examples for water clusters of different sizes suggest that the method shall be very useful in improving the scalability of fragment quantum chemistry methods. © 2015 Wiley Periodicals, Inc.
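The commutation test mentioned above (FD − DF → 0 at convergence) is exactly the error vector used by DIIS-type accelerators. The helper below shows a textbook Pulay/DIIS extrapolation step of the kind that could be applied to the global block-wise matrices; it is not code from the paper, the orthonormal-basis assumption (overlap S = I) is a simplification, and the commented builder functions are hypothetical placeholders.

```python
import numpy as np

def diis_extrapolate(fock_history, error_history):
    """Pulay/DIIS extrapolation of a Fock-like matrix.

    error_history[i] is the commutator-style residual F_i D_i - D_i F_i
    (in an orthonormal basis); the extrapolated F = sum_i c_i F_i minimizes
    the norm of the combined residual subject to sum_i c_i = 1.
    """
    m = len(fock_history)
    B = np.empty((m + 1, m + 1))
    B[-1, :] = -1.0
    B[:, -1] = -1.0
    B[-1, -1] = 0.0
    for i in range(m):
        for j in range(m):
            B[i, j] = np.vdot(error_history[i], error_history[j])
    rhs = np.zeros(m + 1)
    rhs[-1] = -1.0
    coeffs = np.linalg.solve(B, rhs)[:m]
    return sum(c * F for c, F in zip(coeffs, fock_history))

# Minimal usage sketch inside a (schematic) simultaneous-fragment SCF loop:
# for each iteration:
#     F = build_block_fock(D)          # hypothetical global block Fock builder
#     E = F @ D - D @ F                # commutator residual, -> 0 at convergence
#     fock_hist.append(F); err_hist.append(E)
#     F = diis_extrapolate(fock_hist[-8:], err_hist[-8:])
#     D = density_from_fock(F)         # hypothetical diagonalization step

# Tiny numeric check: the extrapolated matrix is a weighted blend of the history.
F1, F2 = np.diag([1.0, 2.0]), np.diag([1.1, 1.9])
E1 = np.array([[0.0, 0.2], [-0.2, 0.0]])
E2 = np.array([[0.0, -0.1], [0.05, 0.0]])
print(diis_extrapolate([F1, F2], [E1, E2]))
```

Driving one global commutator to zero, rather than converging each fragment separately, is what allows all fragments to be iterated concurrently in the scheme the abstract describes.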
Sauter, Andreas P; Kopp, Felix K; Münzel, Daniela; Dangelmaier, Julia; Renz, Martin; Renger, Bernhard; Braren, Rickmer; Fingerle, Alexander A; Rummeny, Ernst J; Noël, Peter B
2018-05-01
Evaluation of the influence of iterative reconstruction, tube settings, and patient habitus on the accuracy of iodine quantification with dual-layer spectral CT (DL-CT). A CT abdomen phantom with different extension rings and four iodine inserts (1, 2, 5 and 10 mg/ml) was scanned on a DL-CT system. The phantom was scanned with tube voltages of 120 and 140 kVp and CTDIvol values of 2.5, 5, 10 and 20 mGy. Reconstructions were performed for eight levels of iterative reconstruction (i0-i7). Diagnostic dose levels are classified depending on patient size and radiation dose. Measurements of iodine concentration showed accurate and reliable results. Taking all CTDIvol levels into account, the mean absolute percentage difference (MAPD) showed less accuracy for low CTDIvol levels (2.5 mGy: 34.72%) than for high CTDIvol levels (20 mGy: 5.89%). At diagnostic dose levels, accurate quantification of iodine was possible (MAPD 3.38%). The level of iterative reconstruction did not significantly influence the iodine measurements. Iodine quantification was more accurate at a tube voltage of 140 kVp. Phantom size had a considerable effect only at low dose levels; at diagnostic dose levels the effect of phantom size decreased (MAPD <5% for all phantom sizes). With DL-CT, even low iodine concentrations can be accurately quantified, and accuracies are higher when diagnostic radiation doses are employed. Copyright © 2018 Elsevier B.V. All rights reserved.
Material nonlinear analysis via mixed-iterative finite element method
NASA Technical Reports Server (NTRS)
Sutjahjo, Edhi; Chamis, Christos C.
1992-01-01
The performance of elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors are tested using 4-node quadrilateral finite elements. The membrane result is excellent, which indicates the implementation of elastic-plastic mixed-iterative analysis is appropriate. On the other hand, further research to improve bending performance of the method seems to be warranted.
Experimental study and modelling of deuterium thermal release from Be-D co-deposited layers
NASA Astrophysics Data System (ADS)
Baldwin, M. J.; Schwarz-Selinger, T.; Doerner, R. P.
2014-07-01
A study of the thermal desorption of deuterium from 1 µm thick co-deposited Be-(0.1)D layers formed at 330 K by a magnetron sputtering technique is reported. A range of thermal desorption rates 0 ⩽ β ⩽ 1.0 K s⁻¹ is explored with a view to studying the effectiveness of the proposed ITER wall and divertor bake procedure (β = 0 K s⁻¹) to be carried out at 513 and 623 K. Fixed-temperature bake durations up to 24 h are examined. The experimental thermal release data are used to validate a model input into the Tritium Migration and Analysis Program (TMAP-7). Good agreement with experiment is observed for a TMAP-7 model incorporating trap populations with activation energies for D release of 0.80 and 0.98 eV, and a dynamically computed surface D atomic-to-molecular recombination rate.
Numerical Analysis of Modeling Based on Improved Elman Neural Network
Jie, Shao
2014-01-01
A model based on the improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with memory effects. In this model, the hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions. The error curves of the sum of squared error (SSE), varying with the number of hidden neurons and the iteration step, are studied to determine the number of hidden layer neurons. Simulation results for the half-bridge class-D power amplifier (CDPA), with two-tone and broadband signals as input, show that the proposed behavioral model can reconstruct the CDPA system accurately and depict the memory effect of CDPAs well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance. PMID:25054172
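To make the architecture concrete, the fragment below sketches a forward pass of an Elman-style recurrent layer in which hidden units are passed through Chebyshev polynomials of the first kind instead of sigmoids. This is only a schematic reading of the abstract: the mapping of pre-activations into the Chebyshev domain, the polynomial orders, the output layer, and the random weights are all assumptions, and nothing here reproduces the trained IENN model.

```python
import numpy as np

def chebyshev(order, s):
    """Chebyshev polynomial of the first kind T_order(s) for s in [-1, 1]."""
    return np.cos(order * np.arccos(np.clip(s, -1.0, 1.0)))

class ChebyshevElman:
    """Elman-style recurrent layer with Chebyshev basis activations."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.standard_normal((n_hidden, n_in)) * 0.3
        self.Wh = rng.standard_normal((n_hidden, n_hidden)) * 0.1
        self.Wo = rng.standard_normal((n_out, n_hidden)) * 0.3
        self.orders = np.arange(n_hidden) % 5          # T_0 ... T_4, repeated

    def forward(self, x_seq):
        h = np.zeros(self.Wh.shape[0])
        outputs = []
        for x in x_seq:
            pre = self.Wx @ x + self.Wh @ h            # Elman recurrence
            s = np.tanh(pre)                           # squash into [-1, 1]
            h = chebyshev(self.orders, s)              # Chebyshev activations
            outputs.append(self.Wo @ h)
        return np.array(outputs)

# Forward pass on a two-tone test signal, the kind used to excite the CDPA model.
t = np.linspace(0.0, 1.0, 200)
x_seq = (np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 7 * t))[:, None]
net = ChebyshevElman(n_in=1, n_hidden=16, n_out=1)
print(net.forward(x_seq).shape)
```

The recurrent hidden state is what lets the model represent memory effects; the orthogonal basis replaces the sigmoid nonlinearity that a basic Elman network would use.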
Semantic focusing allows fully automated single-layer slide scanning of cervical cytology slides.
Lahrmann, Bernd; Valous, Nektarios A; Eisenmann, Urs; Wentzensen, Nicolas; Grabe, Niels
2013-01-01
Liquid-based cytology (LBC) in conjunction with Whole-Slide Imaging (WSI) enables the objective, sensitive, and quantitative evaluation of biomarkers in cytology. However, the complex three-dimensional distribution of cells on LBC slides requires manual focusing, long scanning times, and multi-layer scanning. Here, we present a solution that overcomes these limitations in two steps: first, we make sure that focus points are only set on cells; second, we check the total slide focus quality. From a first analysis we found that superficial dust can be separated from the cell layer (the thin layer of cells on the glass slide) itself. We then analyzed 2,295 individual focus points from 51 LBC slides stained for p16 and Ki67. Using the number of edges in a focus point image, specific color values, and size-inclusion filters, focus points detecting cells could be distinguished from focus points on artifacts (accuracy 98.6%). Sharpness, as the total focus quality of a virtual LBC slide, is computed from 5 sharpness features. We trained a multi-parameter SVM classifier on 1,600 images. On an independent validation set of 3,232 cell images we achieved an accuracy of 94.8% for classifying images as focused. Our results show that single-layer scanning of LBC slides is possible and how it can be achieved. We assembled focus point analysis and sharpness classification into a fully automatic, iterative workflow, free of user intervention, which performs repetitive slide scanning as necessary. On 400 LBC slides we achieved a scanning time of 13.9±10.1 min with 29.1±15.5 focus points. In summary, the integration of semantic focus information into whole-slide imaging allows automatic high-quality imaging of LBC slides and subsequent biomarker analysis.
Marching iterative methods for the parabolized and thin layer Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Israeli, M.
1985-01-01
Downstream marching iterative schemes for the solution of the Parabolized or Thin Layer (PNS or TL) Navier-Stokes equations are described. Modifications of the primitive equation global relaxation sweep procedure result in efficient second-order marching schemes. These schemes take full account of the reduced order of the approximate equations as they behave like the SLOR for a single elliptic equation. The improved smoothing properties permit the introduction of Multi-Grid acceleration. The proposed algorithm is essentially Reynolds number independent and therefore can be applied to the solution of the subsonic Euler equations. The convergence rates are similar to those obtained by the Multi-Grid solution of a single elliptic equation; the storage is also comparable as only the pressure has to be stored on all levels. Extensions to three-dimensional and compressible subsonic flows are discussed. Numerical results are presented.
Crustal Structure Beneath Taiwan Using Frequency-band Inversion of Receiver Function Waveforms
NASA Astrophysics Data System (ADS)
Tomfohrde, D. A.; Nowack, R. L.
Receiver function analysis is used to determine local crustal structure beneath Taiwan. We have performed preliminary data processing and polarization analysis to select stations and events and to increase overall data quality. Receiver function analysis is then applied to data from the Taiwan Seismic Network to obtain radial and transverse receiver functions. Due to the limited azimuthal coverage, only the radial receiver functions are analyzed, in terms of horizontally layered crustal structure for each station. In order to improve convergence of the receiver function inversion, frequency-band inversion (FBI) is implemented, in which an iterative inversion procedure with sequentially higher low-pass corner frequencies is used to stabilize the waveform inversion. Frequency-band inversion is applied to receiver functions at six stations of the Taiwan Seismic Network. Initial 20-layer crustal models, based on prior tomographic results, are used to start the inversion. The resulting 20-layer models are then simplified to 4- to 5-layer models and input into an alternating depth and velocity frequency-band inversion. For the six stations investigated, the resulting simplified models provide an average crustal thickness estimate of 38 km surrounding the Central Range of Taiwan. The individual station estimates also compare well with the recent tomographic model of Rau and Wu (1995) and the refraction results of Ma and Song (1997).
NASA Technical Reports Server (NTRS)
Morrison, Andrew D. (Inventor); Daud, Taher (Inventor)
1986-01-01
A method for growing a high purity, low defect layer of semiconductor is described. This method involves depositing a patterned mask of a material impervious to impurities of the semiconductor on a surface of a blank. When a layer of semiconductor is grown on the mask, the semiconductor will first grow from the surface portions exposed by the openings in the mask and will bridge the connecting portions of the mask to form a continuous layer having improved purity, since only the portions overlying the openings are exposed to defects and impurities. The process can be iterated and the mask translated to further improve the quality of grown layers.
Metallic mirrors for plasma diagnosis in current and future reactors: tests for ITER and DEMO
NASA Astrophysics Data System (ADS)
Rubel, M.; Moon, Soonwoo; Petersson, P.; Garcia-Carrasco, A.; Hallén, A.; Krawczynska, A.; Fortuna-Zaleśna, E.; Gilbert, M.; Płociński, T.; Widdowson, A.; Contributors, JET
2017-12-01
Optical spectroscopy and imaging diagnostics in next-step fusion devices will rely on metallic mirrors. The performance of mirrors is studied in present-day tokamaks and in laboratory systems. This work deals with comprehensive tests of mirrors: (a) exposed in JET with the ITER-like wall (JET-ILW); (b) irradiated by hydrogen, helium and heavy ions to simulate transmutation effects and damage which may be induced by neutrons under reactor conditions. The emphasis has been on surface modification: deposited layers on JET mirrors from the divertor and on near-surface damage in ion-irradiated targets. Analyses performed with ion beams, microscopy and spectro-photometry techniques have revealed: (i) the formation of multiple co-deposited layers; (ii) flaking-off of the layers already in the tokamak, despite the small thickness (130-200 nm) of the granular deposits; (iii) deposition of dust particles (0.2-5 μm, 300-400 mm⁻²) composed mainly of tungsten and nickel; (iv) that the stepwise irradiation of up to 30 dpa by heavy ions (Mo, Zr or Nb) caused only small changes in the optical performance, in some cases even improving reflectivity due to the removal of the surface oxide layer; (v) significant reflectivity degradation related to bubble formation caused by the irradiation with He and H ions.
A structural coarse-grained model for clays using simple iterative Boltzmann inversion
NASA Astrophysics Data System (ADS)
Schaettle, Karl; Ruiz Pestana, Luis; Head-Gordon, Teresa; Lammers, Laura Nielsen
2018-06-01
Cesium-137 is a major byproduct of nuclear energy generation and is environmentally threatening due to its long half-life and affinity for naturally occurring micaceous clays. Recent experimental observations of illite and phlogopite mica indicate that Cs+ is capable of exchanging with K+ bound in the anhydrous interlayers of layered silicates, forming sharp exchange fronts, leading to interstratification of Cs- and K-illite. We present here a coarse-grained (CG) model of the anhydrous illite interlayer developed using iterative Boltzmann inversion that qualitatively and quantitatively reproduces features of a previously proposed feedback mechanism of ion exchange. The CG model represents a 70-fold speedup over all-atom models of clay systems and predicts interlayer expansion for K-illite near ion exchange fronts. Contrary to the longstanding theory that ion exchange in a neighboring layer increases the binding of K in lattice counterion sites leading to interstratification, we find that the presence of neighboring exchanged layers leads to short-range structural relaxations that increase basal spacing and decrease cohesion of the neighboring K-illite layers. We also provide evidence that the formation of alternating Cs- and K-illite interlayers (i.e., ordered interstratification) is both thermodynamically and mechanically favorable compared to exchange in adjacent interlayers.
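The iterative Boltzmann inversion step named in the title has a compact core: starting from the potential of mean force, the pair potential is corrected each iteration by kT ln(g_i(r)/g_target(r)) until the coarse-grained radial distribution function matches the target. The sketch below shows that update loop with a placeholder CG-simulation callback; the callback, damping factor, and tolerance are assumptions, not values from the clay model.

```python
import numpy as np

KB = 0.0019872041  # Boltzmann constant, kcal/(mol K)

def iterative_boltzmann_inversion(r, g_target, run_cg_simulation,
                                  T=300.0, damping=0.5, tol=1e-3, max_iter=50):
    """Refine a tabulated pair potential V(r) until the coarse-grained RDF
    matches g_target(r).

    run_cg_simulation(r, V) must return the RDF g(r) produced by a CG
    simulation with potential V; here it is a user-supplied (hypothetical)
    callback wrapping the MD engine.
    """
    eps = 1e-10
    V = -KB * T * np.log(np.maximum(g_target, eps))     # start from the PMF
    for it in range(max_iter):
        g = run_cg_simulation(r, V)
        correction = KB * T * np.log(np.maximum(g, eps) /
                                     np.maximum(g_target, eps))
        V = V + damping * correction                    # damped IBI update
        if np.max(np.abs(g - g_target)) < tol:
            return V, it + 1
    return V, max_iter

# Self-contained demo: a mock "simulation" whose RDF is simply the Boltzmann
# factor of the current potential, standing in for real CG molecular dynamics.
r = np.linspace(2.0, 12.0, 200)
g_target = 1.0 + 0.6 * np.exp(-(r - 4.5) ** 2)          # synthetic target RDF
mock_sim = lambda r, V: np.exp(-V / (KB * 300.0))
V_final, iters = iterative_boltzmann_inversion(r, g_target, mock_sim)
print(iters, float(np.max(np.abs(mock_sim(r, V_final) - g_target))))
```

In a real parameterization the callback is a full CG MD run per iteration, which is why a structural CG potential of this kind is fitted once and then reused for the much faster production simulations.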
Neural Generalized Predictive Control: A Newton-Raphson Implementation
NASA Technical Reports Server (NTRS)
Soloway, Donald; Haley, Pamela J.
1997-01-01
An efficient implementation of Generalized Predictive Control using a multi-layer feedforward neural network as the plant's nonlinear model is presented. In using Newton-Raphson as the optimization algorithm, the number of iterations needed for convergence is significantly reduced from other techniques. The main cost of the Newton-Raphson algorithm is in the calculation of the Hessian, but even with this overhead the low iteration numbers make Newton-Raphson faster than other techniques and a viable algorithm for real-time control. This paper presents a detailed derivation of the Neural Generalized Predictive Control algorithm with Newton-Raphson as the minimization algorithm. Simulation results show convergence to a good solution within two iterations and timing data show that real-time control is possible. Comments about the algorithm's implementation are also included.
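The cost-function minimization at the heart of that scheme is a plain Newton-Raphson iteration on the control sequence: solve H Δu = −∇J and update until the gradient is small. The sketch below applies that iteration to a generic smooth cost with a numerically built gradient and Hessian; it illustrates the optimizer only, not the neural GPC cost function, and the test cost and step sizes are assumptions.

```python
import numpy as np

def newton_raphson(cost, u0, tol=1e-8, max_iter=20, h=1e-5):
    """Minimize cost(u) with Newton-Raphson using finite-difference
    gradient and Hessian (in neural GPC both are obtained analytically
    from the network model, which is what makes the Hessian affordable)."""
    u = np.asarray(u0, dtype=float)
    n = u.size
    for it in range(max_iter):
        grad = np.zeros(n)
        hess = np.zeros((n, n))
        for i in range(n):                         # finite-difference gradient
            e = np.zeros(n); e[i] = h
            grad[i] = (cost(u + e) - cost(u - e)) / (2 * h)
        for i in range(n):                         # finite-difference Hessian
            for j in range(n):
                ei = np.zeros(n); ei[i] = h
                ej = np.zeros(n); ej[j] = h
                hess[i, j] = (cost(u + ei + ej) - cost(u + ei - ej)
                              - cost(u - ei + ej) + cost(u - ei - ej)) / (4 * h * h)
        if np.linalg.norm(grad) < tol:
            return u, it
        u = u - np.linalg.solve(hess, grad)        # Newton step: H du = -grad
    return u, max_iter

# Quadratic-plus-quartic test cost: converges in very few Newton iterations.
cost = lambda u: 0.5 * u @ u + 0.1 * np.sum(u ** 4) - u.sum()
u_opt, iters = newton_raphson(cost, np.zeros(3))
print(iters, np.round(u_opt, 4))
```

The quadratic convergence of the Newton step is what keeps the iteration count low enough for real-time use, at the price of forming and factoring the Hessian at each step.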
Interactive boundary-layer calculations of a transonic wing flow
NASA Technical Reports Server (NTRS)
Kaups, Kalle; Cebeci, Tuncer; Mehta, Unmeel
1989-01-01
Results obtained from iterative solutions of the inviscid and boundary-layer equations are presented and compared with experimental values. The calculated results were obtained with an Euler code and a transonic potential code to furnish solutions for the inviscid flow; these were coupled interactively with solutions of the two-dimensional boundary-layer equations under a strip-theory approximation. The Euler code results are found to be in better agreement with the experimental data than those of the full potential code, especially in the presence of shock waves (with the sole exception of the near-tip region).
NASA Astrophysics Data System (ADS)
Broggini, Filippo; Wapenaar, Kees; van der Neut, Joost; Snieder, Roel
2014-01-01
An iterative method is presented that allows one to retrieve the Green's function originating from a virtual source located inside a medium using reflection data measured only at the acquisition surface. In addition to the reflection response, an estimate of the travel times corresponding to the direct arrivals is required; however, no detailed information about the heterogeneities in the medium is needed. The iterative scheme generalizes the Marchenko equation for inverse scattering to the seismic reflection problem. To give insight into the mechanism of the iterative method, its steps for a simple layered medium are analyzed using physical arguments based on the stationary phase method. The retrieved Green's wavefield is shown to correctly contain the multiples due to the inhomogeneities present in the medium. Additionally, a variant of the iterative scheme enables decomposition of the retrieved wavefield into its downgoing and upgoing components. These wavefields then enable the creation of a ghost-free image of the medium with either cross-correlation or multidimensional deconvolution, presenting an advantage over standard prestack migration.
Material migration studies with an ITER first wall panel proxy on EAST
NASA Astrophysics Data System (ADS)
Ding, R.; Pitts, R. A.; Borodin, D.; Carpentier, S.; Ding, F.; Gong, X. Z.; Guo, H. Y.; Kirschner, A.; Kocan, M.; Li, J. G.; Luo, G.-N.; Mao, H. M.; Qian, J. P.; Stangeby, P. C.; Wampler, W. R.; Wang, H. Q.; Wang, W. Z.
2015-02-01
The ITER beryllium (Be) first wall (FW) panels are shaped to protect leading edges between neighbouring panels arising from assembly tolerances. This departure from a perfectly cylindrical surface automatically leads to magnetically shadowed regions where eroded Be can be re-deposited, together with co-deposition of tritium fuel. To provide a benchmark for a series of erosion/re-deposition simulation studies performed for the ITER FW panels, dedicated experiments have been performed on the EAST tokamak using a specially designed, instrumented test limiter acting as a proxy for the FW panel geometry. Carbon coated molybdenum plates forming the limiter front surface were exposed to the outer midplane boundary plasma of helium discharges using the new Material and Plasma Evaluation System (MAPES). Net erosion and deposition patterns are estimated using ion beam analysis to measure the carbon layer thickness variation across the surface after exposure. The highest erosion of about 0.8 µm is found near the midplane, where the surface is closest to the plasma separatrix. No net deposition above the measurement detection limit was found on the proxy wall element, even in shadowed regions. The measured 2D surface erosion distribution has been modelled with the 3D Monte Carlo code ERO, using the local plasma parameter measurements together with a diffusive transport assumption. Excellent agreement between the experimentally observed net erosion and the modelled erosion profile has been obtained.
Discrete-Time Deterministic Q-Learning: A Novel Convergence Analysis.
Wei, Qinglai; Lewis, Frank L; Sun, Qiuye; Yan, Pengfei; Song, Ruizhuo
2017-05-01
In this paper, a novel discrete-time deterministic Q-learning algorithm is developed. In each iteration of the developed Q-learning algorithm, the iterative Q function is updated for all the state and control spaces, instead of for a single state and a single control as in the traditional Q-learning algorithm. A new convergence criterion is established to guarantee that the iterative Q function converges to the optimum, where the convergence criterion on the learning rates for traditional Q-learning algorithms is simplified. During the convergence analysis, the upper and lower bounds of the iterative Q function are analyzed to obtain the convergence criterion, instead of analyzing the iterative Q function itself. For convenience of analysis, the convergence properties for the undiscounted case of the deterministic Q-learning algorithm are first developed. Then, considering the discount factor, the convergence criterion for the discounted case is established. Neural networks are used to approximate the iterative Q function and to compute the iterative control law, respectively, for facilitating the implementation of the deterministic Q-learning algorithm. Finally, simulation results and comparisons are given to illustrate the performance of the developed algorithm.
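A table-based sketch of the full-sweep update described above, assuming a discretized state and control grid; the dynamics `f`, utility `U`, grid resolution, and nearest-neighbor projection are illustrative assumptions, and the paper itself uses neural-network approximators rather than lookup tables.

```python
import numpy as np

def deterministic_q_learning(states, controls, f, U, gamma=1.0, n_iter=100):
    """Full-sweep deterministic Q-learning:
       Q_{i+1}(x, u) = U(x, u) + gamma * min_u' Q_i(f(x, u), u')
    updated for every discretized state/control pair in each iteration.
    """
    nx, nu = len(states), len(controls)
    Q = np.zeros((nx, nu))            # zero initialization of the iterative Q function

    def nearest(x):                   # project a successor state back onto the grid
        return int(np.argmin([np.linalg.norm(np.atleast_1d(x) - np.atleast_1d(s))
                              for s in states]))

    for _ in range(n_iter):
        Q_new = np.empty_like(Q)
        for ix, x in enumerate(states):
            for iu, u in enumerate(controls):
                ix_next = nearest(f(x, u))
                Q_new[ix, iu] = U(x, u) + gamma * Q[ix_next, :].min()
        Q = Q_new
    policy = Q.argmin(axis=1)         # greedy iterative control law on the grid
    return Q, policy
```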
TMAP-7 simulation of D2 thermal release data from Be co-deposited layers
NASA Astrophysics Data System (ADS)
Baldwin, M. J.; Schwarz-Selinger, T.; Yu, J. H.; Doerner, R. P.
2013-07-01
The efficacy of (1) bake-out at 513 K and 623 K, and (2) thermal transient (10 ms) loading up to 1000 K, is explored for reducing D inventory in 1 μm thick Be-D (D/Be ˜0.1) co-deposited layers formed at 323 K for experiment (1) and ˜500 K for experiment (2). D release data from co-deposits are obtained by thermal desorption and used to validate a model input into the Tritium Migration & Analysis Program 7 (TMAP). In (1), good agreement with experiment is found for a TMAP model incorporating traps with activation energies of 0.80 eV and 0.98 eV, whereas an additional 2 eV trap was required to model experiment (2). Thermal release is found to be trap limited, but simulations are optimal when surface recombination is taken into account. Results suggest that thick built-up co-deposited layers will hinder ITER inventory control, and that bake periods (˜1 day) will be more effective in inventory reduction than transient thermal loading.
Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2011-01-01
A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain gage outputs. Iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation. It simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example is discussed in the paper that illustrates the application of the new method to a realistic simulation of a temperature-dependent calibration data set for a six-component balance.
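The idea of padding the data set so that the number of dependent and independent variables match can be sketched as follows; the choice of temperature as the additional variable and the simple linear regression model are purely illustrative assumptions, not the actual balance math model or software.

```python
import numpy as np

def fit_gage_outputs(loads, temperature, outputs):
    """Illustrative sketch of the extended calibration fit.

    loads       : (n_points, n_loads) applied balance loads
    temperature : (n_points,) additional independent calibration variable (hypothetical)
    outputs     : (n_points, n_gages) measured strain-gage outputs

    The extra independent variable is also appended as an extra *dependent*
    variable so that the counts of dependent and independent variables match,
    as the iteration equations require.
    """
    X = np.column_stack([np.ones(len(loads)), loads, temperature])   # regression matrix
    Y = np.column_stack([outputs, temperature])                      # outputs + added dependent variable
    # Least-squares fit of every dependent variable against all independent variables
    coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coeffs
```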
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony
2018-04-20
Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.
Energy deposition and thermal effects of runaway electrons in ITER-FEAT plasma facing components
NASA Astrophysics Data System (ADS)
Maddaluno, G.; Maruccia, G.; Merola, M.; Rollet, S.
2003-03-01
The profile of energy deposited by runaway electrons (RAEs) of 10 or 50 MeV in International Thermonuclear Experimental Reactor-Fusion Energy Advanced Tokamak (ITER-FEAT) plasma facing components (PFCs) and the subsequent temperature pattern have been calculated by using the Monte Carlo code FLUKA and the finite element heat conduction code ANSYS. The RAE energy deposition density was assumed to be 50 MJ/m² and both 10 and 100 ms deposition times were considered. Five different configurations of PFCs were investigated: primary first wall armoured with Be, with and without protecting CFC poloidal limiters, both port limiter first wall options (Be flat tile and CFC monoblock), and divertor baffle first wall armoured with W. The analysis has outlined that for all the configurations but one (port limiter with Be flat tile) the heat sink and the cooling tube beneath the armour are well protected for both RAE energies and for both energy deposition times. On the other hand, large melting (W, Be) or sublimation (C) of the surface layer occurs, eventually affecting the PFCs' lifetime.
ERIC Educational Resources Information Center
Lee, Kyungmee; Brett, Clare
2013-01-01
This qualitative case study is the first phase of a large-scale design-based research project to implement a theoretically derived double-layered CoP model within real-world teacher development practices. The main goal of this first iteration is to evaluate the courses and test and refine the CoP model for future implementations. This paper…
An incremental strategy for calculating consistent discrete CFD sensitivity derivatives
NASA Technical Reports Server (NTRS)
Korivi, Vamshi Mohan; Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene W.; Jones, Henry E.
1992-01-01
In this preliminary study involving advanced computational fluid dynamic (CFD) codes, an incremental formulation, also known as the 'delta' or 'correction' form, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For typical problems in 2D, a direct solution method can be applied to these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods appear to be needed for future 3D applications, however, because direct solver methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form result in certain difficulties, such as ill-conditioning of the coefficient matrix, which can be overcome when these equations are cast in the incremental form; these and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two laminar sample problems: (1) transonic flow through a double-throat nozzle; and (2) flow over an isolated airfoil.
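A generic sketch of the incremental ('delta') form follows; the dense test matrix and the diagonal approximate inverse are illustrative stand-ins for the approximately factored left-hand sides used in practice.

```python
import numpy as np

def incremental_solve(A, b, x0=None, tol=1e-10, max_iter=500, solve_approx=None):
    """Iterative solve of A x = b cast in correction form.

    Each step solves (approximately) for a correction from the current residual,
        M * dx = r = b - A x,   then   x <- x + dx,
    instead of iterating on x directly.
    solve_approx : callable applying a cheap approximate inverse M^{-1}
                   (defaults to Jacobi-like diagonal scaling).
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    if solve_approx is None:
        d = np.diag(A).copy()
        solve_approx = lambda r: r / d
    for _ in range(max_iter):
        r = b - A @ x                   # residual of the current iterate
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        x = x + solve_approx(r)         # only the correction dx is solved for
    return x
```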
NASA Technical Reports Server (NTRS)
Ashby, G. C., Jr.; Harris, J. E.
1974-01-01
Wave and skin-friction drag have been numerically calculated for a series of power-law bodies at a Mach number of 6 and Reynolds numbers, based on body length, from 1.5 million to 9.5 million. Pressure distributions were computed on the nose by the inverse method and on the body by the method of characteristics. These pressure distributions and the measured locations of boundary-layer transition were used in a nonsimilar-boundary-layer program to determine viscous effects. A coupled iterative approach between the boundary-layer and pressure-distribution programs was used to account for boundary-layer displacement-thickness effects. The calculated-drag coefficients compared well with previously obtained experimental data.
NASA Technical Reports Server (NTRS)
Charette, R. F.; Hyer, M. W.
1990-01-01
The influence of a curvilinear fiber format on the load-carrying capacity of a layered fiber-reinforced plate with a centrally located hole is investigated. A curvilinear fiber format is descriptive of layers in a laminate having fibers which are aligned with the principal stress directions in those layers. Laminates of five curvilinear fiber format designs and four straightline fiber format designs are considered. A quasi-isotropic laminate having a straightline fiber format is used to define a baseline design for comparison with the other laminate designs. Four different plate geometries are considered and differentiated by two values of hole diameter/plate width equal to 1/6 and 1/3, and two values of plate length/plate width equal to 2 and 1. With the plates under uniaxial tensile loading on two opposing edges, alignment of fibers in the curvilinear layers with the principal stress directions is determined analytically by an iteration procedure. In-plane tensile load capacity is computed for all of the laminate designs using a finite element analysis method. A maximum strain failure criterion and the Tsai-Wu failure criterion are applied to determine failure loads and failure modes. Resistance to buckling of the laminate designs under uniaxial compressive loading is analyzed using the commercial code Engineering Analysis Language. Results indicate that the curvilinear fiber format laminates have higher in-plane tensile load capacity and comparable buckling resistance relative to the straightline fiber format laminates.
Effect of layer thickness on the thermal release from Be-D co-deposited layers
NASA Astrophysics Data System (ADS)
Baldwin, M. J.; Doerner, R. P.
2014-08-01
The results of previous work (Baldwin et al 2013 J. Nucl. Mater. 438 S967-70 and Baldwin et al 2014 Nucl. Fusion 54 073005) are extended to explore the influence of layer thickness on the thermal D2 release from co-deposited Be-(0.05)D layers produced at ˜323 K. Bake desorption of layers of thickness 0.2-0.7 µm is explored with a view to examining the influence of layer thickness on the efficacy of the proposed ITER bake procedure, to be carried out at the fixed temperatures of 513 K on the first wall and 623 K in the divertor. The results of experiment and modelling with the TMAP-7 hydrogen transport code show that thicker Be-D co-deposited layers are relatively more difficult to desorb (time-wise) than thinner layers with the same concentrations of intrinsic traps and retained hydrogen isotope fraction.
de Vries, Peter C.; Luce, Timothy C.; Bae, Young-soon; ...
2017-11-22
To improve our understanding of the dynamics and control of ITER terminations, a study has been carried out on data from existing tokamaks. The aim of this joint analysis is to compare the assumptions for ITER terminations with the present experience basis. The study examined the parameter ranges in which present day devices operated during their terminations, as well as the dynamics of these parameters. The analysis of a database, built using a selected set of experimental termination cases, showed that the H-mode density decays more slowly than the plasma current ramp-down. The consequential increase in fGW limits the duration of the H-mode phase or results in disruptions. The lower temperatures after the drop out of H-mode will allow the plasma internal inductance to increase, but vertical stability control remains manageable in ITER at high internal inductance when accompanied by a strong elongation reduction. This will result in ITER terminations remaining longer at low q (q95 ~ 3) than most present-day devices during the current ramp-down. A fast power ramp-down leads to a larger change in βp at the H-L transition, but the experimental data showed that these changes are manageable for the ITER radial position control. The analysis of JET data shows that radiation and impurity levels significantly alter the H-L transition dynamics. Self-consistent calculations of the impurity content and resulting radiation should be taken into account when modelling ITER termination scenarios. Here, the results from this analysis can be used to better prescribe the inputs for the detailed modelling and preparation of ITER termination scenarios.
NASA Astrophysics Data System (ADS)
de Vries, P. C.; Luce, T. C.; Bae, Y. S.; Gerhardt, S.; Gong, X.; Gribov, Y.; Humphreys, D.; Kavin, A.; Khayrutdinov, R. R.; Kessel, C.; Kim, S. H.; Loarte, A.; Lukash, V. E.; de la Luna, E.; Nunes, I.; Poli, F.; Qian, J.; Reinke, M.; Sauter, O.; Sips, A. C. C.; Snipes, J. A.; Stober, J.; Treutterer, W.; Teplukhina, A. A.; Voitsekhovitch, I.; Woo, M. H.; Wolfe, S.; Zabeo, L.; the Alcator C-MOD Team; the ASDEX Upgrade Team; the DIII-D Team; the EAST Team; contributors, JET; the KSTAR Team; the NSTX-U Team; the TCV Team; IOS members, ITPA; experts
2018-02-01
To improve our understanding of the dynamics and control of ITER terminations, a study has been carried out on data from existing tokamaks. The aim of this joint analysis is to compare the assumptions for ITER terminations with the present experience basis. The study examined the parameter ranges in which present day devices operated during their terminations, as well as the dynamics of these parameters. The analysis of a database, built using a selected set of experimental termination cases, showed that the H-mode density decays more slowly than the plasma current ramp-down. The consequential increase in fGW limits the duration of the H-mode phase or results in disruptions. The lower temperatures after the drop out of H-mode will allow the plasma internal inductance to increase, but vertical stability control remains manageable in ITER at high internal inductance when accompanied by a strong elongation reduction. This will result in ITER terminations remaining longer at low q (q95 ~ 3) than most present-day devices during the current ramp-down. A fast power ramp-down leads to a larger change in βp at the H-L transition, but the experimental data showed that these changes are manageable for the ITER radial position control. The analysis of JET data shows that radiation and impurity levels significantly alter the H-L transition dynamics. Self-consistent calculations of the impurity content and resulting radiation should be taken into account when modelling ITER termination scenarios. The results from this analysis can be used to better prescribe the inputs for the detailed modelling and preparation of ITER termination scenarios.
Inverse boundary-layer theory and comparison with experiment
NASA Technical Reports Server (NTRS)
Carter, J. E.
1978-01-01
Inverse boundary layer computational procedures, which permit nonsingular solutions at separation and reattachment, are presented. In the first technique, which is for incompressible flow, the displacement thickness is prescribed; in the second technique, for compressible flow, a perturbation mass flow is the prescribed condition. The pressure is deduced implicitly along with the solution in each of these techniques. Laminar and turbulent computations, which are typical of separated flow, are presented and comparisons are made with experimental data. In both inverse procedures, finite difference techniques are used along with Newton iteration. The resulting procedure is no more complicated than conventional boundary layer computations. These separated boundary layer techniques appear to be well suited for complete viscous-inviscid interaction computations.
Wei, Jianming; Zhang, Youan; Sun, Meimei; Geng, Baoliang
2017-09-01
This paper presents an adaptive iterative learning control scheme for a class of nonlinear systems with unknown time-varying delays and unknown control direction, preceded by unknown nonlinear backlash-like hysteresis. A boundary layer function is introduced to construct an auxiliary error variable, which relaxes the identical initial condition assumption of iterative learning control. For the controller design, an integral Lyapunov function candidate is used, which avoids a possible singularity problem by introducing a hyperbolic tangent function. After compensating for uncertainties with time-varying delays by combining an appropriate Lyapunov-Krasovskii function with Young's inequality, an adaptive iterative learning control scheme is designed through a neural approximation technique and the Nussbaum function method. On the basis of the hyperbolic tangent function's characteristics, the system output is proved to converge to a small neighborhood of the desired trajectory by constructing a Lyapunov-like composite energy function (CEF) in two cases, while keeping all the closed-loop signals bounded. Finally, a simulation example is presented to verify the effectiveness of the proposed approach.
Solving Upwind-Biased Discretizations: Defect-Correction Iterations
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
1999-01-01
This paper considers defect-correction solvers for a second order upwind-biased discretization of the 2D convection equation. The following important features are reported: (1) The asymptotic convergence rate is about 0.5 per defect-correction iteration. (2) If the operators involved in the defect-correction iterations have different approximation orders, then the initial convergence rates may be very slow. The number of iterations required to get into the asymptotic convergence regime might grow on fine grids as a negative power of h. In the case of a second order target operator and a first order driver operator, this number of iterations is roughly proportional to h^(-1/3). (3) If both operators have second approximation order, the defect-correction solver reaches the asymptotic convergence rate after three iterations at most. The same three iterations are required to converge the algebraic error below the truncation error level. A novel comprehensive half-space Fourier mode analysis (which, by the way, can take into account the influence of discretized outflow boundary conditions as well) for the defect-correction method is developed. This analysis explains many phenomena observed in solving non-elliptic equations and provides a close prediction of the actual solution behavior. It predicts the convergence rate for each iteration and the asymptotic convergence rate. As a result of this analysis, a new, very efficient adaptive multigrid algorithm solving the discrete problem to within a given accuracy is proposed. Numerical simulations confirm the accuracy of the analysis and the efficiency of the proposed algorithm. The results of the numerical tests are reported.
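A minimal sketch of the defect-correction iteration on a 1D model convection problem is given below; the operators, grid, and boundary handling are illustrative assumptions rather than the paper's 2D discretization. Swapping in drivers of different order reproduces the qualitative behavior described above.

```python
import numpy as np

def upwind_operators(n, h):
    """First-order (driver) and second-order upwind-biased (target) operators for du/dx
    on a uniform 1D grid, with the inflow value assumed folded into the right-hand side."""
    L1 = np.zeros((n, n))
    L2 = np.zeros((n, n))
    for i in range(n):
        # first-order upwind: (u_i - u_{i-1}) / h
        L1[i, i] = 1.0 / h
        if i >= 1:
            L1[i, i - 1] = -1.0 / h
        # second-order upwind-biased: (3 u_i - 4 u_{i-1} + u_{i-2}) / (2h)
        if i >= 2:
            L2[i, i] = 1.5 / h
            L2[i, i - 1] = -2.0 / h
            L2[i, i - 2] = 0.5 / h
        else:
            L2[i, :] = L1[i, :]        # fall back to first order next to the inflow boundary
    return L1, L2

def defect_correction(L_driver, L_target, f, n_iter=10):
    """Defect-correction iteration:  u_{k+1} = u_k + L_driver^{-1} (f - L_target u_k)."""
    u = np.zeros_like(f)
    for _ in range(n_iter):
        defect = f - L_target @ u                   # residual of the target discretization
        u = u + np.linalg.solve(L_driver, defect)   # corrected with the easier driver operator
    return u
```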
NASA Astrophysics Data System (ADS)
Suzuki, S.; Enoeda, M.; Hatano, T.; Hirose, T.; Hayashi, K.; Tanigawa, H.; Ochiai, K.; Nishitani, T.; Tobita, K.; Akiba, M.
2006-02-01
This paper presents the significant progress made in the research and development (R&D) of key technologies on the water-cooled solid breeder blanket for the ITER test blanket modules in JAERI. Development of module fabrication technology, bonding technology of armours, measurement of thermo-mechanical properties of pebble beds, neutronics studies on a blanket module mockup and tritium release behaviour from a Li2TiO3 pebble bed under neutron-pulsed operation conditions are summarized. With the improvement of the heat treatment process for blanket module fabrication, a fine-grained microstructure of F82H can be obtained by homogenizing it at 1150 °C followed by normalizing it at 930 °C after the hot isostatic pressing process. Moreover, a promising bonding process for a tungsten armour and an F82H structural material was developed using a solid-state bonding method based on uniaxial hot compression without any artificial compliant layer. As a result of high heat flux tests of F82H first wall mockups, it has been confirmed that a fatigue lifetime correlation, which was developed for the ITER divertor, can be made applicable for the F82H first wall mockup. As for R&D on the breeder material, Li2TiO3, the effect of compression loads on effective thermal conductivity of pebble beds has been clarified for the Li2TiO3 pebble bed. The tritium breeding ratio of a simulated multi-layer blanket structure has successfully been measured using 14 MeV neutrons with an accuracy of 10%. The tritium release rate from the Li2TiO3 pebble has also been successfully measured with pulsed neutron irradiation, which simulates ITER operation.
A Monte Carlo Study of an Iterative Wald Test Procedure for DIF Analysis
ERIC Educational Resources Information Center
Cao, Mengyang; Tay, Louis; Liu, Yaowu
2017-01-01
This study examined the performance of a proposed iterative Wald approach for detecting differential item functioning (DIF) between two groups when preknowledge of anchor items is absent. The iterative approach utilizes the Wald-2 approach to identify anchor items and then iteratively tests for DIF items with the Wald-1 approach. Monte Carlo…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Youchison, D.L.; Watson, R.D.; McDonald, J.M.
Thermal response and thermal fatigue tests of four 5-mm-thick beryllium tiles on a Russian Federation International Thermonuclear Experimental Reactor (ITER)-relevant divertor mock-up were completed on the electron beam test system at Sandia National Laboratories. Thermal response tests were performed on the tiles to an absorbed heat flux of 5 MW/m² and surface temperatures near 300°C using 1.4 MPa water at 5 m/s flow velocity and an inlet temperature of 8 to 15°C. One tile was exposed to incrementally increasing heat fluxes up to 9.5 MW/m² and surface temperatures up to 690°C before debonding at 10 MW/m². A second tile debonded in 25 to 30 cycles at <0.5 MW/m². However, a third tile debonded after 9200 thermal fatigue cycles at 5 MW/m², while another debonded after 6800 cycles. Posttest surface analysis indicated that fatigue failure occurred in the intermetallic layers between the beryllium and copper. No fatigue cracking of the bulk beryllium was observed. It appears that microcracks growing at the diffusion bond produced the observed gradual temperature increases during thermal cycling. These experiments indicate that diffusion-bonded beryllium tiles can survive several thousand thermal cycles under ITER-relevant conditions. However, the reliability of the diffusion-bonded joint remains a serious issue. 17 refs., 25 figs., 6 tabs.
Material migration studies with an ITER first wall panel proxy on EAST
Ding, R.; Pitts, R. A.; Borodin, D.; ...
2015-01-23
The ITER beryllium (Be) first wall (FW) panels are shaped to protect leading edges between neighbouring panels arising from assembly tolerances. This departure from a perfectly cylindrical surface automatically leads to magnetically shadowed regions where eroded Be can be re-deposited, together with co-deposition of tritium fuel. To provide a benchmark for a series of erosion/re-deposition simulation studies performed for the ITER FW panels, dedicated experiments have been performed on the EAST tokamak using a specially designed, instrumented test limiter acting as a proxy for the FW panel geometry. Carbon coated molybdenum plates forming the limiter front surface were exposed to the outer midplane boundary plasma of helium discharges using the new Material and Plasma Evaluation System (MAPES). Net erosion and deposition patterns are estimated using ion beam analysis to measure the carbon layer thickness variation across the surface after exposure. The highest erosion of about 0.8 µm is found near the midplane, where the surface is closest to the plasma separatrix. No net deposition above the measurement detection limit was found on the proxy wall element, even in shadowed regions. The measured 2D surface erosion distribution has been modelled with the 3D Monte Carlo code ERO, using the local plasma parameter measurements together with a diffusive transport assumption. In conclusion, excellent agreement between the experimentally observed net erosion and the modelled erosion profile has been obtained.
Simulation and Analysis of Launch Teams (SALT)
NASA Technical Reports Server (NTRS)
2008-01-01
A SALT effort was initiated in late 2005 with seed funding from the Office of Safety and Mission Assurance Human Factors organization. Its objectives included demonstrating human behavior and performance modeling and simulation technologies for launch team analysis, training, and evaluation. The goal of the research is to improve future NASA operations and training. The project employed an iterative approach, with the first iteration focusing on the last 70 minutes of a nominal-case Space Shuttle countdown, the second iteration focusing on aborts and launch commit criteria violations, the third iteration focusing on Ares I-X communications, and the fourth iteration focusing on Ares I-X Firing Room configurations. SALT applied new commercial off-the-shelf technologies from industry and the Department of Defense in the spaceport domain.
Electron-cyclotron wave scattering by edge density fluctuations in ITER
NASA Astrophysics Data System (ADS)
Tsironis, Christos; Peeters, Arthur G.; Isliker, Heinz; Strintzi, Dafni; Chatziantonaki, Ioanna; Vlahos, Loukas
2009-11-01
The effect of edge turbulence on the electron-cyclotron wave propagation in ITER is investigated with emphasis on wave scattering, beam broadening, and its influence on localized heating and current drive. A wave used for electron-cyclotron current drive (ECCD) must cross the edge of the plasma, where density fluctuations can be large enough to bring on wave scattering. The scattering angle due to the density fluctuations is small, but the beam propagates over a distance of several meters up to the resonance layer and even small angle scattering leads to a deviation of several centimeters at the deposition location. Since the localization of ECCD is crucial for the control of neoclassical tearing modes, this issue is of great importance to the ITER design. The wave scattering process is described on the basis of a Fokker-Planck equation, where the diffusion coefficient is calculated analytically as well as computed numerically using a ray tracing code.
A frequency dependent preconditioned wavelet method for atmospheric tomography
NASA Astrophysics Data System (ADS)
Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny
2013-12-01
Atmospheric tomography, i.e. the reconstruction of the turbulence in the atmosphere, is a main task for the adaptive optics systems of the next generation telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes overly complex and an efficient algorithm is needed to reduce numerical costs. Recently, a conjugate gradient method based on wavelet parametrization of turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low. A way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi conjugate adaptive optics (MCAO) system simulated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory we demonstrate robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.
Melt damage simulation of W-macrobrush and divertor gaps after multiple transient events in ITER
NASA Astrophysics Data System (ADS)
Bazylev, B. N.; Janeschitz, G.; Landman, I. S.; Loarte, A.; Pestchanyi, S. E.
2007-06-01
Tungsten in the form of a macrobrush structure is foreseen as one of two candidate materials for the ITER divertor and dome. In ITER, even for moderate and weak ELMs, when a thin shielding layer does not protect the armour surface from the dumped plasma, the main mechanisms of metallic target damage remain surface melting and melt motion erosion, which determine the lifetime of the plasma facing components. The melt erosion of W-macrobrush targets with different geometries of the brush surface under the heat loads caused by weak ELMs is numerically investigated using the modified code MEMOS. The optimal angle of brush surface inclination that provides a minimum of surface roughness is estimated for given inclination angles of the impacting plasma stream and given parameters of the macrobrush target. For multiple disruptions, the damage to the dome gaps and the gaps between divertor cassettes caused by the radiation impact is estimated.
Erosion simulation of first wall beryllium armour under ITER transient heat loads
NASA Astrophysics Data System (ADS)
Bazylev, B.; Janeschitz, G.; Landman, I.; Pestchanyi, S.; Loarte, A.
2009-04-01
Beryllium is foreseen as the plasma-facing armour for the first wall in ITER, in the form of Be-clad blanket modules in a macrobrush design with a brush size of about 8-10 cm. In ITER, significant heat loads during transient events (TEs) are expected at the main chamber wall and may lead to substantial damage of the Be armour. The main mechanisms of metallic target damage remain surface melting and melt motion erosion, which determine the lifetime of the plasma-facing components. Melting thresholds and melt-layer depths of the Be armour under transient loads are estimated for different temperatures of the bulk Be and different shapes of transient loads. The melt-motion damage of Be macrobrush armour caused by the tangential friction force and the Lorentz force is analyzed for bulk Be and different sizes of Be brushes. The damage of the FW under radiative loads arising during mitigated disruptions is numerically simulated.
Calibration and Data Analysis of the MC-130 Air Balance
NASA Technical Reports Server (NTRS)
Booth, Dennis; Ulbrich, N.
2012-01-01
Design, calibration, calibration analysis, and intended use of the MC-130 air balance are discussed. The MC-130 balance is an 8.0 inch diameter force balance that has two separate internal air flow systems and one external bellows system. The manual calibration of the balance consisted of a total of 1854 data points with both unpressurized and pressurized air flowing through the balance. A subset of 1160 data points was chosen for the calibration data analysis. The regression analysis of the subset was performed using two fundamentally different analysis approaches. First, the data analysis was performed using a recently developed extension of the Iterative Method. This approach fits gage outputs as a function of both applied balance loads and bellows pressures while still allowing the application of the iteration scheme that is used with the Iterative Method. Then, for comparison, the axial force was also analyzed using the Non-Iterative Method. This alternate approach directly fits loads as a function of measured gage outputs and bellows pressures and does not require a load iteration. The regression models used by both the extended Iterative and Non-Iterative Method were constructed such that they met a set of widely accepted statistical quality requirements. These requirements lead to reliable regression models and prevent overfitting of data because they ensure that no hidden near-linear dependencies between regression model terms exist and that only statistically significant terms are included. Finally, a comparison of the axial force residuals was performed. Overall, axial force estimates obtained from both methods show excellent agreement as the differences of the standard deviation of the axial force residuals are on the order of 0.001 % of the axial force capacity.
Effect of non-equilibrium flow chemistry and surface catalysis on surface heating to AFE
NASA Technical Reports Server (NTRS)
Stewart, David A.; Henline, William D.; Chen, Yih-Kanq
1991-01-01
The effect of nonequilibrium flow chemistry on the surface temperature distribution over the forebody heat shield on the Aeroassisted Flight Experiment (AFE) vehicle was investigated using a reacting boundary-layer code. Computations were performed by using boundary-layer-edge properties determined from global iterations between the boundary-layer code and flow field solutions from a viscous shock layer (VSL) and a full Navier-Stokes solution. Surface temperature distribution over the AFE heat shield was calculated for two flight conditions during a nominal AFE trajectory. This study indicates that the surface temperature distribution is sensitive to the nonequilibrium chemistry in the shock layer. Heating distributions over the AFE forebody calculated using nonequilibrium edge properties were similar to values calculated using the VSL program.
Ene, Remus-Daniel; Marinca, Vasile; Marinca, Bogdan
2016-01-01
Analytic approximate solutions using Optimal Homotopy Perturbation Method (OHPM) are given for steady boundary layer flow over a nonlinearly stretching wall in presence of partial slip at the boundary. The governing equations are reduced to nonlinear ordinary differential equation by means of similarity transformations. Some examples are considered and the effects of different parameters are shown. OHPM is a very efficient procedure, ensuring a very rapid convergence of the solutions after only two iterations.
Fourier mode analysis of slab-geometry transport iterations in spatially periodic media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, E; Zika, M
1999-04-01
We describe a Fourier analysis of the diffusion-synthetic acceleration (DSA) and transport-synthetic acceleration (TSA) iteration schemes for a spatially periodic, but otherwise arbitrarily heterogeneous, medium. Both DSA and TSA converge more slowly in a heterogeneous medium than in a homogeneous medium composed of the volume-averaged scattering ratio. In the limit of a homogeneous medium, our heterogeneous analysis contains eigenvalues of multiplicity two at "resonant" wave numbers. In the presence of material heterogeneities, error modes corresponding to these resonant wave numbers are "excited" more than other error modes. For DSA and TSA, the iteration spectral radius may occur at these resonant wave numbers, in which case the material heterogeneities most strongly affect iterative performance.
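The scan-over-wave-numbers machinery behind such a Fourier analysis can be illustrated with a much simpler iteration. The sketch below evaluates the amplification factor and spectral radius of weighted Jacobi applied to the 1D Poisson operator; this is only a stand-in for the far more involved DSA/TSA symbols, not the transport analysis itself.

```python
import numpy as np

def spectral_radius_weighted_jacobi(omega=2.0 / 3.0, n_modes=512):
    """Generic Fourier-mode (von Neumann) analysis sketch.

    For weighted Jacobi on the 1D Poisson operator, the error mode e^{i*theta*j}
    is multiplied each iteration by g(theta) = 1 - omega * (1 - cos(theta)).
    The spectral radius is the largest |g| over the resolved wave numbers.
    """
    thetas = np.linspace(1e-6, np.pi, n_modes)      # exclude theta = 0 (constant mode)
    g = 1.0 - omega * (1.0 - np.cos(thetas))
    worst = int(np.argmax(np.abs(g)))
    return float(np.abs(g[worst])), float(thetas[worst])
```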
Analysis of drift effects on the tokamak power scrape-off width using SOLPS-ITER
NASA Astrophysics Data System (ADS)
Meier, E. T.; Goldston, R. J.; Kaveeva, E. G.; Makowski, M. A.; Mordijck, S.; Rozhansky, V. A.; Senichenkov, I. Yu; Voskoboynikov, S. P.
2016-12-01
SOLPS-ITER, a comprehensive 2D scrape-off layer modeling package, is used to examine the physical mechanisms that set the scrape-off width (λq) for inter-ELM power exhaust. Guided by Goldston's heuristic drift (HD) model, which shows remarkable quantitative agreement with experimental data, this research examines drift effects on λq in a DIII-D H-mode magnetic equilibrium. As a numerical expedient, a low target recycling coefficient of 0.9 is used in the simulations, resulting in outer target plasma that is sheath limited instead of conduction limited as in the experiment. Scrape-off layer (SOL) particle diffusivity (DSOL) is scanned from 1 to 0.1 m² s⁻¹. Across this diffusivity range, outer divertor heat flux is dominated by a narrow (˜3-4 mm when mapped to the outer midplane) electron convection channel associated with thermoelectric current through the SOL from outer to inner divertor. An order-unity up-down ion pressure asymmetry allows net ion drift flux across the separatrix, facilitated by an artificial mechanism that mimics the anomalous electron transport required for overall ambipolarity in the HD model. At DSOL = 0.1 m² s⁻¹, the density fall-off length is similar to the electron temperature fall-off length, as predicted by the HD model and as seen experimentally. This research represents a step toward a deeper understanding of the power scrape-off width, and serves as a basis for extending fluid modeling to more experimentally relevant, high-collisionality regimes.
Analysis of drift effects on the tokamak power scrape-off width using SOLPS-ITER
Meier, E. T.; Goldston, R. J.; Kaveeva, E. G.; ...
2016-11-02
SOLPS-ITER, a comprehensive 2D scrape-off layer modeling package, is used to examine the physical mechanisms that set the scrape-off width (λq) for inter-ELM power exhaust. Guided by Goldston's heuristic drift (HD) model, which shows remarkable quantitative agreement with experimental data, this research examines drift effects on λq in a DIII-D H-mode magnetic equilibrium. As a numerical expedient, a low target recycling coefficient of 0.9 is used in the simulations, resulting in outer target plasma that is sheath limited instead of conduction limited as in the experiment. Scrape-off layer (SOL) particle diffusivity (DSOL) is scanned from 1 to 0.1 m² s⁻¹. Across this diffusivity range, outer divertor heat flux is dominated by a narrow (~3-4 mm when mapped to the outer midplane) electron convection channel associated with thermoelectric current through the SOL from outer to inner divertor. An order-unity up-down ion pressure asymmetry allows net ion drift flux across the separatrix, facilitated by an artificial mechanism that mimics the anomalous electron transport required for overall ambipolarity in the HD model. At DSOL = 0.1 m² s⁻¹, the density fall-off length is similar to the electron temperature fall-off length, as predicted by the HD model and as seen experimentally. Furthermore, this research represents a step toward a deeper understanding of the power scrape-off width, and serves as a basis for extending fluid modeling to more experimentally relevant, high-collisionality regimes.
Experimental validation of prototype high voltage bushing
NASA Astrophysics Data System (ADS)
Shah, Sejal; Tyagi, H.; Sharma, D.; Parmar, D.; M. N., Vishnudev; Joshi, K.; Patel, K.; Yadav, A.; Patel, R.; Bandyopadhyay, M.; Rotti, C.; Chakraborty, A.
2017-08-01
The prototype high voltage bushing (PHVB) is a scaled-down configuration of the DNB high voltage bushing (HVB) of ITER. It is designed for operation at 50 kV DC to ensure operational performance and thereby confirm the design configuration of the DNB HVB. Two concentric insulators, a ceramic ring and a fiber-reinforced polymer (FRP) ring, are used as a double-layered vacuum boundary providing 50 kV isolation between the grounded and high voltage flanges. Stress shields are designed for a smooth electric field distribution. During ceramic-to-Kovar brazing, spilling cannot be controlled, which may lead to high localized electrostatic stress. To understand the spilling phenomenon and allow precise stress calculation, quantitative analysis was performed using scanning electron microscopy (SEM) of a brazed sample, and a similar configuration was modeled in the finite element (FE) analysis. FE analysis of the PHVB was performed to find the electrical stresses on different areas of the PHVB, which are maintained similar to those of the DNB HV Bushing. With this configuration, the experiment was performed considering ITER-like vacuum and electrical parameters. The initial HV test was performed with temporary vacuum sealing arrangements using gaskets/O-rings at both ends in order to achieve the desired vacuum and keep the system maintainable. During the validation test, a 50 kV voltage withstand was performed for one hour. A voltage withstand test at 60 kV DC (20% above the rated voltage) has also been performed without any breakdown. Successful operation of the PHVB confirms the design of the DNB HV Bushing. In this paper, the configuration of the PHVB is presented along with experimental validation data.
Heterogeneous dissipative composite structures
NASA Astrophysics Data System (ADS)
Ryabov, Victor; Yartsev, Boris; Parshina, Ludmila
2018-05-01
The paper suggests mathematical models of decaying vibrations in layered anisotropic plates and orthotropic rods based on Hamilton's variational principle, first-order shear deformation laminated plate theory (FSDT), and the viscous-elastic correspondence principle of linear viscoelasticity theory. In describing the physical relationships for the materials of the layers forming stiff polymeric composites, the effect of vibration frequency and ambient temperature is assumed to be negligible, whereas for the viscoelastic polymer layer the temperature-frequency dependence of the elastic dissipation and stiffness properties is taken into account by means of experimentally determined generalized curves. Minimization of the Hamilton functional makes it possible to describe the decaying vibration of anisotropic structures by an algebraic problem of complex eigenvalues. The system of algebraic equations is generated through the Ritz method using Legendre polynomials as coordinate functions. First, real solutions are found. To find the complex natural frequencies of the system, the obtained real natural frequencies are taken as input values, and then, by means of a third-order iteration method, the complex natural frequencies are calculated. The paper provides convergence estimates for the numerical procedures. Reliability of the obtained results is confirmed by a good correlation between analytical and experimental values of natural frequencies and loss factors in the lower vibration tones for two series of unsupported orthotropic rods formed by stiff GRP and CRP layers and a viscoelastic polymer layer. Analysis of the numerical test data has shown that the dissipation and stiffness properties of heterogeneous composite plates and rods depend considerably on the relative thickness of the viscoelastic polymer layer, the orientation of the stiff composite layers, the vibration frequency and the ambient temperature.
NASA Technical Reports Server (NTRS)
Schoenauer, W.; Daeubler, H. G.; Glotz, G.; Gruening, J.
1986-01-01
An implicit difference procedure for the solution of the equations for a chemically reacting hypersonic boundary layer is described. Difference forms of arbitrary error order in the x and y coordinate plane were used to derive estimates for the discretization error. Computational complexity and time were minimized by the use of this difference method, and the iteration of the nonlinear boundary layer equations was regulated by the discretization error. Velocity and temperature profiles are presented for Mach 20.14 and Mach 18.5; the reported quantities include velocity profiles, temperature profiles, mass flow factor, Stanton number, and friction drag coefficient; three figures include numeric data.
Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.
Wei, Qinglai; Liu, Derong; Lin, Hanquan
2016-03-01
In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and compute the iterative control law, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
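A table-based sketch of the value iteration recursion described above, for a scalar discretized state; the grid projection and the arbitrary initial function V0 are illustrative assumptions, and the paper itself uses neural-network approximation rather than tables.

```python
import numpy as np

def value_iteration(states, controls, f, U, V0=None, n_iter=200):
    """Value iteration ADP recursion for an undiscounted problem:
        V_{i+1}(x) = min_u [ U(x, u) + V_i(f(x, u)) ].
    V0 may be any positive semi-definite function of the state, here represented
    as a vector of values on the grid (None = zero initialization).
    """
    nx = len(states)
    V = np.zeros(nx) if V0 is None else np.array(V0, dtype=float)

    def nearest(x):                                   # scalar-state grid lookup
        return int(np.argmin([abs(x - s) for s in states]))

    policy = np.zeros(nx, dtype=int)
    for _ in range(n_iter):
        V_new = np.empty(nx)
        for ix, x in enumerate(states):
            costs = [U(x, u) + V[nearest(f(x, u))] for u in controls]
            iu = int(np.argmin(costs))
            V_new[ix], policy[ix] = costs[iu], iu     # iterative value function and control law
        V = V_new
    return V, policy
```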
Heated-Atmosphere Airship for the Titan Environment: Thermal Analysis
NASA Technical Reports Server (NTRS)
Heller, R. S.; Landis, G. A.; Hepp, A. F.; Colozza, A. J.
2012-01-01
Future exploration of Saturn's moon Titan can be carried out by airships. Several lighter-than-atmosphere gas airships and passive drifting heated-atmosphere balloon designs have been studied, but a heated-atmosphere airship could combine the best characteristics of both. This work analyses the thermal design of such a heated-atmosphere vehicle, and compares the result with a lighter-than-atmosphere (hydrogen) airship design. A design tool was created to enable iteration through different design parameters of a heated-atmosphere airship (diameter, number of layers, and insulating gas pocket thicknesses) and evaluate the feasibility of the resulting airship. A baseline heated-atmosphere airship was designed to have a diameter of 6 m (outer diameter of 6.2 m), three-layers of material, and an insulating gas pocket thickness of 0.05 m between each layer. The heated-atmosphere airship has a mass of 161.9 kg. A similar mission making use of a hydrogen-filled airship would require a diameter of 4.3 m and a mass of about 200 kg. For a long-duration mission, the heated-atmosphere airship appears better suited. However, for a mission lifetime under 180 days, the less complex hydrogen airship would likely be a better option.
NASA Technical Reports Server (NTRS)
Pindera, Marek-Jerzy; Salzar, Robert S.; Williams, Todd O.
1993-01-01
The utility of a recently developed analytical micromechanics model for the response of metal matrix composites under thermal loading is illustrated by comparison with the results generated using the finite-element approach. The model is based on the concentric cylinder assemblage consisting of an arbitrary number of elastic or elastoplastic sublayers with isotropic or orthotropic, temperature-dependent properties. The elastoplastic boundary-value problem of an arbitrarily layered concentric cylinder is solved using the local/global stiffness matrix formulation (originally developed for elastic layered media) and Mendelson's iterative technique of successive elastic solutions. These features of the model facilitate efficient investigation of the effects of various microstructural details, such as functionally graded architectures of interfacial layers, on the evolution of residual stresses during cool down. The available closed-form expressions for the field variables can readily be incorporated into an optimization algorithm in order to efficiently identify optimal configurations of graded interfaces for given applications. Comparison of residual stress distributions after cool down generated using finite-element analysis and the present micromechanics model for four composite systems with substantially different temperature-dependent elastic, plastic, and thermal properties illustrates the efficacy of the developed analytical scheme.
Interpretation of magnetotelluric resistivity and phase soundings over horizontal layers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patella, D.
1976-02-01
The present paper deals with a new inverse method for quantitatively interpreting magnetotelluric apparent resistivity and phase-lag sounding curves over horizontally stratified earth sections. The recurrent character of the general formula relating the wave impedance of an (n-1)-layered medium to that of an n-layered medium suggests the use of the method of reduction to a lower boundary plane, as originally termed by Koefoed in the case of dc resistivity soundings. The layering parameters are thus directly derived by a simple iterative procedure. The method is applicable for any number of layers, but only when both apparent resistivity and phase-lag sounding curves are jointly available. Moreover, no sophisticated algorithm is required: a simple desk electronic calculator together with a sheet of two-layer apparent resistivity and phase-lag master curves is sufficient to reproduce earth sections which, in the range of equivalence, are all consistent with field data.
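The recurrence from one boundary plane to the next can be illustrated with the widely used 1D layered-earth impedance recursion; the sketch below is that standard formulation rather than the author's exact reduction formulas. For a uniform half-space the returned phase is 45°, which is a convenient sanity check.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability

def mt_surface_response(frequency, resistivities, thicknesses):
    """Apparent resistivity and phase of a horizontally layered earth at one frequency.

    resistivities : layer resistivities in ohm-m, top to bottom (last = half-space)
    thicknesses   : layer thicknesses in m (one fewer entry than resistivities)
    """
    omega = 2.0 * np.pi * frequency
    # Intrinsic impedance of the bottom half-space
    Z = np.sqrt(1j * omega * MU0 * resistivities[-1])
    # Recur upward through the layers (reduction toward the surface)
    for rho, h in zip(resistivities[-2::-1], thicknesses[::-1]):
        k = np.sqrt(1j * omega * MU0 / rho)        # propagation constant of the layer
        z_layer = 1j * omega * MU0 / k             # intrinsic impedance of the layer
        Z = z_layer * (Z + z_layer * np.tanh(k * h)) / (z_layer + Z * np.tanh(k * h))
    rho_app = abs(Z) ** 2 / (omega * MU0)          # apparent resistivity (ohm-m)
    phase = np.degrees(np.angle(Z))                # phase lag (degrees)
    return rho_app, phase
```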
Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin
2018-04-20
An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012), doi:10.1117/1.JBO.17.10.107003] and phantom [J. Biomed. Opt. 19, 077002 (2014), doi:10.1117/1.JBO.19.7.077002] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method, including (1) high computational intensity and (2) converging to local minima. The parameter estimation procedure contained a novel initial estimation step to obtain an initial guess, which was used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table was used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the following fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing of not only DRS but also related approaches such as near-infrared spectroscopy.
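The hybrid two-step structure can be sketched as follows, assuming a precomputed lookup table of reflectance spectra and a forward-model callable; the parameter layout and the least-squares settings are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_two_layer_params(measured, lut_params, lut_spectra, forward_model):
    """Hybrid two-step estimation: lookup-table initial guess + iterative curve fitting.

    measured      : (n_wavelengths,) measured diffuse reflectance spectrum
    lut_params    : (n_entries, n_params) parameter sets, e.g. top-layer absorption,
                    scattering, thickness, bottom-layer absorption (hypothetical layout)
    lut_spectra   : (n_entries, n_wavelengths) precomputed spectra for lut_params
    forward_model : callable params -> reflectance spectrum (may interpolate the LUT)
    """
    # Step 1: initial estimation -- nearest lookup-table entry in a least-squares sense
    errors = np.sum((lut_spectra - measured) ** 2, axis=1)
    p0 = lut_params[np.argmin(errors)]

    # Step 2: iterative curve fitting, started from the LUT-based initial guess
    residual = lambda p: forward_model(p) - measured
    fit = least_squares(residual, p0)
    return fit.x
```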
Aslam, Muhammad; Hu, Xiaopeng; Wang, Fan
2017-12-13
Smart reconfiguration of a dynamic networking environment is offered by the central control of Software-Defined Networking (SDN). Centralized SDN-based management architectures are capable of retrieving global topology intelligence and decoupling the forwarding plane from the control plane. Routing protocols developed for conventional Wireless Sensor Networks (WSNs) utilize limited iterative reconfiguration methods to optimize environmental reporting. However, the challenging networking scenarios of WSNs involve a performance overhead due to constant periodic iterative reconfigurations. In this paper, we propose the SDN-based Application-aware Centralized adaptive Flow Iterative Reconfiguring (SACFIR) routing protocol with the centralized SDN iterative solver controller to maintain the load-balancing between flow reconfigurations and flow allocation cost. The proposed SACFIR's routing protocol offers a unique iterative path-selection algorithm, which initially computes suitable clustering based on residual resources at the control layer and then implements application-aware threshold-based multi-hop report transmissions on the forwarding plane. The operation of the SACFIR algorithm is centrally supervised by the SDN controller residing at the Base Station (BS). This paper extends SACFIR to SDN-based Application-aware Main-value Centralized adaptive Flow Iterative Reconfiguring (SAMCFIR) to establish both proactive and reactive reporting. The SAMCFIR transmission phase enables sensor nodes to trigger direct transmissions for main-value reports, while in the case of SACFIR, all reports follow computed routes. Our SDN-enabled proposed models adjust the reconfiguration period according to the traffic burden on sensor nodes, which results in heterogeneity awareness, load-balancing and application-specific reconfigurations of WSNs. Extensive experimental simulation-based results show that SACFIR and SAMCFIR yield the maximum scalability, network lifetime and stability period when compared to existing routing protocols.
Application of viscous-inviscid interaction methods to transonic turbulent flows
NASA Technical Reports Server (NTRS)
Lee, D.; Pletcher, R. H.
1986-01-01
Two different viscous-inviscid interaction schemes were developed for the analysis of steady, turbulent, transonic, separated flows over axisymmetric bodies. The viscous and inviscid solutions are coupled through the displacement concept using a transpiration velocity approach. In the semi-inverse interaction scheme, the viscous and inviscid equations are solved in an explicitly separate manner and the displacement thickness distribution is iteratively updated by a simple coupling algorithm. In the simultaneous interaction method, local solutions of viscous and inviscid equations are treated simultaneously, and the displacement thickness is treated as an unknown and is obtained as a part of the solution through a global iteration procedure. The inviscid flow region is described by a direct finite-difference solution of a velocity potential equation in conservative form. The potential equation is solved on a numerically generated mesh by an approximate factorization (AF2) scheme in the semi-inverse interaction method and by a successive line overrelaxation (SLOR) scheme in the simultaneous interaction method. The boundary-layer equations are used for the viscous flow region. The continuity and momentum equations are solved inversely in a coupled manner using a fully implicit finite-difference scheme.
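The exact semi-inverse update is not given in the abstract; as a hedged illustration, the toy below uses an assumed relaxation formula in which the displacement thickness is corrected in proportion to the mismatch between the viscous and inviscid edge velocities, with two made-up one-line "solvers" standing in for the real boundary-layer and potential-flow codes.

```python
# Hypothetical relaxation form of a semi-inverse coupling sweep: the displacement
# thickness is nudged in proportion to the relative mismatch between the edge
# velocities returned by the (inverse) viscous and inviscid solutions. The mock
# "solvers" are toys chosen only so that a matched state exists.

def mock_viscous_edge_velocity(delta_star):
    """Toy inverse boundary-layer response for a prescribed displacement thickness."""
    return 1.05 + 0.5 * delta_star

def mock_inviscid_edge_velocity(delta_star):
    """Toy inviscid response: displacement thickening speeds up the outer flow."""
    return 1.00 + 2.0 * delta_star

delta_star, omega = 0.01, 0.5            # initial guess and under-relaxation factor
for sweep in range(200):                 # global iteration of the coupling
    ue_v = mock_viscous_edge_velocity(delta_star)
    ue_i = mock_inviscid_edge_velocity(delta_star)
    delta_star *= 1.0 + omega * (ue_v / ue_i - 1.0)

print(delta_star)  # approaches the matched state where ue_v == ue_i (~0.0333 here)
```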
Drift effects on the tokamak power scrape-off width
NASA Astrophysics Data System (ADS)
Meier, E. T.; Goldston, R. J.; Kaveeva, E. G.; Mordijck, S.; Rozhansky, V. A.; Senichenkov, I. Yu.; Voskoboynikov, S. P.
2015-11-01
Recent experimental analysis suggests that the scrape-off layer (SOL) heat flux width (λq) for ITER will be near 1 mm, sharply narrowing the planned operating window. In this work, motivated by the heuristic drift (HD) model, which predicts the observed inverse plasma current scaling, SOLPS-ITER is used to explore drift effects on λq. Modeling focuses on an H-mode DIII-D discharge. In initial results, target recycling is set to 90%, resulting in sheath-limited SOL conditions. SOL particle diffusivity (DSOL) is varied from 0.1 to 1 m2/s. When drifts are included, λq is insensitive to DSOL, consistent with the HD model, with λq near 3 mm; in no-drift cases, λq varies from 2 to 5 mm. Drift effects depress near-separatrix potential, generating a channel of strong electron heat convection that is insensitive to DSOL. Sensitivities to thermal diffusivities, plasma current, toroidal magnetic field, and device size are also assessed. These initial results will be discussed in detail, and progress toward modeling experimentally relevant high-recycling conditions will be reported. Supported by U.S. DOE Contract DE-SC0010434.
An efficient flexible-order model for 3D nonlinear water waves
NASA Astrophysics Data System (ADS)
Engsig-Karup, A. P.; Bingham, H. B.; Lindberg, O.
2009-04-01
The flexible-order, finite difference based fully nonlinear potential flow model described in [H.B. Bingham, H. Zhang, On the accuracy of finite difference solutions for nonlinear water waves, J. Eng. Math. 58 (2007) 211-228] is extended to three dimensions (3D). In order to obtain an optimal scaling of the solution effort multigrid is employed to precondition a GMRES iterative solution of the discretized Laplace problem. A robust multigrid method based on Gauss-Seidel smoothing is found to require special treatment of the boundary conditions along solid boundaries, and in particular on the sea bottom. A new discretization scheme using one layer of grid points outside the fluid domain is presented and shown to provide convergent solutions over the full physical and discrete parameter space of interest. Linear analysis of the fundamental properties of the scheme with respect to accuracy, robustness and energy conservation are presented together with demonstrations of grid independent iteration count and optimal scaling of the solution effort. Calculations are made for 3D nonlinear wave problems for steep nonlinear waves and a shoaling problem which show good agreement with experimental measurements and other calculations from the literature.
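The paper's preconditioner is a Gauss-Seidel-smoothed multigrid cycle; the sketch below is only meant to show the shape of a preconditioned GMRES solve for a discretized Laplace problem, and it substitutes an incomplete-LU factorization for the multigrid cycle (grid size and tolerances are arbitrary).

```python
# Minimal sketch (not the paper's multigrid): a 2D Laplacian is discretized with
# second-order finite differences and solved with GMRES, preconditioned here by a
# simple incomplete-LU factorization standing in for the multigrid preconditioner.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50                                           # interior grid points per direction
I = sp.identity(n, format="csr")
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()      # 2D discrete Laplacian (Dirichlet BCs)
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)   # preconditioner M ~ A^-1

residuals = []
x, info = spla.gmres(A, b, M=M, callback=lambda res: residuals.append(res))
print("converged" if info == 0 else "failed", "in", len(residuals), "iterations")
```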
Study of Unsteady Flows with Concave Wall Effect
NASA Technical Reports Server (NTRS)
Wang, Chi R.
2003-01-01
This paper presents computational fluid dynamic studies of the inlet turbulence and wall curvature effects on the flow steadiness at near wall surface locations in boundary layer flows. The time-stepping RANS numerical solver of the NASA Glenn-HT RANS code and a one-equation turbulence model, with a uniform inlet turbulence modeling level of the order of 10 percent of molecular viscosity, were used to perform the numerical computations. The approach was first calibrated for its predictabilities of friction factor, velocity, and temperature at near surface locations within a transitional boundary layer over concave wall. The approach was then used to predict the velocity and friction factor variations in a boundary layer recovering from concave curvature. As time iteration proceeded in the computations, the computed friction factors converged to their values from existing experiments. The computed friction factors, velocity, and static temperatures at near wall surface locations oscillated periodically in terms of time iteration steps and physical locations along the span-wise direction. At the upstream stations, the relationship among the normal and tangential velocities showed vortices effects on the velocity variations. Coherent vortices effect on the velocity components broke down at downstream stations. The computations also predicted the vortices effects on the velocity variations within a boundary layer flow developed along a concave wall surface with a downstream recovery flat wall surface. It was concluded that the computational approach might have the potential to analyze the flow steadiness in a turbine blade flow.
A Model and Simple Iterative Algorithm for Redundancy Analysis.
ERIC Educational Resources Information Center
Fornell, Claes; And Others
1988-01-01
This paper shows that redundancy maximization with J. K. Johansson's extension can be accomplished via a simple iterative algorithm based on H. Wold's Partial Least Squares. The model and the iterative algorithm for the least squares approach to redundancy maximization are presented. (TJH)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.
2005-08-01
The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the "once-through" time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the "once-through" time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase "inner loop" iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.
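The community model itself is not reproduced in the abstract; the toy calculation below assumes a simple geometric-rework model (a backward iteration happens with probability p and costs a fraction r of the once-through time) purely to illustrate why all three levers named above enter the expected step time. All numbers are made up.

```python
def expected_step_time(t_once, p_iter, rework_fraction):
    """Assumed geometric-rework model: after each pass, a backward iteration occurs
    with probability p_iter and costs rework_fraction * t_once; iterations repeat
    independently, so the expected number of rework passes is p/(1-p)."""
    return t_once * (1.0 + rework_fraction * p_iter / (1.0 - p_iter))

base        = expected_step_time(t_once=10.0, p_iter=0.4, rework_fraction=0.6)
faster_once = expected_step_time(5.0, 0.4, 0.6)    # halve the once-through time
fewer_iters = expected_step_time(10.0, 0.2, 0.6)   # halve the iteration probability
less_rework = expected_step_time(10.0, 0.4, 0.3)   # halve the rework fraction
print(base, faster_once, fewer_iters, less_rework)
```

With these made-up numbers, halving the once-through time gives the largest saving, and halving the rework fraction recovers a sizeable share of that benefit, qualitatively consistent with the 40% to 80% figure quoted above.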
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henning, C.
This report contains papers on the following topics: conceptual design; radiation damage of ITER magnet systems; insulation system of the magnets; critical current density and strain sensitivity; toroidal field coil structural analysis; stress analysis for the ITER central solenoid; and volt-second capabilities and PF magnet configurations.
Design of a -1 MV dc UHV power supply for ITER NBI
NASA Astrophysics Data System (ADS)
Watanabe, K.; Yamamoto, M.; Takemoto, J.; Yamashita, Y.; Dairaku, M.; Kashiwagi, M.; Taniguchi, M.; Tobari, H.; Umeda, N.; Sakamoto, K.; Inoue, T.
2009-05-01
Procurement of a dc -1 MV power supply system for the ITER neutral beam injector (NBI) is shared by Japan and the EU. The Japan Atomic Energy Agency as the Japan Domestic Agency (JADA) for ITER contributes to the procurement of dc -1 MV ultra-high voltage (UHV) components such as a dc -1 MV generator, a transmission line and a -1 MV insulating transformer for the ITER NBI power supply. The inverter frequency of 150 Hz in the -1 MV power supply and major circuit parameters have been proposed and adopted in the ITER NBI. The dc UHV insulation has been carefully designed since dc long pulse insulation is quite different from conventional ac insulation or dc short pulse systems. A multi-layer insulation structure of the transformer for a long pulse up to 3600 s has been designed with electric field simulation. Based on the simulation, the overall dimensions of the dc UHV components have been finalized. A surge energy suppression system is also essential to protect the accelerator from electric breakdowns. The JADA will provide an effective surge suppression system composed of core snubbers and resistors. Input energy into the accelerator from the power supply can be reduced to about 20 J, which satisfies the design criterion of 50 J in total in the case of breakdown at -1 MV.
Fast time- and frequency-domain finite-element methods for electromagnetic analysis
NASA Astrophysics Data System (ADS)
Lee, Woochan
Fast electromagnetic analysis in time and frequency domain is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute a very large scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of existing most powerful computational resources. Different from many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of the structure specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step for ensuring the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important to accelerate the computation. In addition to time-domain methods, frequency-domain methods have suffered from an indefinite system that makes an iterative solution difficult to converge fast. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits such as Manhattan geometry and layered permittivity is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time. The second contribution is a new method for making an explicit time-domain finite-element method (TDFEM) unconditionally stable for general electromagnetic analysis. In this method, for a given time step, we find the unstable modes that are the root cause of instability, and deduct them directly from the system matrix resulting from a TDFEM based analysis. As a result, an explicit TDFEM simulation is made stable for an arbitrarily large time step irrespective of the space step. The third contribution is a new method for full-wave applications from low to very high frequencies in a TDFEM based on matrix exponential. In this method, we directly deduct the eigenmodes having large eigenvalues from the system matrix, thus achieving a significantly increased time step in the matrix exponential based TDFEM. The fourth contribution is a new method for transforming the indefinite system matrix of a frequency-domain FEM to a symmetric positive definite one. We deduct non-positive definite component directly from the system matrix resulting from a frequency-domain FEM-based analysis. The resulting new representation of the finite-element operator ensures an iterative solution to converge in a small number of iterations. We then add back the non-positive definite component to synthesize the original solution with negligible cost.
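The thesis does not spell out the deflation step here, so the fragment below is only a schematic variant of the idea of deducting large-eigenvalue modes from a symmetric system matrix: it caps the spectrum at an assumed threshold by subtracting the excess of each offending eigenmode; the matrix and the threshold are arbitrary stand-ins, not the TDFEM operators.

```python
import numpy as np

def deflate_large_modes(A, lam_max):
    """Remove from symmetric A the excess of eigenmodes whose eigenvalues exceed lam_max:
    A_deflated = A - sum_i (lam_i - lam_max) * v_i v_i^T for lam_i > lam_max,
    which caps the spectrum at lam_max while leaving the other modes untouched."""
    lam, V = np.linalg.eigh(A)
    over = lam > lam_max
    correction = (V[:, over] * (lam[over] - lam_max)) @ V[:, over].T
    return A - correction

rng = np.random.default_rng(1)
B = rng.standard_normal((8, 8))
A = B @ B.T                                # symmetric positive semi-definite test matrix
A_capped = deflate_large_modes(A, lam_max=5.0)
print(np.linalg.eigvalsh(A).max(), np.linalg.eigvalsh(A_capped).max())
```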
Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...
2017-03-05
Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
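The Monte Carlo acceleration itself is too involved for a short example, but the stationary iteration it builds on is simple: for a splitting A = M - N with M easy to invert, the preconditioned Richardson iteration is x <- x + M^-1(b - Ax). A minimal sketch with a Jacobi-type splitting (test matrix and sizes are arbitrary) follows.

```python
import numpy as np

def preconditioned_richardson(A, b, M_inv, x0=None, tol=1e-10, max_iter=500):
    """Stationary Richardson iteration x <- x + M^-1 (b - A x) for a splitting A = M - N."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        x = x + M_inv @ r
    return x, max_iter

# Diagonally dominant test system; the Jacobi splitting (M = diag(A)) then converges.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) * 0.1
np.fill_diagonal(A, 10.0)
b = rng.standard_normal(50)
M_inv = np.diag(1.0 / np.diag(A))

x, iters = preconditioned_richardson(A, b, M_inv)
print("iterations:", iters, "residual norm:", np.linalg.norm(b - A @ x))
```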
A non-iterative extension of the multivariate random effects meta-analysis.
Makambi, Kepher H; Seung, Hyunuk
2015-01-01
Multivariate methods in meta-analysis are becoming popular and more accepted in biomedical research despite computational issues in some of the techniques. A number of approaches, both iterative and non-iterative, have been proposed including the multivariate DerSimonian and Laird method by Jackson et al. (2010), which is non-iterative. In this study, we propose an extension of the method by Hartung and Makambi (2002) and Makambi (2001) to multivariate situations. A comparison of the bias and mean square error from a simulation study indicates that, in some circumstances, the proposed approach performs better than the multivariate DerSimonian-Laird approach. An example is presented to demonstrate the application of the proposed approach.
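For context only, the classical univariate DerSimonian-Laird estimator that these multivariate methods generalize is itself non-iterative; the sketch below implements the standard univariate formulas, not the authors' multivariate extension, and the input effect sizes and variances are made up.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Univariate DerSimonian-Laird random-effects meta-analysis (non-iterative)."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    mu = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return mu, se, tau2

mu, se, tau2 = dersimonian_laird([0.30, 0.10, 0.45, 0.25], [0.04, 0.02, 0.05, 0.03])
print(f"pooled effect = {mu:.3f} +/- {1.96 * se:.3f}, tau^2 = {tau2:.4f}")
```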
Iterative inversion of deformation vector fields with feedback control.
Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei
2018-05-14
Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both accuracy and efficiency of iterative algorithms for DVF inversion, and advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, to be used as feedback into the next iterate. The control is designed adaptively to the input DVF with the objective to enlarge the convergence area and expedite convergence. Three particular settings of feedback control are introduced: constant value over the domain throughout the iteration; alternating values between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair, and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations are shown to attain a larger convergence region at a faster pace compared to previous nonadaptive DVF inversion iteration algorithms. By our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control. Spatially variant control renders smaller residuals and errors by at least an order of magnitude, compared to other schemes, in no more than 10 steps. Inversion results also show remarkable quantitative agreement with analysis-based predictions. Our analysis captures properties of DVF data associated with clinical CT images, and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of nonsmall NTDCs. The adaptive iterations or the spectral measures, or both, may potentially be incorporated into deformable image registration methods. © 2018 American Association of Physicists in Medicine.
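The paper's adaptive, spatially variant control and 3D implementation are beyond a short example; the one-dimensional sketch below only shows the core fixed-point update with a constant feedback coefficient, where the inverse-consistency residual of the current iterate is fed back into the next one (the test DVF and the value of the control parameter are assumptions, not the authors' settings).

```python
import numpy as np

def invert_dvf_1d(u, x, mu=1.0, n_iter=20):
    """Fixed-point DVF inversion with constant feedback: at each step the inverse-
    consistency residual r(x) = v(x) + u(x + v(x)) is fed back as v <- v - mu * r.
    With mu = 1 this reduces to the classic iteration v(x) = -u(x + v(x))."""
    v = np.zeros_like(x)
    for _ in range(n_iter):
        u_at_displaced = np.interp(x + v, x, u)   # sample the forward DVF at x + v(x)
        r = v + u_at_displaced                    # inverse-consistency residual
        v = v - mu * r
    return v

# Smooth analytic forward displacement field on [0, 1].
x = np.linspace(0.0, 1.0, 201)
u = 0.05 * np.sin(2 * np.pi * x)
v = invert_dvf_1d(u, x, mu=0.8)

# Check inverse consistency: u(x) + v(x + u(x)) should be close to zero.
residual = u + np.interp(x + u, x, v)
print("max |IC residual|:", np.abs(residual).max())
```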
NASA Astrophysics Data System (ADS)
Zanino, R.; Bonifetto, R.; Brighenti, A.; Isono, T.; Ozeki, H.; Savoldi, L.
2018-07-01
The ITER toroidal field insert (TFI) coil is a single-layer Nb3Sn solenoid tested in 2016-2017 at the National Institutes for Quantum and Radiological Science and Technology (former JAEA) in Naka, Japan. The TFI, the last in a series of ITER insert coils, was tested in operating conditions relevant for the actual ITER TF coils, inserting it in the borehole of the central solenoid model coil, which provided the background magnetic field. In this paper, we consider the five quench propagation tests that were performed using one or two inductive heaters (IHs) as drivers; out of these, three used just one IH but with increasing delay times, up to 7.5 s, between the quench detection and the TFI current dump. The results of the 4C code prediction of the quench propagation up to the current dump are presented first, based on simulations performed before the tests. We then describe the experimental results, showing good reproducibility. Finally, we compare the 4C code predictions with the measurements, confirming the 4C code capability to accurately predict the quench propagation, and the evolution of total and local voltages, as well as of the hot spot temperature. To the best of our knowledge, such a predictive validation exercise is performed here for the first time for the quench of a Nb3Sn coil. Discrepancies between prediction and measurement are found in the evolution of the jacket temperatures, in the He pressurization and quench acceleration in the late phase of the transient before the dump, as well as in the early evolution of the inlet and outlet He mass flow rate. Based on the lessons learned in the predictive exercise, the model is then refined to try and improve a posteriori (i.e. in interpretive, as opposed to predictive mode) the agreement between simulation and experiment.
Influence of Primary Gage Sensitivities on the Convergence of Balance Load Iterations
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2012-01-01
The connection between the convergence of wind tunnel balance load iterations and the existence of the primary gage sensitivities of a balance is discussed. First, basic elements of two load iteration equations that the iterative method uses in combination with results of a calibration data analysis for the prediction of balance loads are reviewed. Then, the connection between the primary gage sensitivities, the load format, the gage output format, and the convergence characteristics of the load iteration equation choices is investigated. A new criterion is also introduced that may be used to objectively determine if the primary gage sensitivity of a balance gage exists. Then, it is shown that both load iteration equations will converge as long as a suitable regression model is used for the analysis of the balance calibration data, the combined influence of non linear terms of the regression model is very small, and the primary gage sensitivities of all balance gages exist. The last requirement is fulfilled, e.g., if force balance calibration data is analyzed in force balance format. Finally, it is demonstrated that only one of the two load iteration equation choices, i.e., the iteration equation used by the primary load iteration method, converges if one or more primary gage sensitivities are missing. This situation may occur, e.g., if force balance calibration data is analyzed in direct read format using the original gage outputs. Data from the calibration of a six component force balance is used to illustrate the connection between the convergence of the load iteration equation choices and the existence of the primary gage sensitivities.
Structure of the classical scrape-off layer of a tokamak
NASA Astrophysics Data System (ADS)
Rozhansky, V.; Kaveeva, E.; Senichenkov, I.; Vekshina, E.
2018-03-01
The structure of the scrape-off layer (SOL) of a tokamak with little or no turbulent transport is analyzed. The analytical estimates of the density and electron temperature fall-off lengths of the SOL are put forward. It is demonstrated that the SOL width could be of the order of the ion poloidal gyroradius, as suggested in Goldston (2012 Nuclear Fusion 52 013009). The analytical results are supported by the results of the 2D simulations of the edge plasma with reduced transport coefficients performed by SOLPS-ITER transport code.
NASA Technical Reports Server (NTRS)
Tsang, L.; Kubacsi, M. C.; Kong, J. A.
1981-01-01
The radiative transfer theory is applied within the Rayleigh approximation to calculate the backscattering cross section of a layer of randomly positioned and oriented small ellipsoids. The orientation of the ellipsoids is characterized by a probability density function of the Eulerian angles of rotation. The radiative transfer equations are solved by an iterative approach to first order in albedo. In the half space limit the results are identical to those obtained via the approach of Foldy's and distorted Born approximation. Numerical results of the theory are illustrated using parameters encountered in active remote sensing of vegetation layers. A distinctive characteristic is the strong depolarization shown by vertically aligned leaves.
Band Structure Simulations of the Photoinduced Changes in the MgB₂:Cr Films.
Kityk, Iwan V; Fedorchuk, Anatolii O; Ozga, Katarzyna; AlZayed, Nasser S
2015-04-02
An approach for describing the photoinduced nonlinear optical effects in the superconducting MgB₂:Cr₂O₃ nanocrystalline film is proposed. It includes a step-by-step molecular dynamics optimization of the two separate crystalline phases. The nanointerface between the two phases plays the principal role in the photoinduced nonlinear optical properties. The first modified layers take the form of a slightly modified perfect crystalline structure. The next layer is then added to the perfect crystalline structure, and the iteration procedure is repeated for each subsequent layer. The total energy is treated as the varied parameter. To avoid potential jumps at the layer borders, an additional derivative procedure is carried out.
USDA-ARS?s Scientific Manuscript database
The energy transport in a vegetated (corn) surface layer is examined by solving the vector radiative transfer equation using a numerical iterative approach. This approach allows a higher order that includes the multiple scattering effects. Multiple scattering effects are important when the optical t...
Experiments on Learning by Back Propagation.
ERIC Educational Resources Information Center
Plaut, David C.; And Others
This paper describes further research on a learning procedure for layered networks of deterministic, neuron-like units, described by Rumelhart et al. The units, the way they are connected, the learning procedure, and the extension to iterative networks are presented. In one experiment, a network learns a set of filters, enabling it to discriminate…
NASA Astrophysics Data System (ADS)
Wimmer, C.; Schiesko, L.; Fantz, U.
2016-02-01
BATMAN (Bavarian Test Machine for Negative ions) is a test facility equipped with a 1/8 scale H- source for the ITER heating neutral beam injection. Several diagnostics in the boundary layer close to the plasma grid (first grid of the accelerator system) followed the transition from volume to surface dominated H- production starting with a Cs-free, cleaned source and subsequent evaporation of caesium, while the source has been operated at ITER relevant pressure of 0.3 Pa: Langmuir probes are used to determine the plasma potential, optical emission spectroscopy is used to follow the caesiation process, and cavity ring-down spectroscopy allows for the measurement of the H- density. The influence on the plasma during the transition from an electron-ion plasma towards an ion-ion plasma, in which negative hydrogen ions become the dominant negatively charged particle species, is seen in a strong increase of the H- density combined with a reduction of the plasma potential. A clear correlation of the extracted current densities (jH-, je) exists with the Cs emission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Y.; Loesser, G.; Smith, M.
ITER diagnostic first walls (DFWs) and diagnostic shield modules (DSMs) inside the port plugs (PPs) are designed to protect diagnostic instruments and components from a harsh plasma environment and provide structural support while allowing for diagnostic access to the plasma. The design of DFWs and DSMs is driven by (1) plasma radiation and nuclear heating during normal operation and (2) electromagnetic loads during plasma events and the associated component structural responses. A multi-physics engineering analysis protocol for the design has been established at Princeton Plasma Physics Laboratory, and it was used for the design of ITER DFWs and DSMs. The analyses were performed to address challenging design issues based on the resultant stresses and deflections of the DFW-DSM-PP assembly for the main load cases. The ITER Structural Design Criteria for In-Vessel Components (SDC-IC) required for design by analysis and three major issues driving the mechanical design of ITER DFWs are discussed. General guidelines for the DSM design have been established as a result of design parametric studies.
Fundamental Design based on Current Distribution in Coaxial Multi-Layer Cable-in-Conduit Conductor
NASA Astrophysics Data System (ADS)
Hamajima, Takataro; Tsuda, Makoto; Yagai, Tsuyoshi; Takahata, Kazuya; Imagawa, Shinsaku
An imbalanced current distribution is often observed in cable-in-conduit (CIC) superconductors composed of multi-staged, triplet-type sub-cables, and it deteriorates the performance of the coils. Since it is very important to obtain a homogeneous current distribution among the superconducting strands, we propose a coaxial multi-layer type CIC conductor. We use a circuit model for all layers in the coaxial multi-layer CIC conductor and derive a generalized formula governing the current distribution as explicit functions of the superconductor construction parameters, such as twist pitch, twist direction, radius of each layer, and the numbers of superconducting (SC) strands and copper (Cu) strands. We apply the formula to design a coaxial multi-layer CIC that has the same numbers of SC and Cu strands as the CIC for the Central Solenoid of ITER. Three kinds of coaxial multi-layer CIC can be designed, depending on the distribution of SC and Cu strands over the layers. It is shown that the SC strand volume should be optimized as a function of the SC and Cu strand distribution on the layers.
Imaging complex objects using learning tomography
NASA Astrophysics Data System (ADS)
Lim, JooWon; Goy, Alexandre; Shoreh, Morteza Hasani; Unser, Michael; Psaltis, Demetri
2018-02-01
Optical diffraction tomography (ODT) can be described using the scattering process through an inhomogeneous medium. An inherent nonlinearity exists relating the scattering medium and the scattered field due to multiple scattering. Multiple scattering is often assumed to be negligible in weakly scattering media. This assumption becomes invalid as the sample gets more complex, resulting in distorted image reconstructions. This issue becomes very critical when we image a complex sample. Multiple scattering can be simulated using the beam propagation method (BPM) as the forward model of ODT combined with an iterative reconstruction scheme. The iterative error reduction scheme and the multi-layer structure of BPM are similar to neural networks. Therefore we refer to our imaging method as learning tomography (LT). To fairly assess the performance of LT in imaging complex samples, we compared LT with the conventional iterative linear scheme using Mie theory, which provides the ground truth. We also demonstrate the capacity of LT to image complex samples using experimental data of a biological cell.
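The learning-tomography loop couples this forward model to an error-reduction scheme that is not detailed here; the fragment below sketches only a standard paraxial split-step BPM slice update, with an illustrative 1D grid, wavelength, and refractive-index perturbation that are not taken from the paper.

```python
import numpy as np

def bpm_propagate(field, dn_slices, wavelength, n0, dx, dz):
    """Paraxial split-step beam propagation: alternate a diffraction step in the
    Fourier domain with a thin-phase refraction step for each slice of index
    perturbation dn(x, z). Returns the field after the last slice."""
    k0 = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(field.size, d=dx)
    diffraction = np.exp(-1j * kx**2 * dz / (2 * k0 * n0))    # paraxial propagator
    for dn in dn_slices:
        field = np.fft.ifft(np.fft.fft(field) * diffraction)  # free-space part of the step
        field = field * np.exp(1j * k0 * dn * dz)             # phase from the object slice
    return field

# Illustrative numbers: 1D transverse grid, a weak Gaussian index bump as the "object".
nx, nz = 256, 100
dx, dz, wavelength, n0 = 0.1e-6, 0.1e-6, 0.5e-6, 1.33
x = (np.arange(nx) - nx / 2) * dx
incident = np.exp(-(x / 2e-6) ** 2).astype(complex)           # input Gaussian beam
dn_slices = [0.02 * np.exp(-(x / 1e-6) ** 2) for _ in range(nz)]
scattered = bpm_propagate(incident, dn_slices, wavelength, n0, dx, dz)
print(np.abs(scattered).max())
```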
Development of the low-field side reflectometer for ITER
NASA Astrophysics Data System (ADS)
Muscatello, Christopher; Anderson, James; Gattuso, Anthony; Doyle, Edward; Peebles, William; Seraydarian, Raymond; Wang, Guiding; Kramer, Gerrit; Zolfaghari, Ali; Atomics Team, General; University of California Los Angeles Team; Princeton Plasma Physics Laboratory Team
2017-10-01
The Low-Field Side Reflectometer (LFSR) for ITER will provide real-time edge density profiles every 10 ms for feedback control and every 24 μs for physics evaluation. The spatial resolution will be better than 5 mm over 30 - 165 GHz, probing the scrape-off layer to the top of the pedestal in H-mode plasmas. An antenna configuration has been selected for measurements covering anticipated plasma elevations. Laboratory validation of diagnostic performance is underway using a LFSR transmission line (TL) mockup. The 40-meter TL includes circular corrugated waveguide, length calibration feature, Gaussian telescope, vacuum windows, containment membranes, and expansion joint. Transceiver modules coupled to the input of the TL provide frequency-modulated (FM) data for evaluation of performance as a monostatic reflectometer. Results from the mockup tests are presented and show that, with some further optimization, the LFSR will meet or exceed the measurement requirements for ITER. An update of the LFSR instrumentation design status is also presented with preliminary test results. Work supported by PPPL under subcontract S013252-A.
Centrality measures in temporal networks with time series analysis
NASA Astrophysics Data System (ADS)
Huang, Qiangjuan; Zhao, Chengli; Zhang, Xue; Wang, Xiaojie; Yi, Dongyun
2017-05-01
The study of identifying important nodes in networks has wide application in different fields. However, current research is mostly based on static or aggregated networks. Recently, increasing attention to networks with time-varying structure has promoted the study of node centrality in temporal networks. In this paper, we define a supra-evolution matrix to depict the temporal network structure. Using time series analysis, the relationships between different time layers can be learned automatically. Based on the special form of the supra-evolution matrix, the eigenvector centrality calculation is turned into the calculation of eigenvectors of several low-dimensional matrices through iteration, which effectively reduces the computational complexity. Experiments are carried out on two real-world temporal networks, the Enron email communication network and the DBLP co-authorship network; the results show that our method is more efficient at discovering the important nodes than the common aggregating method.
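The supra-evolution matrix construction is specific to the paper, but the low-dimensional eigenvector computations it reduces to are ordinary power iterations; as background, a minimal power-iteration sketch for eigenvector centrality on a plain adjacency matrix follows (the small test graph is made up).

```python
import numpy as np

def eigenvector_centrality(adjacency, tol=1e-10, max_iter=1000):
    """Power iteration for the leading eigenvector of a non-negative adjacency matrix."""
    A = np.asarray(adjacency, dtype=float)
    c = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(max_iter):
        c_new = A @ c
        c_new /= np.linalg.norm(c_new)
        if np.linalg.norm(c_new - c) < tol:
            break
        c = c_new
    return c_new

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(eigenvector_centrality(A))   # the most connected node (index 2) scores highest
```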
Reducing neural network training time with parallel processing
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Lamarsh, William J., II
1995-01-01
Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer: to determine the number of nodes on the hidden layer of the smaller neural networks; to choose the initial training weights; and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.
NASA Astrophysics Data System (ADS)
Pigott, John D.; Abouelresh, Mohamed O.
2016-02-01
To construct a model of a sedimentary basin's thermal tectonic history is first to deconstruct it: taking apart its geological elements, searching for its initial conditions, and then reassembling the elements in the temporal order in which the basin is assumed to have evolved. Two inherent difficulties in the analysis are that most organic thermal indicators are cumulative, irreversible, and a function of both temperature and time, and that crustal strain histories are non-unique, which complicates tectonic interpretations. If the initial conditions (e.g. starting maturity of the reactants and initial crustal temperature) can be specified and the boundary conditions incrementally designated from changes in the lithospheric heat engine owing to stratigraphic structural constraints, then the number of pathways for the temporal evolution of a basin is greatly reduced. For this investigation, model input uncertainties are reduced through seeking a solution that iteratively integrates the geologically constrained tectonic subsidence, geochemically constrained thermal indicators, and geophysically constrained fault mechanical stratigraphy. The Faras oilfield in the Abu Gharadig Basin, North Western Desert, Egypt, provides an investigative example of such a basin's deconstructive procedure. Multiple episodes of crustal extension and shortening are apparent in the tectonic subsidence analyses, which are constrained from the fault mechanical stratigraphy interpreted from reflection seismic profiles. The model was iterated with different thermal boundary conditions until outputs best fit the geochemical observations. In so doing, the thermal iterations demonstrate the general relationships that increases in basin heat flow decrease vertical model maturity gradients, increases in surface temperature shift vertical maturity gradients linearly to higher values, increases in sediment conductivity lower vertical maturities with depth, and the addition of 'ghost' layers (those later removed) prior to the erosional event increases maturities beneath, and conversely. These integrated constraints upon the basin evolution model indicate that the principal source rocks, the Khatatba and the lowest part of the Alam El Bueib formations, entered the oil window at approximately 95 Ma and the gas window at approximately 25 Ma. The upper part of the Alam El Bueib Formation is within the oil window at the present day. Establishing initial and boundary value conditions for a basin's thermal evolution, geovalidated by the integration of seismic fault mechanical stratigraphy, tectonic subsidence analysis, and organic geochemical maturity indicators, provides a powerful tool for optimizing petroleum exploration in both mature and frontier basins.
Computational aspects of helicopter trim analysis and damping levels from Floquet theory
NASA Technical Reports Server (NTRS)
Gaonkar, Gopal H.; Achar, N. S.
1992-01-01
Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
Wei, Qinglai; Liu, Derong; Lin, Qiao
In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
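As background for readers unfamiliar with value iteration, the sketch below is the classical tabular algorithm that the local ADP variant generalizes; it updates the value function over the whole (tiny, made-up) state space and is not the authors' local, function-approximation-based algorithm.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Classical value iteration: V <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ].
    P has shape (A, S, S), R has shape (S, A)."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * np.einsum("ast,t->sa", P, V)   # Q(s,a) for all state-action pairs
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=1)             # optimal values and greedy policy
        V = V_new

# Tiny two-state, two-action example with made-up dynamics and rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # action 0: transition probabilities
              [[0.5, 0.5], [0.0, 1.0]]])    # action 1
R = np.array([[1.0, 0.0],                   # rewards R(s, a)
              [0.5, 2.0]])
V, policy = value_iteration(P, R)
print("V* =", V, "greedy policy =", policy)
```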
NASA Astrophysics Data System (ADS)
Dhruv, Akash; Blower, Christopher; Wickenheiser, Adam M.
2015-03-01
The ability of UAVs to operate in complex and hostile environments makes them useful in military and civil operations concerning surveillance and reconnaissance. However, limitations in size of UAVs and communication delays prohibit their operation close to the ground and in cluttered environments, which increase risks associated with turbulence and wind gusts that cause trajectory deviations and potential loss of the vehicle. In the last decade, scientists and engineers have turned towards bio-inspiration to solve these issues by developing innovative flow control methods that offer better stability, controllability, and maneuverability. This paper presents an aerodynamic load solver for bio-inspired wings that consist of an array of feather-like flaps installed across the upper and lower surfaces in both the chord- and span-wise directions, mimicking the feathers of an avian wing. Each flap has the ability to rotate into both the wing body and the inbound airflow, generating complex flap configurations unobtainable by traditional wings that offer improved aerodynamic stability against gusting flows and turbulence. The solver discussed is an unsteady three-dimensional iterative doublet panel method with vortex particle wakes. This panel method models the wake-body interactions between multiple flaps effectively without the need to define specific wake geometries, thereby eliminating the need to manually model the wake for each configuration. To incorporate viscous flow characteristics, an iterative boundary layer theory is employed, modeling laminar, transitional and turbulent regions over the wing's surfaces, in addition to flow separation and reattachment locations. This technique enables the boundary layer to influence the wake strength and geometry both within the wing and aft of the trailing edge. The results obtained from this solver are validated using experimental data from a low-speed suction wind tunnel operating at Reynolds Number 300,000. This method enables fast and accurate assessment of aerodynamic loads for initial design of complex wing configurations compared to other methods available.
RTE: A computer code for Rocket Thermal Evaluation
NASA Technical Reports Server (NTRS)
Naraghi, Mohammad H. N.
1995-01-01
The numerical model for a rocket thermal analysis code (RTE) is discussed. RTE is a comprehensive code for the thermal analysis of regeneratively cooled rocket engines. The input to the code consists of the composition of the fuel/oxidant mixture and flow rates, chamber pressure, coolant temperature and pressure, dimensions of the engine, materials, and the number of nodes in different parts of the engine. The code allows for temperature variation in the axial, radial and circumferential directions. By implementing an iterative scheme, it provides the nodal temperature distribution, rates of heat transfer, and hot gas and coolant thermal and transport properties. The fuel/oxidant mixture ratio can be varied along the thrust chamber. This feature allows the user to incorporate a non-equilibrium model or an energy release model for the hot-gas side. The user has the option of bypassing the hot-gas-side calculations and directly inputting the gas-side fluxes. This feature is used to link RTE to a boundary layer module for the hot-gas-side heat flux calculations.
Optimization of multi-element airfoils for maximum lift
NASA Technical Reports Server (NTRS)
Olsen, L. E.
1979-01-01
Two theoretical methods are presented for optimizing multi-element airfoils to obtain maximum lift. The analyses assume that the shapes of the various high lift elements are fixed. The objective of the design procedures is then to determine the optimum location and/or deflection of the leading and trailing edge devices. The first analysis determines the optimum horizontal and vertical location and the deflection of a leading edge slat. The structure of the flow field is calculated by iteratively coupling potential flow and boundary layer analysis. This design procedure does not require that flow separation effects be modeled. The second analysis determines the slat and flap deflection required to maximize the lift of a three element airfoil. This approach requires that the effects of flow separation from one or more of the airfoil elements be taken into account. The theoretical results are in good agreement with results of a wind tunnel test used to corroborate the predicted optimum slat and flap positions.
A novel artificial neural network method for biomedical prediction based on matrix pseudo-inversion.
Cai, Binghuang; Jiang, Xia
2014-04-01
Biomedical prediction based on clinical and genome-wide data has become increasingly important in disease diagnosis and classification. To solve the prediction problem in an effective manner for the improvement of clinical care, we develop a novel Artificial Neural Network (ANN) method based on Matrix Pseudo-Inversion (MPI) for use in biomedical applications. The MPI-ANN is constructed as a three-layer (i.e., input, hidden, and output layers) feed-forward neural network, and the weights connecting the hidden and output layers are directly determined based on MPI without a lengthy learning iteration. The LASSO (Least Absolute Shrinkage and Selection Operator) method is also presented for comparative purposes. Single Nucleotide Polymorphism (SNP) simulated data and real breast cancer data are employed to validate the performance of the MPI-ANN method via 5-fold cross validation. Experimental results demonstrate the efficacy of the developed MPI-ANN for disease classification and prediction, in view of the significantly superior accuracy (i.e., the rate of correct predictions), as compared with LASSO. The results based on the real breast cancer data also show that the MPI-ANN has better performance than other machine learning methods (including support vector machine (SVM), logistic regression (LR), and an iterative ANN). In addition, experiments demonstrate that our MPI-ANN could be used for bio-marker selection as well. Copyright © 2013 Elsevier Inc. All rights reserved.
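The exact MPI-ANN construction is the authors'; the sketch below only illustrates the generic idea of fixing random input-to-hidden weights and obtaining the hidden-to-output weights in closed form with the Moore-Penrose pseudo-inverse, in the style of an extreme learning machine, on made-up data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up binary classification data: 200 samples, 10 features.
X = rng.standard_normal((200, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

# Input-to-hidden weights are fixed at random; only the output layer is "trained".
n_hidden = 50
W_in = rng.standard_normal((10, n_hidden))
b_in = rng.standard_normal(n_hidden)
H = np.tanh(X @ W_in + b_in)                 # hidden-layer activations

# Hidden-to-output weights from the Moore-Penrose pseudo-inverse: no iterative learning.
W_out = np.linalg.pinv(H) @ y

pred = (H @ W_out > 0.5).astype(float)
print("training accuracy:", float((pred == y).mean()))
```

A single pseudo-inverse solve replaces the usual gradient-based training loop, which is the sense in which the abstract's "without a lengthy learning iteration" can be read.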
Improvements in surface singularity analysis and design methods. [applicable to airfoils
NASA Technical Reports Server (NTRS)
Bristow, D. R.
1979-01-01
The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.
Establishing Factor Validity Using Variable Reduction in Confirmatory Factor Analysis.
ERIC Educational Resources Information Center
Hofmann, Rich
1995-01-01
Using a 21-statement attitude-type instrument, an iterative procedure for improving confirmatory model fit is demonstrated within the context of the EQS program of P. M. Bentler and maximum likelihood factor analysis. Each iteration systematically eliminates the poorest fitting statement as identified by a variable fit index. (SLD)
NASA Technical Reports Server (NTRS)
Parker, Hermon M
1953-01-01
An analysis is made of the transient heat-conduction effects in three simple semi-infinite bodies: the flat insulated plate, the conical shell, and the slender solid cone. The bodies are assumed to have constant initial temperatures and, at zero time, to begin to move at a constant speed and zero angle of attack through a homogeneous atmosphere. The heat input is taken as that through a laminar boundary layer. Radiation heat transfer and transverse temperature gradients are assumed to be zero. The appropriate heat-conduction equations are solved by an iteration method, the zeroeth-order terms describing the situation in the limit of small time. The method is presented and the solutions are calculated to three orders which are sufficient to give reasonably accurate results when the forward edge has attained one-half the total temperature rise (nose half-rise time). Flight Mach number and air properties occur as parameters in the result. Approximate expressions for the extent of the conduction region and nose half-rise times as functions of the parameters of the problem are presented. (author)
A computational efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Dini, Paolo; Maughmer, Mark D.
1990-01-01
In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency was achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.
Performance analysis of improved iterated cubature Kalman filter and its application to GNSS/INS.
Cui, Bingbo; Chen, Xiyuan; Xu, Yuan; Huang, Haoqian; Liu, Xiao
2017-01-01
In order to improve the accuracy and robustness of GNSS/INS navigation system, an improved iterated cubature Kalman filter (IICKF) is proposed by considering the state-dependent noise and system uncertainty. First, a simplified framework of iterated Gaussian filter is derived by using damped Newton-Raphson algorithm and online noise estimator. Then the effect of state-dependent noise coming from iterated update is analyzed theoretically, and an augmented form of CKF algorithm is applied to improve the estimation accuracy. The performance of IICKF is verified by field test and numerical simulation, and results reveal that, compared with non-iterated filter, iterated filter is less sensitive to the system uncertainty, and IICKF improves the accuracy of yaw, roll and pitch by 48.9%, 73.1% and 83.3%, respectively, compared with traditional iterated KF. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Development and tests of molybdenum armored copper components for MITICA ion source
NASA Astrophysics Data System (ADS)
Pavei, Mauro; Böswirth, Bernd; Greuner, Henri; Marcuzzi, Diego; Rizzolo, Andrea; Valente, Matteo
2016-02-01
In order to prevent detrimental material erosion of components impinged by back-streaming positive D or H ions in the Megavolt ITER Injector and Concept Advancement (MITICA) beam source, a solution based on the explosion bonding technique has been identified for producing a 1 mm thick molybdenum armour layer on a copper substrate, compatible with ITER requirements. Prototypes have been recently manufactured and tested in the Garching Large Divertor Sample Test Facility (GLADIS) high heat flux facility to check the capability of the molybdenum-copper interface to withstand several thermal shock cycles at high power density. This paper presents both the numerical fluid-dynamic analyses of the prototypes simulating the test conditions in GLADIS and the experimental results.
NASA Technical Reports Server (NTRS)
Maskew, B.
1982-01-01
VSAERO is a computer program used to predict the nonlinear aerodynamic characteristics of arbitrary three-dimensional configurations in subsonic flow. Nonlinear effects of vortex separation and vortex surface interaction are treated in an iterative wake-shape calculation procedure, while the effects of viscosity are treated in an iterative loop coupling potential-flow and integral boundary-layer calculations. The program employs a surface singularity panel method using quadrilateral panels on which doublet and source singularities are distributed in a piecewise constant form. This user's manual provides a brief overview of the mathematical model, instructions for configuration modeling and a description of the input and output data. A listing of a sample case is included.
Non-iterative determination of the stress-density relation from ramp wave data through a window
NASA Astrophysics Data System (ADS)
Dowling, Evan; Fratanduono, Dayne; Swift, Damian
2017-06-01
In the canonical ramp compression experiment, a smoothly-increasing load is applied to the surface of the sample, and the particle velocity history is measured at interfaces at two or more different distances into the sample. The velocity histories are used to deduce a stress-density relation by correcting for perturbations caused by reflected release waves, usually via the iterative Lagrangian analysis technique of Rothman and Maw. We previously described a non-iterative (recursive) method of analysis, which was more stable and orders of magnitude faster than iteration, but was subject to the limitation that the free surface velocity had to be sampled at uniform intervals. We have now developed more general recursive algorithms suitable for analyzing ramp data through a finite-impedance window. Free surfaces can be treated seamlessly, and the need for uniform velocity sampling has been removed. These calculations require interpolation of partially-released states using the partially-constructed isentrope, making them slower than the previous free-surface scheme, but they are still much faster than iterative analysis. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
A single-scattering correction for the seismo-acoustic parabolic equation.
Collins, Michael D
2012-04-01
An efficient single-scattering correction that does not require iterations is derived and tested for the seismo-acoustic parabolic equation. The approach is applicable to problems involving gradual range dependence in a waveguide with fluid and solid layers, including the key case of a sloping fluid-solid interface. The single-scattering correction is asymptotically equivalent to a special case of a single-scattering correction for problems that only have solid layers [Küsel et al., J. Acoust. Soc. Am. 121, 808-813 (2007)]. The single-scattering correction has a simple interpretation (conservation of interface conditions in an average sense) that facilitated its generalization to problems involving fluid layers. Promising results are obtained for problems in which the ocean bottom interface has a small slope.
Holograms for power-efficient excitation of optical surface waves
NASA Astrophysics Data System (ADS)
Ignatov, Anton I.; Merzlikin, Alexander M.
2018-02-01
A method for effective excitation of optical surface waves based on holography principles has been proposed. For the particular example of exciting a plasmonic wave in a dielectric layer on metal, the efficiency of the proposed volume holograms in the dielectric layer has been analyzed and compared with that of optimized periodic gratings in the same layer. Conditions under which the holograms are considerably more efficient than the gratings have been found. In addition, holograms recorded in two iterations have been proposed and studied. Such holograms are substantially more efficient than the optimized periodic gratings for all incidence angles of an exciting Gaussian beam. The proposed method is universal: it can be extended to efficient excitation of different types of optical surface waves and optical waveguide modes.
Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers
NASA Technical Reports Server (NTRS)
Guru Prasad, K.; Kane, J. H.
1992-01-01
The computational advantages associated with the utilization of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach, employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis, can be competitive with, if not superior to, approaches involving direct solvers.
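The reuse described above maps naturally onto a Krylov solver: keep the factored preconditioner and the previous solution vector, and restart the iteration on the perturbed operator. The sketch below is not the BEA code of the abstract; it is a minimal illustration with a synthetic sparse system, SciPy's ILU preconditioner and GMRES standing in for the boundary-element matrices and solver.

```python
import numpy as np
from scipy.sparse import eye, random as sparse_random
from scipy.sparse.linalg import LinearOperator, gmres, spilu

# Baseline system for the unperturbed shape (synthetic stand-in for a BEA matrix).
rng = np.random.default_rng(0)
n = 400
A0 = (sparse_random(n, n, density=0.05, random_state=0) + 10.0 * eye(n)).tocsc()
b = rng.standard_normal(n)

# Factor an ILU preconditioner once for the baseline operator.
ilu = spilu(A0)
M = LinearOperator((n, n), ilu.solve)
x0, _ = gmres(A0, b, M=M)

# Reanalysis of a perturbed shape: small change in the operator, same
# preconditioner, previous solution reused as the starting vector.
A1 = (A0 + 0.01 * sparse_random(n, n, density=0.05, random_state=1)).tocsc()
x1, info = gmres(A1, b, M=M, x0=x0)
print("perturbed solve info:", info, " update norm:", np.linalg.norm(x1 - x0))
```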
Wavelet-based analysis of transient electromagnetic wave propagation in photonic crystals.
Shifman, Yair; Leviatan, Yehuda
2004-03-01
Photonic crystals and optical bandgap structures, which facilitate high-precision control of electromagnetic-field propagation, are gaining ever-increasing attention in both scientific and commercial applications. One common photonic device is the distributed Bragg reflector (DBR), which exhibits high reflectivity at certain frequencies. Analysis of the transient interaction of an electromagnetic pulse with such a device can be formulated in terms of the time-domain volume integral equation and, in turn, solved numerically with the method of moments. Owing to the frequency-dependent reflectivity of such devices, the extent of field penetration into deep layers of the device will be different depending on the frequency content of the impinging pulse. We show how this phenomenon can be exploited to reduce the number of basis functions needed for the solution. To this end, we use spatiotemporal wavelet basis functions, which possess the multiresolution property in both spatial and temporal domains. To select the dominant functions in the solution, we use an iterative impedance matrix compression (IMC) procedure, which gradually constructs and solves a compressed version of the matrix equation until the desired degree of accuracy has been achieved. Results show that when the electromagnetic pulse is reflected, the transient IMC omits basis functions defined over the last layers of the DBR, as anticipated.
Performance analysis of cross-layer design with average PER constraint over MIMO fading channels
NASA Astrophysics Data System (ADS)
Dang, Xiaoyu; Liu, Yan; Yu, Xiangbin
2015-12-01
In this article, a cross-layer design (CLD) scheme for a multiple-input multiple-output system subject to the dual constraints of imperfect feedback and average packet error rate (PER) is presented, based on the combination of adaptive modulation and automatic repeat request protocols. The design performance is evaluated over wireless Rayleigh fading channels. Under the target-PER and average-PER constraints, the optimum switching thresholds (STs) for attaining maximum spectral efficiency (SE) are developed. An effective iterative algorithm for finding the optimal STs is proposed via Lagrange multiplier optimisation. With the different thresholds available, analytical expressions for the average SE and PER are provided for performance evaluation. To avoid the performance loss caused by the conventional single estimate, a multiple outdated estimates (MOE) method, which utilises information from multiple previous channel estimates, is presented for CLD to improve the system performance. It is shown that numerical simulations of the average PER and SE are consistent with the theoretical analysis and that the developed CLD with an average PER constraint can meet the target PER requirement and shows better performance than the conventional CLD with an instantaneous PER constraint. In particular, the CLD based on the MOE method noticeably increases the system SE and greatly reduces the impact of feedback delay.
Preconditioned MoM Solutions for Complex Planar Arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fasenfest, B J; Jackson, D; Champagne, N
2004-01-23
The numerical analysis of large arrays is a complex problem. There are several techniques currently under development in this area. One such technique is the FAIM (Faster Adaptive Integral Method). This method uses a modification of the standard AIM approach which takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. These bases are then projected onto a regular grid of interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver. The method has been proven to greatly reduce solve time by speeding the matrix-vector product computation. The FAIM approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends FAIM by modifying it to allow for layered material Green's Functions and dielectrics. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the FAIM method is reported in; this contribution is limited to presenting new results.
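The core acceleration idea, projecting interactions onto a regular grid so the matrix-vector product inside the iterative solver becomes an FFT, can be illustrated in one dimension. The sketch below is not the FAIM implementation; it assumes a hypothetical translation-invariant kernel and shows only the circulant-embedding FFT matvec wrapped as an operator for GMRES.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

# Translation-invariant interactions on a regular grid give a Toeplitz matrix,
# so its matrix-vector product can be done with an FFT after circulant embedding.
n = 512
kernel = np.exp(-0.5 * np.arange(n))                 # hypothetical interaction kernel
col = row = kernel
c = np.concatenate([col, [0.0], row[1:][::-1]])      # circulant embedding (length 2n)
fc = np.fft.fft(c)

def fft_matvec(x):
    xp = np.concatenate([x, np.zeros(n)])
    return np.fft.ifft(fc * np.fft.fft(xp)).real[:n]

A = LinearOperator((n, n), matvec=fft_matvec)
x, info = gmres(A, np.ones(n))

# Sanity check against the explicit dense Toeplitz operator.
assert np.allclose(toeplitz(col, row) @ np.ones(n), fft_matvec(np.ones(n)))
print("gmres info:", info)
```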
Comparison of different filter methods for data assimilation in the unsaturated zone
NASA Astrophysics Data System (ADS)
Lange, Natascha; Berkhahn, Simon; Erdal, Daniel; Neuweiler, Insa
2016-04-01
The unsaturated zone is an important compartment, which plays a role in the division of terrestrial water fluxes into surface runoff, groundwater recharge and evapotranspiration. For data assimilation in coupled systems it is therefore important to have a good representation of the unsaturated zone in the model. Flow processes in the unsaturated zone have all the typical features of flow in porous media: processes can have long memory, and as observations are scarce, hydraulic model parameters cannot be determined easily. However, they are important for the quality of model predictions. On top of that, the established flow models are highly non-linear. For these reasons, the use of the popular Ensemble Kalman filter as a data assimilation method to estimate state and parameters in unsaturated zone models could be questioned. With respect to the long process memory in the subsurface, it has been suggested that iterative filters and smoothers may be more suitable for parameter estimation in unsaturated media. We test the performance of different iterative filters and smoothers for data assimilation with a focus on parameter updates in the unsaturated zone. In particular we compare the Iterative Ensemble Kalman Filter and Smoother as introduced by Bocquet and Sakov (2013) as well as the Confirming Ensemble Kalman Filter and the modified Restart Ensemble Kalman Filter proposed by Song et al. (2014) to the original Ensemble Kalman Filter (Evensen, 2009). This is done with simple test cases generated numerically. We also consider test examples with a layering structure, as layering is often found in natural soils. We assume that the observations are water contents, obtained from TDR probes or other observation methods sampling relatively small volumes. Particularly in larger data assimilation frameworks, a reasonable balance between computational effort and quality of results has to be found. Therefore, we compare the computational costs of the different methods as well as the quality of open-loop model predictions and the estimated parameters. Bocquet, M. and P. Sakov, 2013: Joint state and parameter estimation with an iterative ensemble Kalman smoother, Nonlinear Processes in Geophysics 20(5): 803-818. Evensen, G., 2009: Data assimilation: The ensemble Kalman filter. Springer Science & Business Media. Song, X.H., L.S. Shi, M. Ye, J.Z. Yang and I.M. Navon, 2014: Numerical comparison of iterative ensemble Kalman filters for unsaturated flow inverse modeling. Vadose Zone Journal 13(2), 10.2136/vzj2013.05.0083.
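For reference, the non-iterative baseline that the iterative variants are compared against is the stochastic Ensemble Kalman Filter analysis step. The sketch below is a generic joint state-parameter EnKF update on synthetic numbers, not the authors' test cases; the observation operator, error variance and toy ensemble are assumptions for illustration.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_err_var, rng):
    """Stochastic EnKF analysis step for a joint state-parameter ensemble.

    ensemble: (n_members, n_state) prior members
    obs: (n_obs,) observed values (e.g. water contents)
    obs_operator: maps a state vector to predicted observations
    """
    n_members = ensemble.shape[0]
    predicted = np.array([obs_operator(m) for m in ensemble])        # (n_members, n_obs)
    X = ensemble - ensemble.mean(axis=0)
    Y = predicted - predicted.mean(axis=0)
    P_xy = X.T @ Y / (n_members - 1)
    P_yy = Y.T @ Y / (n_members - 1) + obs_err_var * np.eye(len(obs))
    K = P_xy @ np.linalg.inv(P_yy)                                    # Kalman gain
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_err_var), (n_members, len(obs)))
    return ensemble + (perturbed_obs - predicted) @ K.T

# Toy usage: state = [water content at two depths, one soil parameter].
rng = np.random.default_rng(1)
prior = rng.normal([0.25, 0.30, 1.5], [0.05, 0.05, 0.3], size=(100, 3))
H = lambda s: s[:2]                        # observe the two water contents only
posterior = enkf_update(prior, np.array([0.22, 0.28]), H, 1e-4, rng)
print(posterior.mean(axis=0))
```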
NASA Astrophysics Data System (ADS)
Zhou, Jianmei; Wang, Jianxun; Shang, Qinglong; Wang, Hongnian; Yin, Changchun
2014-04-01
We present an algorithm for inverting controlled source audio-frequency magnetotelluric (CSAMT) data in horizontally layered transversely isotropic (TI) media. The popular inversion approach (e.g., Occam's inversion) parameterizes the medium into a large number of fixed-thickness layers and reconstructs only the conductivities, which prevents recovery of the sharp interfaces between layers. In this paper, we simultaneously reconstruct all the model parameters, including both the horizontal and vertical conductivities and the layer depths. Applying the perturbation principle and the dyadic Green's function in TI media, we derive analytic expressions for the Fréchet derivatives of the CSAMT responses with respect to all the model parameters in the form of Sommerfeld integrals. A regularized iterative inversion method is established to simultaneously reconstruct all the model parameters. Numerical results show that including the depths of the layer interfaces among the inversion parameters significantly improves the results: the algorithm not only reconstructs the sharp interfaces between layers but also recovers conductivities close to the true values.
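A regularized iterative inversion of this kind is typically a damped Gauss-Newton loop built on the Fréchet derivatives. The sketch below is not the CSAMT code; it uses a made-up three-parameter forward model and finite-difference Jacobians purely to show the structure of the regularized update, and the regularization biases the answer toward the starting model.

```python
import numpy as np

def regularized_gauss_newton(forward, jacobian, data, m0, alpha=1e-2, n_iter=40):
    """Damped Gauss-Newton iteration for the regularized least-squares problem
    0.5*||forward(m) - data||^2 + 0.5*alpha*||m - m0||^2."""
    m = m0.astype(float).copy()
    for _ in range(n_iter):
        r = forward(m) - data
        J = jacobian(m)                                   # Frechet derivatives
        A = J.T @ J + alpha * np.eye(len(m))
        g = J.T @ r + alpha * (m - m0)
        m -= 0.5 * np.linalg.solve(A, g)                  # damped update step
    return m

# Toy layered-medium-style forward model: two "log-conductivities" and a depth.
def forward(m):
    s1, s2, d = m
    return np.array([np.exp(-s1 * d), np.exp(-s1 * d - s2), s1 + s2 * d])

def jacobian(m, eps=1e-6):
    # Finite-difference derivatives are enough for this illustration.
    J = np.zeros((3, 3))
    for k in range(3):
        dm = np.zeros(3); dm[k] = eps
        J[:, k] = (forward(m + dm) - forward(m - dm)) / (2.0 * eps)
    return J

m_true = np.array([0.8, 0.4, 1.5])
m_est = regularized_gauss_newton(forward, jacobian, forward(m_true),
                                 m0=np.array([1.0, 1.0, 1.0]))
print("recovered model (biased toward m0 by the regularization):", np.round(m_est, 3))
```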
NASA Astrophysics Data System (ADS)
Hodille, E. A.; Ghiorghiu, F.; Addab, Y.; Založnik, A.; Minissale, M.; Piazza, Z.; Martin, C.; Angot, T.; Gallais, L.; Barthe, M.-F.; Becquart, C. S.; Markelj, S.; Mougenot, J.; Grisolia, C.; Bisson, R.
2017-07-01
Fusion fuel retention (trapping) and release (desorption) from plasma-facing components are critical issues for ITER and for any future industrial demonstration reactors such as DEMO. Therefore, understanding the fundamental mechanisms behind the retention of hydrogen isotopes in first wall and divertor materials is necessary. We developed an approach that couples dedicated experimental studies with modelling at all relevant scales, from microscopic elementary steps to macroscopic observables, in order to build a reliable and predictive fusion reactor wall model. This integrated approach is applied to the ITER divertor material (tungsten), and advances in the development of the wall model are presented. An experimental dataset, including focused ion beam scanning electron microscopy, isothermal desorption, temperature programmed desorption, nuclear reaction analysis and Auger electron spectroscopy, is exploited to initialize a macroscopic rate equation wall model. This model includes all elementary steps of modelled experiments: implantation of fusion fuel, fuel diffusion in the bulk or towards the surface, fuel trapping on defects and release of trapped fuel during a thermal excursion of materials. We were able to show that a single-trap-type single-detrapping-energy model is not able to reproduce an extended parameter space study of a polycrystalline sample exhibiting a single desorption peak. It is therefore justified to use density functional theory to guide the initialization of a more complex model. This new model still contains a single type of trap, but includes the density functional theory findings that the detrapping energy varies as a function of the number of hydrogen isotopes bound to the trap. A better agreement of the model with experimental results is obtained when grain boundary defects are included, as is consistent with the polycrystalline nature of the studied sample. Refinement of this grain boundary model is discussed as well as the inclusion in the model of a thin defective oxide layer following the experimental observation of the presence of an oxygen layer on the surface even after annealing to 1300 K.
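The macroscopic rate-equation picture can be reduced, for illustration, to a single trap emptying with an Arrhenius detrapping rate during a linear temperature ramp. The sketch below is a toy thermal-desorption model with assumed parameter values, not the initialized tungsten wall model of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal single-trap, single-detrapping-energy rate equation for a
# temperature-programmed desorption ramp.  All numbers are illustrative
# assumptions, not the parameters of the paper's tungsten model.
NU0 = 1e13        # attempt frequency [1/s] (assumed)
E_DT = 1.4        # detrapping energy [eV] (assumed)
KB = 8.617e-5     # Boltzmann constant [eV/K]
BETA = 0.5        # heating rate [K/s]
T0 = 300.0        # starting temperature [K]

def dn_dt(t, n):
    T = T0 + BETA * t
    return [-NU0 * np.exp(-E_DT / (KB * T)) * n[0]]

# Stiff once the detrapping rate becomes large, hence an implicit solver.
sol = solve_ivp(dn_dt, (0.0, 2000.0), [1.0], method="LSODA", max_step=2.0)
T = T0 + BETA * sol.t
flux = -np.gradient(sol.y[0], sol.t)          # normalized desorption flux
print(f"desorption peak near {T[np.argmax(flux)]:.0f} K")
```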
Perturbation-iteration theory for analyzing microwave striplines
NASA Technical Reports Server (NTRS)
Kretch, B. E.
1985-01-01
A perturbation-iteration technique is presented for determining the propagation constant and characteristic impedance of an unshielded microstrip transmission line. The method converges to the correct solution with a few iterations at each frequency and is equivalent to a full wave analysis. The perturbation-iteration method gives a direct solution for the propagation constant without having to find the roots of a transcendental dispersion equation. The theory is presented in detail along with numerical results for the effective dielectric constant and characteristic impedance for a wide range of substrate dielectric constants, stripline dimensions, and frequencies.
Bragg x-ray survey spectrometer for ITER.
Varshney, S K; Barnsley, R; O'Mullane, M G; Jakhar, S
2012-10-01
Several potential impurity ions in ITER plasmas will lead to loss of confined energy through line and continuum emission. For real-time monitoring of impurities, a seven-channel Bragg x-ray spectrometer (XRCS survey) is considered. This paper presents the design and analysis of the spectrometer, including x-ray tracing with the Shadow-XOP code, sensitivity calculations for the reference H-mode plasma, and a neutronics assessment. The XRCS survey performance analysis shows that the ITER measurement requirements for impurity monitoring, with a 10 ms integration time at the minimum levels of low-Z to high-Z impurity ions, can largely be met.
Simultaneous and iterative weighted regression analysis of toxicity tests using a microplate reader.
Galgani, F; Cadiou, Y; Gilbert, F
1992-04-01
A system is described for determination of LC50 or IC50 by an iterative process based on data obtained from a plate reader using a marine unicellular alga as the target species. The esterase activity of Tetraselmis suecica on fluorescein diacetate as a substrate was measured using a fluorescence titerplate reader. Simultaneous analysis of the results was performed using an iterative process adopting the sigmoid dose-response function Y = y / (1 + (dose of toxicant / IC50)^slope). IC50 (± SEM) was estimated (P < 0.05). An application with phosalone as a toxicant is presented.
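The iterative weighted regression amounts to nonlinear least-squares fitting of the sigmoid dose-response curve. The sketch below uses SciPy's curve_fit on synthetic fluorescence-like data; the doses, noise level and starting values are assumptions, not the phosalone data set.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sigmoid dose-response model Y = y0 / (1 + (dose/IC50)**slope), fitted by an
# iterative nonlinear least-squares routine on synthetic data.
def dose_response(dose, y0, ic50, slope):
    return y0 / (1.0 + (dose / ic50) ** slope)

rng = np.random.default_rng(7)
doses = np.logspace(-2, 2, 12)                                     # arbitrary units
response = dose_response(doses, 100.0, 1.5, 1.2) + rng.normal(0.0, 2.0, doses.size)

popt, pcov = curve_fit(dose_response, doses, response,
                       p0=[90.0, 1.0, 1.0], bounds=(0.0, np.inf))
ic50, sem = popt[1], np.sqrt(pcov[1, 1])
print(f"IC50 = {ic50:.2f} +/- {sem:.2f} (arbitrary units)")
```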
NASA Technical Reports Server (NTRS)
Davis, R. L.
1986-01-01
A program called ALESEP is presented for the analysis of the inviscid-viscous interaction which occurs due to the presence of a closed laminar-transitional separation bubble on an airfoil or infinite swept wing. The ALESEP code provides an iterative solution of the boundary layer equations expressed in an inverse formulation coupled to a Cauchy integral representation of the inviscid flow. This interaction analysis is treated as a local perturbation to a known solution obtained from a global airfoil analysis; hence, part of the required input to the ALESEP code are the reference displacement thickness and tangential velocity distributions. Special windward differencing may be used in the reversed flow regions of the separation bubble to accurately account for the flow direction in the discretization of the streamwise convection of momentum. The ALESEP code contains a forced transition model based on a streamwise intermittency function, a natural transition model based on a solution of the integral form of the turbulent kinetic energy equation, and an empirical natural transition model.
Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace.
Zhang, Cheng; Lai, Chun-Liang; Pettitt, B Montgomery
The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal errors. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and of the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid and an aqueous protein solution.
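DIIS itself is a small amount of linear algebra: keep a short history of iterates and residuals, and extrapolate with coefficients that minimize the combined residual subject to summing to one. The sketch below applies DIIS to a generic fixed-point map standing in for the WHAM/MBAR self-consistency equations; the toy map is an assumption for illustration, not the WHAM equations themselves.

```python
import numpy as np

def diis(g, x0, history=5, tol=1e-10, max_iter=200):
    """Accelerate the fixed-point iteration x <- g(x) with DIIS
    (direct inversion in the iterative subspace / Pulay mixing)."""
    xs, rs = [], []
    x = np.asarray(x0, dtype=float)
    for it in range(max_iter):
        gx = g(x)
        r = gx - x                                      # fixed-point residual
        if np.linalg.norm(r) < tol:
            return x, it
        xs.append(gx); rs.append(r)
        xs, rs = xs[-history:], rs[-history:]
        m = len(rs)
        # Minimize ||sum_i c_i r_i|| subject to sum_i c_i = 1 (Lagrange system).
        B = -np.ones((m + 1, m + 1)); B[m, m] = 0.0
        B[:m, :m] = np.array([[ri @ rj for rj in rs] for ri in rs])
        rhs = np.zeros(m + 1); rhs[m] = -1.0
        c = np.linalg.lstsq(B, rhs, rcond=None)[0][:m]
        x = sum(ci * xi for ci, xi in zip(c, xs))
    return x, max_iter

# Toy self-consistency problem: a slowly contracting nonlinear map whose
# plain iteration would need hundreds of steps.
A = np.array([[0.90, 0.05], [0.05, 0.85]])
g = lambda x: A @ x + np.array([0.1, 0.2]) + 0.01 * np.tanh(x)
x_star, n_it = diis(g, np.zeros(2))
print(x_star, "converged in", n_it, "iterations")
```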
Michael L. Hoppus; Andrew J. Lister
2002-01-01
A Landsat TM classification method (iterative guided spectral class rejection) produced a forest cover map of southern West Virginia that provided the stratification layer for producing estimates of timberland area from Forest Service FIA ground plots using a stratified sampling technique. These same high quality and expensive FIA ground plots provided ground reference...
Anti-alias filter in AORSA for modeling ICRF heating of DT plasmas in ITER
NASA Astrophysics Data System (ADS)
Berry, L. A.; Batchelor, D. B.; Jaeger, E. F.; RF SciDAC Team
2011-10-01
The spectral wave solver AORSA has been used extensively to model full-field, ICRF heating scenarios for DT plasmas in ITER. In these scenarios, the tritium (T) second harmonic cyclotron resonance is positioned near the magnetic axis, where fast magnetosonic waves are efficiently absorbed by tritium ions. In some cases, a fundamental deuterium (D) cyclotron layer can also be located within the plasma, but close to the high field boundary. In this case, the existence of multiple ion cyclotron resonances presents a serious challenge for numerical simulation because short-wavelength, mode-converted waves can be excited close to the plasma edge at the ion-ion hybrid layer. Although the left hand circularly polarized component of the wave field is partially shielded from the fundamental D resonance, some power penetrates, and a small fraction (typically <10%) can be absorbed by the D ions. We find that an anti-aliasing filter is required in AORSA to calculate this fraction correctly while including up-shift and down-shift in the parallel wave spectrum. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.
Analytical Models of Cross-Layer Protocol Optimization in Real-Time Wireless Sensor Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Hortos, William S.
The real-time interactions among the nodes of a wireless sensor network (WSN) to cooperatively process data from multiple sensors are modeled. Quality-of-service (QoS) metrics are associated with the quality of fused information: throughput, delay, packet error rate, etc. Multivariate point process (MVPP) models of discrete random events in WSNs establish stochastic characteristics of optimal cross-layer protocols. Discrete-event, cross-layer interactions in mobile ad hoc network (MANET) protocols have been modeled using a set of concatenated design parameters and associated resource levels by the MVPPs. Characterization of the "best" cross-layer designs for a MANET is formulated by applying the general theory of martingale representations to controlled MVPPs. Performance is described in terms of concatenated protocol parameters and controlled through conditional rates of the MVPPs. Modeling limitations to determination of closed-form solutions versus explicit iterative solutions for ad hoc WSN controls are examined.
Simulations of carbon sputtering in fusion reactor divertor plates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marian, J; Zepeda-Ruiz, L A; Gilmer, G H
2005-10-03
The interaction of edge plasma with material surfaces raises key issues for the viability of the International Thermonuclear Experimental Reactor (ITER) and future fusion reactors, including heat-flux limits, net material erosion, and impurity production. After exposure of the graphite divertor plate to the plasma in a fusion device, an amorphous C/H layer forms. This layer contains 20-30 atomic percent D/T bonded to C. Subsequent D/T impingement on this layer produces a variety of hydrocarbons that are sputtered back into the sheath region. We present molecular dynamics (MD) simulations of D/T impacts on an amorphous carbon layer as a function of ion energy and orientation, using the AIREBO potential. In particular, energies are varied between 10 and 150 eV to transition from chemical to physical sputtering. These results are used to quantify yield, hydrocarbon composition and eventual plasma contamination.
Compositional Verification of a Communication Protocol for a Remotely Operated Vehicle
NASA Technical Reports Server (NTRS)
Goodloe, Alwyn E.; Munoz, Cesar A.
2009-01-01
This paper presents the specification and verification in the Prototype Verification System (PVS) of a protocol intended to facilitate communication in an experimental remotely operated vehicle used by NASA researchers. The protocol is defined as a stack-layered composition of simpler protocols. It can be seen as the vertical composition of protocol layers, where each layer performs input and output message processing, and the horizontal composition of different processes concurrently inhabiting the same layer, where each process satisfies a distinct requirement. It is formally proven that the protocol components satisfy certain delivery guarantees. Compositional techniques are used to prove these guarantees also hold in the composed system. Although the protocol itself is not novel, the methodology employed in its verification extends existing techniques by automating the tedious and usually cumbersome part of the proof, thereby making the iterative design process of protocols feasible.
Interacting domain-specific languages with biological problem solving environments
NASA Astrophysics Data System (ADS)
Cickovski, Trevor M.
Iteratively developing a biological model and verifying results with lab observations has become standard practice in computational biology. This process is currently facilitated by biological Problem Solving Environments (PSEs), multi-tiered and modular software frameworks which traditionally consist of two layers: a computational layer written in a high-level language using design patterns, and a user interface layer which hides its details. Although PSEs have proven effective, they still impose some communication overhead between biologists refining their models through repeated comparison with experimental observations in vitro or in vivo, and programmers actually implementing model extensions and modifications within the computational layer. I illustrate the use of biological Domain-Specific Languages (DSLs) as a middle-level PSE tier to ameliorate this problem by providing experimentalists with the ability to iteratively test and develop their models with a higher degree of expressive power than a graphical interface offers, without requiring general-purpose programming knowledge. I develop two radically different biological DSLs: XML-based BIOLOGO will model biological morphogenesis using a cell-centered stochastic cellular automaton and translate into C++ modules for the object-oriented PSE COMPUCELL3D, and MDLab will provide a set of high-level Python libraries for running molecular dynamics simulations, using wrapped functionality from the C++ PSE PROTOMOL. I describe each language in detail, including its roles within the larger PSE and its expressibility in terms of representable phenomena, and discuss observations from users of the languages. Moreover, I use these studies to draw general conclusions about biological DSL development, including dependencies upon the goals of the corresponding PSE, strategies, and tradeoffs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budaev, V. P., E-mail: budaev@mail.ru
2016-12-15
Heat loads on the tungsten divertor targets in ITER and in tokamak power reactors reach ~10 MW m⁻² in the steady state of DT discharges, increasing to ~0.6–3.5 GW m⁻² under disruptions and ELMs. The results of high heat flux tests (HHFTs) of tungsten under such transient plasma heat loads are reviewed in the paper. The main attention is paid to description of the surface microstructure, recrystallization, and the morphology of the cracks on the target. Effects of melting, cracking of tungsten, drop erosion of the surface, and formation of corrugated and porous layers are observed. Production of submicron-sized tungsten dust and the effects of the inhomogeneous surface of tungsten on the plasma–wall interaction are discussed. In conclusion, the necessity of further HHFTs and investigations of the durability of tungsten under high pulsed plasma loads on the ITER divertor plates, including disruptions and ELMs, is stressed.
Mixed plasma species effects on Tungsten
NASA Astrophysics Data System (ADS)
Baldwin, Matt; Doerner, Russ; Nishijima, Daisuke; Ueda, Yoshio
2007-11-01
The diverted reactor exhaust in confinement machines like ITER and DEMO will consist of intense mixed plasmas of fusion species (D, T, He) and wall species (Be, C, W in ITER and W in DEMO), characterized by tremendous heat and particle fluxes. In both devices, the divertor walls are to be exposed to such plasma and must operate at high temperature for long durations. Tungsten, with its high melting point and low sputtering yield, is currently viewed as the leading choice for the divertor-wall material in this next-generation class of fusion devices, a view supported by the enormous amount of work that has been done to examine its performance in hydrogen isotope plasmas. However, studies of the more realistic scenario involving mixed species interactions are considerably fewer. Current experiments on the PISCES-B device are focused on these issues. The formation of Be-W alloys, He-induced nanoscopic morphology, and blistering, as well as the mitigation of these effects by Be and C layer formation, have all been observed. These results and the corresponding implications for ITER and DEMO will be presented.
Robust non-rigid registration algorithm based on local affine registration
NASA Astrophysics Data System (ADS)
Wu, Liyang; Xiong, Lei; Du, Shaoyi; Bi, Duyan; Fang, Ting; Liu, Kun; Wu, Dongpeng
2018-04-01
To address the low precision and slow convergence of traditional point-set non-rigid registration algorithms on data with complex local deformations, this paper proposes a robust non-rigid registration algorithm based on local affine registration. The algorithm uses a hierarchical iterative method to complete the point-set non-rigid registration from coarse to fine. In each iteration, the sub data point sets and sub model point sets are divided and the shape control points of each sub point set are updated. A control-point-guided affine ICP algorithm then solves the local affine transformation between the corresponding sub point sets. Next, the local affine transformation obtained in the previous step is used to update the sub data point sets and their shape control point sets. When the algorithm reaches the maximum iteration layer K, the loop ends and the updated sub data point sets are output. Experimental results demonstrate that the accuracy and convergence of our algorithm are greatly improved compared with traditional point-set non-rigid registration algorithms.
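The building block of each hierarchical layer is an affine ICP step: match nearest neighbours, then solve a linear least-squares problem for the affine transform. The sketch below is a plain (non-hierarchical, non-control-point) affine ICP on synthetic 2-D points, not the authors' algorithm; it only illustrates that inner iteration.

```python
import numpy as np
from scipy.spatial import cKDTree

def affine_icp(data, model, n_iter=30):
    """Basic affine ICP: alternately match nearest neighbours and solve a
    least-squares affine transform mapping the data points onto the model."""
    tree = cKDTree(model)
    src = data.copy()
    A = np.eye(2); t = np.zeros(2)
    for _ in range(n_iter):
        _, idx = tree.query(src)                       # current correspondences
        target = model[idx]
        # Solve [A | t] in least squares so that target ~= src @ A.T + t.
        X = np.hstack([src, np.ones((len(src), 1))])
        sol, *_ = np.linalg.lstsq(X, target, rcond=None)
        A_step, t_step = sol[:2].T, sol[2]
        src = src @ A_step.T + t_step
        A, t = A_step @ A, A_step @ t + t_step         # accumulate the transform
    return A, t, src

# Toy example: model is an affinely transformed, noisy copy of the data.
rng = np.random.default_rng(0)
data = rng.uniform(-1, 1, (300, 2))
A_true = np.array([[1.1, 0.2], [-0.1, 0.9]])
model = data @ A_true.T + np.array([0.3, -0.2]) + rng.normal(0, 0.005, (300, 2))
A_est, t_est, aligned = affine_icp(data, model)
print(np.round(A_est, 3), np.round(t_est, 3))
```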
NASA Astrophysics Data System (ADS)
Prokopec, R.; Humer, K.; Fillunger, H.; Maix, R. K.; Weber, H. W.
2006-03-01
Fiber reinforced plastics will be used as insulation systems for the superconducting magnet coils of ITER. The fast neutron and gamma radiation environment present at the magnet location will lead to serious material degradation, particularly of the insulation. For this reason, advanced radiation-hard resin systems are of special interest. In this study various R-glass fiber / Kapton reinforced DGEBA epoxy and cyanate ester composites fabricated by the vacuum pressure impregnation method were investigated. All systems were irradiated at ambient temperature (340 K) in the TRIGA reactor (Vienna) to a fast neutron fluence of 1×10²² m⁻² (E > 0.1 MeV). Short-beam shear and static tensile tests were carried out at 77 K prior to and after irradiation. In addition, tension-tension fatigue measurements were used in order to assess the mechanical performance of the insulation systems under the pulsed operation conditions of ITER. For the cyanate ester based system the influence of interleaving Kapton layers on the static and dynamic material behavior was investigated as well.
Melt layer erosion of pure and lanthanum doped tungsten under VDE-like high heat flux loads
NASA Astrophysics Data System (ADS)
Yuan, Y.; Greuner, H.; Böswirth, B.; Luo, G.-N.; Fu, B. Q.; Xu, H. Y.; Liu, W.
2013-07-01
Heat loads expected for VDEs in ITER were applied in the neutral beam facility GLADIS at IPP Garching. Several ~3 mm thick rolled pure W and W-1 wt% La2O3 plates were exposed to pulsed hydrogen beams with a central heat flux of 23 MW/m2 for 1.5-1.8 s. The melting thresholds are determined, and melt layer motion as well as material structure evolution is shown. The melting thresholds of the two W grades are very close in this experimental setup. Many large bubbles with diameters from several μm to several tens of μm were observed in the re-solidified layer of W, and they spread deeper with increasing heat flux. However, for W-1 wt% La2O3, no large bubbles were found in the corrugated melt layer. The underlying mechanisms related to melt layer motion and bubble formation are tentatively discussed based on a comparison of the erosion characteristics of the two W grades.
A microwave scattering model for layered vegetation
NASA Technical Reports Server (NTRS)
Karam, Mostafa A.; Fung, Adrian K.; Lang, Roger H.; Chauhan, Narinder S.
1992-01-01
A microwave scattering model was developed for layered vegetation based on an iterative solution of the radiative transfer equation up to the second order to account for multiple scattering within the canopy and between the ground and the canopy. The model is designed to operate over a wide frequency range for both deciduous and coniferous forest and to account for the branch size distribution, leaf orientation distribution, and branch orientation distribution for each size. The canopy is modeled as a two-layered medium above a rough interface. The upper layer is the crown containing leaves, stems, and branches. The lower layer is the trunk region modeled as randomly positioned cylinders with a preferred orientation distribution above an irregular soil surface. Comparisons of this model with measurements from deciduous and coniferous forests show good agreement at several frequencies for both like and cross polarizations. Major features of the model needed to realize the agreement include allowance for: (1) branch size distribution, (2) second-order effects, and (3) tree component models valid over a wide range of frequencies.
Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation
NASA Astrophysics Data System (ADS)
Litaker, Eric T.
1994-12-01
The axisymmetric heat equation, resulting from a point-source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
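The interplay between a Gauss-Seidel smoother and a coarse-grid correction is easiest to see on the 1-D Poisson problem. The sketch below is a generic recursive V-cycle with full-weighting restriction and linear interpolation, not the axisymmetric FVE discretization of the thesis.

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps):
    """Lexicographic Gauss-Seidel sweeps for -u'' = f with Dirichlet boundaries."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def v_cycle(u, f, h, pre=2, post=2):
    """One recursive V-cycle: pre-smooth, restrict the residual, correct, post-smooth."""
    n = len(u) - 1
    u = gauss_seidel(u, f, h, pre)
    if n <= 2:
        return gauss_seidel(u, f, h, 50)               # "exact" coarse solve
    r = np.zeros_like(u)                               # residual r = f + u''
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
    rc = np.zeros(n // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])   # full weighting
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)                # coarse correction
    e = np.zeros_like(u)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])                        # linear interpolation
    return gauss_seidel(u + e, f, h, post)

# Solve -u'' = pi^2 sin(pi x) on (0,1) with u(0)=u(1)=0; exact solution sin(pi x).
n = 128
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, 1.0 / n)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```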
NASA Astrophysics Data System (ADS)
Entler, S.; Duran, I.; Kocan, M.; Vayakis, G.
2017-07-01
Three vacuum vessel sectors in ITER will be instrumented with outer vessel steady-state magnetic field sensors. Each sensor unit features a pair of metallic Hall sensors with a sensing layer made of bismuth to measure the tangential and normal components of the local magnetic field. The influence of temperature and magnetic field on the Hall coefficient was tested for the temperature range from 25 to 250 °C and the magnetic field range from 0 to 0.5 T. A magnetic-field-independent fit of the normalized temperature dependence of the Hall coefficient was found, and a model of the functional dependence of the Hall coefficient over a wide range of temperature and magnetic field was built in order to simplify the calibration procedure.
Manufacture and Quality Control of Insert Coil with Real ITER TF Conductor
Ozeki, H.; Isono, T.; Uno, Y.; ...
2016-03-02
JAEA successfully completed the manufacture of the toroidal field (TF) insert coil (TFIC) for a performance test of the ITER TF conductor in the final design, in cooperation with Hitachi, Ltd. The TFIC is a single-layer 8.875-turn solenoid coil with a 1.44-m diameter. It will be tested with a 68-kA current in a 13-T external magnetic field. The TFIC was manufactured in the following order: winding of the TF conductor, lead bending, fabrication of the electrical termination, heat treatment, turn insulation, installation of the coil into the support mandrel structure, vacuum pressure impregnation (VPI), structure assembly, and instrumentation. In this presentation, the manufacturing process and quality control status for the TFIC are reported.
Parabolized Navier-Stokes solutions of separation and trailing-edge flows
NASA Technical Reports Server (NTRS)
Brown, J. L.
1983-01-01
A robust, iterative solution procedure is presented for the parabolized Navier-Stokes or higher order boundary layer equations as applied to subsonic viscous-inviscid interaction flows. The robustness of the present procedure is due, in part, to an improved algorithmic formulation. The present formulation is based on a reinterpretation of stability requirements for this class of algorithms and requires only second order accurate backward or central differences for all streamwise derivatives. Upstream influence is provided for through the algorithmic formulation and iterative sweeps in x. The primary contribution to robustness, however, is the boundary condition treatment, which imposes global constraints to control the convergence path. Discussed are successful calculations of subsonic, strong viscous-inviscid interactions, including separation. These results are consistent with Navier-Stokes solutions and triple deck theory.
NASA Astrophysics Data System (ADS)
Silva, João Carlos; Souto, Nuno; Cercas, Francisco; Dinis, Rui
An MMSE (Minimum Mean Square Error) DS-CDMA (Direct Sequence-Code Division Multiple Access) receiver coupled with a low-complexity iterative interference suppression algorithm was devised for a MIMO/BLAST (Multiple Input, Multiple Output / Bell Laboratories Layered Space Time) system in order to improve system performance, considering frequency selective fading channels. The scheme is compared against the simple MMSE receiver, for both QPSK and 16QAM modulations, under SISO (Single Input, Single Output) and MIMO systems, the latter with 2Tx by 2Rx and 4Tx by 4Rx antennas (MIMO order 2 and 4, respectively). To assess its performance in an existing system, the uncoded UMTS HSDPA (High Speed Downlink Packet Access) standard was considered.
A numerical scheme to solve unstable boundary value problems
NASA Technical Reports Server (NTRS)
Kalnay-Rivas, E.
1977-01-01
The considered scheme makes it possible to determine an unstable steady state solution in cases in which, because of lack of symmetry, such a solution cannot be obtained analytically, and other time integration or relaxation schemes, because of instability, fail to converge. The iterative solution of a single complex equation is discussed and a nonlinear system of equations is considered. Described applications of the scheme are related to a steady state solution with shear instability, an unstable nonlinear Ekman boundary layer, and the steady state solution of a baroclinic atmosphere with asymmetric forcing. The scheme makes use of forward and backward time integrations of the original spatial differential operators and of an approximation of the adjoint operators. Only two computations of the time derivative per iteration are required.
Corneal power evaluation after myopic corneal refractive surgery using artificial neural networks.
Koprowski, Robert; Lanza, Michele; Irregolare, Carlo
2016-11-15
The efficacy and wide availability of surgical techniques for refractive defect correction increase the number of patients who undergo this type of surgery. Regardless of that, with increasing age, more and more of these patients must also undergo cataract surgery. Accurate evaluation of corneal power is an extremely important element affecting the precision of intraocular lens (IOL) power calculation, and errors in this procedure could affect patients' quality of life and satisfaction with the service provided. The available devices able to measure corneal power have been shown to be unreliable after myopic refractive surgery. Artificial neural networks with error backpropagation and one hidden layer were proposed for corneal power prediction. The article analysed the features acquired from the Pentacam HR tomograph that were necessary to estimate the corneal power. Additionally, several billion iterations of artificial neural networks were conducted over several hundred simulations of different network configurations and different features derived from the Pentacam HR. The analysis was performed on a PC with an Intel® Xeon® X5680 3.33 GHz CPU in Matlab® Version 7.11.0.584 (R2010b) with Signal Processing Toolbox Version 7.1 (R2010b), Neural Network Toolbox 7.0 (R2010b) and Statistics Toolbox (R2010b). A total corneal power prediction error was obtained for 172 patients (113 patients forming the training set and 59 patients in the test set) with an average age of 32 ± 9.4 years, 67% of them men. The error was at an average level of 0.16 ± 0.14 dioptres and its maximum value did not exceed 0.75 dioptres. The Pentacam parameters (measurement results) providing the above result are the tangential anterior/posterior, the corneal net power and the equivalent K-reading power. The analysis time for a single patient (a single eye) did not exceed 0.1 s, whereas the time of network training was about 3 s for 1000 iterations (the number of neurons in the hidden layer was 400).
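A one-hidden-layer regression network trained with error backpropagation can be written out in a few lines. The sketch below uses synthetic features and targets (not Pentacam data), a tanh hidden layer and plain gradient descent; the layer sizes and learning rate are arbitrary choices for illustration.

```python
import numpy as np

# Minimal one-hidden-layer regression network trained with backpropagation.
# Features and targets are synthetic stand-ins for tomograph-derived inputs.
rng = np.random.default_rng(0)
n_samples, n_features, n_hidden = 113, 4, 32
X = rng.standard_normal((n_samples, n_features))
y = (0.6 * X[:, 0] - 0.3 * X[:, 1] ** 2 + 0.1 * X[:, 2]).reshape(-1, 1)

W1 = rng.standard_normal((n_features, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, 1)) * 0.1
b2 = np.zeros(1)
lr = 0.05

for epoch in range(3000):                      # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)                   # hidden layer
    pred = h @ W2 + b2                         # linear output layer
    err = pred - y
    # Backpropagate the mean-squared-error gradient through both layers.
    dW2 = h.T @ err / n_samples
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)         # tanh derivative
    dW1 = X.T @ dh / n_samples
    db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"training MAE: {np.abs(pred - y).mean():.3f} (arbitrary units)")
```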
Is there radar evidence for liquid water on Mars?
NASA Technical Reports Server (NTRS)
Roth, L. E.
1984-01-01
The hypothesis that extraordinary radar smoothness of a lunar target suggests ground moisture rests on the assumption that, on the penetration-depth scale, the dielectric constant is an isotropic quantity. In other words, the planet's surface should have no vertical structure. Results of modeling exercises (based on the early lunar two-layer models) conducted to simulate the behavior of radar reflectivity at S-band over Solis Lacus, without manipulating the dielectric constant of the base layer (i.e., without adding moisture), are summarized. More sophisticated, explicit (rather than iterative) multi-layer models involving dust, duricrust, mollisol, and permafrost are under study. It is anticipated that a paradoxical situation will be reached in which each improvement in the model introduces additional ambiguities into the data interpretation.
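The two-layer reflectivity model referred to above is, at normal incidence, the classical thin-film (Airy) formula combining the two interface reflection coefficients with the phase accumulated in the upper layer. The sketch below evaluates it for assumed dielectric constants, layer thicknesses and wavelength, not the Solis Lacus parameters.

```python
import numpy as np

# Normal-incidence reflectivity of a homogeneous layer of thickness d over a
# half-space: the classic two-layer model.  All numbers are illustrative.
def two_layer_reflectivity(eps1, eps2, d, wavelength):
    n1, n2 = np.sqrt(eps1 + 0j), np.sqrt(eps2 + 0j)
    r01 = (1 - n1) / (1 + n1)                   # vacuum / upper-layer interface
    r12 = (n1 - n2) / (n1 + n2)                 # upper-layer / base interface
    phase = np.exp(2j * 2 * np.pi * n1 * d / wavelength)
    r = (r01 + r12 * phase) / (1 + r01 * r12 * phase)
    return np.abs(r) ** 2

wavelength = 0.126                               # S-band, metres (assumed)
for d in np.linspace(0.0, 1.0, 6):
    refl = two_layer_reflectivity(3.0, 8.0, d, wavelength)
    print(f"d = {d:.2f} m  reflectivity = {refl:.3f}")
```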
Arc detection for the ICRF system on ITER
NASA Astrophysics Data System (ADS)
D'Inca, R.
2011-12-01
The ICRF system for ITER is designed to respect the high-voltage breakdown limits. However, arcs can still statistically happen and must be quickly detected and suppressed by shutting the RF power down. To conceive a reliable and efficient detector, an analysis of the arc mechanism is necessary to find a unique arc signature. Numerous systems have been conceived to address the issues of arc detection: VSWR-based detectors, RF noise detectors, sound detectors, optical detectors, and S-matrix-based detectors. Until now, none of them has succeeded in demonstrating the fulfillment of all requirements, and the studies for ITER now follow three directions: improvement of the existing concepts to fix their flaws, development of new, theoretically fully compliant detectors (like the GUIDAR), and combination of several detectors to benefit from the advantages of each of them. Together with the physical and engineering challenges, the development of an arc detection system for ITER raises methodological concerns about extrapolating the results from basic experiments and present machines to the ITER-scale ICRF system and about conducting a relevant risk analysis.
Joanny, M; Salasca, S; Dapena, M; Cantone, B; Travère, J M; Thellier, C; Fermé, J J; Marot, L; Buravand, O; Perrollaz, G; Zeile, C
2012-10-01
ITER first mirrors (FMs), as the first components of most ITER optical diagnostics, will be exposed to high plasma radiation flux and neutron load. To reduce the FM heating and optical surface deformation induced during ITER operation, the use of suitable materials and a cooling system is foreseen. The calculations performed for different materials and FM designs and geometries (100 mm and 200 mm) show that the use of CuCrZr and TZM, together with a complex integrated cooling system, can efficiently limit the FM heating and reduce the optical surface deformation under plasma radiation flux and neutron load. These investigations were used to evaluate, for the ITER equatorial port visible/infrared wide-angle viewing system, the impact of the change in FM properties during operation on the main optical performance of the instrument. The results obtained are presented and discussed.
NASA Astrophysics Data System (ADS)
Rahbarimanesh, Saeed; Brinkerhoff, Joshua
2017-11-01
The mutual interaction of shear layer instabilities and phase change in a two-dimensional cryogenic cavitating mixing layer is investigated using a numerical model. The developed model employs the homogeneous equilibrium mixture (HEM) approach in a density-based framework to compute the temperature-dependent cavitation field for liquefied natural gas (LNG). Thermal and baroclinic effects are captured via iterative coupled solution of the governing equations with dynamic thermophysical models that accurately capture the properties of LNG. The mixing layer is simulated for vorticity-thickness Reynolds numbers of 44 to 215 and cavitation numbers of 0.1 to 1.1. Attached cavity structures develop on the splitter plate followed by roll-up of the separated shear layer via the well-known Kelvin-Helmholtz mode, leading to streamwise accumulation of vorticity and eventual shedding of discrete vortices. Cavitation occurs as vapor cavities nucleate and grow from the low-pressure cores in the rolled-up vortices. Thermal effects and baroclinic vorticity production are found to have significant impacts on the mixing layer instability and cavitation processes.
NASA Astrophysics Data System (ADS)
Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.
2014-07-01
The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the Gradient-based method have been developed. We present a detailed comparison of our reconstructors both in terms of quality and speed performance in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
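Of the solvers named for the tomography step, the Kaczmarz method is the simplest to write down: it sweeps over the rows of the system and projects the current iterate onto one row's hyperplane at a time. The sketch below runs it on a small synthetic consistent system, not on wavefront-sensor data.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=50, relax=1.0):
    """Classical cyclic Kaczmarz iteration for A x = b."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0.0:
                # Project x onto the hyperplane {y : A[i] @ y = b[i]}.
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Toy tomography-like problem: overdetermined but consistent.
rng = np.random.default_rng(3)
A = rng.standard_normal((200, 40))
x_true = rng.standard_normal(40)
b = A @ x_true
print("residual norm:", np.linalg.norm(A @ kaczmarz(A, b) - b))
```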
NASA Astrophysics Data System (ADS)
Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Fantz, U.; Franzen, P.; Minea, T.
2014-10-01
The development of a large area (A_source,ITER = 0.9 × 2 m²) hydrogen negative ion (NI) source constitutes a crucial step in construction of the neutral beam injectors of the international fusion reactor ITER. To understand the plasma behaviour in the boundary layer close to the extraction system, the 3D PIC MCC code ONIX is exploited. Direct cross-checked analysis of the simulation and experimental results from the ITER-relevant BATMAN source testbed with a smaller area (A_source,BATMAN ≈ 0.32 × 0.59 m²) has been conducted for a low perveance beam, but with a full set of plasma parameters available. ONIX has been partially benchmarked by comparison to the results obtained using the commercial particle tracing code for positive ion extraction KOBRA3D. Very good agreement has been found in terms of meniscus position and its shape for simulations of different plasma densities. The influence of the initial plasma composition on the final meniscus structure was then investigated for NIs. As expected from the Child-Langmuir law, the results show that not only does the extraction potential play a crucial role in the meniscus formation, but also the initial plasma density and its electronegativity. For the given parameters, the calculated meniscus is located a few mm downstream of the plasma grid aperture, provoking direct NI extraction. Most of the surface produced NIs do not reach the plasma bulk, but move directly towards the extraction grid guided by the extraction field. Even for artificially increased electronegativity of the bulk plasma the extracted NI current from this region is low. This observation indicates a high relevance of the direct NI extraction. These calculations show that the extracted NI current from the bulk region is low even if a complete ion-ion plasma is assumed, meaning that direct extraction from surface produced ions should be present in order to obtain sufficiently high extracted NI current density. The calculated extracted currents, both ions and electrons, agree rather well with the experiment.
Automatic target recognition using a feature-based optical neural network
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin
1992-01-01
An optical neural network based upon the Neocognitron paradigm (K. Fukushima et al. 1983) is introduced. A novel aspect of the architectural design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator and updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intra-class fault tolerance and inter-class discrimination is achieved. A detailed system description is provided. Experimental demonstration of a two-layer neural network for space objects discrimination is also presented.
Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming
2018-01-01
There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation, with the non-redundant layers and the redundant layer expanded by the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant gain in the enhancement-measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than that of existing super-resolution reconstruction methods in terms of visual and accuracy improvements. PMID:29414893
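The iterative back-projection step used for the layer expansion can be sketched on its own: simulate the degradation of the current high-resolution estimate, and back-project the low-resolution error. The code below is a generic single-image IBP loop with an assumed Gaussian blur and factor-2 scaling, not the AMDE-SR pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def iterative_back_projection(low_res, scale=2, n_iter=30, step=0.5):
    """Minimal IBP sketch: refine a high-resolution estimate so that its
    simulated degradation matches the observed low-resolution image."""
    high = zoom(low_res, scale, order=1)                   # initial upsampled guess
    for _ in range(n_iter):
        simulated = zoom(gaussian_filter(high, 1.0), 1.0 / scale, order=1)
        error = low_res - simulated
        high += step * zoom(error, scale, order=1)         # back-project the error
    return high

# Synthetic test: degrade a known high-resolution pattern, then reconstruct.
x = np.linspace(0, 4 * np.pi, 128)
truth = np.outer(np.sin(x), np.cos(x))
observed = zoom(gaussian_filter(truth, 1.0), 0.5, order=1)
recovered = iterative_back_projection(observed)
print("RMSE:", np.sqrt(np.mean((recovered - truth) ** 2)))
```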
Exploiting parallel computing with limited program changes using a network of microcomputers
NASA Technical Reports Server (NTRS)
Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.
1985-01-01
Network computing and multiprocessor computers are two discernible trends in parallel processing. The computational behavior of an iterative distributed process in which some subtasks are completed later than others because of an imbalance in computational requirements is of significant interest. The effects of asynchronous processing were studied. A small existing program was converted to perform finite element analysis by distributing substructure analysis over a network of four Apple IIe microcomputers connected to a shared disk, simulating a parallel computer. The substructure analysis uses an iterative, fully stressed, structural resizing procedure. A framework of beams divided into three substructures is used as the finite element model. The effects of asynchronous processing on the convergence of the design variables are determined by not resizing particular substructures on various iterations.
NASA Astrophysics Data System (ADS)
Akolkar, A.; Petrasch, J.; Finck, S.; Rahmatian, N.
2018-02-01
An inverse analysis of the phosphor layer of a commercially available, conformally coated, white LED is done based on tomographic and spectrometric measurements. The aim is to determine the radiative transfer coefficients of the phosphor layer from the measurements of the finished device, with minimal assumptions regarding the composition of the phosphor layer. These results can be used for subsequent opto-thermal modelling and optimization of the device. For this purpose, multiple integrating sphere and gonioradiometric measurements are done to obtain statistical bounds on spectral radiometric values and angular color distributions for ten LEDs belonging to the same color bin of the product series. Tomographic measurements of the LED package are used to generate a tetrahedral grid of the 3D LED geometry. A radiative transfer model using Monte Carlo Ray Tracing in the tetrahedral grid is developed. Using a two-wavelength model consisting of a blue emission wavelength and a yellow, Stokes-shifted re-emission wavelength, the angular color distribution of the LED is simulated over wide ranges of the absorption and scattering coefficients of the phosphor layer, for the blue and yellow wavelengths. Using a two-step, iterative space search, combinations of the radiative transfer coefficients are obtained for which the simulations are consistent with the integrating sphere and gonioradiometric measurements. The results show an inverse relationship between the scattering and absorption coefficients of the phosphor layer for blue light. Scattering of yellow light acts as a distribution and loss mechanism for yellow light and affects the shape of the angular color distribution significantly, especially at larger viewing angles. The spread of feasible coefficients indicates that measured optical behavior of the LEDs may be reproduced using a range of combinations of radiative coefficients. Given that coefficients predicted by the Mie theory usually must be corrected in order to reproduce experimental results, these results indicate that a more complete model of radiative transfer in phosphor layers is required.
Design and applications of a multimodality image data warehouse framework.
Wong, Stephen T C; Hoo, Kent Soo; Knowlton, Robert C; Laxer, Kenneth D; Cao, Xinhau; Hawkins, Randall A; Dillon, William P; Arenson, Ronald L
2002-01-01
A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications--namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budaev, V. P., E-mail: budaev@mail.ru; Martynenko, Yu. V.; Khimchenko, L. N.
Targets made of ITER-grade 316L(N)-IG stainless steel and Russian-grade 12Cr18Ni10Ti stainless steel with a close composition were exposed at the QSPA-T plasma gun to plasma photonic radiation pulses simulating conditions of disruption mitigation in ITER. After a large number of pulses, modification of the stainless-steel surface was observed, such as the formation of a wavy structure, irregular roughness, and cracks on the target surface. X-ray and optical microscopic analyses of targets revealed changes in the orientation and dimensions of crystallites (grains) over a depth of up to 20 μm for 316L(N)-IG stainless steel after 200 pulses and up to 40 μm for 12Cr18Ni10Ti stainless steel after 50 pulses, which is significantly larger than the depth of the layer melted in one pulse (∼10 μm). In a series of 200 tests of ITER-grade 316L(N)-IG stainless steel, a linear increase in the height of irregularity (roughness) with increasing number of pulses at a rate of up to ∼1 μm per pulse was observed. No alteration in the chemical composition of the stainless-steel surface in the series of tests was revealed. A model is developed that describes the formation of wavy irregularities on the melted metal surface with allowance for the nonlinear stage of instability of the melted layer with a vapor/plasma flow above it. A decisive factor in this case is the viscous flow of the melted metal from the troughs to the tops of the wavy structure. The model predicts saturation of the growth of the wavy structure when its amplitude becomes comparable with its wavelength. Approaches to describing the observed stochastic relief and roughness of the stainless-steel surface formed in the series of tests are considered. The recurrence of the melting-solidification process, in which mechanisms of hill growth compete with the spreading of the material from the hills, can result in the formation of a stochastic relief.
High-performance equation solvers and their impact on finite element analysis
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. Dale, Jr.
1990-01-01
The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates, and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates, and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.
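For readers unfamiliar with the second class of solvers described above, the sketch below shows a conjugate gradient iteration with a diagonal (Jacobi) preconditioner; the tolerance, iteration cap, and dense-matrix storage are illustrative simplifications rather than the report's vectorized data structures.

```python
import numpy as np

def diag_precond_cg(A, b, tol=1e-8, max_iter=1000):
    """Conjugate gradient for A x = b with a diagonal (Jacobi) preconditioner M = diag(A)."""
    x = np.zeros_like(b)
    r = b - A @ x
    M_inv = 1.0 / np.diag(A)              # preconditioner: reciprocal of the diagonal
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p         # update search direction
        rz = rz_new
    return x
```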
High-performance equation solvers and their impact on finite element analysis
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. D., Jr.
1992-01-01
The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates, and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates, and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.
PREDICTING TURBINE STAGE PERFORMANCE
NASA Technical Reports Server (NTRS)
Boyle, R. J.
1994-01-01
This program was developed to predict turbine stage performance taking into account the effects of complex passage geometries. The method uses a quasi-3D inviscid-flow analysis iteratively coupled to calculated losses so that changes in losses result in changes in the flow distribution. In this manner the effects of both the geometry on the flow distribution and the flow distribution on losses are accounted for. The flow may be subsonic or shock-free transonic. The blade row may be fixed or rotating, and the blades may be twisted and leaned. This program has been applied to axial and radial turbines, and is helpful in the analysis of mixed flow machines. This program is a combination of the flow analysis programs MERIDL and TSONIC coupled to the boundary layer program BLAYER. The subsonic flow solution is obtained by a finite difference, stream function analysis. Transonic blade-to-blade solutions are obtained using information from the finite difference, stream function solution with a reduced flow factor. Upstream and downstream flow variables may vary from hub to shroud and provision is made to correct for loss of stagnation pressure. Boundary layer analyses are made to determine profile and end-wall friction losses. Empirical loss models are used to account for incidence, secondary flow, disc windage, and clearance losses. The total losses are then used to calculate stator, rotor, and stage efficiency. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 370/3033 under TSS with a central memory requirement of approximately 4.5 Megs of 8 bit bytes. This program was developed in 1985.
Optimal vibration control of a rotating plate with self-sensing active constrained layer damping
NASA Astrophysics Data System (ADS)
Xie, Zhengchao; Wong, Pak Kin; Lo, Kin Heng
2012-04-01
This paper proposes a finite element model for an optimally controlled constrained layer damped (CLD) rotating plate with a self-sensing technique and frequency-dependent material properties in both the time and frequency domains. Constrained layer damping with viscoelastic material can effectively reduce the vibration in rotating structures. However, most existing research models use a complex modulus approach to model the viscoelastic material, and an additional iterative approach, which is only available in the frequency domain, has to be used to include the material's frequency dependency. It is meaningful to model the viscoelastic damping layer in the rotating part by using anelastic displacement fields (ADF) in order to include the frequency dependency in both the time and frequency domains. Also, unlike previous models, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate in which the constraining layer is made of piezoelectric material to work as both the self-sensing sensor and the actuator under a linear quadratic regulator (LQR) controller. After being compared with verified data, this newly proposed finite element model is validated and could be used for future research.
Analysis of energy states in modulation doped multiquantum well heterostructures
NASA Technical Reports Server (NTRS)
Ji, G.; Henderson, T.; Peng, C. K.; Huang, D.; Morkoc, H.
1990-01-01
A precise and effective numerical procedure to model the band diagram of modulation doped multiquantum well heterostructures is presented. This method is based on a self-consistent iterative solution of the Schroedinger equation and the Poisson equation. It can be used rather easily in any arbitrary modulation-doped structure. In addition to confined energy subbands, the unconfined states can be calculated as well. Examples on realistic device structures are given to demonstrate capabilities of this procedure. The numerical results are in good agreement with experiments. With the aid of this method the transitions involving both the confined and unconfined conduction subbands in a modulation doped AlGaAs/GaAs superlattice, and in a strained layer InGaAs/GaAs superlattice are identified. These results represent the first observation of unconfined transitions in modulation doped multiquantum well structures.
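A minimal one-dimensional sketch of such a self-consistent Schrödinger-Poisson loop is shown below, in dimensionless units with Dirichlet boundaries, fixed subband occupations, and simple potential mixing; these choices are illustrative assumptions and not the authors' numerical procedure.

```python
import numpy as np

def schrodinger_poisson_1d(n_points=200, length=1.0, n_subbands=2,
                           occupation=1.0, mix=0.3, tol=1e-6, max_iter=200):
    """Toy self-consistent loop: diagonalize H = -d^2/dz^2 + V(z) on a grid,
    build the electron density from the lowest subbands, solve Poisson's
    equation d^2 V/dz^2 = -rho with V = 0 at both walls, and mix until converged."""
    dz = length / (n_points + 1)
    z = np.linspace(dz, length - dz, n_points)
    # Finite-difference second-derivative operator with Dirichlet boundaries
    lap = (np.diag(-2.0 * np.ones(n_points)) +
           np.diag(np.ones(n_points - 1), 1) +
           np.diag(np.ones(n_points - 1), -1)) / dz**2
    v_hartree = np.zeros(n_points)
    for _ in range(max_iter):
        H = -lap + np.diag(v_hartree)
        energies, psi = np.linalg.eigh(H)            # subband energies and envelopes
        psi = psi / np.sqrt(dz)                      # normalize so sum |psi|^2 dz = 1
        rho = occupation * np.sum(psi[:, :n_subbands]**2, axis=1)
        v_new = np.linalg.solve(lap, -rho)           # Poisson solve
        if np.max(np.abs(v_new - v_hartree)) < tol:
            break
        v_hartree = (1 - mix) * v_hartree + mix * v_new   # damped update for stability
    return z, energies[:n_subbands], v_hartree
```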
Modeling place field activity with hierarchical slow feature analysis
Schönfeld, Fabian; Wiskott, Laurenz
2015-01-01
What are the computational laws of hippocampal activity? In this paper we argue for the slowness principle as a fundamental processing paradigm behind hippocampal place cell firing. We present six different studies from the experimental literature, performed with real-life rats, that we replicated in computer simulations. Each of the chosen studies allows rodents to develop stable place fields and then examines a distinct property of the established spatial encoding: adaptation to cue relocation and removal; direction-dependent firing in the linear track and open field; and morphing and scaling the environment itself. Simulations are based on a hierarchical Slow Feature Analysis (SFA) network topped by an independent component analysis (ICA) output layer. The slowness principle is shown to account for the main findings of the presented experimental studies. The SFA network generates its responses using raw visual input only, which adds to its biological plausibility but requires experiments performed in light conditions. Future iterations of the model will thus have to incorporate additional information, such as path integration and grid cell activity, in order to be able to also replicate studies that take place during darkness. PMID:26052279
NASA Technical Reports Server (NTRS)
Hyer, M. W.; Charette, R. F.
1987-01-01
To increase the effectiveness and efficiency of fiber-reinforced materials, the use of fibers in a curvilinear rather than the traditional straight-line format is explored. The load capacity of a laminated square plate with a central circular hole loaded in tension is investigated. The orientation of the fibers is chosen so that the fibers in a particular layer are aligned with the principal stress directions in that layer. Finite elements and an iteration scheme are used to find the fiber orientation. A noninteracting maximum strain criterion is used to predict load capacity. The load capacities of several plates with different curvilinear fiber formats are compared with the capacities of more conventional straight-line format designs. It is found that the most practical curvilinear design sandwiches a group of fibers in a curvilinear format between a pair of +/-45 degree layers. This design has a 60% greater load capacity than a conventional quasi-isotropic design with the same number of layers. The +/-45 degree layers are necessary to prevent matrix cracking in the curvilinear layers due to stresses perpendicular to the fibers in those layers. Greater efficiencies are achievable with composite structures than now realized.
Stokes-Doppler coherence imaging for ITER boundary tomography.
Howard, J; Kocan, M; Lisgo, S; Reichle, R
2016-11-01
An optical coherence imaging system is presently being designed for impurity transport studies and other applications on ITER. The wide variation in magnetic field strength and pitch angle (assumed known) across the field of view generates additional Zeeman-polarization-weighting information that can improve the reliability of tomographic reconstructions. Because background reflected light will be somewhat depolarized, analysis of only the polarized fraction may be enough to provide a level of background suppression. We present the principles behind these ideas and some simulations that demonstrate how the approach might work on ITER. The views and opinions expressed herein do not necessarily reflect those of the ITER Organization.
Efficient solution of the simplified PN equations
Hamilton, Steven P.; Evans, Thomas M.
2014-12-23
We show new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
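As context for the comparison above, a bare-bones power (fission-source) iteration for the generalized eigenvalue problem M·φ = (1/k)·F·φ is sketched below with dense NumPy matrices; the normalization and stopping rule are illustrative, and the production solvers cited in the abstract replace the direct solve with preconditioned Krylov methods.

```python
import numpy as np

def power_iteration_k(M, F, tol=1e-8, max_iter=500):
    """Classic power iteration for M phi = (1/k) F phi: returns the dominant
    eigenvalue k and the corresponding flux vector phi."""
    phi = np.ones(M.shape[0])
    k = 1.0
    for _ in range(max_iter):
        source = F @ phi / k                    # scaled fission source
        phi_new = np.linalg.solve(M, source)    # "transport sweep" stand-in
        k_new = k * np.sum(F @ phi_new) / np.sum(F @ phi)
        if abs(k_new - k) < tol:
            return k_new, phi_new
        phi, k = phi_new / np.linalg.norm(phi_new), k_new
    return k, phi
```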
TH-AB-BRA-09: Stability Analysis of a Novel Dose Calculation Algorithm for MRI Guided Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelyak, O; Fallone, B; Cross Cancer Institute, Edmonton, AB
2016-06-15
Purpose: To determine the iterative deterministic solution stability of the Linear Boltzmann Transport Equation (LBTE) in the presence of magnetic fields. Methods: The LBTE with magnetic fields under investigation is derived using a discrete ordinates approach. The stability analysis is performed using analytical and numerical methods. Analytically, the spectral Fourier analysis is used to obtain the convergence rate of the source iteration procedures based on finding the largest eigenvalue of the iterative operator. This eigenvalue is a function of relevant physical parameters, such as magnetic field strength and material properties, and provides essential information about the domain of applicability required for clinically optimal parameter selection and maximum speed of convergence. The analytical results are reinforced by numerical simulations performed using the same discrete ordinates method in angle, and a discontinuous finite element spatial approach. Results: The spectral radius for the source iteration technique of the time independent transport equation with isotropic and anisotropic scattering centers inside infinite 3D medium is equal to the ratio of differential and total cross sections. The result is confirmed numerically by solving LBTE and is in full agreement with previously published results. The addition of magnetic field reveals that the convergence becomes dependent on the strength of magnetic field, the energy group discretization, and the order of anisotropic expansion. Conclusion: The source iteration technique for solving the LBTE with magnetic fields with the discrete ordinates method leads to divergent solutions in the limiting cases of small energy discretizations and high magnetic field strengths. Future investigations into non-stationary Krylov subspace techniques as an iterative solver will be performed as this has been shown to produce greater stability than source iteration. Furthermore, a stability analysis of a discontinuous finite element space-angle approach (which has been shown to provide the greatest stability) will also be investigated. Dr. B Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta bi-planar linac MR for commercialization).
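In standard transport notation (our notation, not taken verbatim from the abstract), the field-free result quoted above, that the source-iteration spectral radius equals the ratio of the scattering to the total cross section, is commonly written as follows.

```latex
% Source iteration: \phi^{(\ell+1)} = \mathcal{D}\,\Sigma_s\,\phi^{(\ell)} + \mathcal{D} q,
% whose convergence rate is governed by the scattering ratio c:
\rho_{\mathrm{SI}} \;=\; \frac{\Sigma_s}{\Sigma_t} \;=\; c \;\le\; 1,
\qquad \text{with convergence slowing as } c \to 1 .
```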
Electromagnetic Analysis of ITER Diagnostic Equatorial Port Plugs During Plasma Disruptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Y. Zhai, R. Feder, A. Brooks, M. Ulrickson, C.S. Pitcher and G.D. Loesser
2012-08-27
ITER diagnostic port plugs perform many functions, including structural support of diagnostic systems under high electromagnetic loads while allowing for diagnostic access to the plasma. The design of diagnostic equatorial port plugs (EPP) is largely driven by electromagnetic loads and the associated response of the EPP structure during plasma disruptions and VDEs. This paper summarizes results of transient electromagnetic analysis using Opera 3d in support of the design activities for the ITER diagnostic EPP. A complete distribution of disruption loads on the Diagnostic First Walls (DFWs), Diagnostic Shield Modules (DSMs) and the EPP structure, as well as the impact on the system design integration due to electrical contact among various EPP structural components, are discussed.
Observer-based distributed adaptive iterative learning control for linear multi-agent systems
NASA Astrophysics Data System (ADS)
Li, Jinsha; Liu, Sanyang; Li, Junmin
2017-10-01
This paper investigates the consensus problem for linear multi-agent systems from the viewpoint of two-dimensional systems when the state information of each agent is not available. An observer-based, fully distributed adaptive iterative learning protocol is designed in this paper. A local observer is designed for each agent, and it is shown that, without using any global information about the communication graph, all agents achieve consensus perfectly for any undirected connected communication graph as the number of iterations tends to infinity. A Lyapunov-like energy function is employed to facilitate the learning protocol design and property analysis. Finally, a simulation example is given to illustrate the theoretical analysis.
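The trial-to-trial update at the heart of iterative learning control can be illustrated with the simple P-type law below; the scalar plant, learning gain, and one-step error shift are illustrative assumptions and do not reproduce the paper's observer-based distributed protocol.

```python
import numpy as np

def p_type_ilc(plant, reference, n_trials=200, gain=2.0):
    """Basic P-type iterative learning control for a repeated finite-duration task:
    u_{k+1}(t) = u_k(t) + gain * e_k(t+1), i.e. the input at time t is corrected
    using the error it caused one step later."""
    u = np.zeros(len(reference))
    for _ in range(n_trials):
        y = plant(u)                  # run one trial with the current input
        e = reference - y             # tracking error over the whole trial
        u[:-1] += gain * e[1:]        # shifted update (plant has relative degree 1)
    return u

# Example: first-order discrete plant y[t+1] = 0.9*y[t] + 0.1*u[t]
def plant(u):
    y = np.zeros(len(u))
    for t in range(len(u) - 1):
        y[t + 1] = 0.9 * y[t] + 0.1 * u[t]
    return y

ref = np.sin(np.linspace(0.0, np.pi, 100))
u_learned = p_type_ilc(plant, ref)
```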
Layer-Based Approach for Image Pair Fusion.
Son, Chang-Hwan; Zhang, Xiao-Ping
2016-04-20
Recently, image pairs, such as noisy and blurred images or infrared and noisy images, have been considered as a solution to provide high-quality photographs under low lighting conditions. In this paper, a new method for decomposing the image pairs into two layers, i.e., the base layer and the detail layer, is proposed for image pair fusion. In the case of infrared and noisy images, simple naive fusion leads to unsatisfactory results due to the discrepancies in brightness and image structures between the image pair. To address this problem, a local contrast-preserving conversion method is first proposed to create a new base layer of the infrared image, which can have visual appearance similar to another base layer such as the denoised noisy image. Then, a new way of designing three types of detail layers from the given noisy and infrared images is presented. To estimate the noise-free and unknown detail layer from the three designed detail layers, the optimization framework is modeled with residual-based sparsity and patch redundancy priors. To better suppress the noise, an iterative approach that updates the detail layer of the noisy image is adopted via a feedback loop. This proposed layer-based method can also be applied to fuse another noisy and blurred image pair. The experimental results show that the proposed method is effective for solving the image pair fusion problem.
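The base/detail split that the method builds on can be sketched as below; a Gaussian low-pass is used as the base-layer extractor and a fixed blending weight fuses the detail layers, which is only a crude stand-in for the paper's contrast-preserving conversion and sparse-prior optimization.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def base_detail_split(image, sigma=4.0):
    """Decompose an image into a smooth base layer and a residual detail layer."""
    img = np.asarray(image, dtype=float)
    base = gaussian_filter(img, sigma)          # low-pass base layer
    return base, img - base                     # detail layer is the residual

def naive_pair_fusion(noisy, infrared, sigma=4.0, detail_weight=0.5):
    """Keep the noisy image's base layer and blend the two detail layers."""
    base_n, detail_n = base_detail_split(noisy, sigma)
    _, detail_ir = base_detail_split(infrared, sigma)
    return base_n + detail_weight * detail_n + (1.0 - detail_weight) * detail_ir
```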
NASA Astrophysics Data System (ADS)
Rusu, M. I.; Pardanaud, C.; Ferro, Y.; Giacometti, G.; Martin, C.; Addab, Y.; Roubin, P.; Minissale, M.; Ferri, L.; Virot, F.; Barrachin, M.; Lungu, C. P.; Porosnicu, C.; Dinca, P.; Lungu, M.; Köppen, M.; Hansen, P.; Linsmeier, Ch.
2017-07-01
This study demonstrates that Raman microscopy is a suitable technique for future post mortem analyses of JET and ITER plasma facing components. We focus here on laboratory deposited and bombarded samples of beryllium and beryllium carbides and start to build a reference spectral database for fusion relevant beryllium-based materials. We identified the beryllium phonon density of states, its second harmonic, and the E2g and B2g second harmonic and combination modes for defective beryllium in the spectral ranges 300-700 and 700-1300 cm-1, lying close to the Be-D modes of beryllium hydrides. We also identified the beryllium carbide signature, Be2C, combining Raman microscopy and DFT calculations. We have shown that, depending on the optical constants of the material probed, in-depth sensitivity at the nanometer scale can be achieved using different wavelengths. In this way, we demonstrate that multi-wavelength Raman microscopy is sensitive to in-depth stress caused by ion implantation (down to ≈30 nm under the surface for Be) and to the Be/C concentration (down to 400 nm or more under the surface for Be+C), which is a main contribution of this work. The depth resolution reached can then be adapted for studying the supersaturated surface layer found on tokamak deposits.
Numerical analysis of modified Central Solenoid insert design
Khodak, Andrei; Martovetsky, Nicolai; Smirnov, Aleksandre; ...
2015-06-21
The United States ITER Project Office (USIPO) is responsible for fabrication of the Central Solenoid (CS) for the ITER project. The ITER machine is currently under construction by seven parties in Cadarache, France. The CS Insert (CSI) project should provide a verification of the conductor performance in relevant conditions of temperature, field, currents and mechanical strain. The USIPO designed the CSI that will be tested at the Central Solenoid Model Coil (CSMC) Test Facility at JAEA, Naka. To validate the modified design we performed three-dimensional numerical simulations using a coupled solver for simultaneous structural, thermal and electromagnetic analysis. Thermal and electromagnetic simulations supported structural calculations, providing the necessary loads and strains. According to the current analysis, the design of the modified coil satisfies the ITER magnet structural design criteria for the following conditions: (1) room temperature, no current, (2) temperature 4K, no current, (3) temperature 4K, current 60 kA direct charge, and (4) temperature 4K, current 60 kA reverse charge. Fatigue life assessment analysis is performed for the alternating conditions of: temperature 4K, no current, and temperature 4K, current 45 kA direct charge. Results of fatigue analysis show that parts of the coil assembly can be qualified for up to 1 million cycles. Distributions of the Current Sharing Temperature (TCS) in the superconductor were obtained from numerical results using a parameterization of the critical surface in a form similar to that proposed for ITER. Lastly, special APDL scripts were developed for ANSYS allowing one-dimensional representation of TCS along the cable, as well as three-dimensional fields of TCS in the superconductor material. Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Moghaddam, Mahta; Pierce, Leland; Tabatabaeenejad, Alireza; Rodriguez, Ernesto
2005-01-01
Knowledge of subsurface characteristics such as permittivity variations and layering structure could provide a breakthrough in many terrestrial and planetary science disciplines. For Earth science, knowledge of subsurface and subcanopy soil moisture layers can enable the estimation of vertical flow in the soil column linking surface hydrologic processes with that in the subsurface. For planetary science, determining the existence of subsurface water and ice is regarded as one of the most critical information needs for the study of the origins of the solar system. The subsurface in general can be described as several near-parallel layers with rough interfaces. Each homogenous rough layer can be defined by its average thickness, permittivity, and rms interface roughness assuming a known surface spectral distribution. As the number and depth of layers increase, the number of measurements needed to invert for the layer unknowns also increases, and deeper penetration capability would be required. To nondestructively calculate the characteristics of the rough layers, a multifrequency polarimetric radar backscattering approach can be used. One such system is the one we have developed for data prototyping of the Microwave Observatory of Subcanopy and Subsurface (MOSS) mission concept. A tower-mounted radar makes backscattering measurements at VHF, UHF, and L-band frequencies. The radar is a pulsed CW system, which uses the same wideband antenna to transmit and receive the signals at all three frequencies. To focus the beam at various incidence angles within the beamwidth of the antenna, the tower is moved vertically and measurements made at each position. The signals are coherently summed to achieve focusing and image formation in the subsurface. This requires an estimate of wave velocity profiles. To solve the inverse scattering problem for the subsurface velocity profile simultaneously with radar focusing, we use an iterative technique based on a forward numerical solution of the layered rough surface problem. The layers are each defined in terms of a small number of unknown distributions as given above. An a priori estimate of the solution is first assumed, based on which the forward problem is solved for the backscattered measurements. This is compared with the measured data and, using iterative techniques, an update to the solution for the unknowns is calculated. The process continues until convergence is achieved. Numerical results will be shown using actual radar data acquired with the MOSS tower radar system in Arizona in Fall 2003, and compared with in-situ measurements.
Modeling Natural Space Ionizing Radiation Effects on External Materials
NASA Technical Reports Server (NTRS)
Alstatt, Richard L.; Edwards, David L.; Parker, Nelson C. (Technical Monitor)
2000-01-01
Predicting the effective life of materials for space applications has become increasingly critical with the drive to reduce mission cost. Programs have considered many solutions to reduce launch costs including novel, low mass materials and thin thermal blankets to reduce spacecraft mass. Determining the long-term survivability of these materials before launch is critical for mission success. This presentation will describe an analysis performed on the outer layer of the passive thermal control blanket of the Hubble Space Telescope. This layer had degraded for unknown reasons during the mission; however, ionizing radiation (IR) induced embrittlement was suspected. A methodology was developed which allowed direct comparison between the energy deposition of the natural environment and that of the laboratory generated environment. Commercial codes were used to predict the natural space IR environment, model energy deposition in the material from both natural and laboratory IR sources, and design the most efficient test. Results were optimized for total and local energy deposition with an iterative spreadsheet. This method has been used successfully for several laboratory tests at the Marshall Space Flight Center. The study showed that the natural space IR environment, by itself, did not cause the premature degradation observed in the thermal blanket.
Modeling natural space ionizing radiation effects on external materials
NASA Astrophysics Data System (ADS)
Altstatt, Richard L.; Edwards, David L.
2000-10-01
Predicting the effective life of materials for space applications has become increasingly critical with the drive to reduce mission cost. Programs have considered many solutions to reduce launch costs including novel, low mass materials and thin thermal blankets to reduce spacecraft mass. Determining the long-term survivability of these materials before launch is critical for mission success. This presentation will describe an analysis performed on the outer layer of the passive thermal control blanket of the Hubble Space Telescope. This layer had degraded for unknown reasons during the mission, however ionizing radiation (IR) induced embrittlement was suspected. A methodology was developed which allowed direct comparison between the energy deposition of the natural environment and that of the laboratory generated environment. Commercial codes were used to predict the natural space IR environment, model energy deposition in the material from both natural and laboratory IR sources, and design the most efficient test. Results were optimized for total and local energy deposition with an iterative spreadsheet. This method has been used successfully for several laboratory tests at the Marshall Space Flight Center. The study showed that the natural space IR environment, by itself, did not cause the premature degradation observed in the thermal blanket.
Erosion and Retention Properties of Beryllium
NASA Astrophysics Data System (ADS)
Doerner, R.; Grossman, A.; Luckhardt, S.; Serayderian, R.; Sze, F. C.; Whyte, D. G.
1997-11-01
Experiments in PISCES-B have investigated the erosion and hydrogen retention characteristics of beryllium. The sputtering yield is strongly influenced by trace amounts (≈1 percent) of intrinsic plasma impurities. At low sample exposure temperatures (below 250 °C), the beryllium surface remains free of contaminants and a sputtering yield similar to that of beryllium-oxide is measured. At higher exposure temperatures, impurities deposited on the surface can diffuse into the bulk and reduce their chance of subsequent erosion. These impurities form a surface layer mixed with beryllium which exhibits a reduced sputtering yield. Depth profile analysis has determined the composition and chemical bonding of the impurity layer. The hydrogen isotope retention of beryllium under ITER first wall (temperature = 200 °C, ion flux = 1 x 10^21 m^-2 s^-1) and baffle (temperature = 500 °C, ion flux = 1 x 10^22 m^-2 s^-1) conditions has been investigated. The retained deuterium saturates above a fluence of 10^23 m^-2 at about 4 x 10^20 m^-2 for the 200 °C exposure and at 2 x 10^20 m^-2 for the 500 °C case. The TMAP code is used to model the deuterium release characteristics.
A new method for designing shock-free transonic configurations
NASA Technical Reports Server (NTRS)
Sobieczky, H.; Fung, K. Y.; Seebass, A. R.; Yu, N. J.
1978-01-01
A method for the design of shock free supercritical airfoils, wings, and three dimensional configurations is described. Results illustrating the procedure in two and three dimensions are given. They include modifications to part of the upper surface of an NACA 64A410 airfoil that will maintain shock free flow over a range of Mach numbers for a fixed lift coefficient, and the modifications required on part of the upper surface of a swept wing with an NACA 64A410 root section to achieve shock free flow. While the results are given for inviscid flow, the same procedures can be employed iteratively with a boundary layer calculation in order to achieve shock free viscous designs. With a shock free pressure field the boundary layer calculation will be reliable and not complicated by the difficulties of shock wave boundary layer interaction.
Non-native three-dimensional block copolymer morphologies
Rahman, Atikur; Majewski, Pawel W.; Doerk, Gregory; ...
2016-12-22
Self-assembly is a powerful paradigm, wherein molecules spontaneously form ordered phases exhibiting well-defined nanoscale periodicity and shapes. However, the inherent energy-minimization aspect of self-assembly yields a very limited set of morphologies, such as lamellae or hexagonally packed cylinders. Here, we show how soft self-assembling materials—block copolymer thin films—can be manipulated to form a diverse library of previously unreported morphologies. In this iterative assembly process, each polymer layer acts as both a structural component of the final morphology and a template for directing the order of subsequent layers. Specifically, block copolymer films are immobilized on surfaces, and template successive layers through subtle surface topography. As a result, this strategy generates an enormous variety of three-dimensional morphologies that are absent in the native block copolymer phase diagram.
Information flow in layered networks of non-monotonic units
NASA Astrophysics Data System (ADS)
Schittler Neves, Fabio; Martim Schubert, Benno; Erichsen, Rubem, Jr.
2015-07-01
Layered neural networks are feedforward structures that yield robust parallel and distributed pattern recognition. Even though much attention has been paid to pattern retrieval properties in such systems, many aspects of their dynamics are not yet well characterized or understood. In this work we study, at different temperatures, the memory activity and information flows through layered networks in which the elements are the simplest binary odd non-monotonic function. Our results show that, considering a standard Hebbian learning approach, the network information content has its maximum always at the monotonic limit, even though the maximum memory capacity can be found at non-monotonic values for small enough temperatures. Furthermore, we show that such systems exhibit rich macroscopic dynamics, including not only fixed point solutions of its iterative map, but also cyclic and chaotic attractors that also carry information.
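A toy forward pass through such a layered network of binary odd non-monotonic units is sketched below; the threshold value, random couplings, and layer sizes are illustrative assumptions (the monotonic limit corresponds to a very large threshold).

```python
import numpy as np

def nonmonotonic_sign(h, theta=1.5):
    """Simplest binary odd non-monotonic transfer function: acts like sign(h) for
    small |h| and reverses sign once |h| exceeds the threshold theta."""
    return np.where(np.abs(h) < theta, np.sign(h), -np.sign(h))

def propagate(pattern, couplings, theta=1.5):
    """Feed a +/-1 pattern forward through the layers, one coupling matrix per step."""
    s = np.asarray(pattern, dtype=float)
    for J in couplings:
        s = nonmonotonic_sign(J @ s / np.sqrt(len(s)), theta)
    return s

# Example: three layers of 100 units with random Gaussian couplings
rng = np.random.default_rng(0)
layers = [rng.standard_normal((100, 100)) for _ in range(3)]
output = propagate(rng.choice([-1.0, 1.0], size=100), layers)
```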
Ginsburg, Shiphra; Eva, Kevin; Regehr, Glenn
2013-10-01
Although scores on in-training evaluation reports (ITERs) are often criticized for poor reliability and validity, ITER comments may yield valuable information. The authors assessed across-rotation reliability of ITER scores in one internal medicine program, ability of ITER scores and comments to predict postgraduate year three (PGY3) performance, and reliability and incremental predictive validity of attendings' analysis of written comments. Numeric and narrative data from the first two years of ITERs for one cohort of residents at the University of Toronto Faculty of Medicine (2009-2011) were assessed for reliability and predictive validity of third-year performance. Twenty-four faculty attendings rank-ordered comments (without scores) such that each resident was ranked by three faculty. Mean ITER scores and comment rankings were submitted to regression analyses; dependent variables were PGY3 ITER scores and program directors' rankings. Reliabilities of ITER scores across nine rotations for 63 residents were 0.53 for both postgraduate year one (PGY1) and postgraduate year two (PGY2). Interrater reliabilities across three attendings' rankings were 0.83 for PGY1 and 0.79 for PGY2. There were strong correlations between ITER scores and comments within each year (0.72 and 0.70). Regressions revealed that PGY1 and PGY2 ITER scores collectively explained 25% of variance in PGY3 scores and 46% of variance in PGY3 rankings. Comment rankings did not improve predictions. ITER scores across multiple rotations showed decent reliability and predictive validity. Comment ranks did not add to the predictive ability, but correlation analyses suggest that trainee performance can be measured through these comments.
NASA Astrophysics Data System (ADS)
Arakcheev, A. S.; Skovorodin, D. I.; Burdakov, A. V.; Shoshin, A. A.; Polosatkin, S. V.; Vasilyev, A. A.; Postupaev, V. V.; Vyacheslavov, L. N.; Kasatov, A. A.; Huber, A.; Mertens, Ph; Wirtz, M.; Linsmeier, Ch; Kreter, A.; Löwenhoff, Th; Begrambekov, L.; Grunin, A.; Sadovskiy, Ya
2015-12-01
A mathematical model of surface cracking under pulsed heat load was developed. The model correctly describes a smooth brittle-ductile transition. The elastic deformation is described in a thin-heated-layer approximation. The plastic deformation is described with the Hollomon equation. The time dependence of the deformation and stresses is described for one heating-cooling cycle for a material without initial plastic deformation. The model can be applied to tungsten manufactured according to ITER specifications. The model shows that the stability of stress-relieved tungsten deteriorates when the base temperature increases. This proved to be a result of the close ultimate tensile and yield strengths. For a heat load of arbitrary magnitude, a stability criterion was obtained in the form of a condition on the relation between the ultimate tensile and yield strengths.
Multi-Mbar Ramp Compression of Copper
NASA Astrophysics Data System (ADS)
Kraus, Rick; Davis, Jean-Paul; Seagle, Christopher; Fratanduono, Dayne; Swift, Damian; Eggert, Jon; Collins, Gilbert
2015-06-01
The cold curve is a critical component of equation of state models. Diamond anvil cell measurements can be used to determine isotherms, but these have generally been limited to pressures below 1 Mbar. The cold curve can also be extracted from Hugoniot data, but only with assumptions about the thermal pressure. As the National Ignition Facility will be using copper as an ablator material at pressures in excess of 10 Mbar, we need a better understanding of the high-density equation of state. Here we present ramp-wave compression experiments at the Sandia Z-Machine that we have used to constrain the isentrope of copper to a stress state of nearly 5 Mbar. We use the iterative Lagrangian analysis technique, developed by Rothman and Maw, to determine the stress-strain path. We also present a new iterative forward analysis (IFA) technique coupled to the ARES hydrocode that performs a non-linear optimization over the pressure drive and equation of state in order to match the free surface velocities. The IFA technique offers an advantage over iterative Lagrangian analysis for experiments with growing shocks or systems with time-dependent strength, which violate the assumptions of iterative Lagrangian analysis. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Modeling and simulation of thermally actuated bilayer plates
NASA Astrophysics Data System (ADS)
Bartels, Sören; Bonito, Andrea; Muliana, Anastasia H.; Nochetto, Ricardo H.
2018-02-01
We present a mathematical model of polymer bilayers that undergo large bending deformations when actuated by non-mechanical stimuli such as thermal effects. The simple model captures a large class of nonlinear bending effects and can be discretized with standard plate elements. We devise a fully practical iterative scheme and apply it to the simulation of folding of several practically useful compliant structures comprising thin elastic layers.
Cost-sensitive AdaBoost algorithm for ordinal regression based on extreme learning machine.
Riccardi, Annalisa; Fernández-Navarro, Francisco; Carloni, Sante
2014-10-01
In this paper, the well known stagewise additive modeling using a multiclass exponential (SAMME) boosting algorithm is extended to address problems where there exists a natural order in the targets using a cost-sensitive approach. The proposed ensemble model uses an extreme learning machine (ELM) model as a base classifier (with the Gaussian kernel and the additional regularization parameter). The closed form of the derived weighted least squares problem is provided, and it is employed to estimate analytically the parameters connecting the hidden layer to the output layer at each iteration of the boosting algorithm. Compared to the state-of-the-art boosting algorithms, in particular those using ELM as base classifier, the suggested technique does not require the generation of a new training dataset at each iteration. The adoption of the weighted least squares formulation of the problem has been presented as an unbiased and alternative approach to the already existing ELM boosting techniques. Moreover, the addition of a cost model for weighting the patterns, according to the order of the targets, enables the classifier to tackle ordinal regression problems further. The proposed method has been validated by an experimental study by comparing it with already existing ensemble methods and ELM techniques for ordinal regression, showing competitive results.
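The closed-form, cost-weighted least-squares step described above can be sketched as follows; the sigmoid hidden layer, the regularization parameter C, and the per-sample weights are illustrative choices (the paper uses a Gaussian-kernel ELM inside a SAMME-style boosting loop).

```python
import numpy as np

def weighted_elm_fit(X, T, weights, n_hidden=50, C=1.0, seed=0):
    """Extreme learning machine with a closed-form, sample-weighted least-squares
    solve for the output weights: beta = (H^T W H + I/C)^(-1) H^T W T."""
    rng = np.random.default_rng(seed)
    W_in = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.standard_normal(n_hidden)                    # random hidden biases (fixed)
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))            # hidden-layer activations
    W = np.diag(weights)                                 # per-sample cost weights
    beta = np.linalg.solve(H.T @ W @ H + np.eye(n_hidden) / C, H.T @ W @ T)
    return W_in, b, beta

def weighted_elm_predict(X, W_in, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))
    return H @ beta
```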
Development of a real-time system for ITER first wall heat load control
NASA Astrophysics Data System (ADS)
Anand, Himank; de Vries, Peter; Gribov, Yuri; Pitts, Richard; Snipes, Joseph; Zabeo, Luca
2017-10-01
The steady-state heat flux on the ITER first wall (FW) panels is limited by the heat removal capacity of the water cooling system. In the case of off-normal events (e.g. plasma displacement during H-L transitions), the heat loads are predicted to exceed the design limits (2-4.7 MW/m2). Intense heat loads are predicted on the FW even well before the burning plasma phase. Thus, a real-time (RT) FW heat load control system is mandatory from early plasma operation of the ITER tokamak. A heat load estimator based on the RT equilibrium reconstruction has been developed for the plasma control system (PCS). A scheme estimating the energy state for prescribed gaps, defined as the distance between the last closed flux surface (LCFS)/separatrix and the FW, is presented. The RT energy state is determined by the product of a weighted function of gap distance and the power crossing the plasma boundary. In addition, a heat load estimator assuming a simplified FW geometry and a parallel heat transport model in the scrape-off layer (SOL), benchmarked against a full 3-D magnetic field line tracer, is also presented.
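A deliberately simplified sketch of such a gap-weighted energy-state estimate is given below; the exponential gap weighting, decay timescale, and units are assumptions for illustration only and are not the ITER PCS implementation.

```python
import numpy as np

def first_wall_energy_state(gaps_m, p_sol_mw, lambda_m=0.05, dt_s=0.1,
                            previous_state=None, decay_s=1.0):
    """Toy first-wall heat-load indicator: weight the power crossing the boundary
    by an exponential function of each prescribed wall gap and accumulate it as a
    leaky 'energy state' per monitored panel."""
    gaps = np.asarray(gaps_m, dtype=float)
    weights = np.exp(-gaps / lambda_m)            # smaller gaps receive more of the SOL power
    incident = weights * p_sol_mw                 # toy estimate of power reaching each panel
    if previous_state is None:
        previous_state = np.zeros_like(gaps)
    # leaky integration: cooling removes accumulated energy on a 'decay_s' timescale
    return previous_state * np.exp(-dt_s / decay_s) + incident * dt_s
```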
2009 Space Shuttle Probabilistic Risk Assessment Overview
NASA Technical Reports Server (NTRS)
Hamlin, Teri L.; Canga, Michael A.; Boyer, Roger L.; Thigpen, Eric B.
2010-01-01
Loss of a Space Shuttle during flight has severe consequences, including loss of a significant national asset; loss of national confidence and pride; and, most importantly, loss of human life. The Shuttle Probabilistic Risk Assessment (SPRA) is used to identify risk contributors and their significance, thus assisting management in determining how to reduce risk. In 2006, an overview of the SPRA Iteration 2.1 was presented at PSAM 8 [1]. Like all successful PRAs, the SPRA is a living PRA and has undergone revisions since PSAM 8. The latest revision to the SPRA is Iteration 3.1, and it will not be the last as the Shuttle program progresses and more is learned. This paper discusses the SPRA scope, overall methodology, and results, as well as provides risk insights. The scope, assumptions, uncertainties, and limitations of this assessment provide risk-informed perspective to aid management's decision-making process. In addition, this paper compares the Iteration 3.1 analysis and results to the Iteration 2.1 analysis and results presented at PSAM 8.
Stability of the iterative solutions of integral equations as one phase freezing criterion.
Fantoni, R; Pastore, G
2003-10-01
A recently proposed connection between the threshold for the stability of the iterative solution of integral equations for the pair correlation functions of a classical fluid and the structural instability of the corresponding real fluid is carefully analyzed. Direct calculation of the Lyapunov exponent of the standard iterative solution of the hypernetted chain and Percus-Yevick integral equations for the one-dimensional (1D) hard-rod fluid shows the same behavior observed in 3D systems. Since no phase transition is allowed in such a 1D system, our analysis shows that the proposed one-phase criterion, at least in this case, fails. We argue that the observed proximity between the numerical and the structural instability in 3D originates from the enhanced structure present in the fluid but, in view of the arbitrary dependence on the iteration scheme, it seems difficult to relate the numerical stability analysis to a robust one-phase criterion for predicting a thermodynamic phase transition.
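The stability indicator referred to above is, in the scalar analogue, the Lyapunov exponent of an iterative map; the sketch below estimates it for a one-dimensional map (the logistic map is used purely as an example, not the integral-equation operator of the paper).

```python
import numpy as np

def lyapunov_exponent(f, dfdx, x0, n_iter=10000, n_transient=100):
    """Estimate the Lyapunov exponent of a 1D iterative map x -> f(x) as the
    orbit average of log|f'(x)|; a negative value indicates a locally stable
    iteration, a positive value indicates divergence of nearby iterates."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = f(x)
    total = 0.0
    for _ in range(n_iter):
        total += np.log(abs(dfdx(x)))
        x = f(x)
    return total / n_iter

# Example: logistic map x -> r x (1 - x)
r = 3.7
lam = lyapunov_exponent(lambda x: r * x * (1 - x), lambda x: r * (1 - 2 * x), x0=0.4)
```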
NASA Astrophysics Data System (ADS)
Saroj, Pradeep K.; Sahu, S. A.; Chaudhary, S.; Chattopadhyay, A.
2015-10-01
This paper investigates the propagation behavior of Love-type surface waves in a three-layered composite structure with initial stress. The composite structure has been taken in such a way that a functionally graded piezoelectric material (FGPM) layer is bonded between an initially stressed piezoelectric upper layer and an elastic substrate. Using the method of separation of variables, the frequency equation for the considered wave has been established in the form of a determinant for the electrically open and short cases on the free surface. The bisection iteration technique has been used to find the roots of the dispersion relations, which give the modes for the electrically open and short cases. The effects of gradient variation of the material constants and of initial stress on the phase velocity of the surface waves are discussed. The dependence on thickness of each parameter of the study has been shown explicitly. A study has also been carried out to show the existence of a cut-off frequency. Graphical representations exhibit the findings. The obtained results are significant for the investigation and characterization of Love-type waves in FGPM-layered media.
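The root-finding step mentioned above can be illustrated by a generic bisection iteration; the bracketing interval, tolerance, and sample function are illustrative, with f standing in for the dispersion determinant evaluated at a trial phase velocity.

```python
import numpy as np

def bisection_root(f, lo, hi, tol=1e-10, max_iter=200):
    """Locate a root of f bracketed by [lo, hi] by repeated interval halving."""
    f_lo = f(lo)
    if f_lo * f(hi) > 0:
        raise ValueError("Root is not bracketed by [lo, hi].")
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        f_mid = f(mid)
        if abs(f_mid) < tol or 0.5 * (hi - lo) < tol:
            return mid
        if f_lo * f_mid < 0:
            hi = mid                      # root lies in the lower half
        else:
            lo, f_lo = mid, f_mid         # root lies in the upper half
    return 0.5 * (lo + hi)

# Example with a simple transcendental stand-in for a dispersion function
root = bisection_root(lambda c: np.cos(c) - c, 0.0, 1.0)   # ~0.739085
```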
Visual texture for automated characterisation of geological features in borehole televiewer imagery
NASA Astrophysics Data System (ADS)
Al-Sit, Waleed; Al-Nuaimy, Waleed; Marelli, Matteo; Al-Ataby, Ali
2015-08-01
Detailed characterisation of the structure of subsurface fractures is greatly facilitated by digital borehole logging instruments, the interpretation of which is typically time-consuming and labour-intensive. Despite recent advances towards autonomy and automation, the final interpretation remains heavily dependent on the skill, experience, alertness and consistency of a human operator. Existing computational tools fail to detect layers between rocks that do not exhibit distinct fracture boundaries, and often struggle to characterise cross-cutting layers and partial fractures. This paper presents a novel approach to the characterisation of planar rock discontinuities from digital images of borehole logs. Multi-resolution texture segmentation and pattern recognition techniques utilising Gabor filters are combined with an iterative adaptation of the Hough transform to enable non-distinct, partial, distorted and steep fractures and layers to be accurately identified and characterised in a fully automated fashion. This approach has successfully detected fractures and layers with high detection accuracy and at a relatively low computational cost.
An equilibrium model for the coupled ocean-atmosphere boundary layer in the tropics
NASA Technical Reports Server (NTRS)
Sui, C.-H.; Lau, K.-M.; Betts, Alan K.
1991-01-01
An atmospheric convective boundary layer (CBL) model is coupled to an ocean mixed-layer (OML) model in order to study the equilibrium state of the coupled system in the tropics, particularly in the Pacific region. The equilibrium state of the coupled system is solved as a function of sea-surface temperature (SST) for a given surface wind and as a function of surface wind for a given SST. It is noted that in both cases, the depth of the CBL and OML increases and the upwelling below the OML decreases, corresponding to either increasing SST or increasing surface wind. The coupled ocean-atmosphere model is solved iteratively as a function of surface wind for a fixed upwelling and a fixed OML depth, and it is observed that SST falls with increasing wind in both cases. Realistic gradients of mixed-layer depth and upwelling are observed in experiments with surface wind and SST prescribed as a function of longitude.
Layerless fabrication with continuous liquid interface production.
Janusziewicz, Rima; Tumbleston, John R; Quintanilla, Adam L; Mecham, Sue J; DeSimone, Joseph M
2016-10-18
Despite the increasing popularity of 3D printing, also known as additive manufacturing (AM), the technique has not developed beyond the realm of rapid prototyping. This confinement of the field can be attributed to the inherent flaws of layer-by-layer printing and, in particular, anisotropic mechanical properties that depend on print direction, visible by the staircasing surface finish effect. Continuous liquid interface production (CLIP) is an alternative approach to AM that capitalizes on the fundamental principle of oxygen-inhibited photopolymerization to generate a continual liquid interface of uncured resin between the growing part and the exposure window. This interface eliminates the necessity of an iterative layer-by-layer process, allowing for continuous production. Herein we report the advantages of continuous production, specifically the fabrication of layerless parts. These advantages enable the fabrication of large overhangs without the use of supports, reduction of the staircasing effect without compromising fabrication time, and isotropic mechanical properties. Combined, these advantages result in multiple indicators of layerless and monolithic fabrication using CLIP technology.
A three-layer model of natural image statistics.
Gutmann, Michael U; Hyvärinen, Aapo
2013-11-01
An important property of visual systems is to be simultaneously both selective to specific patterns found in the sensory input and invariant to possible variations. Selectivity and invariance (tolerance) are opposing requirements. It has been suggested that they could be joined by iterating a sequence of elementary selectivity and tolerance computations. It is, however, unknown what should be selected or tolerated at each level of the hierarchy. We approach this issue by learning the computations from natural images. We propose and estimate a probabilistic model of natural images that consists of three processing layers. Two natural image data sets are considered: image patches, and complete visual scenes downsampled to the size of small patches. For both data sets, we find that in the first two layers, simple and complex cell-like computations are performed. In the third layer, we mainly find selectivity to longer contours; for patch data, we further find some selectivity to texture, while for the downsampled complete scenes, some selectivity to curvature is observed. Copyright © 2013 Elsevier Ltd. All rights reserved.
Single-hidden-layer feed-forward quantum neural network based on Grover learning.
Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min
2013-09-01
In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on concepts and principles from quantum theory. By combining the quantum mechanism with the feed-forward neural network, we defined quantum hidden neurons and connected quantum weights, and used them as the fundamental information processing unit in a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as the activation functions in the hidden layer of the network, and the Grover searching algorithm iteratively finds the optimal parameter setting, thus making very efficient neural network learning possible. The quantum neurons and weights, along with Grover-search-based learning, result in a novel and efficient neural network characterized by a reduced network size, highly efficient training, and prospects for future application. Some simulations are carried out to investigate the performance of the proposed quantum network, and the results show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.
GENERALISATION OF RADIATOR DESIGN TECHNIQUES FOR PERSONAL NEUTRON DOSEMETERS BY UNFOLDING METHOD.
Oda, K; Nakayama, T; Umetani, K; Kajihara, M; Yamauchi, T
2016-09-01
A novel technique for designing a radiator suitable for a personal neutron dosemeter based on a plastic track detector is discussed. A multi-layer structure was proposed in a previous report, in which the thicknesses of several polyethylene (PE) layers and insensitive layers were determined by iterative calculations of a double integral. In order to make this procedure more systematic, an unfolding calculation has been employed to estimate an ideal radiator containing an arbitrary hydrogen concentration. In the second step, this ideal radiator is replaced with realistic materials, with consideration given to minimising the number of layers and to commercial availability. A radiator consisting of three layers of PE, Upilex and Kapton sheets was finally designed, for which the deviation in the energy dependence between 0.1 and 20 MeV could be kept within 18 %. The applicability of a fluorescent nuclear track detector element has also been discussed. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Performance and capacity analysis of Poisson photon-counting based Iter-PIC OCDMA systems.
Li, Lingbin; Zhou, Xiaolin; Zhang, Rong; Zhang, Dingchen; Hanzo, Lajos
2013-11-04
In this paper, an iterative parallel interference cancellation (Iter-PIC) technique is developed for optical code-division multiple-access (OCDMA) systems relying on shot-noise-limited Poisson photon-counting reception. The novel semi-analytical tool of extrinsic information transfer (EXIT) charts is used for analysing both the bit error rate (BER) performance and the channel capacity of these systems, and the results are verified by Monte Carlo simulations. The proposed Iter-PIC OCDMA system is capable of achieving a two-orders-of-magnitude BER improvement and a capacity improvement of 0.1 nats over conventional chip-level OCDMA systems at a coding rate of 1/10.
A viscous-inviscid interactive compressor calculation
NASA Technical Reports Server (NTRS)
Johnston, W.; Sockol, P. M.
1978-01-01
A viscous-inviscid interactive procedure for subsonic flow is developed and applied to an axial compressor stage. Calculations are carried out on a two-dimensional blade-to-blade region of constant radius assumed to occupy a mid-span location. Hub and tip effects are neglected. The Euler equations are solved by MacCormack's method, a viscous marching procedure is used in the boundary layers and wake, and an iterative interaction scheme is constructed that matches them in a way that incorporates information related to momentum and enthalpy thicknesses as well as the displacement thickness. The calculations are quasi-three-dimensional in the sense that the boundary layer and wake solutions allow for the presence of spanwise (radial) velocities.
Food insecurity: A concept analysis
Schroeder, Krista; Smaldone, Arlene
2015-01-01
Aim: To report an analysis of the concept of food insecurity, in order to 1) propose a theoretical model of food insecurity useful to nursing and 2) discuss its implications for nursing practice, nursing research, and health promotion. Background: Forty-eight million Americans are food insecure. As food insecurity is associated with multiple negative health effects, nursing intervention is warranted. Design: Concept analysis. Data sources: A literature search was conducted in May 2014 in Scopus and MEDLINE using the exploded term “food insecur*.” No year limit was placed. Government websites and popular media were searched to ensure a full understanding of the concept. Review methods: Iterative analysis, using the Walker and Avant method. Results: Food insecurity is defined by uncertain ability or inability to procure food, inability to procure enough food, being unable to live a healthy life, and feeling unsatisfied. A proposed theoretical model of food insecurity, adapted from the Socio-Ecological Model, identifies three layers of food insecurity (individual, community, society), with potential for nursing impact at each level. Conclusion: Nurses must work to fight food insecurity. There exists a potential for nursing impact that is currently unrealized. Nursing impact can be guided by a new conceptual model, Food Insecurity within the Nursing Paradigm. PMID:25612146
Electromagnetic Analysis For The Design Of ITER Diagnostic Port Plugs During Plasma Disruptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Y
2014-03-03
ITER diagnostic port plugs perform many functions, including structural support of diagnostic systems under high electromagnetic loads while allowing diagnostic access to the plasma. The design of the diagnostic equatorial port plugs (EPPs) is largely driven by the electromagnetic loads and the associated response of the EPP structure during plasma disruptions and VDEs. This paper summarizes the results of transient electromagnetic analysis using Opera 3d in support of the design activities for the ITER diagnostic EPPs. A complete distribution of disruption loads on the Diagnostic First Walls (DFWs), Diagnostic Shield Modules (DSMs) and the EPP structure is presented, and the impact on the system design integration due to electrical contact among the various EPP structural components is discussed.
Pfefer, T Joshua; Wang, Quanzeng; Drezek, Rebekah A
2011-11-01
Computational approaches for simulation of light-tissue interactions have provided extensive insight into biophotonic procedures for diagnosis and therapy. However, few studies have addressed simulation of time-resolved fluorescence (TRF) in tissue and none have combined Monte Carlo simulations with standard TRF processing algorithms to elucidate approaches for cancer detection in layered biological tissue. In this study, we investigate how illumination-collection parameters (e.g., collection angle and source-detector separation) influence the ability to measure fluorophore lifetime and tissue layer thickness. Decay curves are simulated with a Monte Carlo TRF light propagation model. Multi-exponential iterative deconvolution is used to determine lifetimes and fractional signal contributions. The ability to detect changes in mucosal thickness is optimized by probes that selectively interrogate regions superficial to the mucosal-submucosal boundary. Optimal accuracy in simultaneous determination of lifetimes in both layers is achieved when each layer contributes 40-60% of the signal. These results indicate that depth-selective approaches to TRF have the potential to enhance disease detection in layered biological tissue and that modeling can play an important role in probe design optimization. Published by Elsevier Ireland Ltd.
Contraction design for small low-speed wind tunnels
NASA Technical Reports Server (NTRS)
Bell, James H.; Mehta, Rabindra D.
1988-01-01
An iterative design procedure was developed for two- or three-dimensional contractions installed on small, low-speed wind tunnels. The procedure consists of first computing the potential flow field, and hence the pressure distributions along the walls, of a contraction of given size and shape using a three-dimensional numerical panel method. The pressure or velocity distributions are then fed into two-dimensional boundary layer codes to predict the behavior of the boundary layers along the walls. For small, low-speed contractions it is shown that the assumption of a laminar boundary layer originating from stagnation conditions at the contraction entry and remaining laminar throughout its passage through a successful design is justified. This hypothesis was confirmed by comparing the predicted boundary layer data at the contraction exit with measured data in existing wind tunnels. The measured boundary layer momentum thicknesses at the exit of four existing contractions, two of which were three-dimensional, were found to lie within 10 percent of the predicted values, with the predicted values generally lower. Of the contraction wall shapes investigated, the one based on a fifth-order polynomial was selected for installation on a newly designed mixing layer wind tunnel.
Wissler, Eugene H; Havenith, George
2009-03-01
Overall resistances for heat and vapor transport in a multilayer garment depend on the properties of individual layers and the thickness of any air space between layers. Under uncomplicated, steady-state conditions, thermal and mass fluxes are uniform within the garment, and the rate of transport is simply computed as the overall temperature or water concentration difference divided by the appropriate resistance. However, that simple computation is not valid under cool ambient conditions when the vapor permeability of the garment is low, and condensation occurs within the garment. Several recent studies have measured heat and vapor transport when condensation occurs within the garment (Richards et al. in Report on Project ThermProject, Contract No. G6RD-CT-2002-00846, 2002; Havenith et al. in J Appl Physiol 104:142-149, 2008). In addition to measuring cooling rates for ensembles when the skin was either wet or dry, both studies employed a flat-plate apparatus to measure resistances of individual layers. Those data provide information required to define the properties of an ensemble in terms of its individual layers. We have extended the work of previous investigators by developing a rather simple technique for analyzing heat and water vapor transport when condensation occurs within a garment. Computed results agree well with experimental results reported by Richards et al. (Report on Project ThermProject, Contract No. G6RD-CT-2002-00846, 2002) and Havenith et al. (J Appl Physiol 104:142-149, 2008). We discuss application of the method to human subjects for whom the rate of sweat secretion, instead of the partial pressure of water on the skin, is specified. Analysis of a more complicated five-layer system studied by Yoo and Kim (Text Res J 78:189-197, 2008) required an iterative computation based on principles defined in this paper.
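A minimal sketch of the uncomplicated steady-state case described above, assuming illustrative layer resistances and boundary conditions: with no condensation, the flux is simply the overall driving difference divided by the summed resistances, which is exactly the simple computation that breaks down once condensation occurs within the garment.

```python
# Steady-state, condensation-free heat and vapour transport through a multilayer
# garment: flux = overall difference / total series resistance. Layer values and
# boundary conditions below are illustrative placeholders, not measured data.

dry_resistances = [0.04, 0.12, 0.03, 0.10]   # m^2*K/W for fabric layers and air gaps
vap_resistances = [6.0, 25.0, 8.0, 20.0]     # m^2*Pa/W for the same layers

T_skin, T_ambient = 35.0, 10.0               # deg C
p_skin, p_ambient = 5600.0, 1200.0           # water vapour partial pressure, Pa

heat_flux = (T_skin - T_ambient) / sum(dry_resistances)          # W/m^2, dry heat loss
evap_flux = (p_skin - p_ambient) / sum(vap_resistances)          # W/m^2, evaporative

print(f"dry heat flux  : {heat_flux:.1f} W/m^2")
print(f"evap heat flux : {evap_flux:.1f} W/m^2")
# Once vapour condenses inside a low-permeability layer, the vapour flux is no longer
# uniform across layers and this single series-resistance formula no longer applies.
```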
Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Bentler, Peter M.
2000-01-01
Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
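A toy illustration of the iteratively reweighted least squares idea, assuming a simple regression with Huber-type weights rather than the authors' mean-and-covariance-structure formulation: each case is downweighted according to the size of its current residual and the fit is repeated until the estimate stabilizes.

```python
import numpy as np

# Iteratively reweighted least squares (IRLS) with Huber-type case weights on a toy
# regression with gross outliers; illustrative only, not the structural equation
# modeling implementation of the abstract above.

rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([2.0, -1.0])
y = X @ beta_true + rng.normal(scale=0.3, size=n)
y[:5] += 8.0                                   # a few gross outliers

beta = np.linalg.lstsq(X, y, rcond=None)[0]    # ordinary least-squares start
c = 1.345 * 0.3                                # Huber tuning constant * noise scale
for _ in range(50):
    r = y - X @ beta
    w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))        # downweight large residuals
    W = np.diag(w)
    beta_new = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)    # weighted LS step
    if np.max(np.abs(beta_new - beta)) < 1e-8:
        beta = beta_new
        break
    beta = beta_new

print("robust estimate:", beta, " (true:", beta_true, ")")
```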
Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua
2014-01-01
Dimensionality reduction is an important step in ultrasound image based computer-aided diagnosis (CAD) for breast cancer. A recently proposed l2,1 regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance on noise-corrupted data and therefore has the potential to reduce the dimensionality of ultrasound image features. However, in clinical practice, the collection of labeled instances is usually expensive and time-consuming, while it is relatively easy to acquire unlabeled or undetermined instances; semi-supervised learning is therefore well suited to clinical CAD. The iterated Laplacian regularization (Iter-LR) is a new regularization method that has been shown to outperform the traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to improve the classification accuracy of texture-feature-based breast ultrasound CAD, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm and apply it to reduce the feature dimensionality of ultrasound images for breast CAD. We compared Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all the other algorithms.
Fourier analysis of the SOR iteration
NASA Technical Reports Server (NTRS)
Leveque, R. J.; Trefethen, L. N.
1986-01-01
The SOR iteration for solving linear systems of equations depends upon an overrelaxation factor omega. It is shown that for the standard model problem of Poisson's equation on a rectangle, the optimal omega and corresponding convergence rate can be rigorously obtained by Fourier analysis. The trick is to tilt the space-time grid so that the SOR stencil becomes symmetrical. The tilted grid also gives insight into the relation between convergence rates of several variants.
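The classical model-problem numbers that this kind of analysis recovers can be checked directly. The sketch below uses the standard textbook formulas for the 5-point Poisson problem (Jacobi spectral radius and optimal omega) and compares a plain Gauss-Seidel sweep (omega = 1) with SOR at the optimal omega on a small grid; it is an illustration of the result, not the paper's tilted-grid Fourier derivation.

```python
import numpy as np

# Optimal SOR factor for the 5-point Laplacian on an m-by-n interior grid:
#   rho_J = (cos(pi/(m+1)) + cos(pi/(n+1))) / 2,   omega_opt = 2 / (1 + sqrt(1 - rho_J^2))
# and the asymptotic SOR convergence factor is omega_opt - 1.

m = n = 31
rho_J = 0.5 * (np.cos(np.pi / (m + 1)) + np.cos(np.pi / (n + 1)))
omega_opt = 2.0 / (1.0 + np.sqrt(1.0 - rho_J ** 2))
print(f"rho_J = {rho_J:.4f}, omega_opt = {omega_opt:.4f}, rho_SOR = {omega_opt - 1:.4f}")

def sor_iterations(omega, tol=1e-8, max_iter=20000):
    """Solve -Laplace(u) = 1 (zero boundary) by SOR; return the iteration count."""
    u = np.zeros((m + 2, n + 2))
    h2 = 1.0 / (m + 1) ** 2
    for k in range(1, max_iter + 1):
        biggest = 0.0
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1] + h2)
                delta = omega * (gs - u[i, j])
                u[i, j] += delta
                biggest = max(biggest, abs(delta))
        if biggest < tol:
            return k
    return max_iter

print("iterations, omega = 1      :", sor_iterations(1.0))
print("iterations, omega = optimal:", sor_iterations(omega_opt))
```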
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panayotov, Dobromir; Poitevin, Yves; Grief, Andrew
'Fusion for Energy' (F4E) is designing, developing, and implementing the European Helium-Cooled Lead-Lithium (HCLL) and Helium-Cooled Pebble-Bed (HCPB) Test Blanket Systems (TBSs) for ITER (Nuclear Facility INB-174). Safety demonstration is an essential element for the integration of these TBSs into ITER and accident analysis is one of its critical components. A systematic approach to accident analysis has been developed under the F4E contract on TBS safety analyses. F4E technical requirements, together with Amec Foster Wheeler and INL efforts, have resulted in a comprehensive methodology for fusion breeding blanket accident analysis that addresses the specificity of the breeding blanket designs, materials, and phenomena while remaining consistent with the approach already applied to ITER accident analyses. Furthermore, the methodology phases are illustrated in the paper by its application to the EU HCLL TBS using both MELCOR and RELAP5 codes.
The solution of linear systems of equations with a structural analysis code on the NAS CRAY-2
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Overman, Andrea L.
1988-01-01
Two methods for solving linear systems of equations on the NAS Cray-2 are described. One is a direct method; the other is an iterative method. Both methods exploit the architecture of the Cray-2, particularly the vectorization, and are aimed at structural analysis applications. To demonstrate and evaluate the methods, they were installed in a finite element structural analysis code denoted the Computational Structural Mechanics (CSM) Testbed. A description of the techniques used to integrate the two solvers into the Testbed is given. Storage schemes, memory requirements, operation counts, and reformatting procedures are discussed. Finally, results from the new methods are compared with results from the initial Testbed sparse Choleski equation solver for three structural analysis problems. The new direct solvers described achieve the highest computational rates of the methods compared. The new iterative methods are not able to achieve as high computation rates as the vectorized direct solvers but are best for well conditioned problems which require fewer iterations to converge to the solution.
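For context on the kind of iterative solver being compared against the direct Choleski factorization, the sketch below shows a generic Jacobi-preconditioned conjugate gradient iteration on a small symmetric positive definite test system; it is an illustrative sketch only, not the CSM Testbed implementation or its storage scheme.

```python
import numpy as np

# Generic preconditioned conjugate gradient (PCG) with a Jacobi (diagonal)
# preconditioner, applied to a small stiffness-like tridiagonal SPD system.

def pcg(A, b, tol=1e-10, max_iter=500):
    M_inv = 1.0 / np.diag(A)              # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

n = 200
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D stiffness-like matrix
b = np.ones(n)
x, iters = pcg(A, b)
print("iterations:", iters, " residual:", np.linalg.norm(b - A @ x))
```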
Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.
Koch, S; Bosch, H; Giereth, M; Ertl, T
2011-05-01
Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. Already the amount of patent data to be analyzed poses challenges with respect to scalability. Further scalability issues arise concerning the diversity of users and the large variety of analysis tasks. With "PatViz", a system for interactive analysis of patent information has been developed addressing scalability at various levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high quality of search results regarding their completeness.
Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen
2017-02-01
The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in electron tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the SIRT iteration number that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edges of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used. Copyright © 2016 Elsevier B.V. All rights reserved.
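A minimal dense-matrix sketch of the SIRT update itself, on a toy system with an assumed random projection geometry (not a real tomographic projector and not the TOMOJ or TOMO3D implementations): each iteration back-projects the row-normalized residual and scales it by the inverse column sums.

```python
import numpy as np

# SIRT update x_{k+1} = x_k + C A^T R (b - A x_k), with R and C holding inverse row
# and column sums of the projection matrix A. Toy dense system for illustration only.

rng = np.random.default_rng(2)
n_pix, n_rays = 64, 96
A = rng.uniform(size=(n_rays, n_pix))            # assumed toy projection geometry
x_true = rng.uniform(size=n_pix)
b = A @ x_true + rng.normal(scale=0.05, size=n_rays)   # noisy projections

R = 1.0 / A.sum(axis=1)                          # inverse row sums
C = 1.0 / A.sum(axis=0)                          # inverse column sums

x = np.zeros(n_pix)
for k in range(1, 201):
    x += C * (A.T @ (R * (b - A @ x)))           # one SIRT sweep
    if k % 50 == 0:
        err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
        print(f"iteration {k:3d}: relative error {err:.3f}")
# With noisy data the error typically decreases and then stagnates or grows
# (semi-convergence), which is why the stopping iteration must be chosen carefully.
```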
Choudhary, Eric; Szalai, Veronika
2016-01-01
Nanoporous anodic aluminum oxide (AAO) membranes are being used for an increasing number of applications. However, the original two-step anodization method in which the first anodization is sacrificial to pre-pattern the second is still widely used to produce them. This method provides relatively low throughput and material utilization as half of the films are discarded. An alternative scheme that relies on alternating anodization and cathodic delamination is demonstrated that allows for the fabrication of several AAO films with only one sacrificial layer thus greatly improving total aluminum to alumina yield. The thickness for which the cathodic delamination performs best to yield full, unbroken AAO sheets is around 85 μm. Additionally, an image analysis method is used to quantify the degree of long-range ordering of the unit cells in the AAO films which was found to increase with each successive iteration of the fabrication cycle.
Quality in Inclusive and Noninclusive Infant and Toddler Classrooms
ERIC Educational Resources Information Center
Hestenes, Linda L.; Cassidy, Deborah J.; Hegde, Archana V.; Lower, Joanna K.
2007-01-01
The quality of care in infant and toddler classrooms was compared across inclusive (n=64) and noninclusive classrooms (n=400). Quality was measured using the Infant/Toddler Environment Rating Scale-Revised (ITERS-R). An exploratory and confirmatory factor analysis revealed four distinct dimensions of quality within the ITERS-R. Inclusive…
NASA Astrophysics Data System (ADS)
Botelho, S. J.; Bazylak, A.
2015-04-01
In this study, the microporous layer (MPL) of the polymer electrolyte membrane (PEM) fuel cell was analysed at the nano-scale. Atomic force microscopy (AFM) was utilized to image the top layer of MPL particles, and a curve fitting algorithm was used to determine the particle size and filling radius distributions for SGL-10BB and SGL-10BC. The particles in SGL-10BC (approximately 60 nm in diameter) were found to be larger than those in SGL-10BB (approximately 40 nm in diameter), highlighting structural variability between the two materials. The impact of the MPL particle interactions on the effective thermal conductivity of the bulk MPL was analysed using a discretization of the Fourier equation with the Gauss-Seidel iterative method. It was found that the particle spacing and filling radius dominate the effective thermal conductivity, a result which provides valuable insight for future MPL design.
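A minimal sketch of the numerical ingredient named above: Gauss-Seidel sweeps over a discretized steady heat-conduction (Fourier) equation on a small 2-D grid with fixed hot and cold faces, from which an effective (here dimensionless) heat flux is read off. Uniform unit conductivity and the grid size are assumptions for illustration; the authors' model assigns local properties from the measured particle spacing and filling radius.

```python
import numpy as np

# Gauss-Seidel solution of the steady heat-conduction equation on a small 2-D grid
# with unit conductivity, fixed top/bottom temperatures and periodic side boundaries.

nx, ny = 24, 24
T = np.zeros((ny, nx))
T[0, :], T[-1, :] = 1.0, 0.0          # hot top face, cold bottom face (held fixed)

for sweep in range(5000):
    max_change = 0.0
    for i in range(1, ny - 1):
        for j in range(nx):
            left, right = T[i, (j - 1) % nx], T[i, (j + 1) % nx]   # periodic sides
            new = 0.25 * (T[i - 1, j] + T[i + 1, j] + left + right)
            max_change = max(max_change, abs(new - T[i, j]))
            T[i, j] = new
    if max_change < 1e-6:
        break

# Effective dimensionless flux across the slab: -k*dT/dy with k = 1, dy = 1/(ny-1).
flux = np.mean(T[0, :] - T[1, :]) * (ny - 1)
print(f"converged after {sweep + 1} sweeps; dimensionless flux ~ {flux:.3f}")
```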
A layered modulation method for pixel matching in online phase measuring profilometry
NASA Astrophysics Data System (ADS)
Li, Hongru; Feng, Guoying; Bourgade, Thomas; Yang, Peng; Zhou, Shouhuan; Asundi, Anand
2016-10-01
An online phase measuring profilometry with a new layered modulation method for pixel matching is presented. In this method, and in contrast with previous modulation matching methods, the captured images are enhanced by Retinex theory to obtain a better modulation distribution, and all of the layered modulation masks are fully used to determine the displacement of a rectilinearly moving object. High, medium and low modulation masks are obtained by performing binary segmentation with an iterative Otsu method. The final pixel shifts are calculated based on the centroid concept, after which the aligned fringe patterns can be extracted from each frame. After applying the Stoilov algorithm and a series of subsequent operations, the profile of an object on a translation stage is reconstructed. All procedures are carried out automatically, without setting specific parameters in advance. Numerical simulations are detailed and experimental results verify the validity and feasibility of the proposed approach.
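A compact sketch of the Otsu thresholding step used for the binary segmentation of modulation values, computed by maximizing the between-class variance over a histogram; the bimodal test data are an assumption, and the Retinex enhancement, iterative multi-level splitting and centroid-based shift estimation of the paper are not reproduced.

```python
import numpy as np

# Otsu threshold of a 1-D set of modulation values via between-class variance.

def otsu_threshold(values, n_bins=256):
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist.astype(float) / hist.sum()            # bin probabilities
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                              # class-0 weight at each threshold
    w1 = 1.0 - w0
    mu0 = np.cumsum(p * centers) / np.where(w0 > 0, w0, 1)
    mu1 = (np.sum(p * centers) - np.cumsum(p * centers)) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2           # between-class variance
    return centers[np.argmax(between)]

rng = np.random.default_rng(3)
modulation = np.concatenate([rng.normal(0.2, 0.05, 5000),    # low-modulation pixels
                             rng.normal(0.8, 0.10, 5000)])   # high-modulation pixels
t = otsu_threshold(modulation)
print(f"Otsu threshold ~ {t:.3f}")
print("high-modulation fraction:", np.mean(modulation > t))
```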
A quasi-dense matching approach and its calibration application with Internet photos.
Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei
2015-03-01
This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level in multiviews with our neighboring view selection strategy. A two-layer iteration algorithm is proposed to optimize 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process of the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.
Gary Achtemeier
2012-01-01
A cellular automata fire model represents "elements" of fire by autonomous agents. A few simple algebraic expressions substituted for complex physical and meteorological processes and solved iteratively yield simulations for "super-diffusive" fire spread and coupled surface-layer (2-m) fire-atmosphere processes. Pressure anomalies, which are integrals of the thermal...
Polarimetric signatures of a coniferous forest canopy based on vector radiative transfer theory
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.; Amar, F.; Mougin, E.; Lopes, A.; Beaudoin, A.
1992-01-01
Complete polarization signatures of a coniferous forest canopy are studied by the iterative solution of the vector radiative transfer equations up to the second order. The forest canopy constituents (leaves, branches, stems, and trunk) are embedded in a multi-layered medium over a rough interface. The branches, stems and trunk scatterers are modeled as finite randomly oriented cylinders. The leaves are modeled as randomly oriented needles. For a plane wave exciting the canopy, the average Mueller matrix is formulated in terms of the iterative solution of the radiative transfer solution and used to determine the linearly polarized backscattering coefficients, the co-polarized and cross-polarized power returns, and the phase difference statistics. Numerical results are presented to investigate the effect of transmitting and receiving antenna configurations on the polarimetric signature of a pine forest. Comparison is made with measurements.
NASA Astrophysics Data System (ADS)
Litnovsky, A.; Philipps, V.; Wienhold, P.; Krieger, K.; Kirschner, A.; Borodin, D.; Sergienko, G.; Schmitz, O.; Kreter, A.; Samm, U.; Richter, S.; Breuer, U.; Textor Team
2009-04-01
Castellation is foreseen for the first wall and divertor area in ITER. The concern of the fuel accumulation and impurity deposition in the gaps of castellated structures calls for dedicated studies. Recently, a tungsten castellated limiter with rectangular and roof-like shaped cells was exposed to the SOL plasmas in TEXTOR. After exposure, roughly two times less fuel was found in the gaps between the shaped cells whereas the difference in carbon deposition was less pronounced. Up to 70 at.% of tungsten was found intermixed in the deposited layers in the gaps. The metal fraction in the deposit decreases rapidly with a depth of the gap. Modeling of carbon deposition in poloidal gaps has provided a qualitative agreement with an experiment. The significant anisotropy of C and D distributions in the toroidal gaps was measured.
Fast secant methods for the iterative solution of large nonsymmetric linear systems
NASA Technical Reports Server (NTRS)
Deuflhard, Peter; Freund, Roland; Walter, Artur
1990-01-01
A family of secant methods based on general rank-1 updates was revisited in view of the construction of iterative solvers for large non-Hermitian linear systems. As it turns out, both Broyden's good and bad update techniques play a special role, but should be associated with two different line search principles. For Broyden's bad update technique, a minimum residual principle is natural, thus making it theoretically comparable with a series of well known algorithms like GMRES. Broyden's good update technique, however, is shown to be naturally linked with a minimum next correction principle, which asymptotically mimics a minimum error principle. The two minimization principles differ significantly for sufficiently large system dimension. Numerical experiments on discretized partial differential equations of convection diffusion type in 2-D with interior layers give a first impression of the possible power of the derived good Broyden variant.
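A plain textbook sketch of the rank-1 secant idea on a small nonsymmetric linear system, using Broyden's "good" update with a unit step; the test matrix is an assumption, and the paper's specific line-search variants (minimum residual and minimum next correction) are not reproduced.

```python
import numpy as np

# Broyden "good" rank-1 secant iteration for a nonsymmetric linear system A x = b,
# written as the root-finding problem F(x) = A x - b = 0, starting from B = I.

rng = np.random.default_rng(4)
n = 50
A = np.eye(n) + 0.3 * rng.normal(size=(n, n)) / np.sqrt(n)   # well-conditioned, nonsymmetric
b = rng.normal(size=n)
F = lambda x: A @ x - b

x = np.zeros(n)
B = np.eye(n)                        # initial Jacobian approximation
Fx = F(x)
for k in range(1, 201):
    dx = -np.linalg.solve(B, Fx)     # quasi-Newton step (unit step length)
    x_new = x + dx
    F_new = F(x_new)
    if np.linalg.norm(F_new) < 1e-10 * np.linalg.norm(b):
        x, Fx = x_new, F_new
        break
    dF = F_new - Fx
    B += np.outer(dF - B @ dx, dx) / (dx @ dx)   # rank-1 secant update
    x, Fx = x_new, F_new

print("iterations:", k, " residual:", np.linalg.norm(A @ x - b))
```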
Iterative categorization (IC): a systematic technique for analysing qualitative data
2016-01-01
Abstract The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. PMID:26806155
Wang, G; Doyle, E J; Peebles, W A
2016-11-01
A monostatic antenna array arrangement has been designed for the microwave front-end of the ITER low-field-side reflectometer (LFSR) system. This paper presents details of the antenna coupling coefficient analyses performed using GENRAY, a 3-D ray tracing code, to evaluate the plasma height accommodation capability of such an antenna array design. Utilizing modeled data for the plasma equilibrium and profiles for the ITER baseline and half-field scenarios, a design study was performed for measurement locations varying from the plasma edge to inside the top of the pedestal. A front-end antenna configuration is recommended for the ITER LFSR system based on the results of this coupling analysis.
On foundations of discrete element analysis of contact in diarthrodial joints.
Volokh, K Y; Chao, E Y S; Armand, M
2007-06-01
Information about the stress distribution on contact surfaces of adjacent bones is indispensable for analysis of arthritis, bone fracture and remodeling. Numerical solution of the contact problem based on the classical approaches of solid mechanics is sophisticated and time-consuming. However, the solution can be essentially simplified on the following physical grounds. The bone contact surfaces are covered with a layer of articular cartilage, which is a soft tissue as compared to the hard bone. The latter allows ignoring the bone compliance in analysis of the contact problem, i.e. rigid bones are considered to interact through a compliant cartilage. Moreover, cartilage shear stresses and strains can be ignored because of the negligible friction between contacting cartilage layers. Thus, the cartilage can be approximated by a set of unilateral compressive springs normal to the bone surface. The forces in the springs can be computed from the equilibrium equations iteratively accounting for the changing contact area. This is the essence of the discrete element analysis (DEA). Despite the success in applications of DEA to various bone contact problems, its classical formulation required experimental validation because the springs approximating the cartilage were assumed linear while the real articular cartilage exhibited non-linear mechanical response in reported tests. Recent experimental results of Ateshian and his co-workers allow for revisiting the classical DEA formulation and establishing the limits of its applicability. In the present work, it is shown that the linear spring model is remarkably valid within a wide range of large deformations of the cartilage. It is also shown how to extend the classical DEA to the case of strong nonlinearity if necessary.
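A toy single-degree-of-freedom sketch of the discrete element idea described above: a rigid, flat indenter is pressed with a prescribed load onto a bed of unilateral linear springs (the cartilage layer) with initial gaps, and the active contact set is found iteratively from the equilibrium equation. Stiffness, load and gaps are illustrative assumptions, not joint-specific data.

```python
import numpy as np

# Unilateral compressive springs under a rigid indenter: assume a contact set, solve
# equilibrium for the indenter displacement d, drop springs that would be in tension,
# and repeat until the contact set stops changing.

k = 50.0                                   # spring stiffness (N/mm), illustrative
P = 120.0                                  # applied load (N), illustrative
rng = np.random.default_rng(5)
gaps = np.sort(rng.uniform(0.0, 1.0, 30))  # initial gaps to the indenter (mm)

active = np.ones_like(gaps, dtype=bool)    # start by assuming every spring touches
for it in range(100):
    # Equilibrium: P = k * sum_{i active} (d - g_i)  ->  solve for displacement d.
    d = (P + k * gaps[active].sum()) / (k * active.sum())
    new_active = d > gaps                  # springs actually compressed at this d
    if np.array_equal(new_active, active):
        break
    active = new_active

forces = k * np.clip(d - gaps, 0.0, None)
print(f"converged in {it + 1} iterations: d = {d:.3f} mm, "
      f"{active.sum()} springs in contact, total force = {forces.sum():.1f} N")
```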
NASA Astrophysics Data System (ADS)
Chen, Hao; Lv, Wen; Zhang, Tongtong
2018-05-01
We study preconditioned iterative methods for the linear system arising in the numerical discretization of a two-dimensional space-fractional diffusion equation. Our approach is based on a formulation of the discrete problem that is shown to be the sum of two Kronecker products. By making use of an alternating Kronecker product splitting iteration technique we establish a class of fixed-point iteration methods. Theoretical analysis shows that the new method converges to the unique solution of the linear system. Moreover, the optimal choice of the involved iteration parameters and the corresponding asymptotic convergence rate are computed exactly when the eigenvalues of the system matrix are all real. The basic iteration is accelerated by a Krylov subspace method like GMRES. The corresponding preconditioner is in a form of a Kronecker product structure and requires at each iteration the solution of a set of discrete one-dimensional fractional diffusion equations. We use structure preserving approximations to the discrete one-dimensional fractional diffusion operators in the action of the preconditioning matrix. Numerical examples are presented to illustrate the effectiveness of this approach.
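An illustrative sketch of the structure described above: the discrete 2-D problem is written as the sum of two Kronecker products, A = kron(I, T1) + kron(T2, I), and GMRES is preconditioned with shifted one-dimensional solves in the spirit of an alternating Kronecker-splitting preconditioner. The operators T1, T2 and the shift alpha below are simple diagonally dominant stand-ins, not the paper's fractional-diffusion discretizations or its optimal splitting parameter.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

n = m = 40
def one_d_operator(size):
    """Toy nonsymmetric, diagonally dominant 1-D operator with decaying couplings."""
    col = np.zeros(size)
    col[0], col[1:] = 4.0, -1.0 / np.arange(1, size) ** 1.5
    row = np.zeros(size)
    row[0], row[1] = 4.0, -0.8
    return toeplitz(col, row)

T1, T2 = one_d_operator(n), one_d_operator(m)
alpha = 1.0
I_n, I_m = np.eye(n), np.eye(m)

def A_mv(x):                                   # A x, with x viewed as an m-by-n array X
    X = x.reshape(m, n)
    return (X @ T1.T + T2 @ X).ravel()         # kron(I,T1) x + kron(T2,I) x

def M_inv(r):                                  # inverse of (aI + kron(I,T1))(aI + kron(T2,I))/(2a)
    R = r.reshape(m, n)
    Z = np.linalg.solve(alpha * I_n + T1, R.T).T   # 1-D solves along one direction
    Z = np.linalg.solve(alpha * I_m + T2, Z)       # 1-D solves along the other
    return 2.0 * alpha * Z.ravel()

A = LinearOperator((n * m, n * m), matvec=A_mv)
M = LinearOperator((n * m, n * m), matvec=M_inv)
b = np.ones(n * m)

def run(label, M_op):
    it = {"k": 0}
    def cb(_):                                 # count inner GMRES iterations
        it["k"] += 1
    x, info = gmres(A, b, M=M_op, restart=30, callback=cb)
    print(f"{label:14s}: iterations = {it['k']}, residual = {np.linalg.norm(b - A_mv(x)):.2e}")

run("plain", None)
run("preconditioned", M)
```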
NASA Astrophysics Data System (ADS)
1990-09-01
The main purpose of the International Thermonuclear Experimental Reactor (ITER) is to develop an experimental fusion reactor through the united efforts of many technologically advanced countries. The ITER terms of reference, issued jointly by the European Community, Japan, the USSR, and the United States, call for an integrated international design activity and constitute the basis of current activities. Joint work on ITER is carried out under the auspices of the International Atomic Energy Agency (IAEA), according to the terms of a quadripartite agreement reached between the European Community, Japan, the USSR, and the United States. The site for joint technical work sessions is at the Max Planck Institute of Plasma Physics, Garching, Federal Republic of Germany. The ITER activities have two phases: a definition phase performed in 1988 and the present design phase (1989 to 1990). During the definition phase, a set of ITER technical characteristics and supporting research and development (R and D) activities were developed and reported. The present conceptual design phase of ITER lasts until the end of 1990. The objectives of this phase are to develop the design of ITER, perform a safety and environmental analysis, develop site requirements, define future R and D needs, and estimate cost, manpower, and schedule for construction and operation. A final report will be submitted at the end of 1990. This paper summarizes progress in the ITER program during the 1989 design phase.
2015-12-01
AFRL-RY-WP-TR-2015-0144: COGNITIVE RADIO LOW-ENERGY SIGNAL ANALYSIS SENSOR INTEGRATED CIRCUITS (CLASIC), A Broadband Mixed-Signal Iterative Down... Air Force Research Laboratory, Sensors Directorate, Wright-Patterson AFB.
NASA Astrophysics Data System (ADS)
Li, Husheng; Betz, Sharon M.; Poor, H. Vincent
2007-05-01
This paper examines the performance of decision feedback based iterative channel estimation and multiuser detection in channel coded aperiodic DS-CDMA systems operating over multipath fading channels. First, explicit expressions describing the performance of channel estimation and parallel interference cancellation based multiuser detection are developed. These results are then combined to characterize the evolution of the performance of a system that iterates among channel estimation, multiuser detection and channel decoding. Sufficient conditions for convergence of this system to a unique fixed point are developed.
NASA Astrophysics Data System (ADS)
Loreto, M. F.; Tinivella, U.; Accaino, F.; Giustiniani, M.
2010-05-01
Sediments of the accretionary prism present along the continental margin of the Antarctic Peninsula SW of Elephant Island are filled with gas hydrates, as evidenced by a strong BSR. A multidisciplinary geophysical dataset, consisting of seismic data, multibeam and chirp profiles, CTD data and core samples, was acquired during three oceanographic cruises. The estimation of gas hydrate and free gas concentrations is based on P-wave velocity analysis. In order to extract a detailed and reliable velocity field, we have developed and optimized a procedure that includes pre-stack depth migration to determine, iteratively and with a layer-stripping approach, the velocity field and the depth-migrated seismic section. The final velocity field is then translated into gas hydrate and free gas amounts by using theoretical approaches. Several seismic sections in the investigated area have been processed. The final 2D velocity sections have been translated into gas-phase concentration sections, considering the gas within the sediments to be either uniformly or patchily distributed. The free gas layer is only locally present and, consequently, the base of the free gas reflector was identified only in some lines or parts of them. The hydrate layer shows important lateral variations of hydrate concentration in correspondence with geological features, such as faults and folds. The intense fluid migration along faults and the different fluid accumulation associated with geological structures can control the gas hydrate concentration and modify the geothermal field in the surrounding area.
NASA Technical Reports Server (NTRS)
Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.
1992-01-01
The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution, but are slightly more sensitive to measurement error than constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration have an explicit constraint, the sensitivity of the solution to the a priori profile. Tradeoffs of these retrieval characteristics are presented.
Using Combustion Synthesis to Reinforce Berms and Other Regolith Structures
NASA Technical Reports Server (NTRS)
Rodriquez, Gary
2013-01-01
The Moonraker Excavator and other tools under development for use on the Moon, Mars, and asteroids will be employed to construct a number of civil engineering projects and to mine the soil. Mounds of loose soil will be subject to the local transport mechanisms plus artificial mechanisms such as blast effects from landers and erosion from surface vehicles. Some of these structures will require some permanence, with a minimum of maintenance and upkeep. Combustion Synthesis (CS) is a family of processes and techniques whereby chemistry is used to transform materials, often creating flame in a hard vacuum. CS can be used to stabilize civil engineering works such as berms, habitat shielding, ramps, pads, roadways, and the like. The method is to unroll thin sheets of CS fabric between layers of regolith and then fire the fabric, creating a continuous sheet of crusty material to be interposed among layers of loose regolith. The combination of low-energy processes, ISRU (in situ resource utilization) excavator, and CS fabrics, seems compelling as a general method for establishing structures of some permanence and utility, especially in the role of robotic missions as precursors to manned exploration and settlement. In robotic precursory missions, excavator/ mobility ensembles mine the Lunar surface, erect constructions of soil, and dispense sheets of CS fabrics that are covered with layers of soil, fired, and then again covered with layers of soil, iterating until the desired dimensions and forms are achieved. At the base of each berm, for example, is a shallow trench lined with CS fabric, fired and filled, mounded, and then covered and fired, iteratively to provide a footing against lateral shear. A larger trench is host to a habitat module, backfilled, covered with fabric, covered with soil, and fired. Covering the applied CS fabric with layers of soil before firing allows the resulting matrix to incorporate soil both above and below the fabric ply into the fused layer, developing a very irregular surface which, like sandpaper, can provide an anchor for loose soil. CS fabrics employ a coarse fiberglass weave that persists as reinforcement for the fired material. The fiberglass softens at a temperature that exceeds the combustion temperature by factors of two to three, and withstands the installation process. This type of structure should be more resistant to rocket blast effects from Lunar landers.
Harmonics analysis of the ITER poloidal field converter based on a piecewise method
NASA Astrophysics Data System (ADS)
Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU
2017-12-01
Poloidal field (PF) converters provide controlled DC voltage and current to the PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electrical equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into the sum of some simple functions. By calculating the harmonics of these simple functions with the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established in Matlab/Simulink and a relevant experiment is carried out on the ITER PF integration test platform. Comparative results are given, and the calculated results are found to be consistent with simulation and experiment. The piecewise method is shown to be correct and valid for calculating the system harmonics.
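A small illustration of the piecewise idea: when the line current can be written as simple functions on a few intervals, each Fourier coefficient is a short sum of closed-form integrals over those pieces. The waveform below is an idealized six-pulse quasi-square current with an assumed flat DC-link value, not the ITER PF converter's actual operating-mode currents.

```python
import numpy as np

# Piecewise harmonic computation for an idealized six-pulse bridge line current:
# constant +Id on (30deg, 150deg), -Id on (210deg, 330deg), zero elsewhere.

Id = 1.0                                          # assumed flat DC link current (p.u.)
pieces = [(np.pi/6,   5*np.pi/6,  +Id),           # (start, end, value) in radians
          (7*np.pi/6, 11*np.pi/6, -Id)]

def fourier_coeff(h):
    """a_h, b_h of the piecewise-constant waveform, integrated piece by piece."""
    a = sum(v * (np.sin(h*t2) - np.sin(h*t1)) / h for t1, t2, v in pieces) / np.pi
    b = sum(v * (np.cos(h*t1) - np.cos(h*t2)) / h for t1, t2, v in pieces) / np.pi
    return a, b

fundamental = np.hypot(*fourier_coeff(1))
print(" h   amplitude   % of fundamental")
for h in range(1, 26):
    amp = np.hypot(*fourier_coeff(h))
    if amp > 1e-9:
        print(f"{h:2d}   {amp:8.4f}   {100*amp/fundamental:6.1f}")
# Only the characteristic orders h = 6k +/- 1 survive, with amplitude ~ 1/h.
```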
Performance of spectral MSE diagnostic on C-Mod and ITER
NASA Astrophysics Data System (ADS)
Liao, Ken; Rowan, William; Mumgaard, Robert; Granetz, Robert; Scott, Steve; Marchuk, Oleksandr; Ralchenko, Yuri; Alcator C-Mod Team
2015-11-01
The magnetic field was measured on Alcator C-Mod by applying spectral Motional Stark Effect techniques based on line shift (MSE-LS) and line ratio (MSE-LR) to the H-alpha emission spectrum of the diagnostic neutral beam atoms. The high field of Alcator C-Mod allows measurements to be made at close to ITER values of Stark splitting (~ Bv⊥) with background levels similar to those expected for ITER. Accurate modeling of the spectrum requires a non-statistical, collisional-radiative analysis of the excited beam population and quadratic and Zeeman corrections to the Stark shift. A detailed synthetic diagnostic was developed and used to estimate the performance of the diagnostic at C-Mod and ITER parameters. Our analysis includes the sensitivity to view and beam geometry, aperture and divergence broadening, magnetic field, pixel size, background noise, and signal levels. Analysis of preliminary experiments agrees with Kinetic+(polarization)MSE EFIT within ~2° in pitch angle, and simulations predict uncertainties of 20 mT in | B | and <2° in pitch angle. This material is based upon work supported by the U.S. Department of Energy Office of Science, Office of Fusion Energy Sciences under Award Number DE-FG03-96ER-54373 and DE-FC02-99ER54512.
NASA Astrophysics Data System (ADS)
Parand, Kourosh; Mahdi Moayeri, Mohammad; Latifi, Sobhan; Delkhosh, Mehdi
2017-07-01
In this paper, a spectral method based on the four kinds of rational Chebyshev functions is proposed to approximate the solution of the boundary layer flow of an Eyring-Powell fluid over a stretching sheet. First, by using the quasilinearization method (QLM), the model which is a nonlinear ordinary differential equation is converted to a sequence of linear ordinary differential equations (ODEs). By applying the proposed method on the ODEs in each iteration, the equations are converted to a system of linear algebraic equations. The results indicate the high accuracy and convergence of our method. Moreover, the effects of the Eyring-Powell fluid material parameters are discussed.
The numerical calculation of laminar boundary-layer separation
NASA Technical Reports Server (NTRS)
Klineberg, J. M.; Steger, J. L.
1974-01-01
Iterative finite-difference techniques are developed for integrating the boundary-layer equations, without approximation, through a region of reversed flow. The numerical procedures are used to calculate incompressible laminar separated flows and to investigate the conditions for regular behavior at the point of separation. Regular flows are shown to be characterized by an integrable saddle-type singularity that makes it difficult to obtain numerical solutions which pass continuously into the separated region. The singularity is removed and continuous solutions ensured by specifying the wall shear distribution and computing the pressure gradient as part of the solution. Calculated results are presented for several separated flows and the accuracy of the method is verified. A computer program listing and complete solution case are included.
Vision-based calibration of parallax barrier displays
NASA Astrophysics Data System (ADS)
Ranieri, Nicola; Gross, Markus
2014-03-01
Static and dynamic parallax barrier displays have become very popular over the past years. Especially for single-viewer applications such as tablets, phones and other hand-held devices, parallax barriers provide a convenient solution for rendering stereoscopic content. In our work we present a computer-vision-based calibration approach that relates the image layer and barrier layer of parallax barrier displays with unknown display geometry, for static or dynamic viewer positions, using homographies. We provide the math and methods to compose the required homographies on the fly and present a way to compute the barrier without the need for any iteration. Our GPU implementation is stable and general and can be used to reduce latency and increase the refresh rate of existing and upcoming barrier methods.
Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2012-01-01
Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate— SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (that require knowledge of noise variance σ2), and GCV (that does not need σ2) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently lead to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy.
Zelyak, O; Fallone, B G; St-Aubin, J
2017-12-14
Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation.
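A generic illustration of the stability criterion invoked above: a stationary iteration x_{k+1} = G x_k + c converges iff the spectral radius rho(G) < 1, where G = M^{-1}N for the splitting A = M - N. The sketch estimates rho(G) by power iteration on a toy matrix with varying off-diagonal coupling; the actual LBTE/DFEM operators and magnetic-field terms are not modeled.

```python
import numpy as np

# Estimate the spectral radius of the iteration matrix G = M^{-1} N (M = diag(A))
# by power iteration and report whether the stationary iteration would converge.

rng = np.random.default_rng(7)
n = 60

def splitting_radius(coupling):
    A = np.diag(3.0 + rng.uniform(size=n)) - coupling * rng.uniform(size=(n, n))
    M = np.diag(np.diag(A))
    N = M - A
    G = np.linalg.solve(M, N)              # iteration matrix of the splitting
    v = rng.normal(size=n)                 # power iteration for the dominant eigenvalue
    for _ in range(2000):
        w = G @ v
        v = w / np.linalg.norm(w)
    return np.linalg.norm(G @ v)

for coupling in (0.02, 0.05, 0.2):
    rho = splitting_radius(coupling)
    verdict = "stable (converges)" if rho < 1 else "unstable (diverges)"
    print(f"coupling {coupling:4.2f}: rho(G) ~ {rho:.3f}  -> {verdict}")
```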
Corrigendum to "Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy".
Zelyak, Oleksandr; Fallone, B Gino; St-Aubin, Joel
2018-03-12
Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation. © 2018 Institute of Physics and Engineering in Medicine.
CuCrZr alloy microstructure and mechanical properties after hot isostatic pressing bonding cycles
NASA Astrophysics Data System (ADS)
Frayssines, P.-E.; Gentzbittel, J.-M.; Guilloud, A.; Bucci, P.; Soreau, T.; Francois, N.; Primaux, F.; Heikkinen, S.; Zacchia, F.; Eaton, R.; Barabash, V.; Mitteau, R.
2014-04-01
ITER first wall (FW) panels are a layered structure made of the following three materials: 316L(N) austenitic stainless steel, CuCrZr alloy and beryllium. Two hot isostatic pressing (HIP) cycles are included in the reference fabrication route to bond these materials together for the normal heat flux design supplied by the European Union (EU). This reference fabrication route ensures sufficiently good mechanical properties for the materials and joints, which fulfil the ITER mechanical specifications, but often results in a coarse grain size for the CuCrZr alloy, which is not favourable, especially for the thermal creep properties of the FW panels. To limit the abnormal grain growth of CuCrZr and make the ITER FW fabrication route more reliable, a study began in 2010 in the EU within the framework of an ITER task agreement. Two material fabrication approaches have been investigated. The first was dedicated to the fabrication of solid CuCrZr alloy in close collaboration with an industrial copper-alloy manufacturer. The second approach investigated was the manufacturing of CuCrZr alloy using the powder metallurgy (PM) route and HIP consolidation. This paper presents the main mechanical and microstructural results associated with the two CuCrZr approaches mentioned above. The mechanical properties of solid CuCrZr, PM CuCrZr and joints (solid CuCrZr/solid CuCrZr, solid CuCrZr/316L(N) and PM CuCrZr/316L(N)) are also presented.
NASA Astrophysics Data System (ADS)
Ruff, Michael; Rohn, Joachim
2008-07-01
In this paper a tool for semi-quantitative susceptibility assessment at a regional scale is presented which is applicable to areas with a complex geological setting. In a study area within the Northern Calcareous Alps, geotechnical mappings were implemented in a Geographical Information System and analysed as grid data with a cell size of 25 m. The susceptibility to sliding and falling processes was rated according to five classes (very low, low, medium, high, very high). Susceptibility to sliding was analysed using an index method; the layers of lithology, bedding conditions, tectonic faults, slope angle, slope aspect, vegetation and erosion were combined iteratively. Dropout zones of rockfall material were determined with the help of a Digital Elevation Model. The movement of rolling rock samples was modelled by a cost analysis of all potential rockfall trajectories. These trajectories were also divided into five susceptibility classes. The susceptibility maps are presented in a general way so that they can be used by communities and spatial planners. Conflict areas of susceptibility and land use were located and can be presented distinctly.
Carbon charge exchange analysis in the ITER-like wall environment.
Menmuir, S; Giroud, C; Biewer, T M; Coffey, I H; Delabie, E; Hawkes, N C; Sertoli, M
2014-11-01
Charge exchange spectroscopy has long been a key diagnostic tool for fusion plasmas and is well developed in devices with Carbon Plasma-Facing Components. Operation with the ITER-like wall at JET has resulted in changes to the spectrum in the region of the Carbon charge exchange line at 529.06 nm and demonstrates the need to revise the core charge exchange analysis for this line. An investigation has been made of this spectral region in different plasma conditions and the revised description of the spectral lines to be included in the analysis is presented.
Ali, S. J.; Kraus, R. G.; Fratanduono, D. E.; ...
2017-05-18
Here, we developed an iterative forward analysis (IFA) technique with the ability to use hydrocode simulations as a fitting function for analysis of dynamic compression experiments. The IFA method optimizes over parameterized quantities in the hydrocode simulations, breaking the degeneracy of contributions to the measured material response. Velocity profiles from synthetic data generated using a hydrocode simulation are analyzed as a first-order validation of the technique. We also analyze multiple magnetically driven ramp compression experiments on copper and compare with more conventional techniques. Excellent agreement is obtained in both cases.
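The core of an iterative forward analysis loop of this kind, treating a forward simulation as the fitting function and updating its parameters until the simulated profile matches the measurement, can be sketched as follows. This is a minimal illustration only: the toy analytic forward model and the parameter names below are assumptions standing in for the hydrocode, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, t):
    """Toy stand-in for a hydrocode run: returns a velocity profile
    for a ramp-compression-like experiment given two parameters."""
    peak_velocity, rise_time = params
    # Smooth ramp to peak velocity (purely illustrative functional form).
    return peak_velocity * 0.5 * (1.0 + np.tanh((t - rise_time) / (0.2 * rise_time)))

def residuals(params, t, measured):
    """Difference between simulated and measured velocity profiles."""
    return forward_model(params, t) - measured

# Synthetic "measurement": known parameters plus noise, mirroring a
# first-order validation of the technique against synthetic data.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 400)            # ns
true_params = (2.5, 40.0)                   # km/s, ns (made up)
measured = forward_model(true_params, t) + 0.02 * rng.standard_normal(t.size)

# Iterative forward analysis: repeatedly run the forward model and
# update the parameters until the simulated profile fits the data.
fit = least_squares(residuals, x0=[1.0, 20.0], args=(t, measured))
print("recovered parameters:", fit.x)
```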
PAFAC- PLASTIC AND FAILURE ANALYSIS OF COMPOSITES
NASA Technical Reports Server (NTRS)
Bigelow, C. A.
1994-01-01
The increasing number of applications of fiber-reinforced composites in industry demands a detailed understanding of their material properties and behavior. A three-dimensional finite-element computer program called PAFAC (Plastic and Failure Analysis of Composites) has been developed for the elastic-plastic analysis of fiber-reinforced composite materials and structures. The evaluation of stresses and deformations at edges, cut-outs, and joints is essential in understanding the strength and failure for metal-matrix composites since the onset of plastic yielding starts very early in the loading process as compared to the composite's ultimate strength. Such comprehensive analysis can only be achieved by a finite-element program like PAFAC. PAFAC is particularly suited for the analysis of laminated metal-matrix composites. It can model the elastic-plastic behavior of the matrix phase while the fibers remain elastic. Since the PAFAC program uses a three-dimensional element, the program can also model the individual layers of the laminate to account for thickness effects. In PAFAC, the composite is modeled as a continuum reinforced by cylindrical fibers of vanishingly small diameter which occupy a finite volume fraction of the composite. In this way, the essential axial constraint of the phases is retained. Furthermore, the local stress and strain fields are uniform. The PAFAC finite-element solution is obtained using the displacement method. Solution of the nonlinear equilibrium equations is obtained with a Newton-Raphson iteration technique. The elastic-plastic behavior of composites consisting of aligned, continuous elastic filaments and an elastic-plastic matrix is described in terms of the constituent properties, their volume fractions, and mutual constraints between phases indicated by the geometry of the microstructure. The program uses an iterative procedure to determine the overall response of the laminate, then from the overall response determines the stress state in each phase of the composite material. Failure of the fibers or matrix within an element can also be modeled by PAFAC. PAFAC is written in FORTRAN IV for batch execution and has been implemented on a CDC CYBER 170 series computer with a segmented memory requirement of approximately 66K (octal) of 60 bit words. PAFAC was developed in 1982.
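The Newton-Raphson iteration used to solve the nonlinear equilibrium equations can be illustrated on a small residual system. The sketch below is a generic Newton-Raphson solve for a two-degree-of-freedom toy model with a made-up nonlinear internal force; it is not PAFAC's FORTRAN implementation or its composite constitutive model.

```python
import numpy as np

def internal_force(u):
    """Toy nonlinear internal force for a two-DOF system (stands in for
    the assembled element forces of an elastic-plastic composite model)."""
    return np.array([u[0] + 0.1 * u[0]**3 + 0.5 * (u[0] - u[1]),
                     0.5 * (u[1] - u[0]) + 2.0 * u[1]])

def tangent_stiffness(u, eps=1e-7):
    """Finite-difference tangent (Jacobian of the residual)."""
    n = u.size
    K = np.zeros((n, n))
    f0 = internal_force(u)
    for j in range(n):
        du = np.zeros(n); du[j] = eps
        K[:, j] = (internal_force(u + du) - f0) / eps
    return K

f_ext = np.array([1.0, 0.3])     # applied load (made up)
u = np.zeros(2)                  # initial displacement guess

# Newton-Raphson: solve K(u) du = f_ext - f_int(u) until equilibrium.
for it in range(20):
    residual = f_ext - internal_force(u)
    if np.linalg.norm(residual) < 1e-10:
        break
    du = np.linalg.solve(tangent_stiffness(u), residual)
    u += du

print("converged displacements:", u, "after", it, "iterations")
```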
Oxide segregation and melting behavior of transient heat load exposed beryllium
NASA Astrophysics Data System (ADS)
Spilker, B.; Linke, J.; Pintsuk, G.; Wirtz, M.
2016-10-01
In the experimental fusion reactor ITER, beryllium will be applied as the first wall armor material. However, the ITER-like wall project at JET has already shown that the relatively low melting temperature of beryllium can easily be exceeded during plasma operation. Therefore, a detailed study was carried out on S-65 beryllium under various transient, ITER-relevant heat loads that were simulated in the electron beam facility JUDITH 1. The absorbed power densities were in the range of 0.15-1.0 GW m^-2, in combination with pulse durations of 1-10 ms and pulse numbers of 1-1000. Metallographic cross sections revealed the emergence of a transition region at a depth of ~70-120 µm. This transition region was characterized by a strong segregation of oxygen at the grain boundaries, determined with energy dispersive x-ray spectroscopy element mappings. The oxide segregation strongly depended on the maximum temperature reached at the end of the transient heat pulse in combination with the pulse duration. A threshold for this process was found at 936 °C for a pulse duration of 10 ms. Further transient heat pulses applied to specimens that had already formed this transition region resulted in overheating and melting of the material. The latter occurred between the surface and the transition region and was associated with a strong decrease of the thermal conductivity due to the weakly bound grains across the transition region. Additionally, the transition region caused a partial separation of the melt layer from the bulk material, which could ultimately result in a full detachment of the solidified beryllium layers from the bulk armor. Furthermore, solidified beryllium filaments evolved in several locations of the loaded area and are related to the thermally induced crack formation. However, these filaments are not expected to account for an increase of the beryllium net erosion.
On the implementation of an accurate and efficient solver for convection-diffusion equations
NASA Astrophysics Data System (ADS)
Wu, Chin-Tien
In this dissertation, we examine several different aspects of computing the numerical solution of the convection-diffusion equation. The solution of this equation often exhibits sharp gradients due to Dirichlet outflow boundaries or discontinuities in boundary conditions. Because of the singularly perturbed nature of the equation, numerical solutions often have severe oscillations when grid sizes are not small enough to resolve sharp gradients. To overcome such difficulties, the streamline diffusion discretization method can be used to obtain an accurate approximate solution in regions where the solution is smooth. To increase accuracy of the solution in the regions containing layers, adaptive mesh refinement and mesh movement based on a posteriori error estimations can be employed. An error-adapted mesh refinement strategy based on a posteriori error estimations is also proposed to resolve layers. For solving the sparse linear systems that arise from discretization, geometric multigrid (MG) and algebraic multigrid (AMG) are compared. In addition, both methods are also used as preconditioners for Krylov subspace methods. We derive some convergence results for MG with line Gauss-Seidel smoothers and bilinear interpolation. Finally, while considering adaptive mesh refinement as an integral part of the solution process, it is natural to set a stopping tolerance for the iterative linear solvers on each mesh stage so that the difference between the approximate solution obtained from iterative methods and the finite element solution is bounded by an a posteriori error bound. Here, we present two stopping criteria. The first is based on a residual-type a posteriori error estimator developed by Verfurth. The second is based on an a posteriori error estimator, using local solutions, developed by Kay and Silvester. Our numerical results show that the refined mesh obtained from the iterative solution which satisfies the second criterion is similar to the refined mesh obtained from the finite element solution.
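The idea of stopping the iterative linear solver once its algebraic residual falls below a discretization-error tolerance can be sketched on a 1-D upwind discretization of a convection-diffusion model problem. This is a minimal illustration: the fixed tolerance below is a stand-in for the residual-type or local-problem a posteriori estimators discussed in the dissertation, and the Gauss-Seidel sweep is only one example of a multigrid smoother.

```python
import numpy as np

# 1-D convection-diffusion: -eps*u'' + b*u' = 1 on (0,1), u(0)=u(1)=0,
# discretized with upwind differences on a uniform grid.
eps, b, n = 0.01, 1.0, 200
h = 1.0 / (n + 1)
main = 2 * eps / h**2 + b / h
lower = -eps / h**2 - b / h
upper = -eps / h**2
A = np.diag(main * np.ones(n)) + np.diag(lower * np.ones(n - 1), -1) \
    + np.diag(upper * np.ones(n - 1), 1)
f = np.ones(n)

# Stand-in for an a posteriori discretization-error estimate: iterating
# the algebraic solver below this level brings no further accuracy gain.
error_estimate = 1e-3
tol = error_estimate * np.linalg.norm(f)

u = np.zeros(n)
for sweep in range(10000):
    # One forward Gauss-Seidel sweep.
    for i in range(n):
        u[i] = (f[i] - A[i, :i] @ u[:i] - A[i, i + 1:] @ u[i + 1:]) / A[i, i]
    residual = np.linalg.norm(f - A @ u)
    if residual < tol:          # stop once the algebraic error is negligible
        break

print(f"stopped after {sweep + 1} sweeps, residual = {residual:.2e}")
```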
Abu-Almaalie, Zina; Ghassemlooy, Zabih; Bhatnagar, Manav R; Le-Minh, Hoa; Aslam, Nauman; Liaw, Shien-Kuei; Lee, It Ee
2016-11-20
Physical layer network coding (PNC) improves the throughput in wireless networks by enabling two nodes to exchange information using a minimum number of time slots. The PNC technique is proposed for two-way relay channel free space optical (TWR-FSO) communications with the aim of maximizing the utilization of network resources. The multipair TWR-FSO is considered in this paper, where a single antenna on each pair seeks to communicate via a common receiver aperture at the relay. Therefore, chip interleaving is adopted as a technique to separate the different transmitted signals at the relay node to perform PNC mapping. Accordingly, this scheme relies on the iterative multiuser technique for detection of users at the receiver. The bit error rate (BER) performance of the proposed system is examined under the combined influences of atmospheric loss, turbulence-induced channel fading, and pointing errors (PEs). By adopting the joint PNC mapping with interleaving and multiuser detection techniques, the BER results show that the proposed scheme can achieve a significant performance improvement against the degrading effects of turbulence and PEs. It is also demonstrated that a larger number of simultaneous users can be supported with this new scheme in establishing a communication link between multiple pairs of nodes in two time slots, thereby improving the channel capacity.
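The chip-interleaving step that helps separate the users' signals before PNC mapping rests on an interleaver/deinterleaver pair. The sketch below shows a generic block interleaver (write row-wise, read column-wise); the block shape and the assumption that a simple block structure is used are illustrative choices, not the exact interleaver of the paper.

```python
import numpy as np

def block_interleave(chips, n_rows, n_cols):
    """Write chips row-wise into an n_rows x n_cols block, read column-wise."""
    assert chips.size == n_rows * n_cols
    return chips.reshape(n_rows, n_cols).T.ravel()

def block_deinterleave(chips, n_rows, n_cols):
    """Inverse operation: write column-wise, read row-wise."""
    assert chips.size == n_rows * n_cols
    return chips.reshape(n_cols, n_rows).T.ravel()

rng = np.random.default_rng(1)
tx_chips = rng.integers(0, 2, size=24)          # one user's chip sequence
sent = block_interleave(tx_chips, n_rows=4, n_cols=6)
recovered = block_deinterleave(sent, n_rows=4, n_cols=6)
assert np.array_equal(recovered, tx_chips)       # round trip is lossless
```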
Gourdain, P-A; Peebles, W A
2008-10-01
Reflectometry has successfully demonstrated measurements of many important parameters in high temperature tokamak fusion plasmas. However, implementing such capabilities in a high-field, large plasma, such as ITER, will be a significant challenge. In ITER, the ratio of plasma size (meters) to the required reflectometry source wavelength (millimeters) is significantly larger than in existing fusion experiments. This suggests that the flow of the launched reflectometer millimeter-wave power can be realistically analyzed using three-dimensional ray tracing techniques. The analytical and numerical studies presented will highlight the fact that the group velocity (or power flow) of the launched microwaves is dependent on the direction of wave propagation relative to the internal magnetic field. It is shown that this dependence strongly modifies power flow near the cutoff layer in a manner that embeds the local magnetic field direction in the "footprint" of the power returned toward the launch antenna. It will be shown that this can potentially be utilized to locally determine the magnetic field pitch angle at the cutoff location. The resultant beam drift and distortion due to magnetic field and relativistic effects also have significant consequences on the design of reflectometry systems for large, high-field fusion experiments. These effects are discussed in the context of the upcoming ITER burning plasma experiment.
NASA Astrophysics Data System (ADS)
Clayton, N.; Crouchen, M.; Devred, A.; Evans, D.; Gung, C.-Y.; Lathwell, I.
2017-04-01
It is planned that the high voltage electrical insulation on the ITER feeder busbars will consist of interleaved layers of epoxy resin pre-impregnated glass tapes ('pre-preg') and polyimide. In addition to its electrical insulation function, the busbar insulation must have adequate mechanical properties to sustain the loads imposed on it during ITER magnet operation. This paper reports an investigation into suitable materials to manufacture the high voltage insulation for the ITER superconducting busbars and pipework. An R&D programme was undertaken in order to identify suitable pre-preg and polyimide materials from a range of suppliers. Pre-preg materials were obtained from three suppliers and used with Kapton HN to make mouldings using the desired insulation architecture. Two main processing routes for pre-pregs have been investigated, namely vacuum bag processing (out-of-autoclave processing) and processing using a material with a high coefficient of thermal expansion (silicone rubber) to apply the compaction pressure on the insulation. The insulation must have adequate mechanical properties to cope with the stresses induced by the operating environment, together with the low void content necessary in a high voltage application. The quality of the mouldings was assessed by mechanical testing at 77 K and by the measurement of the void content.
Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.
Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo
2018-04-01
In this paper, a novel adaptive dynamic programming (ADP) algorithm, called the "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The present iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis is developed to guarantee that the upper and lower iterative value functions converge to the upper and lower optimums, respectively. When the saddle-point equilibrium exists, both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, where the existence criteria of the saddle-point equilibrium are not required. If the saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, respectively, and are proved not to be equivalent. Finally, simulation results and comparisons are shown to illustrate the performance of the present method.
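The upper/lower iteration structure can be illustrated on a finite-state, finite-action deterministic analogue of a zero-sum game: the upper iteration applies min-max to the stage cost plus discounted value, the lower applies max-min, and both converge under discounting. The game data below are invented for illustration; the continuous-state, neural-network-based implementation of the paper is not reproduced.

```python
import numpy as np

# Small deterministic zero-sum dynamic game: states 0..2, control u and
# disturbance w each in {0, 1}.  Player u minimizes, player w maximizes.
n_s, n_u, n_w, gamma = 3, 2, 2, 0.9
rng = np.random.default_rng(2)
cost = rng.uniform(0.0, 1.0, size=(n_s, n_u, n_w))          # stage cost (made up)
next_state = rng.integers(0, n_s, size=(n_s, n_u, n_w))     # dynamics (made up)

V_upper = np.zeros(n_s)   # arbitrary (here zero) initial value functions
V_lower = np.zeros(n_s)

for _ in range(500):
    Q_up = cost + gamma * V_upper[next_state]      # shape (n_s, n_u, n_w)
    Q_lo = cost + gamma * V_lower[next_state]
    # Upper iteration: min over u of max over w; lower: max over w of min over u.
    V_upper_new = Q_up.max(axis=2).min(axis=1)
    V_lower_new = Q_lo.min(axis=1).max(axis=1)
    if max(np.abs(V_upper_new - V_upper).max(),
           np.abs(V_lower_new - V_lower).max()) < 1e-10:
        V_upper, V_lower = V_upper_new, V_lower_new
        break
    V_upper, V_lower = V_upper_new, V_lower_new

# With a pure-strategy saddle point the two value functions coincide;
# otherwise V_lower <= V_upper, mirroring the upper and lower optima.
print("upper:", V_upper, "\nlower:", V_lower)
```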
Contraction design for small low-speed wind tunnels
NASA Technical Reports Server (NTRS)
Bell, James H.; Mehta, Rabindra D.
1988-01-01
An iterative design procedure was developed for 2- or 3-dimensional contractions installed on small, low speed wind tunnels. The procedure consists of first computing the potential flow field, and hence the pressure distributions along the walls, of a contraction of given size and shape using a 3-dimensional numerical panel method. The pressure or velocity distributions are then fed into 2-dimensional boundary layer codes to predict the behavior of the boundary layers along the walls. For small, low speed contractions, it is shown that the assumption of a laminar boundary layer originating from stagnation conditions at the contraction entry and remaining laminar throughout its passage through the contraction is justified for successful designs. This hypothesis was confirmed by comparing the predicted boundary layer data at the contraction exit with measured data in existing wind tunnels. The measured boundary layer momentum thicknesses at the exit of four existing contractions, two of which were 3-D, were found to lie within 10 percent of the predicted values, with the predicted values generally lower. From the contraction wall shapes investigated, the one based on a 5th order polynomial was selected for the newly designed mixing wind tunnel installation.
Green, Timothy R.; Freyberg, David L.
1995-01-01
Anisotropy in large-scale unsaturated hydraulic conductivity of layered soils changes with the moisture state. Here, state-dependent anisotropy is computed under conditions of large-scale gravity drainage. Soils represented by Gardner's exponential function are perfectly stratified, periodic, and inclined. Analytical integration of Darcy’s law across each layer results in a system of nonlinear equations that is solved iteratively for capillary suction at layer interfaces and for the Darcy flux normal to layering. Computed fluxes and suction profiles are used to determine both upscaled hydraulic conductivity in the principal directions and the corresponding “state-dependent” anisotropy ratio as functions of the mean suction. Three groups of layered soils are analyzed and compared with independent predictions from the stochastic results of Yeh et al. (1985b). The small-perturbation approach predicts appropriate behaviors for anisotropy under nonarid conditions. However, the stochastic results are limited to moderate values of mean suction; this limitation is linked to a Taylor series approximation in terms of a group of statistical and geometric parameters. Two alternative forms of the Taylor series provide upper and lower bounds for the state-dependent anisotropy of relatively dry soils.
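The state dependence of the upscaled anisotropy can be illustrated with Gardner's exponential conductivity model. The sketch below makes the simplifying assumption of a uniform suction across all layers, so the principal conductivities reduce to arithmetic (parallel to layering) and harmonic (normal to layering) means; the paper's actual calculation solves a nonlinear system for the interface suctions under gravity drainage, which this toy version does not reproduce, and the layer parameters are made up.

```python
import numpy as np

# Gardner's exponential model: K(psi) = K_s * exp(-alpha * psi), psi = suction.
K_s   = np.array([5.0, 0.5, 2.0, 0.1])     # saturated conductivities (made up)
alpha = np.array([0.10, 0.02, 0.05, 0.01]) # Gardner parameters, 1/cm (made up)
thickness = np.array([1.0, 1.0, 1.0, 1.0]) # equal-thickness layers

def anisotropy_ratio(psi):
    """Upscaled K_parallel / K_normal at a uniform suction psi (cm)."""
    K = K_s * np.exp(-alpha * psi)
    w = thickness / thickness.sum()
    K_parallel = np.sum(w * K)              # arithmetic mean along layering
    K_normal = 1.0 / np.sum(w / K)          # harmonic mean across layering
    return K_parallel / K_normal

# The ratio grows with suction: the layers' conductivities diverge as the
# soil dries, which is the state-dependent anisotropy described above.
for psi in [0.0, 50.0, 100.0, 200.0]:
    print(f"suction {psi:6.1f} cm  ->  anisotropy ratio {anisotropy_ratio(psi):8.2f}")
```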
Numerical investigation of internal high-speed viscous flows using a parabolic technique
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Power, G. D.
1985-01-01
A feasibility study has been conducted to assess the applicability of an existing parabolic analysis (ADD-Axisymmetric Diffuser Duct), developed previously for subsonic viscous internal flows, to mixed supersonic/subsonic flows with heat addition simulating a SCRAMJET combustor. A study was conducted with the ADD code modified to include additional convection effects in the normal momentum equation when supersonic expansion and compression waves are present. A set of test problems with weak shock and expansion waves have been analyzed with this modified ADD method and stable and accurate solutions were demonstrated provided the streamwise step size was maintained at levels larger than the boundary layer displacement thickness. Calculations made with further reductions in step size encountered departure solutions consistent with strong interaction theory. Calculations were also performed for a flow field with a flame front in which a specific heat release was imposed to simulate a SCRAMJET combustor. In this case the flame front generated relatively thick shear layers which aggravated the departure solution problem. Qualitatively correct results were obtained for these cases using a marching technique with the convective terms in the normal momentum equation suppressed. It is concluded from the present study that for the class of problems where strong viscous/inviscid interactions are present a global iteration procedure is required.
Temporal neural networks and transient analysis of complex engineering systems
NASA Astrophysics Data System (ADS)
Uluyol, Onder
A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the presentation of different time scales through incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then it is extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived. It is applicable for training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
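The gamma memory at the heart of such a neuron is a cascade of identical leaky integrators. Below is a minimal sketch of the standard gamma-filter recursion (de Vries-Principe form); the specific coupling of this memory into the local output feedback path of the LOGF neuron described in the dissertation is not reproduced here.

```python
import numpy as np

def gamma_memory(signal, order, mu):
    """Standard gamma memory: a cascade of identical leaky integrators.

    x_0(t) = signal(t)
    x_k(t) = (1 - mu) * x_k(t-1) + mu * x_{k-1}(t-1),  k = 1..order

    Returns an array of shape (len(signal), order + 1) whose columns are the
    tap outputs; mu trades memory depth against resolution, i.e. the
    different time scales mentioned in the abstract.
    """
    taps = np.zeros(order + 1)
    out = np.zeros((len(signal), order + 1))
    for t, u in enumerate(signal):
        prev = taps.copy()            # values at time t-1
        taps[0] = u
        for k in range(1, order + 1):
            taps[k] = (1.0 - mu) * prev[k] + mu * prev[k - 1]
        out[t] = taps
    return out

# Example: a unit impulse spread over time by a 4th-order gamma memory.
impulse = np.zeros(50); impulse[0] = 1.0
taps = gamma_memory(impulse, order=4, mu=0.3)
print(taps[:8].round(3))
```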
The role of simulation in the design of a neural network chip
NASA Technical Reports Server (NTRS)
Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.
1993-01-01
An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.
Optimized iterative decoding method for TPC coded CPM
NASA Astrophysics Data System (ADS)
Ma, Yanmin; Lai, Penghui; Wang, Shilian; Xie, Shunqin; Zhang, Wei
2018-05-01
Turbo Product Code (TPC) coded Continuous Phase Modulation (CPM) systems (TPC-CPM) have been widely used in aeronautical telemetry and satellite communication. This paper mainly investigates the improvement and optimization of the TPC-CPM system. We first add an interleaver and deinterleaver to the TPC-CPM system, and then establish an iterative receiver for decoding. However, the improved system has poor convergence. To overcome this issue, we use Extrinsic Information Transfer (EXIT) analysis to find the optimal factors for the system. The experiments show that our method effectively improves the convergence performance.
The ITER bolometer diagnostic: Status and plans
NASA Astrophysics Data System (ADS)
Meister, H.; Giannone, L.; Horton, L. D.; Raupp, G.; Zeidner, W.; Grunda, G.; Kalvin, S.; Fischer, U.; Serikov, A.; Stickel, S.; Reichle, R.
2008-10-01
A consortium consisting of four EURATOM Associations has been set up to develop the project plan for the full development of the ITER bolometer diagnostic and to continue urgent R&D activities. An overview of the current status is given, including detector development, line-of-sight optimization, performance analysis as well as the design of the diagnostic components and their integration in ITER. This is complemented by the presentation of plans for future activities required to successfully implement the bolometer diagnostic, ranging from detector development through diagnostic design and prototype testing to RH tools for calibration.
ERIC Educational Resources Information Center
Hilchey, Christian Thomas
2014-01-01
This dissertation examines prefixation of simplex pairs. A simplex pair consists of an iterative imperfective and a semelfactive perfective verb. When prefixed, both of these verbs are perfective. The prefixed forms derived from semelfactives are labeled single act verbs, while the prefixed forms derived from iterative imperfective simplex verbs…
Iterative Ellipsoidal Trimming.
1980-02-11
Iterative ellipsoidal trimming has been investigated before by other statisticians, most notably by Gnanadesikan and his coworkers.
NASA Technical Reports Server (NTRS)
Pindera, Marek-Jerzy; Salzar, Robert S.; Williams, Todd O.
1994-01-01
A user's guide for the computer program OPTCOMP is presented in this report. This program provides a capability to optimize the fabrication or service-induced residual stresses in uni-directional metal matrix composites subjected to combined thermo-mechanical axisymmetric loading using compensating or compliant layers at the fiber/matrix interface. The user specifies the architecture and the initial material parameters of the interfacial region, which can be either elastic or elastoplastic, and defines the design variables, together with the objective function, the associated constraints and the loading history through a user-friendly data input interface. The optimization procedure is based on an efficient solution methodology for the elastoplastic response of an arbitrarily layered multiple concentric cylinder model that is coupled to the commercial optimization package DOT. The solution methodology for the arbitrarily layered cylinder is based on the local-global stiffness matrix formulation and Mendelson's iterative technique of successive elastic solutions developed for elastoplastic boundary-value problems. The optimization algorithm employed in DOT is based on the method of feasible directions.
NASA Astrophysics Data System (ADS)
Valova-Zaharevskaya, E. G.; Popova, E. N.; Deryagina, I. L.; Abdyukhanov, I. M.; Tsapleva, A. S.
2018-03-01
The goal of the present study is to characterize the growth kinetics and structural parameters of the Nb3Sn layers formed under various regimes of the diffusion annealing of bronze-processed Nb/Cu-Sn composites. The structure of the superconducting layers is characterized by their thickness, the average size of equiaxed grains and the ratio of the fractions of columnar and equiaxed grains. It was found that higher diffusion annealing temperatures (above 650 °C) produce thicker superconducting layers, but even under short exposures (10 h) the average size of the equiaxed Nb3Sn grains is much larger than after long low-temperature annealing. During low-temperature (575 °C) annealing, the relative fraction of columnar grains increases with increasing annealing time. Based on the data obtained, optimal regimes of the diffusion annealing can be chosen which would, on the one hand, ensure complete transformation of Nb into Nb3Sn of a composition close to stoichiometric and, on the other hand, prevent the formation of coarse and columnar grains.
Combining Static Analysis and Model Checking for Software Analysis
NASA Technical Reports Server (NTRS)
Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)
2003-01-01
We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial order information which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial order reduction. At each step of this iterative process, the static analysis computes optimistic information which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point at which time the partial order information is safe and the whole state space is explored.
Parametric Thermal and Flow Analysis of ITER Diagnostic Shield Module
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khodak, A.; Zhai, Y.; Wang, W.
As part of the diagnostic port plug assembly, the ITER Diagnostic Shield Module (DSM) is designed to provide mechanical support and the plasma shielding while allowing access to plasma diagnostics. Thermal and hydraulic analysis of the DSM was performed using a conjugate heat transfer approach, in which heat transfer was resolved in both solid and liquid parts, and simultaneously, fluid dynamics analysis was performed only in the liquid part. ITER Diagnostic First Wall (DFW) and cooling tubing were also included in the analysis. This allowed direct modeling of the interface between DSM and DFW, and also direct assessment of the coolant flow distribution between the parts of DSM and DFW to ensure the DSM design meets the DFW cooling requirements. Design of the DSM included voids filled with Boron Carbide pellets, allowing weight reduction while keeping the shielding capability of the DSM. These voids were modeled as a continuous solid with smeared material properties using an analytical relation for thermal conductivity. Results of the analysis led to design modifications improving the heat transfer efficiency of the DSM. Furthermore, the effect of design modifications on thermal performance as well as the effect of Boron Carbide will be presented.
Development of parallel algorithms for electrical power management in space applications
NASA Technical Reports Server (NTRS)
Berry, Frederick C.
1989-01-01
The application of parallel techniques for electrical power system analysis is discussed. The Newton-Raphson method of load flow analysis was used along with the decomposition-coordination technique to perform load flow analysis. The decomposition-coordination technique enables tasks to be performed in parallel by partitioning the electrical power system into independent local problems. Each independent local problem represents a portion of the total electrical power system on which a load flow analysis can be performed. The load flow analysis is performed on these partitioned elements by using the Newton-Raphson load flow method. These independent local problems will produce results for voltage and power which can then be passed to the coordinator portion of the solution procedure. The coordinator problem uses the results of the local problems to determine if any correction is needed on the local problems. The coordinator problem is also solved by an iterative method much like the local problem. The iterative method for the coordination problem will also be the Newton-Raphson method. Therefore, each iteration at the coordination level will result in new values for the local problems. The local problems will have to be solved again along with the coordinator problem until some convergence conditions are met.
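A local load-flow step of this kind can be sketched with a minimal Newton-Raphson power flow on a two-bus system (one slack bus, one PQ bus). The network data are invented for illustration and a finite-difference Jacobian is used in place of the analytic one; the decomposition-coordination layer that exchanges boundary voltages and powers between local problems is not shown.

```python
import numpy as np

# Two-bus system: bus 0 is the slack (V = 1.0 pu, angle 0), bus 1 is a PQ bus.
z_line = 0.05 + 0.20j                    # line impedance in pu (made up)
y = 1.0 / z_line
Ybus = np.array([[y, -y], [-y, y]])
P_sched, Q_sched = -0.5, -0.3            # net injections at bus 1 (a load)

def mismatch(x):
    """Power mismatch at the PQ bus for state x = [angle_1, |V|_1]."""
    theta1, v1 = x
    V = np.array([1.0 + 0j, v1 * np.exp(1j * theta1)])
    S = V * np.conj(Ybus @ V)            # complex power injections
    return np.array([S[1].real - P_sched, S[1].imag - Q_sched])

x = np.array([0.0, 1.0])                 # flat start
for it in range(20):
    f = mismatch(x)
    if np.abs(f).max() < 1e-10:
        break
    # Finite-difference Jacobian of the mismatch equations.
    J = np.zeros((2, 2))
    for j in range(2):
        dx = np.zeros(2); dx[j] = 1e-7
        J[:, j] = (mismatch(x + dx) - f) / 1e-7
    x -= np.linalg.solve(J, f)           # Newton-Raphson update

print(f"bus 1 voltage = {x[1]:.4f} pu, angle = {np.degrees(x[0]):.3f} deg, iterations = {it}")
```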
Kassam, Aliya; Donnon, Tyrone; Rigby, Ian
2014-03-01
There is a question of whether a single assessment tool can assess the key competencies of residents as mandated by the Royal College of Physicians and Surgeons of Canada CanMEDS roles framework. The objective of the present study was to investigate the reliability and validity of an emergency medicine (EM) in-training evaluation report (ITER). ITER data from 2009 to 2011 were combined for residents across the 5 years of the EM residency training program. An exploratory factor analysis with varimax rotation was used to explore the construct validity of the ITER. A total of 172 ITERs were completed on residents across their first to fifth year of training. A combined, 24-item ITER yielded a five-factor solution measuring the CanMEDS Medical Expert/Scholar, Communicator/Collaborator, Professional, Health Advocate and Manager subscales. The factor solution accounted for 79% of the variance, and reliability coefficients (Cronbach alpha) ranged from α = 0.90 to 0.95 for each subscale and α = 0.97 overall. The combined, 24-item ITER used to assess residents' competencies in the EM residency program showed strong reliability and evidence of construct validity for assessment of the CanMEDS roles. Further research is needed to develop and test ITER items that will differentiate each CanMEDS role exclusively.
Gariani, Joanna; Martin, Steve P; Botsikas, Diomidis; Becker, Christoph D; Montet, Xavier
2018-06-14
To compare radiation dose and image quality of thoracoabdominal scans obtained with a high-pitch protocol (pitch 3.2) and iterative reconstruction (Sinogram Affirmed Iterative Reconstruction, SAFIRE) with those of a standard-pitch protocol reconstructed with filtered back projection (FBP) using dual source CT. 114 CT scans (Somatom Definition Flash, Siemens Healthineers, Erlangen, Germany) were performed: 39 thoracic scans, 54 thoracoabdominal scans and 21 abdominal scans. Three protocols were analysed: pitch of 1 reconstructed with FBP, pitch of 3.2 reconstructed with SAFIRE, and pitch of 3.2 with stellar detectors reconstructed with SAFIRE. Objective and subjective image analyses were performed, and the dose differences between the protocols were compared. Dose was reduced when comparing scans with a pitch of 1 reconstructed with FBP to high-pitch scans with a pitch of 3.2 reconstructed with SAFIRE, with a reduction of the volume CT dose index of 75% for thoracic scans, 64% for thoracoabdominal scans and 67% for abdominal scans. There was a further reduction after the implementation of stellar detectors, reflected in a 36% reduction of the dose-length product for thoracic scans. This was not to the detriment of image quality: contrast-to-noise ratio, signal-to-noise ratio and the qualitative image analysis revealed superior image quality in the high-pitch protocols. The combination of a high-pitch protocol with iterative reconstruction allows significant dose reduction in routine chest and abdominal scans whilst maintaining or improving diagnostic image quality, with a further reduction in thoracic scans with stellar detectors. Advances in knowledge: High-pitch imaging with iterative reconstruction is a tool that can be used to reduce dose without sacrificing image quality.
Three-dimensional marginal separation
NASA Technical Reports Server (NTRS)
Duck, Peter W.
1988-01-01
The three dimensional marginal separation of a boundary layer along a line of symmetry is considered. The key equation governing the displacement function is derived, and found to be a nonlinear integral equation in two space variables. This is solved iteratively using a pseudo-spectral approach, based partly in double Fourier space, and partly in physical space. Qualitatively, the results are similar to previously reported two dimensional results (which are also computed to test the accuracy of the numerical scheme); however, quantitatively the three dimensional results differ markedly.
Silicon Based Mid Infrared SiGeSn Heterostructure Emitters and Detectors
2016-05-16
have investigated the surface plasmon enhancement of the GeSn p-i-n photodiode using gold metal nanostructures. We have conducted numerical...simulation of the plasmonic structure of 2D nano-hole array to tune the surface plasmon resonance into the absorption range of the GeSn active layer. Such a...diode can indeed be enhanced with the plasmonic structure on top. Within the time span of this project, we have completed one iteration of the process
MUSIC imaging method for electromagnetic inspection of composite multi-layers
NASA Astrophysics Data System (ADS)
Rodeghiero, Giacomo; Ding, Ping-Ping; Zhong, Yu; Lambert, Marc; Lesselier, Dominique
2015-03-01
A first-order asymptotic formulation of the electric field scattered by a small inclusion (with respect to the wavelength in dielectric regime or to the skin depth in conductive regime) embedded in composite material is given. It is validated by comparison with results obtained using a Method of Moments (MoM). A non-iterative MUltiple SIgnal Classification (MUSIC) imaging method is utilized in the same configuration to locate the position of small defects. The effectiveness of the imaging algorithm is illustrated through some numerical examples.
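The non-iterative MUSIC step, projecting steering or Green's-function vectors onto the noise subspace of the measured data, can be illustrated with the classical narrowband array version below. The layered-medium dyadic Green's functions and the small-inclusion asymptotic model of the paper are replaced here by a uniform linear array with plane-wave steering vectors, an assumption made only to keep the example compact.

```python
import numpy as np

# Uniform linear array, half-wavelength spacing, two far-field sources.
n_elem, n_snap = 12, 400
true_angles = np.deg2rad([-20.0, 35.0])
rng = np.random.default_rng(3)

def steering(theta):
    """Plane-wave steering vector for a half-wavelength-spaced array."""
    n = np.arange(n_elem)
    return np.exp(1j * np.pi * n * np.sin(theta))

A = np.column_stack([steering(t) for t in true_angles])
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
noise = 0.1 * (rng.standard_normal((n_elem, n_snap))
               + 1j * rng.standard_normal((n_elem, n_snap)))
X = A @ S + noise                                # measured data matrix

# Noise subspace from the sample covariance (signal subspace = 2 sources).
R = X @ X.conj().T / n_snap
eigvals, eigvecs = np.linalg.eigh(R)
E_noise = eigvecs[:, : n_elem - 2]               # smallest eigenvalues first

# MUSIC pseudospectrum: peaks where the steering vector is (nearly)
# orthogonal to the noise subspace, i.e. at the source locations.
scan = np.deg2rad(np.linspace(-90, 90, 721))
spectrum = np.array([1.0 / np.linalg.norm(E_noise.conj().T @ steering(t)) ** 2
                     for t in scan])
is_peak = (spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] > spectrum[2:])
peak_idx = np.where(is_peak)[0] + 1
top2 = peak_idx[np.argsort(spectrum[peak_idx])[-2:]]
print("estimated directions (deg):", np.sort(np.rad2deg(scan[top2]).round(1)))
```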
Extending radiative transfer models by use of Bayes rule. [in atmospheric science
NASA Technical Reports Server (NTRS)
Whitney, C.
1977-01-01
This paper presents a procedure that extends some existing radiative transfer modeling techniques to problems in atmospheric science where curvature and layering of the medium and dynamic range and angular resolution of the signal are important. Example problems include twilight and limb scan simulations. Techniques that are extended include successive orders of scattering, matrix operator, doubling, Gauss-Seidel iteration, discrete ordinates and spherical harmonics. The procedure for extending them is based on Bayes' rule from probability theory.
Iterative categorization (IC): a systematic technique for analysing qualitative data.
Neale, Joanne
2016-06-01
The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
NASA Astrophysics Data System (ADS)
Suparmi, A.; Cari, C.; Lilis Elviyanti, Isnaini
2018-04-01
Analysis of the relativistic energy and wave function of zero-spin particles using the Klein-Gordon equation with a separable non-central cylindrical potential was carried out by the asymptotic iteration method (AIM). In cylindrical coordinates, the Klein-Gordon equation for the spin-symmetry case was reduced to three one-dimensional Schrodinger-like equations that were solvable using the variable separation method. The relativistic energy was calculated numerically with Matlab software, and the general unnormalized wave function was expressed in terms of hypergeometric functions.
NASA Technical Reports Server (NTRS)
Tilton, James C.
1988-01-01
Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
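The "globally best merges first" idea can be shown with a simple one-dimensional region-merging loop: at every step the pair of adjacent segments whose means are closest is merged, independent of processing order. This is a scalar toy version with made-up data; the MPP implementations in the report operate on full images in parallel.

```python
import numpy as np

def best_merge_segmentation(signal, stop_threshold):
    """Iteratively merge the pair of adjacent segments with the most
    similar means, always performing the globally best merge first."""
    # Each segment is (start_index, end_index_exclusive, mean, size).
    segments = [(i, i + 1, float(v), 1) for i, v in enumerate(signal)]
    while len(segments) > 1:
        costs = [abs(segments[i][2] - segments[i + 1][2])
                 for i in range(len(segments) - 1)]
        best = int(np.argmin(costs))
        if costs[best] > stop_threshold:      # no sufficiently similar pair left
            break
        a, b = segments[best], segments[best + 1]
        merged_size = a[3] + b[3]
        merged_mean = (a[2] * a[3] + b[2] * b[3]) / merged_size
        segments[best:best + 2] = [(a[0], b[1], merged_mean, merged_size)]
    return segments

# Noisy piecewise-constant signal with three true regions.
signal = np.concatenate([np.full(20, 1.0), np.full(15, 5.0), np.full(25, 2.0)])
signal += 0.05 * np.random.default_rng(4).standard_normal(signal.size)
for start, end, mean, size in best_merge_segmentation(signal, stop_threshold=0.5):
    print(f"segment [{start:2d},{end:2d})  mean {mean:4.2f}")
```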
NASA Technical Reports Server (NTRS)
Gossard, Myron L
1952-01-01
An iterative transformation procedure suggested by H. Wielandt for numerical solution of flutter and similar characteristic-value problems is presented. Application of this procedure to ordinary natural-vibration problems and to flutter problems is shown by numerical examples. Comparisons of computed results with experimental values and with results obtained by other methods of analysis are made.
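In the same spirit, successive characteristic values of a matrix can be extracted by an iterative scheme: power iteration yields the dominant eigenpair, and deflation removes it so the next pair can be found. The sketch below uses a symmetric matrix with simple Hotelling-type deflation as a generic stand-in; it is not Wielandt's specific transformation procedure as formulated in the report for flutter problems.

```python
import numpy as np

def power_iteration(A, iters=500, tol=1e-12):
    """Dominant eigenpair of a symmetric matrix by power iteration."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    lam = 0.0
    for _ in range(iters):
        w = A @ v
        v_new = w / np.linalg.norm(w)
        lam_new = v_new @ A @ v_new
        if abs(lam_new - lam) < tol:
            return lam_new, v_new
        lam, v = lam_new, v_new
    return lam, v

# Symmetric test matrix (values made up for illustration).
A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 1.0]])

lam1, v1 = power_iteration(A)
# Deflation: subtract the found mode so the next-largest one dominates.
A_deflated = A - lam1 * np.outer(v1, v1)
lam2, v2 = power_iteration(A_deflated)

print("largest eigenvalues:", round(lam1, 6), round(lam2, 6))
print("numpy check        :", np.sort(np.linalg.eigvalsh(A))[::-1][:2].round(6))
```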
VIMOS Instrument Control Software Design: an Object Oriented Approach
NASA Astrophysics Data System (ADS)
Brau-Nogué, Sylvie; Lucuix, Christian
2002-12-01
The Franco-Italian VIMOS instrument is a VIsible imaging Multi-Object Spectrograph with outstanding multiplex capabilities, allowing spectra of more than 800 objects to be taken simultaneously, or integral field spectroscopy in a 54x54 arcsec area. VIMOS is being installed at the Nasmyth focus of the third Unit Telescope of the European Southern Observatory Very Large Telescope (VLT) at Mount Paranal in Chile. This paper describes the analysis, design and implementation of the VIMOS Instrument Control System, using UML notation. Our Control group followed an Object Oriented software process while keeping in mind the ESO VLT standard control concepts. At ESO VLT a complete software library is available. Rather than applying a waterfall lifecycle, the ICS project used iterative development, a lifecycle consisting of several iterations. Each iteration consisted of requirements capture and evaluation, visual modeling for analysis and design, implementation, test, and deployment. Depending on the project phase, iterations focused more or less on specific activities. The result is an object model (the design model), including use-case realizations. An implementation view and a deployment view complement this product. An extract of the VIMOS ICS UML model will be presented and some implementation, integration and test issues will be discussed.
Noise models for low counting rate coherent diffraction imaging.
Godard, Pierre; Allain, Marc; Chamard, Virginie; Rodenburg, John
2012-11-05
Coherent diffraction imaging (CDI) is a lens-less microscopy method that extracts the complex-valued exit field from intensity measurements alone. It is of particular importance for microscopy imaging with diffraction set-ups where high quality lenses are not available. The inversion scheme allowing the phase retrieval is based on the use of an iterative algorithm. In this work, we address the question of the choice of the iterative process in the case of data corrupted by photon or electron shot noise. Several noise models are presented and further used within two inversion strategies, the ordered subset and the scaled gradient. Based on analytical and numerical analysis together with Monte-Carlo studies, we show that any physical interpretations drawn from a CDI iterative technique require a detailed understanding of the relationship between the noise model and the used inversion method. We observe that iterative algorithms often assume implicitly a noise model. For low counting rates, each noise model behaves differently. Moreover, the used optimization strategy introduces its own artefacts. Based on this analysis, we develop a hybrid strategy which works efficiently in the absence of an informed initial guess. Our work emphasises issues which should be considered carefully when inverting experimental data.
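The kind of iterative inversion under discussion can be sketched with the basic error-reduction loop (alternating Fourier-modulus and support projections) on noiseless synthetic data. This is a minimal illustration only; it implements neither the ordered-subset and scaled-gradient strategies nor the noise models analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic complex object confined to a known support region.
n = 64
support = np.zeros((n, n), dtype=bool)
support[24:40, 20:44] = True
obj = np.zeros((n, n), dtype=complex)
obj[support] = rng.random(support.sum()) * np.exp(1j * rng.random(support.sum()))

measured_modulus = np.abs(np.fft.fft2(obj))      # intensity-only "measurement"

# Error-reduction iteration: alternate Fourier-modulus and support projections.
estimate = support.astype(complex) * rng.random((n, n))
for _ in range(500):
    F = np.fft.fft2(estimate)
    F = measured_modulus * np.exp(1j * np.angle(F))   # impose measured modulus
    estimate = np.fft.ifft2(F)
    estimate[~support] = 0.0                          # impose known support

err = np.linalg.norm(np.abs(np.fft.fft2(estimate)) - measured_modulus) \
      / np.linalg.norm(measured_modulus)
print(f"relative Fourier-modulus error after 500 iterations: {err:.3e}")
```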
Iterative Stable Alignment and Clustering of 2D Transmission Electron Microscope Images
Yang, Zhengfan; Fang, Jia; Chittuluru, Johnathan; Asturias, Francisco J.; Penczek, Pawel A.
2012-01-01
Identification of homogeneous subsets of images in a macromolecular electron microscopy (EM) image data set is a critical step in single-particle analysis. The task is handled by iterative algorithms, whose performance is compromised by the compounded limitations of image alignment and K-means clustering. Here we describe an approach, iterative stable alignment and clustering (ISAC) that, relying on a new clustering method and on the concepts of stability and reproducibility, can extract validated, homogeneous subsets of images. ISAC requires only a small number of simple parameters and, with minimal human intervention, can eliminate bias from two-dimensional image clustering and maximize the quality of group averages that can be used for ab initio three-dimensional structural determination and analysis of macromolecular conformational variability. Repeated testing of the stability and reproducibility of a solution within ISAC eliminates heterogeneous or incorrect classes and introduces critical validation to the process of EM image clustering. PMID:22325773
Precise and fast spatial-frequency analysis using the iterative local Fourier transform.
Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook
2016-09-19
The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2^10 times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using grating in a diverging beam produced by a coherent point source.
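The zoom-in idea behind such a local transform, re-evaluating the Fourier transform on a progressively narrower frequency interval around the peak of interest, can be sketched with a direct local DFT as below. This is a schematic illustration of the principle only; it does not reproduce the specific ilFT algorithms or the resolution gain reported in the paper.

```python
import numpy as np

def local_dft(signal, fs, f_lo, f_hi, n_bins=64):
    """Evaluate the DFT of `signal` directly on a fine grid in [f_lo, f_hi]."""
    t = np.arange(signal.size) / fs
    freqs = np.linspace(f_lo, f_hi, n_bins)
    spectrum = np.exp(-2j * np.pi * freqs[:, None] * t[None, :]) @ signal
    return freqs, np.abs(spectrum)

fs, n = 1000.0, 1024
true_freq = 123.4567                       # Hz, deliberately off the fFT grid
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * true_freq * t)

# Start from the coarse fFT peak, then iteratively zoom the local window.
fft_freqs = np.fft.rfftfreq(n, d=1.0 / fs)
peak = fft_freqs[np.argmax(np.abs(np.fft.rfft(signal)))]
half_width = fs / n                        # one coarse bin on each side
for _ in range(6):
    freqs, mag = local_dft(signal, fs, peak - half_width, peak + half_width)
    peak = freqs[np.argmax(mag)]
    half_width /= 8.0                      # shrink the window each iteration

print(f"true {true_freq:.4f} Hz, estimated {peak:.4f} Hz")
```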
NASA Technical Reports Server (NTRS)
Maskew, B.
1983-01-01
A general low-order surface-singularity panel method is used to predict the aerodynamic characteristics of a problem where a wing-tip vortex from one wing closely interacts with an aft mounted wing in a low Reynolds number flow, i.e., 125,000. Nonlinear effects due to wake roll-up and the influence of the wings on the vortex path are included in the calculation by using a coupled iterative wake relaxation scheme. The interaction also affects the wing pressures and boundary layer characteristics: these effects are also considered using coupled integral boundary layer codes, and preliminary calculations using free vortex sheet separation modelling are included. Calculated results are compared with water tunnel experimental data with generally remarkably good agreement.
Generalized self-similar unsteady gas flows behind the strong shock wave front
NASA Astrophysics Data System (ADS)
Bogatko, V. I.; Potekhina, E. A.
2018-05-01
Two-dimensional (plane and axially symmetric) nonstationary gas flows behind the front of a strong shock wave are considered. All the gas parameters are functions of the ratio of the Cartesian coordinates to a power of time, t^n, where n is the self-similarity index. The problem is solved in Lagrangian variables. It is shown that the resulting system of partial differential equations is suitable for constructing an iterative process. The "thin shock layer" method is used to construct an approximate analytical solution of the problem. The limit solution of the problem is constructed. A formula for determining the path traversed by a gas particle in the shock layer along the front of a shock wave is obtained. A system of equations for determining the first approximation corrections is constructed.
NASA Technical Reports Server (NTRS)
Volakis, John L.
1990-01-01
There are two tasks described in this report. First, an extension of a two dimensional formulation is presented for a three dimensional body of revolution. With the introduction of a Fourier expansion of the vector electric and magnetic fields, a coupled two dimensional system is generated and solved via the finite element method. An exact boundary condition is employed to terminate the mesh, and the fast Fourier transform is used to evaluate the boundary integrals for low O(n) memory demand when an iterative solution algorithm is used. Second, the diffraction by a material discontinuity in a thick dielectric/ferrite layer is considered by modeling the layer as a distributed current sheet obeying generalized sheet transition conditions (GSTCs).
Interactive learning in 2×2 normal form games by neural network agents
NASA Astrophysics Data System (ADS)
Spiliopoulos, Leonidas
2012-11-01
This paper models the learning process of populations of randomly rematched tabula rasa neural network (NN) agents playing randomly generated 2×2 normal form games of all strategic classes. This approach has greater external validity than the existing models in the literature, each of which is usually applicable to narrow subsets of classes of games (often a single game) and/or to fixed matching protocols. The learning prowess of NNs with hidden layers was impressive as they learned to play unique pure strategy equilibria with near certainty, adhered to principles of dominance and iterated dominance, and exhibited a preference for risk-dominant equilibria. In contrast, perceptron NNs were found to perform significantly worse than hidden layer NN agents and human subjects in experimental studies.
A transonic interactive boundary-layer theory for laminar and turbulent flow over swept wings
NASA Technical Reports Server (NTRS)
Woodson, Shawn H.; Dejarnette, Fred R.
1988-01-01
A 3-D laminar and turbulent boundary-layer method is developed for compressible flow over swept wings. The governing equations and curvature terms are derived in detail for a nonorthogonal, curvilinear coordinate system. Reynolds shear-stress terms are modeled by the Cebeci-Smith eddy-viscosity formulation. The governing equations are discretized using the second-order accurate, predictor-corrector finite-difference technique of Matsuno, which has the advantage that the crossflow difference formulas are formed independent of the sign of the crossflow velocity component. The method is coupled with a full potential wing/body inviscid code (FLO-30) and the inviscid-viscous interaction is performed by updating the original wing surface with the viscous displacement surface calculated by the boundary-layer code. The number of these global iterations ranged from five to twelve depending on Mach number, sweep angle, and angle of attack. Several test cases are computed by this method and the results are compared with another inviscid-viscous interaction method (TAWFIVE) and with experimental data.
A Map for Clinical Laboratories Management Indicators in the Intelligent Dashboard.
Azadmanjir, Zahra; Torabi, Mashallah; Safdari, Reza; Bayat, Maryam; Golmahi, Fatemeh
2015-08-01
Management challenges are more complicated for educational hospital clinical laboratories. Managers can use business intelligence (BI) tools, such as information dashboards, that provide the possibility of intelligent decision-making and problem solving for increasing income, reducing spending, utilization management and even improving quality. A critical phase of dashboard design is setting indicators and modeling the causal relations between them. This paper describes the process of creating a map for a laboratory dashboard. The study is one part of an action research project that began in 2012 with an innovation initiative for implementing a laboratory intelligent dashboard. Laboratory management problems in educational hospitals were determined in brainstorming sessions. Then, with regard to these problems, key performance indicators (KPIs) were specified. The map of indicators was designed in the form of three layers with causal relationships between them, so that issues measured in the subsequent layers affect issues measured in the prime layers. The proposed indicator map can be the basis of performance monitoring. However, these indicators can be modified and improved during iterations of the dashboard design process.
Radiative transfer theory for active remote sensing of a forested canopy
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.
1989-01-01
A canopy is modeled as a two-layer medium above a rough interface. The upper layer stands for the forest crown, with the leaves modeled as randomly oriented and distributed disks and needles and the branches modeled as randomly oriented finite dielectric cylinders. The lower layer contains the tree trunks, modeled as randomly positioned vertical cylinders above the rough soil. Radiative-transfer theory is applied to calculate EM scattering from such a canopy, which is expressed in terms of the scattering-amplitude tensors (SATs). For the leaves, the generalized Rayleigh-Gans approximation is applied, whereas the branch and trunk SATs are obtained by estimating the inner field by the field inside a similar cylinder of infinite length. The Kirchhoff method is used to calculate the soil SAT. For a plane wave exciting the canopy, the radiative-transfer equations are solved by iteration to the first order in the albedo of the leaves and the branches. Numerical results are illustrated as a function of the incidence angle.
NASA Astrophysics Data System (ADS)
Yudovsky, Dmitry; Nouvong, Aksone; Schomacker, Kevin; Pilon, Laurent
2010-02-01
Foot ulceration is a debilitating comorbidity of diabetes that may result in loss of mobility and amputation. Optical detection of cutaneous tissue changes due to inflammation and necrosis at the preulcer site could constitute a preventative strategy. A commercial hyperspectral oximetry system was used to measure tissue oxygenation on the feet of diabetic patients. A previously developed predictive index was used to differentiate preulcer tissue from surrounding healthy tissue with a sensitivity of 92% and specificity of 80%. To improve prediction accuracy, an optical skin model was developed treating skin as a two-layer medium and explicitly accounting for (i) melanin content and thickness of the epidermis, (ii) blood content and hemoglobin saturation of the dermis, and (iii) tissue scattering in both layers. Using this forward model, an iterative inverse method was used to determine the skin properties from hyperspectral images of preulcerative areas. The use of this information in lowering the false positive rate was discussed.
Perturbational and nonperturbational inversion of Rayleigh-wave velocities
Haney, Matt; Tsai, Victor C.
2017-01-01
The inversion of Rayleigh-wave dispersion curves is a classic geophysical inverse problem. We have developed a set of MATLAB codes that performs forward modeling and inversion of Rayleigh-wave phase or group velocity measurements. We describe two different methods of inversion: a perturbational method based on finite elements and a nonperturbational method based on the recently developed Dix-type relation for Rayleigh waves. In practice, the nonperturbational method can be used to provide a good starting model that can be iteratively improved with the perturbational method. Although the perturbational method is well-known, we solve the forward problem using an eigenvalue/eigenvector solver instead of the conventional approach of root finding. Features of the codes include the ability to handle any mix of phase or group velocity measurements, combinations of modes of any order, the presence of a surface water layer, computation of partial derivatives due to changes in material properties and layer boundaries, and the implementation of an automatic grid of layers that is optimally suited for the depth sensitivity of Rayleigh waves.
Multi-layered mode structure of locked-tearing-modes after unlocking
NASA Astrophysics Data System (ADS)
Okabayashi, Michio; Logan, N.; Tobias, B.; Wang, Z.; Budny, B.; Nazikian, R.; Strait, E.; La Haye, R.; Paz-Soldan, C. J.; Ferraro, N.; Shiraki, D.; Hanson, J.; Zanca, P.; Paccagnella, R.
2015-11-01
Prevention of m/n=2/1 tearing modes (TM) by electro-magnetic torque injection has been successful in DIII-D and RFX-mod, where plasma conditions and plasma shape are completely different. Understanding the internal structure in the post-unlocked phase is a pre-requisite to its application to reactor-relevant plasmas such as in ITER. Ti and toroidal rotation perturbations show that there exist several radially different TM layers. However, the phase shift between the applied field and the plasma response is rather small from the plasma edge to the q ~3 domain, indicating that a kink-like response prevails. The biggest threat to sustaining an unlocked 2/1 mode is sudden distortion of the rotational profile due to the internal mode reconnection. Possible TM layer structure will be discussed with numerical MHD codes and TRANSP. This work is supported in part by the US Department of Energy under DE-AC02-09CH11466, DE-FG02-99ER54531, DE-SC0003913, and DE-FC02-04ER54698.
Iterative Correction of Reference Nucleotides (iCORN) using second generation sequencing technology.
Otto, Thomas D; Sanders, Mandy; Berriman, Matthew; Newbold, Chris
2010-07-15
The accuracy of reference genomes is important for downstream analysis but a low error rate requires expensive manual interrogation of the sequence. Here, we describe a novel algorithm (Iterative Correction of Reference Nucleotides) that iteratively aligns deep coverage of short sequencing reads to correct errors in reference genome sequences and evaluate their accuracy. Using Plasmodium falciparum (81% A + T content) as an extreme example, we show that the algorithm is highly accurate and corrects over 2000 errors in the reference sequence. We give examples of its application to numerous other eukaryotic and prokaryotic genomes and suggest additional applications. The software is available at http://icorn.sourceforge.net
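The iterate-align-correct loop at the core of such a corrector can be illustrated with a toy pileup-based version: reads with known alignment positions vote on each reference base, disagreements supported by a majority are corrected, and the procedure repeats until the reference stops changing. The alignment step is assumed given here, and the reads and draft sequence are invented; real iCORN works on genome-scale alignments produced by a read mapper.

```python
from collections import Counter

def correct_reference(reference, aligned_reads, min_coverage=3):
    """One correction pass: majority vote of the reads covering each base."""
    ref = list(reference)
    for i in range(len(ref)):
        votes = Counter()
        for start, read in aligned_reads:
            if start <= i < start + len(read):
                votes[read[i - start]] += 1
        if sum(votes.values()) >= min_coverage:
            base, count = votes.most_common(1)[0]
            if base != ref[i] and count > sum(votes.values()) / 2:
                ref[i] = base
        # Positions with low coverage are left untouched.
    return "".join(ref)

def icorn_like(reference, aligned_reads, max_iterations=10):
    """Iterate correction passes until the reference stops changing."""
    for _ in range(max_iterations):
        corrected = correct_reference(reference, aligned_reads)
        if corrected == reference:
            break
        reference = corrected
    return reference

# Toy example: the true sequence is ACGTACGTAC; the draft carries two errors.
reads = [(0, "ACGTA"), (1, "CGTAC"), (2, "GTACG"),
         (4, "ACGTA"), (5, "CGTAC"), (6, "GTAC")]
draft = "ACGAACGTTC"
print(icorn_like(draft, reads))   # -> ACGTACGTAC
```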
NASA Astrophysics Data System (ADS)
Philipps, V.; Malaquias, A.; Hakola, A.; Karhunen, J.; Maddaluno, G.; Almaviva, S.; Caneve, L.; Colao, F.; Fortuna, E.; Gasior, P.; Kubkowska, M.; Czarnecka, A.; Laan, M.; Lissovski, A.; Paris, P.; van der Meiden, H. J.; Petersson, P.; Rubel, M.; Huber, A.; Zlobinski, M.; Schweer, B.; Gierse, N.; Xiao, Q.; Sergienko, G.
2013-09-01
Analysis and understanding of wall erosion, material transport and fuel retention are among the most important tasks for ITER and future devices, since these questions determine largely the lifetime and availability of the fusion reactor. These data are also of extreme value to improve the understanding and validate the models of the in vessel build-up of the T inventory in ITER and future D-T devices. So far, research in these areas is largely supported by post-mortem analysis of wall tiles. However, access to samples will be very much restricted in the next-generation devices (such as ITER, JT-60SA, W7-X, etc) with actively cooled plasma-facing components (PFC) and increasing duty cycle. This has motivated the development of methods to measure the deposition of material and retention of plasma fuel on the walls of fusion devices in situ, without removal of PFC samples. For this purpose, laser-based methods are the most promising candidates. Their feasibility has been assessed in a cooperative undertaking in various European associations under EFDA coordination. Different laser techniques have been explored both under laboratory and tokamak conditions with the emphasis to develop a conceptual design for a laser-based wall diagnostic which is integrated into an ITER port plug, aiming to characterize in situ relevant parts of the inner wall, the upper region of the inner divertor, part of the dome and the upper X-point region.
Wide-angle ITER-prototype tangential infrared and visible viewing system for DIII-D.
Lasnier, C J; Allen, S L; Ellis, R E; Fenstermacher, M E; McLean, A G; Meyer, W H; Morris, K; Seppala, L G; Crabtree, K; Van Zeeland, M A
2014-11-01
An imaging system with a wide-angle tangential view of the full poloidal cross-section of the tokamak in simultaneous infrared and visible light has been installed on DIII-D. The optical train includes three polished stainless steel mirrors in vacuum, which view the tokamak through an aperture in the first mirror, similar to the design concept proposed for ITER. A dichroic beam splitter outside the vacuum separates visible and infrared (IR) light. Spatial calibration is accomplished by warping a CAD-rendered image to align with landmarks in a data image. The IR camera provides scrape-off layer heat flux profile deposition features in diverted and inner-wall-limited plasmas, such as heat flux reduction in pumped radiative divertor shots. Demonstration of the system to date includes observation of fast-ion losses to the outer wall during neutral beam injection, and shows reduced peak wall heat loading with disruption mitigation by injection of a massive gas puff.
NASA Astrophysics Data System (ADS)
Guillemaut, C.; Metzger, C.; Moulton, D.; Heinola, K.; O’Mullane, M.; Balboa, I.; Boom, J.; Matthews, G. F.; Silburn, S.; Solano, E. R.; contributors, JET
2018-06-01
The design and operation of future fusion devices relying on H-mode plasmas requires reliable modelling of edge-localized modes (ELMs) for precise prediction of divertor target conditions. An extensive experimental validation of simple analytical predictions of the time evolution of target plasma loads during ELMs has been carried out here in more than 70 JET-ITER-like wall H-mode experiments with a wide range of conditions. Comparisons of these analytical predictions with diagnostic measurements of target ion flux density, power density, impact energy and electron temperature during ELMs are presented in this paper and show excellent agreement. The analytical predictions tested here are made with the ‘free-streaming’ kinetic model (FSM) which describes ELMs as a quasi-neutral plasma bunch expanding along the magnetic field lines into the Scrape-Off Layer without collisions. Consequences of the FSM on energy reflection and deposition on divertor targets during ELMs are also discussed.
Experimental investigations of castellated monoblock structures in TEXTOR
NASA Astrophysics Data System (ADS)
Litnovsky, A.; Philipps, V.; Wienhold, P.; Sergienko, G.; Emmoth, B.; Rubel, M.; Breuer, U.; Wessel, E.
2005-03-01
To ensure the thermo-mechanical durability of ITER it is planned to manufacture the castellated armour of the divertor, i.e. to split the armour into cells [W. Daener et al., Fusion Eng. Des. 61&62 (2002) 61]. This will increase the surface area and may lead to carbon deposition and tritium accumulation in the gaps between cells. To investigate the processes of deposition and fuel accumulation in gaps, a castellated test-limiter was exposed to the SOL plasma of TEXTOR. The geometry of the castellation used was the same as proposed for the vertical divertor target in ITER [W. Daener et al., Fusion Eng. Des. 61&62 (2002) 61]. After exposure the limiter was investigated with various surface diagnostic techniques. Deposited layers containing carbon, hydrogen, deuterium and boron were found both on the top plasma-facing surfaces and in the gaps. The amount of deuterium in the gaps was at least 30% of that found on the top surfaces.
Effects of ELMs and disruptions on ITER divertor armour materials
NASA Astrophysics Data System (ADS)
Federici, G.; Zhitlukhin, A.; Arkhipov, N.; Giniyatulin, R.; Klimov, N.; Landman, I.; Podkovyrov, V.; Safronov, V.; Loarte, A.; Merola, M.
2005-03-01
This paper describes the response of plasma facing components manufactured with tungsten (macro-brush) and CFC to energy loads characteristic of Type I ELMs and disruptions in ITER, in experiments conducted (under an EU/RF collaboration) in two plasma guns (QSPA and MK-200UG) at the TRINITI institute. Targets were exposed to a series of repetitive pulses in QSPA with heat loads in the range of 1-2 MJ/m², lasting 0.5 ms. Moderate tungsten erosion, of less than 0.2 μm per pulse, was found for loads of ~1.5 MJ/m², consistent with ELM erosion being determined by tungsten evaporation and not by melt layer displacement. At energy densities of ~1.8 MJ/m², a sharp growth of tungsten erosion was measured together with intense droplet ejection. MK-200UG experiments focused mainly on studying vapor plasma production and impurity transport during ELMs. The conditions for removal of thin metal deposits from a carbon substrate were characterized.
Universal single level implicit algorithm for gasdynamics
NASA Technical Reports Server (NTRS)
Lombard, C. K.; Venkatapathy, E.
1984-01-01
A single level effectively explicit implicit algorithm for gasdynamics is presented. The method meets all the requirements for unconditionally stable global iteration over flows with mixed subsonic and supersonic zones, including blunt body flow and boundary layer flows with strong interaction and streamwise separation. For hyperbolic (supersonic flow) regions the method is automatically equivalent to contemporary space marching methods. For elliptic (subsonic flow) regions, rapid convergence is facilitated by alternating direction solution sweeps which bring both sets of eigenvectors and the influence of both boundaries of a coordinate line equally into play. Point by point updating of the data with local iteration on the solution procedure at each spatial step as the sweeps progress not only renders the method single level in storage but also improves nonlinear accuracy to accelerate convergence by an order of magnitude over related two level linearized implicit methods. The method derives robust stability from the combination of an eigenvector split upwind difference method (CSCM) with diagonally dominant ADI (DDADI) approximate factorization and computed characteristic boundary approximations.
Wide-angle ITER-prototype tangential infrared and visible viewing system for DIII-D
Lasnier, Charles J.; Allen, Steve L.; Ellis, Ronald E.; ...
2014-08-26
An imaging system with a wide-angle tangential view of the full poloidal cross-section of the tokamak in simultaneous infrared and visible light has been installed on DIII-D. The optical train includes three polished stainless steel mirrors in vacuum, which view the tokamak through an aperture in the first mirror, similar to the design concept proposed for ITER. A dichroic beam splitter outside the vacuum separates visible and infrared (IR) light. Spatial calibration is accomplished by warping a CAD-rendered image to align with landmarks in a data image. The IR camera provides scrape-off layer heat flux profile deposition features in diverted and inner-wall-limited plasmas, such as heat flux reduction in pumped radiative divertor shots. Demonstration of the system to date includes observation of fast-ion losses to the outer wall during neutral beam injection, and shows reduced peak wall heat loading with disruption mitigation by injection of a massive gas puff.
NASA Astrophysics Data System (ADS)
Di Noia, Antonio; Hasekamp, Otto P.; Wu, Lianghai; van Diedenhoven, Bastiaan; Cairns, Brian; Yorks, John E.
2017-11-01
In this paper, an algorithm for the retrieval of aerosol and land surface properties from airborne spectropolarimetric measurements - combining neural networks and an iterative scheme based on Phillips-Tikhonov regularization - is described. The algorithm - which is an extension of a scheme previously designed for ground-based retrievals - is applied to measurements from the Research Scanning Polarimeter (RSP) on board the NASA ER-2 aircraft. A neural network, trained on a large data set of synthetic measurements, is applied to perform aerosol retrievals from real RSP data, and the neural network retrievals are subsequently used as a first guess for the Phillips-Tikhonov retrieval. The resulting algorithm appears capable of accurately retrieving aerosol optical thickness, fine-mode effective radius and aerosol layer height from RSP data. Among the advantages of using a neural network as initial guess for an iterative algorithm are a decrease in processing time and an increase in the number of converging retrievals.
NASA Astrophysics Data System (ADS)
Langlois, A.; Royer, A.; Derksen, C.; Montpetit, B.; Dupont, F.; GoïTa, K.
2012-12-01
Satellite-passive microwave remote sensing has been extensively used to estimate snow water equivalent (SWE) in northern regions. Although passive microwave sensors operate independent of solar illumination and the lower frequencies are independent of atmospheric conditions, the coarse spatial resolution introduces uncertainties to SWE retrievals due to the surface heterogeneity within individual pixels. In this article, we investigate the coupling of a thermodynamic multilayered snow model with a passive microwave emission model. Results show that the snow model itself provides poor SWE simulations when compared to field measurements from two major field campaigns. Coupling the snow and microwave emission models with successive iterations to correct the influence of snow grain size and density significantly improves SWE simulations. This method was further validated using an additional independent data set, which also showed significant improvement using the two-step iteration method compared to standalone simulations with the snow model.
NASA Astrophysics Data System (ADS)
Wang, Xianghong; Liu, Xinyu; Wang, Nanshuo; Yu, Xiaojun; Bo, En; Chen, Si; Liu, Linbo
2017-02-01
Optical coherence tomography (OCT) provides high-resolution, cross-sectional images of biological tissue and is widely used for the diagnosis of ocular diseases. However, OCT images suffer from speckle noise, which is typically multiplicative in nature and reduces image resolution and contrast. In this study, we propose a two-step iteration (TSI) method to suppress this noise. We first utilize an augmented Lagrange method to recover a low-rank OCT image and remove additive Gaussian noise, and then employ the simple and efficient split Bregman method to solve the Total-Variation Denoising model. We validated the proposed method using images of swine, rabbit and human retina. Results demonstrate that our TSI method outperforms other popular methods in achieving higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) while preserving important structural details, such as tiny capillaries and thin layers in retinal OCT images. In addition, the results of our TSI method show clearer boundaries and maintain high image contrast, which facilitates better image interpretation and analysis.
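As a rough illustration of the two-step idea (low-rank recovery followed by total-variation denoising), here is a Python sketch on a synthetic speckled image. It is not the authors' TSI algorithm: plain singular-value truncation stands in for the augmented-Lagrange low-rank step, and scikit-image's Chambolle TV denoiser stands in for split Bregman; the noise model, rank fraction and TV weight are arbitrary choices.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle   # assumed available (scikit-image)

def two_step_denoise(img, rank_keep=0.1, tv_weight=0.08):
    """Step 1: keep only the largest singular values as a crude low-rank
    recovery.  Step 2: total-variation denoising of the low-rank estimate."""
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    keep = max(1, int(rank_keep * len(s)))
    low_rank = (u[:, :keep] * s[:keep]) @ vt[:keep]
    return denoise_tv_chambolle(low_rank, weight=tv_weight)

rng = np.random.default_rng(2)
clean = np.zeros((128, 128))
clean[40:60, :] = 1.0                                        # a bright "retinal layer"
noisy = clean * np.exp(0.4 * rng.standard_normal(clean.shape)) \
        + 0.05 * rng.standard_normal(clean.shape)            # speckle plus additive noise
den = two_step_denoise(noisy)

for name, im in (("noisy", noisy), ("denoised", den)):
    mse = np.mean((im - clean) ** 2)
    print(f"{name:9s} PSNR = {10 * np.log10(1.0 / mse):.2f} dB")
```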
NASA Astrophysics Data System (ADS)
Linsmeier, Christian
2004-12-01
The deposition of carbon on metals is the unavoidable consequence of the application of different wall materials in present and future fusion experiments like ITER. Presently used and prospected materials besides carbon (CFC materials in high heat load areas) are tungsten and beryllium. The simultaneous application of different materials leads to the formation of surface compounds due to the erosion, transport and re-deposition of material during plasma operations. The formation and erosion processes are governed by widely varying surface temperatures and kinetic energies as well as the spectrum of impinging particles from the plasma. The knowledge of the dependence on these parameters is crucial for the understanding and prediction of the compound formation on wall materials. The formation of surface layers is of great importance, since they not only determine erosion rates, but also influence the ability of the first wall for hydrogen isotope inventory accumulation and release. Surface compound formation, diffusion and erosion phenomena are studied under well-controlled ultra-high vacuum conditions using in-situ X-ray photoelectron spectroscopy (XPS) and ion beam analysis techniques available at a 3 MV tandem accelerator. XPS provides chemical information and allows distinguishing elemental and carbidic phases with high surface sensitivity. Accelerator-based spectroscopies provide quantitative compositional analysis and sensitivity for deuterium in the surface layers. Using these techniques, the formation of carbidic layers on metals is studied from room temperature up to 1700 K. The formation of an interfacial carbide of several monolayers thickness is not only observed for metals with exothermic carbide formation enthalpies, but also in the cases of Ni and Fe which form endothermic carbides. Additional carbon deposited at 300 K remains elemental. Depending on the substrate, carbon diffusion into the bulk starts at elevated temperatures together with additional carbide formation. Depending on the bond nature in the carbide (metallic in the transition metal carbides, ionic e.g. in Be2C), the surface carbide layer is dissolved upon further increased temperatures or remains stable. Carbide formation can also be initiated by ion bombardment, both of chemically inert noble gas ions or C+ or CO+ ions. In the latter case, a deposition-erosion equilibrium develops which leads to a ternary surface layer of constant thickness. A chemical erosion channel is also discussed for the enhanced erosion of thin carbon films on metals by deuterium ions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
Recent Updates to the MELCOR 1.8.2 Code for ITER Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merrill, Brad J
This report documents recent changes made to the MELCOR 1.8.2 computer code for application to the International Thermonuclear Experimental Reactor (ITER), as required by ITER Task Agreement ITA 81-18. There are four areas of change documented by this report. The first area is the addition to this code of a model for transporting HTO. The second area is the updating of the material oxidation correlations to match those specified in the ITER Safety Analysis Data List (SADL). The third area replaces a modification to an aerosol transport subroutine that specified the nominal aerosol density internally with one that now allows the user to specify this density through user input. The fourth area corrected an error that existed in an air condensation subroutine of previous versions of this modified MELCOR code. The appendices of this report contain FORTRAN listings of the coding for these modifications.
Uniform convergence of multigrid V-cycle iterations for indefinite and nonsymmetric problems
NASA Technical Reports Server (NTRS)
Bramble, James H.; Kwak, Do Y.; Pasciak, Joseph E.
1993-01-01
In this paper, we present an analysis of a multigrid method for nonsymmetric and/or indefinite elliptic problems. In this multigrid method various types of smoothers may be used. One type of smoother which we consider is defined in terms of an associated symmetric problem and includes point and line, Jacobi, and Gauss-Seidel iterations. We also study smoothers based entirely on the original operator. One is based on the normal form, that is, the product of the operator and its transpose. Other smoothers studied include point and line, Jacobi, and Gauss-Seidel. We show that the uniform estimates for symmetric positive definite problems carry over to these algorithms. More precisely, the multigrid iteration for the nonsymmetric and/or indefinite problem is shown to converge at a uniform rate provided that the coarsest grid in the multilevel iteration is sufficiently fine (but not depending on the number of multigrid levels).
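For readers unfamiliar with the V-cycle structure analyzed above, the sketch below implements a geometric multigrid V-cycle with Gauss-Seidel smoothing for the one-dimensional symmetric model problem -u'' = f. It only illustrates the smoothing/restriction/prolongation recursion; the nonsymmetric and indefinite cases treated in the paper, and the normal-form smoothers, are not reproduced.

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps):
    # Lexicographic Gauss-Seidel smoothing for -u'' = f (zero Dirichlet BCs).
    for _ in range(sweeps):
        for i in range(len(u)):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < len(u) - 1 else 0.0
            u[i] = 0.5 * (left + right + h * h * f[i])
    return u

def residual(u, f, h):
    r = np.empty_like(u)
    for i in range(len(u)):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < len(u) - 1 else 0.0
        r[i] = f[i] - (2.0 * u[i] - left - right) / (h * h)
    return r

def v_cycle(u, f, h, pre=2, post=2):
    if len(u) <= 3:                              # coarsest level: smooth to convergence
        return gauss_seidel(u, f, h, 50)
    u = gauss_seidel(u, f, h, pre)               # pre-smoothing
    r = residual(u, f, h)
    rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])      # full-weighting restriction
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)             # coarse-grid correction
    e = np.zeros_like(u)                         # linear interpolation back to fine grid
    e[1::2] = ec
    e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    return gauss_seidel(u, f, h, post)           # post-smoothing

n = 2 ** 7 - 1
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi ** 2 * np.sin(np.pi * x)               # exact solution u = sin(pi x)
u = np.zeros(n)
for cycle in range(8):
    u = v_cycle(u, f, h)
    print(f"cycle {cycle}: residual = {np.abs(residual(u, f, h)).max():.2e}")
```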
Spotting the difference in molecular dynamics simulations of biomolecules
NASA Astrophysics Data System (ADS)
Sakuraba, Shun; Kono, Hidetoshi
2016-08-01
Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.
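As a pointer to the underlying machinery, the sketch below computes a single Fisher LDA projection separating two ensembles of simulation frames on synthetic data; the actual LDA-ITER procedure iterates this step with an overlap criterion, which is not reproduced here, and the feature vectors, sample sizes and ridge term are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for two trajectories: rows are frames, columns are
# structural features (e.g. inter-residue distances).
traj_a = rng.normal(0.0, 1.0, size=(500, 6))
traj_b = rng.normal(0.0, 1.0, size=(500, 6)) + np.array([0.8, 0.0, 0.0, -0.5, 0.0, 0.0])

def lda_direction(xa, xb, ridge=1e-6):
    """Fisher discriminant direction separating two ensembles of frames."""
    mu_a, mu_b = xa.mean(axis=0), xb.mean(axis=0)
    sw = np.cov(xa, rowvar=False) + np.cov(xb, rowvar=False)   # within-class scatter
    w = np.linalg.solve(sw + ridge * np.eye(sw.shape[0]), mu_a - mu_b)
    return w / np.linalg.norm(w)

w = lda_direction(traj_a, traj_b)
proj_a, proj_b = traj_a @ w, traj_b @ w
separation = abs(proj_a.mean() - proj_b.mean()) / np.sqrt(0.5 * (proj_a.var() + proj_b.var()))
print("projection vector:", np.round(w, 3))
print("separation along the projection (pooled std units):", round(separation, 2))
```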
Iterative Monte Carlo analysis of spin-dependent parton distributions
Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; ...
2016-04-05
We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. The study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.
Drawing dynamical and parameters planes of iterative families and methods.
Chicharro, Francisco I; Cordero, Alicia; Torregrosa, Juan R
2013-01-01
The complex dynamical analysis of the parametric fourth-order Kim's iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide us with excellent schemes (or dreadful ones).
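The published codes are MATLAB; the sketch below draws an analogous dynamical plane in Python for plain Newton iteration on p(z) = z² - 1 (a stand-in for the fourth-order Kim family, whose iteration function is not reproduced here), coloring each starting point by the root it converges to.

```python
import numpy as np
import matplotlib.pyplot as plt

nx, ny, max_iter, tol = 600, 600, 40, 1e-6
x = np.linspace(-2.0, 2.0, nx)
y = np.linspace(-2.0, 2.0, ny)
z = x[None, :] + 1j * y[:, None]          # grid of initial guesses
roots = np.array([1.0, -1.0])
basin = np.zeros(z.shape, dtype=int)      # 0 = no convergence

with np.errstate(divide="ignore", invalid="ignore", over="ignore"):
    for _ in range(max_iter):
        z = z - (z ** 2 - 1.0) / (2.0 * z)        # Newton step for z^2 - 1
for k, r in enumerate(roots, start=1):
    basin[np.abs(z - r) < tol] = k                # color by attracting root

plt.imshow(basin, extent=[-2, 2, -2, 2], origin="lower")
plt.xlabel("Re z")
plt.ylabel("Im z")
plt.title("Dynamical plane of Newton's method on z^2 - 1")
plt.show()
```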
Gas Flows in Rocket Motors. Volume 2. Appendix C. Time Iterative Solution of Viscous Supersonic Flow
1989-08-01
Keywords: nozzle analysis, Navier-Stokes, turbulent flow, equilibrium chemistry. Quasi-conservative formulations were found to lead to unacceptably large mass conservation errors; the report also covers investigations of Navier-Stokes algorithms, characteristics splitting, and a non-iterative PNS procedure.
Blind One-Bit Compressive Sampling
2013-01-17
The approach obtains a sequence of optimization problems by successively approximating the ℓ0 norm, with provable convergence guarantees for nonconvex optimization on the unit sphere; a convergence analysis of the algorithm is presented, and binary iterative hard thresholding (BIHT) algorithms are discussed.
Wang, An; Cao, Yang; Shi, Quan
2018-01-01
In this paper, we demonstrate a complete version of the convergence theory of the modulus-based matrix splitting iteration methods for solving a class of implicit complementarity problems proposed by Hong and Li (Numer. Linear Algebra Appl. 23:629-641, 2016). New convergence conditions are presented when the system matrix is a positive-definite matrix and an [Formula: see text]-matrix, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Federici, G.; Matera, R.; Chiocchio, S.
1994-11-01
One difficulty associated with the design and development of sacrificial plasma facing components that have to handle the high heat and particle fluxes in ITER is achieving the necessary contact conductance between the plasma protection material and the high-conductivity substrate in contact with the coolant. This paper presents a novel bond idea which is proposed as one of the options for the sacrificial energy dump targets located at the bottom of the divertor legs. The bonded joint in this design concept provides thermal and electrical contact between the armour and the cooled sub-structure while promoting remote, in-situ maintenance repair and an easy replaceability of the armour part without disturbing the cooling pipes or rewelding neutron irradiated materials. To provide reliable and demountable adhesion, the bond consists of a metal alloy, treated in the semi-solid phase so that it leads to a fine dispersion of a globular solid phase into a liquid matrix (rheocast process). This thermal bond layer would normally operate in the solid state but could be brought reversibly to the semi-solid state during the armour replacement simply by heating it slightly above its solidus temperature. Material and design options are discussed in this paper. Possible methods of installation and removal are described, and lifetime considerations are addressed. In order to validate this concept within the ITER time-frame, an R&D programme must be rapidly implemented.
van der Werf, N R; Willemink, M J; Willems, T P; Greuter, M J W; Leiner, T
2017-12-28
The objective of this study was to evaluate the influence of iterative reconstruction on coronary calcium scores (CCS) at different heart rates for four state-of-the-art CT systems. Within an anthropomorphic chest phantom, artificial coronary arteries were translated in a water-filled compartment. The arteries contained three different calcifications with low (38 mg), medium (80 mg) and high (157 mg) mass. Linear velocities were applied, corresponding to heart rates of 0, < 60, 60-75 and > 75 bpm. Data were acquired on four state-of-the-art CT systems (CT1-CT4) with routinely used CCS protocols. Filtered back projection (FBP) and three increasing levels of iterative reconstruction (L1-L3) were used for reconstruction. CCS were quantified as Agatston score and mass score. An iterative reconstruction susceptibility (IRS) index was used to assess the susceptibility of the Agatston score (IRS_AS) and mass score (IRS_MS) to iterative reconstruction. IRS values were compared between CT systems and between calcification masses. For each heart rate, differences in CCS of iteratively reconstructed images were evaluated with CCS of FBP images as reference, and indicated as small (< 5%), medium (5-10%) or large (> 10%). Statistical analysis was performed with repeated measures ANOVA tests. While subtle differences were found for Agatston scores of the low mass calcification, medium and high mass calcifications showed increased CCS up to 77% with increasing heart rates. IRS_AS of CT1-CT4 were 17, 41, 130 and 22% higher than IRS_MS. Not only were IRS significantly different between all CT systems, but also between calcification masses. Up to a fourfold increase in IRS was found for the low mass calcification in comparison with the high mass calcification. With increasing iterative reconstruction strength, maximum decreases of 21 and 13% for Agatston and mass score were found. In total, 21 large differences between Agatston scores from FBP and iterative reconstruction were found, while only five large differences were found between FBP and iterative reconstruction mass scores. Iterative reconstruction results in reduced CCS. The effect of iterative reconstruction on CCS is more prominent with low-density calcifications, high heart rates and increasing iterative reconstruction strength.
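For reference, the sketch below computes a conventional Agatston score from a stack of CT slices in Python, using the usual 130 HU threshold and density weighting; it is a textbook implementation, not the vendor-specific scoring software used on CT1-CT4, and the minimum lesion area and the toy input are assumptions.

```python
import numpy as np
from scipy import ndimage

def agatston_score(slices_hu, pixel_area_mm2, threshold_hu=130.0, min_area_mm2=1.0):
    """Conventional Agatston scoring on axial CT slices given in Hounsfield
    units, assuming standard non-overlapping ~3 mm slices."""
    def density_weight(max_hu):
        if max_hu >= 400: return 4
        if max_hu >= 300: return 3
        if max_hu >= 200: return 2
        return 1

    total = 0.0
    for sl in slices_hu:
        labels, nlab = ndimage.label(sl >= threshold_hu)     # connected lesions
        for lab in range(1, nlab + 1):
            region = labels == lab
            area = region.sum() * pixel_area_mm2
            if area < min_area_mm2:
                continue                                     # ignore tiny specks
            total += area * density_weight(sl[region].max())
    return total

# Tiny synthetic example: one 5x5 "slice" holding a single calcification.
slice0 = np.zeros((5, 5))
slice0[1:3, 1:3] = [[150, 250], [310, 420]]
print(agatston_score([slice0], pixel_area_mm2=0.25))   # 1.0 mm^2 * weight 4 = 4.0
```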
NASA Astrophysics Data System (ADS)
Gryanik, Vladimir M.; Lüpkes, Christof
2018-02-01
In climate and weather prediction models the near-surface turbulent fluxes of heat and momentum and related transfer coefficients are usually parametrized on the basis of Monin-Obukhov similarity theory (MOST). To avoid iteration, required for the numerical solution of the MOST equations, many models apply parametrizations of the transfer coefficients based on an approach relating these coefficients to the bulk Richardson number Rib. However, the parametrizations that are presently used in most climate models are valid only for weaker stability and larger surface roughnesses than those documented during the Surface Heat Budget of the Arctic Ocean campaign (SHEBA). The latter delivered a well-accepted set of turbulence data in the stable surface layer over polar sea-ice. Using stability functions based on the SHEBA data, we solve the MOST equations applying a new semi-analytic approach that results in transfer coefficients as a function of Rib and roughness lengths for momentum and heat. It is shown that the new coefficients reproduce the coefficients obtained by the numerical iterative method with a good accuracy in the most relevant range of stability and roughness lengths. For small Rib, the new bulk transfer coefficients are similar to the traditional coefficients, but for large Rib they are much smaller than currently used coefficients. Finally, a possible adjustment of the latter and the implementation of the new proposed parametrizations in models are discussed.
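To make the iteration that the parametrization is designed to avoid concrete, here is a Python sketch of the classical fixed-point solution of the MOST equations for the bulk transfer coefficients as a function of Rib; the linear stable-case stability functions ψ = -5ζ are a textbook placeholder, not the SHEBA-based functions used in the paper, and the roughness lengths are illustrative.

```python
import numpy as np

KAPPA = 0.4              # von Karman constant

def psi_m(zeta):         # stable-side stability correction for momentum (assumed linear)
    return -5.0 * zeta

def psi_h(zeta):         # stable-side stability correction for heat (assumed linear)
    return -5.0 * zeta

def bulk_coefficients(rib, z, z0m, z0h, max_iter=200, tol=1e-10):
    """Iterative MOST solution for C_D and C_H given the bulk Richardson
    number Rib >= 0, using Rib = zeta * phi_h(zeta) / phi_m(zeta)**2."""
    zeta = 0.0
    for _ in range(max_iter):
        phi_m = np.log(z / z0m) - psi_m(zeta)
        phi_h = np.log(z / z0h) - psi_h(zeta)
        zeta_new = rib * phi_m ** 2 / phi_h
        if abs(zeta_new - zeta) < tol:
            zeta = zeta_new
            break
        zeta = zeta_new
    phi_m = np.log(z / z0m) - psi_m(zeta)
    phi_h = np.log(z / z0h) - psi_h(zeta)
    return KAPPA ** 2 / phi_m ** 2, KAPPA ** 2 / (phi_m * phi_h)

for rib in (0.0, 0.05, 0.10, 0.15):
    cd, ch = bulk_coefficients(rib, z=10.0, z0m=1e-3, z0h=1e-4)
    print(f"Rib = {rib:4.2f}   C_D = {cd:.2e}   C_H = {ch:.2e}")
```

A non-iterative parametrization of the kind proposed in the paper replaces this loop with a closed-form approximation of C_D and C_H as functions of Rib and the roughness lengths.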
NASA Astrophysics Data System (ADS)
Zhang, Tianxi
2014-06-01
The black hole universe model is a multiverse model of cosmology recently developed by the speaker. According to this new model, our universe is a fully grown extremely supermassive black hole, which originated from a hot star-like black hole with several solar masses, and gradually grew from a supermassive black hole with million to billion solar masses to the present state with trillion-trillion solar masses by accreting ambient matter or merging with other black holes. The entire space is structured with infinite layers or universes hierarchically. The innermost three layers include the universe that we live in, the inside star-like and supermassive black holes called child universes, and the outside space called the mother universe. The outermost layer is infinite in mass, radius, and entropy without an edge, and both its matter density and absolute temperature tend to zero. All layers are governed by the same physics and tend to expand physically in one direction (outward, or the direction of increasing entropy). The expansion of a black hole universe decreases its density and temperature but does not alter the laws of physics. The black hole universe evolves iteratively and endlessly without a beginning. When one universe expands out, a new similar one is formed from inside star-like and supermassive black holes. In each iteration, elements are resynthesized, matter is reconfigured, and the universe is renewed rather than simply repeated. The black hole universe is consistent with the Mach principle, observations, and Einsteinian general relativity. It has only one postulate but is able to explain all phenomena occurring in the universe with well-developed physics. The black hole universe does not need dark energy for acceleration or an inflation epoch for flatness, and thus has a devastating impact on the big bang model. In this talk, I will present how this new cosmological model explains the various aspects of the universe, including the origin, structure, evolution, expansion, background radiation, acceleration, anisotropy, quasars, gamma-ray bursts, nucleosynthesis, etc., and compares to the big bang model.
NASA Astrophysics Data System (ADS)
Gholami, Peyman; Roy, Priyanka; Kuppuswamy Parthasarathy, Mohana; Ommani, Abbas; Zelek, John; Lakshminarayanan, Vasudevan
2018-02-01
Retinal layer shape and thickness are among the main indicators in the diagnosis of ocular diseases. We present an active contour approach to localize the intra-retinal boundaries of eight retinal layers in OCT images. The initial locations of the active contour curves are determined using a Viterbi dynamic programming method. The main energy function is a Chan-Vese active contour model without edges. A boundary term is added to the energy function using an adaptive weighting method to help the curves converge to the retinal layer edges more precisely in the final iterations, after the curves have evolved towards the boundaries. A wavelet-based denoising method is used to remove speckle from OCT images while preserving important details and edges. The performance of the proposed method was tested on a set of healthy and diseased eye SD-OCT images. The experimental results, comparing the proposed method with manual segmentation by an optometrist, indicate that our method obtained averages of 95.29%, 92.78%, 95.86%, 87.93%, 82.67%, and 90.25%, respectively, for accuracy, sensitivity, specificity, precision, Jaccard Index, and Dice Similarity Coefficient over all segmented layers. These results demonstrate the robustness of the proposed method in determining the location of the different retinal layers.
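A heavily simplified illustration of the region-based step is sketched below using scikit-image, assuming its denoise_wavelet and chan_vese functions are available; it omits the Viterbi initialization and the adaptive boundary term of the proposed method, and the synthetic image and parameter values are arbitrary.

```python
import numpy as np
from skimage.restoration import denoise_wavelet
from skimage.segmentation import chan_vese

# Synthetic "B-scan": one bright band (a retinal layer) over a darker,
# noisy background, standing in for a real OCT image.
rng = np.random.default_rng(1)
img = np.zeros((128, 256))
img[40:70, :] = 1.0
img += 0.3 * rng.standard_normal(img.shape)

den = denoise_wavelet(img, rescale_sigma=True)    # wavelet-based speckle reduction
mask = chan_vese(den, mu=0.1)                     # active contour without edges
if den[mask].mean() < den[~mask].mean():
    mask = ~mask                                  # make True the brighter region

rows = np.where(mask.any(axis=1))[0]
print(f"segmented band spans rows {rows.min()} to {rows.max()} (true band: 40 to 69)")
```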
NASA Technical Reports Server (NTRS)
Pindera, Marek-Jerzy; Salzar, Robert S.
1996-01-01
A user's guide for the computer program OPTCOMP2 is presented in this report. This program provides a capability to optimize the fabrication or service-induced residual stresses in unidirectional metal matrix composites subjected to combined thermomechanical axisymmetric loading by altering the processing history, as well as through the microstructural design of interfacial fiber coatings. The user specifies the initial architecture of the composite and the load history, with the constituent materials being elastic, plastic, viscoplastic, or as defined by the 'user-defined' constitutive model, in addition to the objective function and constraints, through a user-friendly data input interface. The optimization procedure is based on an efficient solution methodology for the inelastic response of a fiber/interface layer(s)/matrix concentric cylinder model where the interface layers can be either homogeneous or heterogeneous. The response of heterogeneous layers is modeled using Aboudi's three-dimensional method of cells micromechanics model. The commercial optimization package DOT is used for the nonlinear optimization problem. The solution methodology for the arbitrarily layered cylinder is based on the local-global stiffness matrix formulation and Mendelson's iterative technique of successive elastic solutions developed for elastoplastic boundary-value problems. The optimization algorithm employed in DOT is based on the method of feasible directions.
Guerra, C; Schwartz, C J
2012-02-01
Friction blisters occur when shear loading causes the separation of dermal layers. Consequences range from minor pain to life-threatening infection. Past research in blister formation has focused on in vivo experiments, which complicate a mechanics-based study of the phenomenon. A Synthetic Skin Simulant Platform (3SP) approach was developed to investigate the effect of textile fabrics (t-shirt knit and denim cottons) and surface treatments (dry and wet lubricants) on blister formation. 3SP samples consist of bonded elastomeric layers that are surrogates for various dermal layers. These layers display frictional and mechanical properties similar to their anatomical analogues. Blistering was assessed by the measurement of debonded area between layers. Denim caused greater blistering than did the t-shirt knit cotton, and both lubricants significantly reduced blister area and surface damage. A triglyceride-based lubricant had a more pronounced effect on blister reduction than corn starch. The triglyceride lubricant used with t-shirt knit cotton resulted in no blisters being formed. The performance of the 3SP approach follows previously reported frictional behavior of skin in vivo. The results of textile and surface treatment performance suggest that future 3SP iterations can be focused on specific anatomical sites based on application type. © 2011 John Wiley & Sons A/S.
Quasi-linear modeling of lower hybrid current drive in ITER and DEMO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardinali, A., E-mail: alessandro.cardinali@enea.it; Cesario, R.; Panaccione, L.
2015-12-10
First pass absorption of the Lower Hybrid waves in thermonuclear devices like ITER and DEMO is modeled by coupling the ray tracing equations with the quasi-linear evolution of the electron distribution function in 2D velocity space. As usually assumed, the Lower Hybrid Current Drive is not effective in a plasma of a tokamak fusion reactor, owing to the accessibility condition which, depending on the density, restricts the parallel wavenumber to values greater than n_∥crit and, at the same time, to the high electron temperature that would enhance the wave absorption and then restricts the RF power deposition to the very periphery of the plasma column (near the separatrix). In this work, by extensively using the “ray^star” code, a parametric study of the propagation and absorption of the LH wave as function of the coupled wave spectrum (as its width, and peak value), has been performed very accurately. Such a careful investigation aims at controlling the power deposition layer possibly in the external half radius of the plasma, thus providing a valuable aid to the solution of how to control the plasma current profile in a toroidal magnetic configuration, and how to help the suppression of MHD mode that can develop in the outer part of the plasma. This analysis is useful not only for exploring the possibility of profile control of a pulsed operation reactor as well as the tearing mode stabilization, but also in order to reconsider the feasibility of steady state regime for DEMO.
A Fast MoM Solver (GIFFT) for Large Arrays of Microstrip and Cavity-Backed Antennas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fasenfest, B J; Capolino, F; Wilton, D
2005-02-02
A straightforward numerical analysis of large arrays of arbitrary contour (and possibly missing elements) requires large memory storage and long computation times. Several techniques are currently under development to reduce this cost. One such technique is the GIFFT (Green's function interpolation and FFT) method discussed here that belongs to the class of fast solvers for large structures. This method uses a modification of the standard AIM approach [1] that takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. The Green's function is then projected onto a sparse regular grid of separable interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver [2]. The method has been proven to greatly reduce solve time by speeding up the matrix-vector product computation. The GIFFT approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends GIFFT to layered material Green's functions and multiregion interactions via slots in ground planes. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the GIFFT method is reported in [2]; this contribution is limited to presenting new results for array antennas made of slot-excited patches and cavity-backed patch antennas.
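The core trick, an FFT-accelerated matrix-vector product over a translation-invariant interaction, can be illustrated in one dimension. The sketch below multiplies a Toeplitz matrix by a vector via circulant embedding and the FFT, which is the same mechanism GIFFT exploits on its regular interpolation grid; the layered-media Green's function, RWG basis and preconditioner of the paper are not reproduced, and the kernel here is an arbitrary stand-in.

```python
import numpy as np

def toeplitz_matvec_fft(first_col, first_row, x):
    """O(n log n) product of a Toeplitz matrix with a vector, via embedding
    the Toeplitz matrix in a circulant matrix of twice the size."""
    n = len(x)
    c = np.concatenate([first_col, [0.0], first_row[1:][::-1]])   # circulant first column
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

n = 256
kernel = 1.0 / (1.0 + np.arange(n))           # stand-in for sampled Green's function values
x = np.random.default_rng(0).standard_normal(n)

dense = np.array([[kernel[abs(i - j)] for j in range(n)] for i in range(n)])
fast = toeplitz_matvec_fft(kernel, kernel, x)
print("FFT matvec matches dense matvec:", np.allclose(dense @ x, fast))
```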
a Baseline for Upper Crustal Velocity Variations Along the East Pacific Rise
NASA Astrophysics Data System (ADS)
Kappus, Mary Elizabeth
Seismic measurements of the oceanic crust and theoretical models of its generation at mid-ocean ridges suggest several systematic variations in upper crustal velocity structure, but without constraints on the inherent variation in newly-formed crust these suggestions remain tentative. The Wide Aperture Profiles (WAPs) which form the database for this study have sufficient horizontal extent and resolution in the upper crust to establish a zero-age baseline. After assessing the adequacy of amplitude preservation in several tau-p transform methods we make a precise estimate of the velocity at the top of the crust from analysis of amplitudes in the tau-p domain. Along a 52-km segment we find less than 5% variation from 2.45 km/s. Velocity models of the uppermost crust are constructed using waveform inversion for both reflection and refraction arrivals. This method exploits the high quality of both primary and secondary phases and provides an objective process for iteratively improving trial models and for measuring misfit. The resulting models show remarkable homogeneity: on-axis variation is 5% or less within layers 2A and 2B, increasing to 10% at the sharp 2A/2B boundary. The extrusive volcanic layer is only 130 m thick along-axis and corresponds to the triangular-shaped neovolcanic zone. From this we infer that the sheeted dikes feeding the extrusive layer 2A come up to very shallow depths on axis. Along axis, a fourth-order deviation from axial linearity identified geochemically is observed as a small increase in thickness of the extrusive layer. Off-axis, the velocity increases only slightly to 2.49 km/s, while the thickness of the extrusives increases to 217 m and the variability in both parameters increases with distance from the ridge axis. In a separate section we present the first published analysis of seismic records of thunder. We calculate multi-taper spectra to determine the peak energy in the lightning bolt and apply time-dependent polarization analysis to determine the lightning propagation path. The peak energies of the intracloud lightning bolts are all infrasonic, but we show that this is not incompatible with the mechanism of thunder production by a rapidly heated gas channel as was previously thought. From polarization analysis we find the direction to the lightning bolt from a single station record, in several cases resolving a significant horizontal component to the lightning path.
NASA Astrophysics Data System (ADS)
Ahunov, Roman R.; Kuksenko, Sergey P.; Gazizov, Talgat R.
2016-06-01
The multiple solution of linear algebraic systems with a dense matrix by iterative methods is considered. To accelerate the process, recomputing of the preconditioning matrix is used. An a priori condition for the recomputing, based on the change of the arithmetic mean of the current solution time during the multiple solution, is proposed. To confirm the effectiveness of the proposed approach, numerical experiments using the iterative methods BiCGStab and CGS for four different sets of matrices on two examples of microstrip structures are carried out. For the solution of 100 linear systems an acceleration of up to 1.6 times, compared to the approach without recomputing, is obtained.
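One plausible reading of the recomputing criterion is sketched below in Python with SciPy: a sequence of slowly varying sparse systems is solved with BiCGStab, and the ILU preconditioner is rebuilt whenever the current solve takes noticeably longer than the running mean of solve times obtained with it. The system family, the ILU preconditioner and the slack factor are illustrative assumptions, not the dense moment-method matrices or the exact condition of the paper.

```python
import time
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

def make_system(n, shift):
    """Family of slowly varying sparse systems, standing in for the sequence of
    matrices produced when a strip structure is swept over a parameter range."""
    main = 4.0 + shift + 0.01 * np.arange(n)
    off = -np.ones(n - 1)
    return diags([main, off, off], [0, -1, 1], format="csc"), np.ones(n)

def solve_sequence(n=2000, n_systems=30, slack=1.5):
    times, prec = [], None
    for k in range(n_systems):
        A, b = make_system(n, shift=0.05 * k)
        if prec is None:
            ilu = spilu(A)                                   # (re)build preconditioner
            prec = LinearOperator(A.shape, matvec=ilu.solve)
        t0 = time.perf_counter()
        x, info = bicgstab(A, b, M=prec)
        dt = time.perf_counter() - t0
        # Recompute when the current solve time exceeds the running mean of the
        # solve times accumulated with the present preconditioner.
        if times and dt > slack * np.mean(times):
            print(f"system {k}: {dt:.4f} s above mean, rebuilding preconditioner")
            prec, times = None, []
        else:
            times.append(dt)
    return x

solve_sequence()
```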
NASA Astrophysics Data System (ADS)
Mazon, D.; Liegeard, C.; Jardin, A.; Barnsley, R.; Walsh, M.; O'Mullane, M.; Sirinelli, A.; Dorchies, F.
2016-11-01
Measuring Soft X-Ray (SXR) radiation [0.1 keV; 15 keV] in tokamaks is a standard way of extracting valuable information on the particle transport and magnetohydrodynamic activity. Generally, the analysis is performed with detectors positioned close to the plasma for a direct line of sight. A burning plasma, like the ITER deuterium-tritium phase, is too harsh an environment to permit the use of such detectors in close vicinity of the machine. We have thus investigated in this article the possibility of using polycapillary lenses in ITER to transport the SXR information several meters away from the plasma in the complex port-plug geometry.
Mazon, D; Liegeard, C; Jardin, A; Barnsley, R; Walsh, M; O'Mullane, M; Sirinelli, A; Dorchies, F
2016-11-01
Measuring Soft X-Ray (SXR) radiation [0.1 keV; 15 keV] in tokamaks is a standard way of extracting valuable information on the particle transport and magnetohydrodynamic activity. Generally, the analysis is performed with detectors positioned close to the plasma for a direct line of sight. A burning plasma, like the ITER deuterium-tritium phase, is too harsh an environment to permit the use of such detectors in close vicinity of the machine. We have thus investigated in this article the possibility of using polycapillary lenses in ITER to transport the SXR information several meters away from the plasma in the complex port-plug geometry.
NASA Astrophysics Data System (ADS)
Hladowski, Lukasz; Galkowski, Krzysztof; Cai, Zhonglun; Rogers, Eric; Freeman, Chris T.; Lewin, Paul L.
2011-07-01
In this article a new approach to iterative learning control for the practically relevant case of deterministic discrete linear plants with uniform rank greater than unity is developed. The analysis is undertaken in a 2D systems setting that, by using a strong form of stability for linear repetitive processes, allows simultaneous consideration of both trial-to-trial error convergence and along the trial performance, resulting in design algorithms that can be computed using linear matrix inequalities (LMIs). Finally, the control laws are experimentally verified on a gantry robot that replicates a pick and place operation commonly found in a number of applications to which iterative learning control is applicable.
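For orientation, the sketch below runs a plain P-type iterative learning control law on a small discrete plant of uniform rank two (CB = 0, CAB ≠ 0), so the learning update must use the error shifted by two samples; it only shows trial-to-trial error convergence and is not the LMI-based 2D repetitive-process design of the article, and the plant matrices, reference and learning gain are arbitrary.

```python
import numpy as np

# Plant of uniform rank two: CB = 0 but C A B = 1, so the ILC update uses
# the trial error shifted by two samples.
A = np.array([[0.5, 1.0], [0.0, 0.3]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

T = 50
ref = np.sin(np.linspace(0.0, np.pi, T)) ** 2     # reference repeated every trial
u = np.zeros(T)                                   # input learned trial to trial
gain = 0.5                                        # |1 - gain * CAB| = 0.5 < 1

def run_trial(u):
    x = np.zeros((2, 1))
    y = np.zeros(T)
    for t in range(T):
        y[t] = (C @ x).item()
        x = A @ x + B * u[t]
    return y

for trial in range(25):
    e = ref - run_trial(u)
    u[:-2] += gain * e[2:]        # P-type ILC: u_{k+1}(t) = u_k(t) + L e_k(t+2)
    if trial % 5 == 0:
        # The first two output samples cannot be influenced by the input,
        # so the tracking error is scored for t >= 2 only.
        print(f"trial {trial:2d}  max|e(t>=2)| = {np.abs(e[2:]).max():.4f}")
```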
Numerical solution of Euler's equation by perturbed functionals
NASA Technical Reports Server (NTRS)
Dey, S. K.
1985-01-01
A perturbed functional iteration has been developed to solve nonlinear systems. It adds at each iteration level, unique perturbation parameters to nonlinear Gauss-Seidel iterates which enhances its convergence properties. As convergence is approached these parameters are damped out. Local linearization along the diagonal has been used to compute these parameters. The method requires no computation of Jacobian or factorization of matrices. Analysis of convergence depends on properties of certain contraction-type mappings, known as D-mappings. In this article, application of this method to solve an implicit finite difference approximation of Euler's equation is studied. Some representative results for the well known shock tube problem and compressible flows in a nozzle are given.
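As a rough illustration of the idea (nonlinear Gauss-Seidel with a local linearization along the diagonal and a perturbation that is damped out as convergence is approached), here is a Python sketch applied to a discrete Bratu problem rather than Euler's equations; the simple damping rule stands in for Dey's perturbation parameters and D-mapping analysis, which are not reproduced.

```python
import numpy as np

def perturbed_gauss_seidel(n=15, lam=1.0, tol=1e-8, max_sweeps=1000):
    """Nonlinear Gauss-Seidel for the discrete Bratu problem
    -u'' = lam * exp(u), u(0) = u(1) = 0, with one scalar Newton step per
    unknown (local linearization along the diagonal).  A simple damping factor
    plays the role of the perturbation parameter and is relaxed toward one as
    the iterates converge."""
    h = 1.0 / (n + 1)
    u = np.zeros(n)
    last_update = 1.0
    for sweep in range(1, max_sweeps + 1):
        damping = 1.0 / (1.0 + min(1.0, last_update))   # -> 1 as updates shrink
        max_update = 0.0
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            g = 2.0 * u[i] - left - right - h * h * lam * np.exp(u[i])
            dg = 2.0 - h * h * lam * np.exp(u[i])        # local linearization
            du = -damping * g / dg
            u[i] += du
            max_update = max(max_update, abs(du))
        last_update = max_update
        if max_update < tol:
            return u, sweep
    return u, max_sweeps

u, sweeps = perturbed_gauss_seidel()
print(f"converged in {sweeps} sweeps, max u = {u.max():.6f}")
```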
NASA Astrophysics Data System (ADS)
Raj, A. Stanley; Srinivas, Y.; Oliver, D. Hudson; Muthuraj, D.
2014-03-01
The non-linear apparent resistivity problem in subsurface studies of the earth is solved for the model parameters, namely the resistivity and thickness of the individual subsurface layers, using synthetic training data and Artificial Neural Networks (ANN). Here we used a single-layer feed-forward neural network with a fast back-propagation learning algorithm. Once properly trained, the back-propagation network yields the resistivity and thickness of the subsurface layer model for field resistivity data, with reference to the synthetic data used to train the network. During training, the weights and biases of the network are iteratively adjusted to improve the network performance function. With adequate training, errors are minimized and the best result is obtained using the artificial neural networks. The network is trained with a large number of VES data sets, and the trained network is demonstrated on field data. The accuracy of the inversion depends upon the amount of training data. With this novel, specially designed algorithm, the interpretation of the vertical electrical sounding has been carried out successfully, yielding a more accurate layer model.
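The workflow can be sketched in a few lines of Python, here with scikit-learn's MLPRegressor standing in for the authors' single-hidden-layer back-propagation network; the two-layer image-series forward model, parameter ranges and network settings are illustrative assumptions, and the fit is only meant to demonstrate the train-on-synthetic / invert-field-curve workflow.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor   # assumed available

rng = np.random.default_rng(0)
spacings = np.logspace(0.0, 2.5, 20)               # AB/2 electrode spacings (m)

def apparent_resistivity(rho1, rho2, h, s=spacings, n_images=60):
    """Two-layer Schlumberger sounding curve from the classical image series."""
    k = (rho2 - rho1) / (rho2 + rho1)
    curve = np.ones_like(s)
    for n in range(1, n_images + 1):
        curve += 2.0 * k ** n * s ** 3 / (s ** 2 + (2.0 * n * h) ** 2) ** 1.5
    return rho1 * curve

# Synthetic training set: random two-layer models and their sounding curves.
models = np.column_stack([
    rng.uniform(10.0, 200.0, 3000),    # rho1 (ohm m)
    rng.uniform(10.0, 200.0, 3000),    # rho2 (ohm m)
    rng.uniform(1.0, 30.0, 3000),      # first-layer thickness h (m)
])
curves = np.array([apparent_resistivity(*m) for m in models])

net = MLPRegressor(hidden_layer_sizes=(40,), max_iter=3000, random_state=0)
net.fit(np.log10(curves), models)                  # single hidden layer, log-scaled input

truth = np.array([50.0, 150.0, 12.0])
field_curve = apparent_resistivity(*truth)         # stands in for a measured VES curve
print("true model:     ", truth)
print("predicted model:", np.round(net.predict(np.log10(field_curve)[None, :])[0], 1))
```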
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jayakumar, R.; Martovetsky, N.N.; Perfect, S.A.
A glass-polyimide insulation system has been proposed by the US team for use in the Central Solenoid (CS) coil of the International Thermonuclear Experimental Reactor (ITER) machine and it is planned to use this system in the CS model coil inner module. The turn insulation will consist of 2 layers of combined prepreg and Kapton. Each layer is 50% overlapped with a butt wrap of prepreg and an overwrap of S glass. The coil layers will be separated by a glass-resin composite and impregnated in a VPI process. Small scale tests on the various components of the insulation are complete. It is planned to fabricate and test the insulation in a 4 x 4 insulated CS conductor array which will include the layer insulation and be vacuum impregnated. The conductor array will be subjected to 20 thermal cycles and 100000 mechanical load cycles in a Liquid Nitrogen environment. These loads are similar to those seen in the CS coil design. The insulation will be electrically tested at several stages during mechanical testing. This paper will describe the array configuration, fabrication process, instrumentation, testing configuration, and supporting analyses used in selecting the array and test configurations.
Drawing Dynamical and Parameters Planes of Iterative Families and Methods
Chicharro, Francisco I.
2013-01-01
The complex dynamical analysis of the parametric fourth-order Kim's iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide us with excellent schemes (or dreadful ones). PMID:24376386
2014-10-01
The report concerns nonlinear and non-stationary signal analysis, which is well known to be important and difficult. The method aims at decomposing a signal, via an iterative sifting procedure, into several intrinsic mode functions (IMFs).
NASA Astrophysics Data System (ADS)
Domnisoru, L.; Modiga, A.; Gasparotti, C.
2016-08-01
At the ship design stage, the first step of the hull structural assessment is based on the longitudinal strength analysis, with head-wave equivalent loads prescribed by the ships' classification societies' rules. This paper presents an enhancement of the longitudinal strength analysis, considering the general case of oblique quasi-static equivalent waves, based on our own non-linear iterative procedure and in-house program. The numerical approach is developed for mono-hull ships, without restrictions on the non-linearities of the 3D-hull offset lines, and involves three interlinked iterative cycles on the floating, pitch and roll trim equilibrium conditions. Besides the ship-wave equilibrium parameters, the ship's girder wave-induced loads are obtained. As a numerical study case we have considered a large LPG liquefied petroleum gas carrier. The numerical results for the large LPG are compared with the statistical design values from several ships' classification societies' rules. This study makes it possible to obtain the oblique wave conditions that induce the maximum loads in the large LPG ship's girder. The numerical results of this study point out that the non-linear iterative approach is necessary for the computation of the extreme loads induced by oblique waves, ensuring better accuracy of the large LPG ship's longitudinal strength assessment.
Saab, Xavier E; Griggs, Jason A; Powers, John M; Engelmeier, Robert L
2007-02-01
Angled abutments are often used to restore dental implants placed in the anterior maxilla due to esthetic or spatial needs. The effect of abutment angulation on bone strain is unknown. The purpose of the current study was to measure and compare the strain distribution on the bone around an implant in the anterior maxilla using 2 different abutments by means of finite element analysis. Two-dimensional finite element models were designed using software (ANSYS) for 2 situations: (1) an implant with a straight abutment in the anterior maxilla, and (2) an implant with an angled abutment in the anterior maxilla. The implant used was 4x13 mm (MicroThread). The maxillary bone was modeled as type 3 bone with a cortical layer thickness of 0.5 mm. Oblique loads of 178 N were applied on the cingulum area of both models. Seven consecutive iterations of mesh refinement were performed in each model to observe the convergence of the results. The greatest strain was found on the cancellous bone, adjacent to the 3 most apical microthreads on the palatal side of the implant where tensile forces were created. The same strain distribution was observed around both the straight and angled abutments. After several iterations, the results converged to a value for the maximum first principal strain on the bone of both models, which was independent of element size. Most of the deformation occurred in the cancellous bone and ranged between 1000 and 3500 microstrain. Small areas of cancellous bone experienced strain above the physiologic limit (4000 microstrain). The model predicted a 15% higher maximum bone strain for the straight abutment compared with the angled abutment. The results converged after several iterations of mesh refinement, which confirmed the lack of dependence of the maximum strain at the implant-bone interface on mesh density. Most of the strain produced on the cancellous and cortical bone was within the range that has been reported to increase bone mass and mineralization.
NASA Astrophysics Data System (ADS)
Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue
2018-01-01
An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete schemes and appropriate solving methods. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of the storage saving for the system of linear equations and the flexibility for arbitrary directional sampling intervals. However, using a LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly for both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BI-CGSTAB iterative solver fit for the average-derivative optimal scheme. The choice of preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for the convergence. Furthermore, we find that for the computation with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver for homogeneous and heterogeneous models in 2D and 3D are presented, where the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that an unequal directional sampling interval will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for a reasonable control of the directional sampling interval in the discretization.
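To show the general shape of such a solver, here is a SciPy sketch that applies BiCGSTAB to a 2D finite-difference Helmholtz system with a preconditioner built from a complex-shifted version of the operator; an incomplete LU factorization stands in for the multigrid cycle of the paper, and the plain 5-point scheme, grid size, wavenumber and shift replace the average-derivative optimal scheme and its tuned components.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

def helmholtz_2d(n, k):
    """5-point discretization of -Laplacian(u) - k^2 u on the unit square
    with zero Dirichlet boundaries (n interior points per direction)."""
    h = 1.0 / (n + 1)
    lap1d = diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
                  [0, -1, 1]) / h ** 2
    lap = kron(identity(n), lap1d) + kron(lap1d, identity(n))
    return (lap - k ** 2 * identity(n * n)).tocsc()

n, k = 80, 20.0
A = helmholtz_2d(n, k).astype(complex)
b = np.zeros(n * n, dtype=complex)
b[(n // 2) * n + n // 2] = 1.0                      # point source in the middle

# Complex-shifted operator as preconditioner; ILU stands in for multigrid.
shifted = (A - 0.5j * k ** 2 * identity(n * n)).tocsc()
ilu = spilu(shifted, drop_tol=1e-4, fill_factor=20)
M = LinearOperator(A.shape, matvec=ilu.solve, dtype=complex)

iterations = [0]
x, info = bicgstab(A, b, M=M,
                   callback=lambda xk: iterations.__setitem__(0, iterations[0] + 1))
print("info =", info, "after", iterations[0], "BiCGSTAB iterations")
```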
Wideband dichroic-filter design for LED-phosphor beam-combining
Falicoff, Waqidi
2010-12-28
A general method is disclosed of designing two-component dichroic short-pass filters operable for incidence angle distributions over the 0-30° range, and specific preferred embodiments are listed. The method is based on computer optimization algorithms for an N-layer design, specifically the N-dimensional conjugate-gradient minimization of a merit function based on the difference from a target transmission spectrum, as well as subsequent cycles of needle synthesis for increasing N. A key feature of the method is the initial filter design, upon which the algorithm proceeds to iterate successive design candidates with smaller merit functions. This initial design, with high-index material H and low-index L, is (0.75 H, 0.5 L, 0.75 H)^m, denoting m (20-30) repetitions of a three-layer motif, giving rise to a filter with N = 2m + 1.
In situ measurements of fuel retention by laser induced desorption spectroscopy in TEXTOR
NASA Astrophysics Data System (ADS)
Zlobinski, M.; Philipps, V.; Schweer, B.; Huber, A.; Stoschus, H.; Brezinsek, S.; Samm, U.; TEXTOR Team
2011-12-01
In future fusion devices such as ITER tritium retention due to tritium co-deposition in mixed material layers can be a serious safety problem. Laser induced desorption spectroscopy (LIDS) can measure the hydrogen content of hydrogenic carbon layers locally on plasma-facing components, while hydrogen is used as a tritium substitute. For several years, this method has been applied in the TEXTOR tokamak in situ during plasma operation to monitor the hydrogen content in space and time. This work shows the LIDS signal reproducibility and studies the effects of different plasma conditions, desorption distances from the plasma and different laser energies using a dedicated sample with constant hydrogen amount. Also the LIDS signal evaluation procedure is described in detail and the detection limits for different conditions in the TEXTOR tokamak are estimated.
SERCA directs cell migration and branching across species and germ layers
Lansdale, Nick; Navarro, Sonia; Truong, Thai V.; Bower, Dan J.; Featherstone, Neil C.; Connell, Marilyn G.; Al Alam, Denise; Frey, Mark R.; Trinh, Le A.; Fernandez, G. Esteban; Warburton, David; Fraser, Scott E.; Bennett, Daimark; Jesudason, Edwin C.
2017-01-01
ABSTRACT Branching morphogenesis underlies organogenesis in vertebrates and invertebrates, yet is incompletely understood. Here, we show that the sarco-endoplasmic reticulum Ca2+ reuptake pump (SERCA) directs budding across germ layers and species. Clonal knockdown demonstrated a cell-autonomous role for SERCA in Drosophila air sac budding. Live imaging of Drosophila tracheogenesis revealed elevated Ca2+ levels in migratory tip cells as they form branches. SERCA blockade abolished this Ca2+ differential, aborting both cell migration and new branching. Activating protein kinase C (PKC) rescued Ca2+ in tip cells and restored cell migration and branching. Likewise, inhibiting SERCA abolished mammalian epithelial budding, PKC activation rescued budding, while morphogens did not. Mesoderm (zebrafish angiogenesis) and ectoderm (Drosophila nervous system) behaved similarly, suggesting a conserved requirement for cell-autonomous Ca2+ signaling, established by SERCA, in iterative budding. PMID:28821490
Choosing order of operations to accelerate strip structure analysis in parameter range
NASA Astrophysics Data System (ADS)
Kuksenko, S. P.; Akhunov, R. R.; Gazizov, T. R.
2018-05-01
The paper considers the issue of using iteration methods in solving the sequence of linear algebraic systems obtained in quasistatic analysis of strip structures with the method of moments. Using the analysis of 4 strip structures, the authors have proved that additional acceleration (up to 2.21 times) of the iterative process can be obtained during the process of solving linear systems repeatedly by means of choosing a proper order of operations and a preconditioner. The obtained results can be used to accelerate the process of computer-aided design of various strip structures. The choice of the order of operations to accelerate the process is quite simple, universal and could be used not only for strip structure analysis but also for a wide range of computational problems.
Analysis of a multi-machine database on divertor heat fluxes
NASA Astrophysics Data System (ADS)
Makowski, M. A.; Elder, D.; Gray, T. K.; LaBombard, B.; Lasnier, C. J.; Leonard, A. W.; Maingi, R.; Osborne, T. H.; Stangeby, P. C.; Terry, J. L.; Watkins, J.
2012-05-01
A coordinated effort to measure divertor heat flux characteristics in fully attached, similarly shaped H-mode plasmas on C-Mod, DIII-D, and NSTX was carried out in 2010 in order to construct a predictive scaling relation applicable to next step devices including ITER, FNSF, and DEMO. Few published scaling laws are available and those that have been published were obtained under widely varying conditions and divertor geometries, leading to conflicting predictions for this critically important quantity. This study was designed to overcome these deficiencies. Analysis of the combined data set reveals that the primary dependence of the parallel heat flux width is robustly inverse with Ip, which all three tokamaks independently demonstrate. An improved Thomson scattering system on DIII-D has yielded very accurate scrape off layer (SOL) profile measurements from which tests of parallel transport models have been made. It is found that a flux-limited model agrees best with the data at all collisionalities, while a Spitzer resistivity model agrees at higher collisionality where it is more valid. The SOL profile measurements and divertor heat flux scaling are consistent with a heuristic drift based model as well as a critical gradient model.
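As a hypothetical illustration of how such a predictive scaling relation can be regressed from a multi-machine dataset, the sketch below fits a power law for the heat-flux width by linear least squares in log space. The numbers are synthetic and the chosen regressors (plasma current and toroidal field) are assumptions, not the actual C-Mod/DIII-D/NSTX database or its fit.

```python
import numpy as np

# Fit lambda_q = C * Ip**alpha * Bt**beta to synthetic data with an assumed inverse-current scaling.
rng = np.random.default_rng(0)
Ip = rng.uniform(0.5, 2.0, 60)            # plasma current [MA]
Bt = rng.uniform(1.0, 6.0, 60)            # toroidal field [T]
lam_q = 1.2 * Ip**-1.0 * Bt**-0.1 * np.exp(rng.normal(0, 0.05, 60))   # assumed truth + scatter

X = np.column_stack([np.ones_like(Ip), np.log(Ip), np.log(Bt)])
coef, *_ = np.linalg.lstsq(X, np.log(lam_q), rcond=None)
print(f"C = {np.exp(coef[0]):.2f}, alpha_Ip = {coef[1]:.2f}, beta_Bt = {coef[2]:.2f}")
```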
Gas Flows in Rocket Motors. Volume 3. Appendix D. Computer Code Listings
1989-08-01
[OCR fragments of the report's distribution statement and appended Fortran listings: prepared for the Astronautics Laboratory (AFSC), Air Force, and released through the National Technical Information Service; the listings implement a solver for axisymmetric transonic nozzle flow in a general coordinate system using a time-iterative scheme with the thin-layer approximated Navier-Stokes equations.]
Digital Model of Fourier and Fresnel Quantized Holograms
NASA Astrophysics Data System (ADS)
Boriskevich, Anatoly A.; Erokhovets, Valery K.; Tkachenko, Vadim V.
Some model schemes of Fourier and Fresnel quantized protective holograms with visual effects are suggested. The conditions for an optimum trade-off between the quality of the reconstructed images, the coefficient of data reduction of the hologram, and the number of iterations in the hologram reconstruction process have been estimated through computer modelling. A higher protection level is achieved by means of a greater number of both two-dimensional secret keys (more than 2^128), in the form of pseudorandom amplitude and phase encoding matrices, and one-dimensional encoding key parameters for every image of single-layer or superimposed holograms.
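The loop below is a generic Gerchberg-Saxton-style iteration for a quantized Fourier phase hologram, sketching the kind of iterative reconstruction whose iteration count the abstract trades off against image quality. It is not the authors' specific encrypted/keyed scheme; the target image, phase quantization depth, and iteration count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
target = np.zeros((64, 64)); target[24:40, 24:40] = 1.0      # desired image amplitude (toy example)

field = np.exp(2j * np.pi * rng.random(target.shape))        # start from a random phase
levels = 4                                                   # assumed number of phase quantization levels
for _ in range(50):
    holo = np.fft.fft2(field)
    phase = np.angle(holo)
    phase = np.round(phase / (2 * np.pi / levels)) * (2 * np.pi / levels)   # quantized phase hologram
    img = np.fft.ifft2(np.exp(1j * phase))
    field = target * np.exp(1j * np.angle(img))              # impose the target amplitude, keep the phase

err = np.linalg.norm(np.abs(img) / np.abs(img).max() - target) / np.linalg.norm(target)
print(f"relative reconstruction error after 50 iterations: {err:.3f}")
```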
Inverse Problems, Control and Modeling in the Presence of Uncertainty
2007-10-30
[Truncated fragments of the report's publication list, including: a paper using a Kelvin model (CRSC-TR07-08, March 2007; submitted to IEEE Transactions on Biomedical Engineering); [P18] K. Ito, Q. Huynh and J. Toivanen, "A fast ...", Science and Engineering, Springer (2006), 595-602; [P19] K. Ito and J. Toivanen, "A fast iterative solver for scattering by elastic objects in layered ..."; ... and N.G. Medhin, "A stick-slip/Rouse hybrid model", CRSC-TR05-28, August 2005; [P23] H.T. Banks, A.F. Karr, H.K. Nguyen, and J.R. Samuels, Jr.]
NASA Technical Reports Server (NTRS)
Smith, D. R.
1982-01-01
The Purdue Regional Objective Analysis of the Mesoscale (PROAM) is a Barnes-type scheme for the analysis of surface meteorological data. Modifications are introduced to the original version in order to increase its flexibility and permit greater ease of use. The code was rewritten for an interactive computing environment. Furthermore, a multiple-iteration technique suggested by Barnes was implemented for greater accuracy. PROAM was subjected to a series of experiments in order to evaluate its performance under a variety of analysis conditions. The tests include the use of a known analytic temperature distribution in order to quantify error bounds for the scheme. Similar experiments were conducted using actual atmospheric data. Results indicate that the multiple-iteration technique increases the accuracy of the analysis. Furthermore, the tests verify appropriate values for the analysis parameters in resolving meso-beta scale phenomena.
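A minimal multi-pass Barnes successive-correction analysis on scattered observations is sketched below to show how the second (convergence) pass sharpens the first-pass field. It is a generic Barnes scheme with assumed station locations, weight parameter, and convergence factor, not the PROAM code.

```python
import numpy as np

def barnes(obs_xy, obs_val, grid_xy, kappa, gamma=0.3, passes=2):
    """Multi-pass Barnes analysis: Gaussian-weighted corrections, sharper weights on later passes."""
    analysis = np.zeros(len(grid_xy))
    background_at_obs = np.zeros(len(obs_xy))
    for p in range(passes):
        k = kappa if p == 0 else gamma * kappa
        d2 = ((grid_xy[:, None, :] - obs_xy[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / k)
        analysis += (w @ (obs_val - background_at_obs)) / w.sum(axis=1)   # grid update
        d2o = ((obs_xy[:, None, :] - obs_xy[None, :, :]) ** 2).sum(-1)
        wo = np.exp(-d2o / k)
        background_at_obs += (wo @ (obs_val - background_at_obs)) / wo.sum(axis=1)  # update at stations
    return analysis

rng = np.random.default_rng(0)
obs_xy = rng.uniform(0, 100, (40, 2))                               # station locations [km]
obs_val = 15.0 + 0.05 * obs_xy[:, 0] + rng.normal(0, 0.2, 40)       # synthetic surface temperatures
grid_xy = np.array([[x, y] for x in range(0, 101, 10) for y in range(0, 101, 10)], float)
print(barnes(obs_xy, obs_val, grid_xy, kappa=400.0)[:5])
```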
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lines, L.; Burton, A.; Lu, H.X.
Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves the use of methods such as normal moveout analysis (NMO), seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantage or disadvantage. Conventional NMO methods are relatively inexpensive but basically require simplifying assumptions about geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but is computationally expensive. In some cases, there is the opportunity to estimate vertical velocities by use of well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model which minimizes the depth differences between seismic images and formation depths at the well by using a least squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."
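As a toy illustration of the least-squares idea (minimizing depth differences between migrated images and formation tops at wells), the sketch below solves for layer velocities from synthetic well ties. The linear depth-time parameterization, number of wells, and noise level are assumptions for demonstration, not the authors' algorithm.

```python
import numpy as np

# Toy well-tie inversion: find layer velocities v_k that minimize the misfit between formation
# depths observed in wells and depths predicted from two-way migration times per layer.
rng = np.random.default_rng(0)
v_true = np.array([1800.0, 2400.0, 3100.0])               # m/s
tw = rng.uniform(0.2, 0.6, size=(8, 3))                   # two-way time spent in each layer per well [s]
z_wells = tw @ (v_true / 2.0) + rng.normal(0.0, 5.0, 8)   # observed formation depths [m]

# Depth prediction is linear in the layer velocities: z = sum_k v_k * t_k / 2
G = tw / 2.0
v_est, *_ = np.linalg.lstsq(G, z_wells, rcond=None)
print("estimated layer velocities:", np.round(v_est, 1))
```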
NASA Technical Reports Server (NTRS)
Nakazawa, Shohei
1991-01-01
Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel concept of the mixed iterative solution technique for the efficient 3-D computation of turbine engine hot section components. The general framework of the variational formulation and solution algorithms, derived from the mixed three-field Hu-Washizu principle, is discussed. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. The algorithmic description of the mixed iterative method includes variations for quasi-static, transient dynamic and buckling analyses. The global-local analysis procedure referred to as subelement refinement is developed in the framework of the mixed iterative solution, and its details are presented. The numerically integrated isoparametric elements implemented in the framework are discussed. Methods to filter certain parts of the strain and to project element-discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for the linear and nonlinear equations included in the MHOST program.
Analysis of Anderson Acceleration on a Simplified Neutronics/Thermal Hydraulics System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toth, Alex; Kelley, C. T.; Slattery, Stuart R
ABSTRACT A standard method for solving coupled multiphysics problems in light water reactors is Picard iteration, which sequentially alternates between solving single physics applications. This solution approach is appealing due to simplicity of implementation and the ability to leverage existing software packages to accurately solve single physics applications. However, there are several drawbacks in the convergence behavior of this method; namely slow convergence and the necessity of heuristically chosen damping factors to achieve convergence in many cases. Anderson acceleration is a method that has been seen to be more robust and fast converging than Picard iteration for many problems, without significantly higher cost per iteration or complexity of implementation, though its effectiveness in the context of multiphysics coupling is not well explored. In this work, we develop a one-dimensional model simulating the coupling between the neutron distribution and fuel and coolant properties in a single fuel pin. We show that this model generally captures the convergence issues noted in Picard iterations which couple high-fidelity physics codes. We then use this model to gauge potential improvements with regard to rate of convergence and robustness from utilizing Anderson acceleration as an alternative to Picard iteration.
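A minimal fixed-point example contrasting Picard iteration with Anderson acceleration (depth m = 2) is sketched below. The slowly converging linear toy map stands in for one coupled neutronics/thermal-hydraulics sweep and is not the authors' one-dimensional pin model.

```python
import numpy as np

A = np.array([[0.95, 0.02], [0.01, 0.90]])
b = np.array([0.5, 1.0])

def g(x):
    # One "Picard sweep" of a toy coupled update (contractive but slowly converging).
    return A @ x + b

def picard(x0, iters):
    x = x0.copy()
    for _ in range(iters):
        x = g(x)
    return x

def anderson(x0, iters, m=2):
    X, F = [], []                                   # histories of iterates and residuals
    x = x0.copy()
    for _ in range(iters):
        fx = g(x) - x                               # fixed-point residual
        X.append(x.copy()); F.append(fx.copy())
        X, F = X[-(m + 1):], F[-(m + 1):]
        if len(F) > 1:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dX = np.column_stack([X[i + 1] - X[i] for i in range(len(X) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, fx, rcond=None)
            x = x + fx - (dX + dF) @ gamma          # Anderson mixing step (beta = 1)
        else:
            x = x + fx                              # plain Picard step on the first iteration
    return x

x0 = np.zeros(2)
x_star = np.linalg.solve(np.eye(2) - A, b)
for n in (5, 10, 20):
    print(n, np.linalg.norm(picard(x0, n) - x_star), np.linalg.norm(anderson(x0, n) - x_star))
```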
Pseudo-time methods for constrained optimization problems governed by PDE
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1995-01-01
In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constrained PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at the cost of solving the analysis problem just a few times, independent of the number of design parameters. The method can be applied using single grid iterations as well as with multigrid solvers.
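A toy quadratic illustration of marching on the design variables while only partially relaxing the state and costate equations at each step is sketched below. The linear state equation, Richardson relaxation, and step sizes are all assumptions for demonstration and do not reproduce the paper's formulation.

```python
import numpy as np

# Toy problem: state equation A u = B d, objective J = 0.5*||u - u_target||^2.
# Each pseudo-time iteration applies a few relaxation sweeps to the state u and costate p,
# then takes one gradient step in the (much smaller) design vector d.
n, m = 20, 3
A = np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
B = np.random.default_rng(0).normal(size=(n, m))
d_true = np.array([1.0, -2.0, 0.5])
u_target = np.linalg.solve(A, B @ d_true)            # reachable target, so the optimal J is 0

u, p, d = np.zeros(n), np.zeros(n), np.zeros(m)
omega, eta = 0.2, 0.05                                # relaxation and design step sizes (hand-tuned)
for k in range(3000):
    for _ in range(5):                                # partial relaxation, not an exact solve
        u += omega * (B @ d - A @ u)                  # state residual sweep
        p += omega * ((u - u_target) - A @ p)         # costate residual sweep (A is symmetric)
    d -= eta * (B.T @ p)                              # design update from the approximate gradient
    if k % 1000 == 0:
        print(f"iter {k:4d}: J = {0.5 * np.linalg.norm(u - u_target) ** 2:.3e}")
print("design error ||d - d_true|| =", np.linalg.norm(d - d_true))
```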
NASA Astrophysics Data System (ADS)
Tsuru, Daigo; Tanigawa, Hisashi; Hirose, Takanori; Mohri, Kensuke; Seki, Yohji; Enoeda, Mikio; Ezato, Koichiro; Suzuki, Satoshi; Nishi, Hiroshi; Akiba, Masato
2009-06-01
As the primary candidate of the ITER Test Blanket Module (TBM) to be tested under the leadership of Japan, a water cooled solid breeder (WCSB) TBM is being developed. This paper presents the recent achievements towards the milestones of the ITER TBMs prior to installation, which consist of design integration in ITER, module qualification and safety assessment. With respect to design integration, targeting the detailed design final report in 2012, structural designs of the WCSB TBM and the interfacing components (common frame and backside shielding) that are placed in a test port of ITER, and the layout of the cooling system, are presented. As for module qualification, a real-scale first wall mock-up fabricated by the hot isostatic pressing method from the reduced-activation martensitic ferritic steel F82H, together with flow and irradiation tests of the mock-up, is presented. As for the safety milestones, the contents of the preliminary safety report in 2008, consisting of source term identification, failure mode and effect analysis (FMEA), identification of postulated initiating events (PIEs) and safety analyses, are presented.
NASA Astrophysics Data System (ADS)
Wu, Weibin; Dai, Yifan; Zhou, Lin; Xu, Mingjin
2016-09-01
Material removal accuracy has a direct impact on the machining precision and efficiency of ion beam figuring. By analyzing the factors suppressing the improvement of material removal accuracy, we conclude that correcting the removal function deviation and reducing the amount of material removed during each iterative process could help to improve material removal accuracy. The removal function correction principle can effectively compensate for the removal function deviation between the actual figuring and simulated processes, while experiments indicate that material removal accuracy decreases with a long machining time, so a small amount of removal material in each iterative process is suggested. However, more clamping and measuring steps will be introduced in this way, which will also generate machining errors and suppress the improvement of material removal accuracy. On this account, a free-measurement iterative process method is put forward to improve material removal accuracy and figuring efficiency by using fewer measuring and clamping steps. Finally, an experiment on a φ 100-mm Zerodur planar is performed, which shows that, in similar figuring time, three free-measurement iterative processes could improve the material removal accuracy and the surface error convergence rate by 62.5% and 17.6%, respectively, compared with a single iterative process.
Survey on the Performance of Source Localization Algorithms.
Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G
2017-11-18
The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
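To illustrate the iterative hyperbolic least-squares idea, the sketch below solves a synthetic 2-D TDoA localization with a Gauss-Newton (Newton-Raphson-type) update relative to a reference sensor. The sensor layout, noise level, and starting guess are assumptions and this is not the paper's benchmark implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
c = 3e8                                              # propagation speed [m/s]
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 7.0])

toa = np.linalg.norm(sensors - source, axis=1) / c + rng.normal(0, 1e-11, len(sensors))
tdoa = toa[1:] - toa[0]                              # measured time differences w.r.t. sensor 0

def residuals(x):
    d = np.linalg.norm(sensors - x, axis=1)
    return (d[1:] - d[0]) - c * tdoa                 # hyperbolic range-difference residuals

def jacobian(x):
    d = np.linalg.norm(sensors - x, axis=1)
    u = (x - sensors) / d[:, None]                   # unit vectors from each sensor to the estimate
    return u[1:] - u[0]

x = np.array([5.0, 5.0])                             # initial guess at the array centre
for _ in range(10):
    J, r = jacobian(x), residuals(x)
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)    # Gauss-Newton step
    x = x + step
print("estimated source position:", np.round(x, 3))
```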
Holistic stakeholder-oriented and case study-based risk analysis
NASA Astrophysics Data System (ADS)
Heisterkamp, Tobias
2013-04-01
Case studies of storm events in the Berlin conurbation demonstrate the potential of a holistic approach and its possible data sources. Data sets on population, but also data provided by insurance and transport companies, and operating data provided by fire brigades, are used. Various indicators for risk analysis are constructed to identify hot spots. These hot spots can be shortcomings or critical aspects in structure, communication, the warning chain, or even in the structure of potentially affected stakeholders or in the civil protection system itself. Due to the increasing complexity of interactions and interdependencies within and between societies and nature, it is important to choose a holistic approach. For risk analyses like those of the storms in Berlin, it captures many important factors with their effects. For risk analyses, it is also important to take potential users into consideration: the analysis derives its importance from its later use. In addition to a theoretical background, a focus on the application should therefore be set from the beginning. To obtain usable results, it is helpful to complement the theoretical meta-level with a stakeholder-oriented level. An iterative investigation and combination of different layers for the risk analysis explores important influencing factors and allows tailoring of results to different stakeholder groups. Layers are indicators gained from data sets, such as losses from insurance data. Tailoring is important because of differing requirements, e.g. by technical or medical assistance. Stakeholders' feedback in the iterative investigation also reveals structural limitations for later applications, such as special laws the fire brigades have to comply with. Additionally, using actors' perspectives offers the chance to convince practitioners to take part in the analysis. Their participation is an essential component of applied science. They are important data suppliers, whose goodwill is needed to ensure good results. Based on their experience, they can also help by continuously evaluating the results and their correspondence to reality. Using case studies can help to identify important stakeholders, notably potentially affected groups. To cover the essential interests of all important stakeholders, a wide range of vulnerabilities, regarding physical and social aspects and including the corresponding resiliences, has to be assessed. The case studies of storm events offer a solid basis for investigation, no matter which method is used. They expose shortcomings such as gaps in the warning chain or misunderstandings in warning communication. Case studies of extreme events are of great interest to many stakeholders; insurers and fire brigades use them in their daily work. Thus a case-study-based approach is a further chance to integrate practitioners into the analysis process and to obtain results which they easily understand and can transfer into application. There could be a second advantage in taking many data sets into account: each data set, like the meteorological observations of wind gust speeds, has inherent shortcomings such as limited expressiveness, significance or, especially, uncertainties. Using various approaches could frame the final result and prevent expanding biases or misinterpretations. Altogether, this work stresses the role of transdisciplinary holistic approaches in vulnerability assessments for risk analyses.
Polarimetric Thomson scattering for high Te fusion plasmas
NASA Astrophysics Data System (ADS)
Giudicotti, L.
2017-11-01
Polarimetric Thomson scattering (TS) is a technique for the analysis of TS spectra in which the electron temperature Te is determined from the depolarization of the scattered radiation, a relativistic effect noticeable only in very hot (Te >= 10 keV) fusion plasmas. It has been proposed as a complementary technique to supplement the conventional spectral analysis in the ITER CPTS (Core Plasma Thomson Scattering) system for measurements in high Te, low ne plasma conditions. In this paper we review the characteristics of the depolarized TS radiation, with special emphasis on the conditions of the ITER CPTS system, and we describe a possible implementation of this diagnostic method suitable for significantly improving the performance of the conventional TS spectral analysis in the high Te range.
Discrete fourier transform (DFT) analysis for applications using iterative transform methods
NASA Technical Reports Server (NTRS)
Dean, Bruce H. (Inventor)
2012-01-01
According to various embodiments, a method is provided for determining aberration data for an optical system. The method comprises collecting a data signal, and generating a pre-transformation algorithm. The data is pre-transformed by multiplying the data with the pre-transformation algorithm. A discrete Fourier transform of the pre-transformed data is performed in an iterative loop. The method further comprises back-transforming the data to generate aberration data.
Numerical Grid Generation and Potential Airfoil Analysis and Design
1988-01-01
[OCR fragments of the report text describing the Jacobi, Gauss-Seidel, SOR and ADI iterative methods: in the Jacobi method, each new value of a function is computed entirely from old values of the preceding iteration plus the inhomogeneous (boundary condition) term; the Gauss-Seidel method uses already-computed new values as soon as they are available, with diagonal dominance of [A] a sufficient condition for its convergence; successive over-relaxation (SOR) is also discussed.]
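A compact illustration of the Jacobi and Gauss-Seidel iterations described in the fragments above follows, applied to a small diagonally dominant system (which, as the fragments note, guarantees Gauss-Seidel convergence). The test matrix is an arbitrary example.

```python
import numpy as np

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])                     # diagonally dominant, so both methods converge
b = np.array([15.0, 10.0, 10.0])

def jacobi(A, b, iters=25):
    x = np.zeros_like(b)
    D = np.diag(A)
    for _ in range(iters):
        x = (b - (A - np.diag(D)) @ x) / D           # every new value uses only old values
    return x

def gauss_seidel(A, b, iters=25):
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):                      # new values are used as soon as available
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

print("Jacobi:      ", jacobi(A, b))
print("Gauss-Seidel:", gauss_seidel(A, b))
print("Exact:       ", np.linalg.solve(A, b))
```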
Virtual patient design: exploring what works and why. A grounded theory study
Bateman, James; Allen, Maggie; Samani, Dipti; Kidd, Jane; Davies, David
2013-01-01
Objectives Virtual patients (VPs) are online representations of clinical cases used in medical education. Widely adopted, they are well placed to teach clinical reasoning skills. International technology standards mean VPs can be created, shared and repurposed between institutions. A systematic review has highlighted the lack of evidence to support which of the numerous VP designs may be effective, and why. We set out to research the influence of VP design on medical undergraduates. Methods This is a grounded theory study into the influence of VP design on undergraduate medical students. Following a review of the literature and publicly available VP cases, we identified important design properties. We integrated them into two substantial VPs produced for this research. Using purposeful iterative sampling, 46 medical undergraduates were recruited to participate in six focus groups. Participants completed both VPs, an evaluation and a 1-hour focus group discussion. These were digitally recorded, transcribed and analysed using grounded theory, supported by computer-assisted analysis. Following open, axial and selective coding, we produced a theoretical model describing how students learn from VPs. Results We identified a central core phenomenon designated ‘learning from the VP’. This had four categories: VP Construction; External Preconditions; Student–VP Interaction, and Consequences. From these, we constructed a three-layer model describing the interactions of students with VPs. The inner layer consists of the student's cognitive and behavioural preconditions prior to sitting a case. The middle layer considers the VP as an ‘encoded object’, an e-learning artefact and as a ‘constructed activity’, with associated pedagogic and organisational elements. The outer layer describes cognitive and behavioural change. Conclusions This is the first grounded theory study to explore VP design. This original research has produced a model which enhances understanding of how and why the delivery and design of VPs influence learning. The model may be of practical use to authors, institutions and researchers. PMID:23662877
NASA Astrophysics Data System (ADS)
Sato, S.; Takatsu, H.; Maki, K.; Yamada, K.; Mori, S.; Iida, H.; Santoro, R. T.
1997-09-01
Gamma-ray exposure dose rates at the ITER site boundary were estimated for the cases of removal of a failed activated Toroidal Field (TF) coil from the torus and removal of a failed activated TF coil together with a sector of the activated Vacuum Vessel (VV). Skyshine analyses were performed using the two-dimensional SN radiation transport code, DOT3.5. The gamma-ray exposure dose rates on the ground at the site boundary (presently assumed to be 1 km from the ITER building) were calculated to be 1.1 and 84 μSv/year for removal of the TF coil without and with a VV sector, respectively. The dose rate level for the latter case is close to the tentative radiation limit of 100 μSv/year, so an additional ~14 cm of concrete is required in the ITER building roof to satisfy the criterion of a safety factor of ten for the site boundary dose rate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prindle, N.H.; Mendenhall, F.T.; Trauth, K.
1996-05-01
The Systems Prioritization Method (SPM) is a decision-aiding tool developed by Sandia National Laboratories (SNL). SPM provides an analytical basis for supporting programmatic decisions for the Waste Isolation Pilot Plant (WIPP) to meet selected portions of the applicable US EPA long-term performance regulations. The first iteration of SPM (SPM-1), the prototype for SPM, was completed in 1994. It served as a benchmark and a test bed for developing the tools needed for the second iteration of SPM (SPM-2). SPM-2, completed in 1995, is intended for programmatic decision making. This is Volume II of the three-volume final report of the second iteration of the SPM. It describes the technical input and model implementation for SPM-2, and presents the SPM-2 technical baseline and the activities, activity outcomes, outcome probabilities, and the input parameters for SPM-2 analysis.
Thermal analysis of the in-vessel components of the ITER plasma-position reflectometry.
Quental, P B; Policarpo, H; Luís, R; Varela, P
2016-11-01
The ITER plasma position reflectometry system measures the edge electron density profile of the plasma, providing real-time supplementary contribution to the magnetic measurements of the plasma-wall distance. Some of the system components will be in direct sight of the plasma and therefore subject to plasma and stray radiation, which may cause excessive temperatures and stresses. In this work, thermal finite element analysis of the antenna and adjacent waveguides is conducted with ANSYS V17 (ANSYS® Academic Research, Release 17.0, 2016). Results allow the identification of critical temperature points, and solutions are proposed to improve the thermal behavior of the system.
Deductive Evaluation: Formal Code Analysis With Low User Burden
NASA Technical Reports Server (NTRS)
Di Vito, Ben L.
2016-01-01
We describe a framework for symbolically evaluating iterative C code using a deductive approach that automatically discovers and proves program properties. Although verification is not performed, the method can infer detailed program behavior. Software engineering work flows could be enhanced by this type of analysis. Floyd-Hoare verification principles are applied to synthesize loop invariants, using a library of iteration-specific deductive knowledge. When needed, theorem proving is interleaved with evaluation and performed on the fly. Evaluation results take the form of inferred expressions and type constraints for values of program variables. An implementation using PVS (Prototype Verification System) is presented along with results for sample C functions.
Conceptual design of ACB-CP for ITER cryogenic system
NASA Astrophysics Data System (ADS)
Jiang, Yongcheng; Xiong, Lianyou; Peng, Nan; Tang, Jiancheng; Liu, Liqiang; Zhang, Liang
2012-06-01
ACB-CP (Auxiliary Cold Box for Cryopumps) is used to supply the cryopumps system with necessary cryogen in ITER (International Thermonuclear Experimental Reactor) cryogenic distribution system. The conceptual design of ACB-CP contains thermo-hydraulic analysis, 3D structure design and strength checking. Through the thermohydraulic analysis, the main specifications of process valves, pressure safety valves, pipes, heat exchangers can be decided. During the 3D structure design process, vacuum requirement, adiabatic requirement, assembly constraints and maintenance requirement have been considered to arrange the pipes, valves and other components. The strength checking has been performed to crosscheck if the 3D design meets the strength requirements for the ACB-CP.
Sorting Five Human Tumor Types Reveals Specific Biomarkers and Background Classification Genes.
Roche, Kimberly E; Weinstein, Marvin; Dunwoodie, Leland J; Poehlman, William L; Feltus, Frank A
2018-05-25
We applied two state-of-the-art, knowledge independent data-mining methods - Dynamic Quantum Clustering (DQC) and t-Distributed Stochastic Neighbor Embedding (t-SNE) - to data from The Cancer Genome Atlas (TCGA). We showed that the RNA expression patterns for a mixture of 2,016 samples from five tumor types can sort the tumors into groups enriched for relevant annotations including tumor type, gender, tumor stage, and ethnicity. DQC feature selection analysis discovered 48 core biomarker transcripts that clustered tumors by tumor type. When these transcripts were removed, the geometry of tumor relationships changed, but it was still possible to classify the tumors using the RNA expression profiles of the remaining transcripts. We continued to remove the top biomarkers for several iterations and performed cluster analysis. Even though the most informative transcripts were removed from the cluster analysis, the sorting ability of remaining transcripts remained strong after each iteration. Further, in some iterations we detected a repeating pattern of biological function that wasn't detectable with the core biomarker transcripts present. This suggests the existence of a "background classification" potential in which the pattern of gene expression after continued removal of "biomarker" transcripts could still classify tumors in agreement with the tumor type.
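The loop below sketches the iterative "remove the top discriminating features and re-cluster" idea on simulated expression data, using ordinary k-means and an F-statistic ranking; it is not DQC or t-SNE, and the data, feature counts, and removal batch size are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import f_classif
from sklearn.metrics import adjusted_rand_score

# Simulated "expression" matrix: 300 samples from 3 groups, 2000 features, only 60 informative.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 2], 100)
X = rng.normal(size=(300, 2000))
X[:, :60] += labels[:, None] * 1.5                   # the first 60 features separate the groups

remaining = np.arange(X.shape[1])
for it in range(5):
    pred = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[:, remaining])
    ari = adjusted_rand_score(labels, pred)
    print(f"iteration {it}: {len(remaining)} features, agreement with true groups (ARI) = {ari:.2f}")
    # Rank remaining features by how strongly they discriminate the current clusters,
    # then drop the 20 strongest "biomarkers" before re-clustering.
    F, _ = f_classif(X[:, remaining], pred)
    remaining = remaining[np.argsort(F)[:-20]]
```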
A Study of Morrison's Iterative Noise Removal Method. Final Report M. S. Thesis
NASA Technical Reports Server (NTRS)
Ioup, G. E.; Wright, K. A. R.
1985-01-01
Morrison's iterative noise removal method is studied by characterizing its effect upon systems of differing noise level and response function. The nature of data acquired from a linear shift invariant instrument is discussed so as to define the relationship between the input signal, the instrument response function, and the output signal. Fourier analysis is introduced, along with several pertinent theorems, as a tool to more thorough understanding of the nature of and difficulties with deconvolution. In relation to such difficulties the necessity of a noise removal process is discussed. Morrison's iterative noise removal method and the restrictions upon its application are developed. The nature of permissible response functions is discussed, as is the choice of the response functions used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazon, D., E-mail: Didier.Mazon@cea.fr; Jardin, A.; Liegeard, C.
2016-11-15
Measuring Soft X-Ray (SXR) radiation [0.1 keV; 15 keV] in tokamaks is a standard way of extracting valuable information on the particle transport and magnetohydrodynamic activity. Generally, the analysis is performed with detectors positioned close to the plasma for a direct line of sight. A burning plasma, like the ITER deuterium-tritium phase, is too harsh an environment to permit the use of such detectors in close vicinity of the machine. We have thus investigated in this article the possibility of using polycapillary lenses in ITER to transport the SXR information several meters away from the plasma in the complex port-plug geometry.
NASA Technical Reports Server (NTRS)
Winget, J. M.; Hughes, T. J. R.
1985-01-01
The particular problems investigated in the present study arise from nonlinear transient heat conduction. One of two types of nonlinearities considered is related to a material temperature dependence which is frequently needed to accurately model behavior over the range of temperature of engineering interest. The second nonlinearity is introduced by radiation boundary conditions. The finite element equations arising from the solution of nonlinear transient heat conduction problems are formulated. The finite element matrix equations are temporally discretized, and a nonlinear iterative solution algorithm is proposed. Algorithms for solving the linear problem are discussed, taking into account the form of the matrix equations, Gaussian elimination, cost, and iterative techniques. Attention is also given to approximate factorization, implementational aspects, and numerical results.
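As a small worked example of an implicit time discretization with a nonlinear iteration, the sketch below advances a 1-D conduction problem with a radiating end face by backward Euler, solving each step with Newton's method. The geometry, material data, mesh, and the simple finite-difference (rather than finite element) spatial discretization are invented for illustration and are not the study's formulation.

```python
import numpy as np

# 1-D rod: fixed temperature at x=0, radiating face at x=L. Backward Euler in time,
# Newton iteration (finite-difference Jacobian) on the nonlinear residual at each step.
n, L, dt = 21, 0.1, 1.0
dx = L / (n - 1)
k, rho_c = 15.0, 4.0e6                      # conductivity [W/m/K], volumetric heat capacity [J/m^3/K]
eps_sigma = 0.8 * 5.67e-8                   # emissivity times the Stefan-Boltzmann constant
T_env, T_hot = 300.0, 900.0

def residual(Tn, Told):
    R = np.zeros(n)
    R[0] = Tn[0] - T_hot                    # prescribed hot end
    for i in range(1, n - 1):
        R[i] = rho_c * (Tn[i] - Told[i]) / dt - k * (Tn[i-1] - 2.0*Tn[i] + Tn[i+1]) / dx**2
    # Radiating face: conductive flux arriving at the face balances radiation to the surroundings
    # (the face's own heat capacity is neglected in this sketch).
    R[-1] = -k * (Tn[-1] - Tn[-2]) / dx - eps_sigma * (Tn[-1]**4 - T_env**4)
    return R

T = np.full(n, T_env); T[0] = T_hot
for step in range(50):
    Told, Tn = T.copy(), T.copy()
    for _ in range(20):                     # Newton loop
        R = residual(Tn, Told)
        J = np.zeros((n, n))
        for j in range(n):                  # finite-difference Jacobian, adequate for a tiny mesh
            Tp = Tn.copy(); Tp[j] += 1e-3
            J[:, j] = (residual(Tp, Told) - R) / 1e-3
        delta = np.linalg.solve(J, R)
        Tn -= delta
        if np.max(np.abs(delta)) < 1e-6:
            break
    T = Tn
print("temperatures at x = 0, L/4, L/2, 3L/4, L:", np.round(T[::5], 1))
```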
NASA Astrophysics Data System (ADS)
Anomohanran, Ochuko; Ofomola, Merrious Oviri; Okocha, Fredrick Ogochukwu
2017-05-01
Groundwater study involving the application of geophysical logging and vertical electrical sounding (VES) methods was carried out in parts of the Ndokwa area of Delta State, Nigeria. The objective was to delineate the geological situation and the groundwater condition of the area. The geophysical logging of a drilled well and thirty VESs of the Schlumberger configuration were executed in this study using the Abem SAS 1000/4000 Terrameter. The result of the lithological study from the drilled well showed that the subsurface formation consists of lateritic topsoil, very fine sand, clayey fine sand, fine and medium grain sand, coarse sand, medium coarse sand and very coarse sand. The interpretation of the vertical electrical sounding data, using a combination of curve matching and WinResist computer iteration, showed a close correlation with the well record. The result revealed the presence of four geoelectric layers, with the aquifer identified in the fourth layer, having resistivity ranging from 480 to 11,904 Ωm and depth ranging between 17.8 and 38.8 m. The analysis of the geophysical logging revealed that the average values of the electrical conductivity and the total dissolved solids of the groundwater in the aquifer were 229 μS/cm and 149 mg/cm3, respectively. These results indicate that the groundwater is within the permissible limits set by the Standard Organization of Nigeria for potable water, which are 1000 μS/cm for electrical conductivity and 500 mg/cm3 for total dissolved solids. The fourth layer was therefore identified as the potential non-conductive zone suitable for groundwater development in the study area.
Rueda, Oscar M; Diaz-Uriarte, Ramon
2007-10-16
Yu et al. (BMC Bioinformatics 2007,8: 145+) have recently compared the performance of several methods for the detection of genomic amplification and deletion breakpoints using data from high-density single nucleotide polymorphism arrays. One of the methods compared is our non-homogenous Hidden Markov Model approach. Our approach uses Markov Chain Monte Carlo for inference, but Yu et al. ran the sampler for a severely insufficient number of iterations for a Markov Chain Monte Carlo-based method. Moreover, they did not use the appropriate reference level for the non-altered state. We rerun the analysis in Yu et al. using appropriate settings for both the Markov Chain Monte Carlo iterations and the reference level. Additionally, to show how easy it is to obtain answers to additional specific questions, we have added a new analysis targeted specifically to the detection of breakpoints. The reanalysis shows that the performance of our method is comparable to that of the other methods analyzed. In addition, we can provide probabilities of a given spot being a breakpoint, something unique among the methods examined. Markov Chain Monte Carlo methods require using a sufficient number of iterations before they can be assumed to yield samples from the distribution of interest. Running our method with too small a number of iterations cannot be representative of its performance. Moreover, our analysis shows how our original approach can be easily adapted to answer specific additional questions (e.g., identify edges).
GWASinlps: Nonlocal prior based iterative SNP selection tool for genome-wide association studies.
Sanyal, Nilotpal; Lo, Min-Tzu; Kauppi, Karolina; Djurovic, Srdjan; Andreassen, Ole A; Johnson, Valen E; Chen, Chi-Hua
2018-06-19
Multiple marker analysis of the genome-wide association study (GWAS) data has gained ample attention in recent years. However, because of the ultra high-dimensionality of GWAS data, such analysis is challenging. Frequently used penalized regression methods often lead to large number of false positives, whereas Bayesian methods are computationally very expensive. Motivated to ameliorate these issues simultaneously, we consider the novel approach of using nonlocal priors in an iterative variable selection framework. We develop a variable selection method, named, iterative nonlocal prior based selection for GWAS, or GWASinlps, that combines, in an iterative variable selection framework, the computational efficiency of the screen-and-select approach based on some association learning and the parsimonious uncertainty quantification provided by the use of nonlocal priors. The hallmark of our method is the introduction of 'structured screen-and-select' strategy, that considers hierarchical screening, which is not only based on response-predictor associations, but also based on response-response associations, and concatenates variable selection within that hierarchy. Extensive simulation studies with SNPs having realistic linkage disequilibrium structures demonstrate the advantages of our computationally efficient method compared to several frequentist and Bayesian variable selection methods, in terms of true positive rate, false discovery rate, mean squared error, and effect size estimation error. Further, we provide empirical power analysis useful for study design. Finally, a real GWAS data application was considered with human height as phenotype. An R-package for implementing the GWASinlps method is available at https://cran.r-project.org/web/packages/GWASinlps/index.html. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Goossens, Bart; Aelterman, Jan; Luong, Hiêp; Pižurica, Aleksandra; Philips, Wilfried
2011-09-01
The shearlet transform is a recent sibling in the family of geometric image representations that provides a traditional multiresolution analysis combined with a multidirectional analysis. In this paper, we present a fast DFT-based analysis and synthesis scheme for the 2D discrete shearlet transform. Our scheme conforms to the continuous shearlet theory to a high extent, provides perfect numerical reconstruction (up to floating point rounding errors) in a non-iterative scheme, and is highly suitable for parallel implementation (e.g. FPGA, GPU). We show that our discrete shearlet representation is also a tight frame and that the redundancy factor of the transform is around 2.6, independent of the number of analysis directions. Experimental denoising results indicate that the transform performs the same or even better than several related multiresolution transforms, while having a significantly lower redundancy factor.
FOLDER: A numerical tool to simulate the development of structures in layered media
NASA Astrophysics Data System (ADS)
Adamuszek, Marta; Dabrowski, Marcin; Schmid, Daniel W.
2015-04-01
FOLDER is a numerical toolbox for modelling deformation in layered media during layer parallel shortening or extension in two dimensions. FOLDER builds on MILAMIN [1], a finite element method based mechanical solver, with a range of utilities included from the MUTILS package [2]. Numerical mesh is generated using the Triangle software [3]. The toolbox includes features that allow for: 1) designing complex structures such as multi-layer stacks, 2) accurately simulating large-strain deformation of linear and non-linear viscous materials, 3) post-processing of various physical fields such as velocity (total and perturbing), rate of deformation, finite strain, stress, deviatoric stress, pressure, apparent viscosity. FOLDER is designed to ensure maximum flexibility to configure model geometry, define material parameters, specify range of numerical parameters in simulations and choose the plotting options. FOLDER is an open source MATLAB application and comes with a user friendly graphical interface. The toolbox additionally comprises an educational application that illustrates various analytical solutions of growth rates calculated for the cases of folding and necking of a single layer with interfaces perturbed with a single sinusoidal waveform. We further derive two novel analytical expressions for the growth rate in the cases of folding and necking of a linear viscous layer embedded in a linear viscous medium of a finite thickness. We use FOLDER to test the accuracy of single-layer folding simulations using various 1) spatial and temporal resolutions, 2) time integration schemes, and 3) iterative algorithms for non-linear materials. The accuracy of the numerical results is quantified by: 1) comparing them to analytical solution, if available, or 2) running convergence tests. As a result, we provide a map of the most optimal choice of grid size, time step, and number of iterations to keep the results of the numerical simulations below a given error for a given time integration scheme. We also demonstrate that Euler and Leapfrog time integration schemes are not recommended for any practical use. Finally, the capabilities of the toolbox are illustrated based on two examples: 1) shortening of a synthetic multi-layer sequence and 2) extension of a folded quartz vein embedded in phyllite from Sprague Upper Reservoir (example discussed by Sherwin and Chapple [4]). The latter example demonstrates that FOLDER can be successfully used for reverse modelling and mechanical restoration. [1] Dabrowski, M., Krotkiewski, M., and Schmid, D. W., 2008, MILAMIN: MATLAB-based finite element method solver for large problems. Geochemistry Geophysics Geosystems, vol. 9. [2] Krotkiewski, M. and Dabrowski M., 2010 Parallel symmetric sparse matrix-vector product on scalar multi-core cpus. Parallel Computing, 36(4):181-198 [3] Shewchuk, J. R., 1996, Triangle: Engineering a 2D Quality Mesh Generator and Delaunay Triangulator, In: Applied Computational Geometry: Towards Geometric Engineering'' (Ming C. Lin and Dinesh Manocha, editors), Vol. 1148 of Lecture Notes in Computer Science, pp. 203-222, Springer-Verlag, Berlin [4] Sherwin, J.A., Chapple, W.M., 1968. Wavelengths of single layer folds - a Comparison between theory and Observation. American Journal of Science 266 (3), p. 167-179
Effect of a Starting Model on the Solution of a Travel Time Seismic Tomography Problem
NASA Astrophysics Data System (ADS)
Yanovskaya, T. B.; Medvedev, S. V.; Gobarenko, V. S.
2018-03-01
In the problems of three-dimensional (3D) travel time seismic tomography where the data are travel times of diving waves and the starting model is a system of plane layers where the velocity is a function of depth alone, the solution turns out to strongly depend on the selection of the starting model. This is due to the fact that in the different starting models, the rays between the same points can intersect different layers, which makes the tomography problem fundamentally nonlinear. This effect is demonstrated by the model example. Based on the same example, it is shown how the starting model should be selected to ensure a solution close to the true velocity distribution. The starting model (the average dependence of the seismic velocity on depth) should be determined by the method of successive iterations at each step of which the horizontal velocity variations in the layers are determined by solving the two-dimensional tomography problem. An example illustrating the application of this technique to the P-wave travel time data in the region of the Black Sea basin is presented.
A Map for Clinical Laboratories Management Indicators in the Intelligent Dashboard
Azadmanjir, Zahra; Torabi, Mashallah; Safdari, Reza; Bayat, Maryam; Golmahi, Fatemeh
2015-01-01
Introduction: management challenges of clinical laboratories are more complicated in educational hospital clinical laboratories. Managers can use tools of business intelligence (BI), such as information dashboards, that provide the possibility of intelligent decision-making and problem solving for increasing income, reducing spending, utilization management and even improving quality. A critical phase of dashboard design is setting indicators and modeling the causal relations between them. The paper describes the process of creating a map for a laboratory dashboard. Methods: the study is one part of an action research effort that began in 2012 as an innovation initiative for implementing a laboratory intelligent dashboard. Laboratory management problems in educational hospitals were determined in brainstorming sessions. Then, with regard to these problems, key performance indicators (KPIs) were specified. Results: the map of indicators was designed in the form of three layers. The layers have a causal relationship, so that issues measured in the subsequent layers affect issues measured in the prime layers. Conclusion: the proposed indicator map can be the basis of performance monitoring. However, these indicators can be modified and improved during iterations of the dashboard designing process. PMID:26483593
NASA Technical Reports Server (NTRS)
Green, M. J.; Nachtsheim, P. R.
1972-01-01
A numerical method for the solution of large systems of nonlinear differential equations of the boundary-layer type is described. The method is a modification of the technique for satisfying asymptotic boundary conditions. The present method employs inverse interpolation instead of the Newton method to adjust the initial conditions of the related initial-value problem. This eliminates the so-called perturbation equations. The elimination of the perturbation equations not only reduces the user's preliminary work in the application of the method, but also reduces the number of time-consuming initial-value problems to be numerically solved at each iteration. For further ease of application, the solution of the overdetermined system for the unknown initial conditions is obtained automatically by applying Golub's linear least-squares algorithm. The relative ease of application of the proposed numerical method increases directly as the order of the differential-equation system increases. Hence, the method is especially attractive for the solution of large-order systems. After the method is described, it is applied to a fifth-order problem from boundary-layer theory.
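A compact illustration of adjusting the unknown initial condition of a related initial-value problem by linear inverse interpolation (the secant update) rather than Newton's method is given below, applied to the classical Blasius boundary-layer equation. It is a generic shooting sketch with a single missing initial condition, not the report's multi-condition least-squares procedure.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Blasius equation f''' + 0.5*f*f'' = 0 with f(0) = f'(0) = 0 and f'(inf) = 1, solved by shooting:
# the missing initial condition s = f''(0) is adjusted by secant (inverse) interpolation on the
# mismatch in the far-field boundary condition.
def blasius(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def mismatch(s, eta_max=10.0):
    sol = solve_ivp(blasius, [0.0, eta_max], [0.0, 0.0, s], rtol=1e-9, atol=1e-12)
    return sol.y[1, -1] - 1.0                     # f'(eta_max) should equal 1

s0, s1 = 0.1, 1.0                                 # two starting guesses for f''(0)
g0, g1 = mismatch(s0), mismatch(s1)
for _ in range(20):
    s2 = s1 - g1 * (s1 - s0) / (g1 - g0)          # secant / linear inverse interpolation
    s0, g0, s1, g1 = s1, g1, s2, mismatch(s2)
    if abs(g1) < 1e-10:
        break
print(f"f''(0) = {s1:.6f} (literature value ~0.33206)")
```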
NASA Technical Reports Server (NTRS)
Wilmoth, R. G.
1980-01-01
A viscous-inviscid interaction model was developed to account for jet entrainment effects in the prediction of the subsonic flow over nozzle afterbodies. The model is based on the concept of a weakly interacting shear layer in which the local streamline deflections due to entrainment are accounted for by a displacement-thickness type of correction to the inviscid plume boundary. The entire flow field is solved in an iterative manner to account for the effects on the inviscid external flow of the turbulent boundary layer, turbulent mixing and chemical reactions in the shear layer, and the inviscid jet exhaust flow. The components of the computational model are described, and numerical results are presented to illustrate the interactive effects of entrainment on the overall flow structure. The validity of the model is assessed by comparisons with data obtained from flow-field measurements on cold-air jet exhausts. Numerical results and experimental data are also given to show the entrainment effects on nozzle boattail drag under various jet exhaust and free-stream flow conditions.
Control of vortical separation on conical bodies
NASA Technical Reports Server (NTRS)
Mourtos, Nikos J.; Roberts, Leonard
1987-01-01
In a variety of aeronautical applications, the flow around conical bodies at incidence is of interest. Such applications include, but are not limited to, highly maneuverable aircraft with delta wings, the aerospace plane and nose portions of spike inlets. The theoretical model used has three parts. First, the single line vortex model is used within the framework of slender body theory to compute the outer inviscid field for specified separation lines. Next, the three dimensional boundary layer is represented by a momentum equation for the cross flow, analogous to that for a plane boundary layer; a von Karman Pohlhausen approximation is applied to solve this equation. The cross flow separation for both laminar and turbulent layers is determined by matching the pressure at the upper and lower separation points. This iterative procedure yields a unique solution for the separation lines and consequently for the position of the vortices and the vortex lift on the body. Lastly, control of separation is achieved by blowing tangentially from a slot located along a cone generator. It is found that for very small blowing coefficients, the separation can be postponed or suppressed completely.
NASA Astrophysics Data System (ADS)
Li, Jie; Guo, LiXin; He, Qiong; Wei, Bing
2012-10-01
An iterative strategy combining the Kirchhoff approximation (KA) with the hybrid finite element-boundary integral (FE-BI) method is presented in this paper to study the interactions between an inhomogeneous object and the underlying rough surface. KA is applied to study scattering from the underlying rough surface, whereas FE-BI deals with scattering from the target above it. Both methods use updated excitation sources. Huygens' equivalence principle and an iterative strategy are employed to account for the multi-scattering effects. This hybrid FE-BI-KA scheme is an improved and generalized version of the previous hybrid Kirchhoff approximation-method of moments (KA-MoM). The newly presented hybrid method has the following advantages: (1) the feasibility of modeling multi-scale scattering problems (large scale underlying surface and small scale target); (2) low memory requirement as in hybrid KA-MoM; (3) the ability to deal with scattering from inhomogeneous (including coated or layered) scatterers above rough surfaces. Numerical results are given to evaluate the accuracy of the multi-hybrid technique; the computing time and memory requirements consumed in specific numerical simulations of FE-BI-KA are compared with those of MoM. The convergence performance is analyzed by studying the iteration number variation caused by related parameters. Then bistatic scattering from inhomogeneous objects of different configurations above a dielectric Gaussian rough surface is calculated and the influences of dielectric composition and surface roughness on the scattering pattern are discussed.
Efficient Geometry and Data Handling for Large-Scale Monte Carlo - Thermal-Hydraulics Coupling
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard
2014-06-01
Detailed coupling of thermal-hydraulics calculations to Monte Carlo reactor criticality calculations requires each axial layer of each fuel pin to be defined separately in the input to the Monte Carlo code in order to assign to each volume the temperature according to the result of the TH calculation, and if the volume contains coolant, also the density of the coolant. This leads to huge input files even for small systems. In this paper a methodology for dynamical assignment of temperatures with respect to cross section data is demonstrated to overcome this problem. The method is implemented in MCNP5. The method is verified for an infinite lattice with 3x3 BWR-type fuel pins with fuel, cladding and moderator/coolant explicitly modeled. For each pin 60 axial zones are considered with different temperatures and coolant densities. The results of the axial power distribution per fuel pin are compared to a standard MCNP5 run in which all 9x60 cells for fuel, cladding and coolant are explicitly defined and their respective temperatures determined from the TH calculation. Full agreement is obtained. For large-scale application the method is demonstrated for an infinite lattice with 17x17 PWR-type fuel assemblies with 25 rods replaced by guide tubes. Again, all geometrical detail is retained. The method was used in a procedure for coupled Monte Carlo and thermal-hydraulics iterations. Using an optimised iteration technique, convergence was obtained in 11 iteration steps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, C.B.; Haglund, R.C.; Miller, M.E.
1996-12-31
The Vanadium/Lithium system has been the recent focus of ANL's Blanket Technology Program, and for the last several years, ANL's Liquid Metal Blanket activities have been carried out in direct support of the ITER (International Thermonuclear Experimental Reactor) breeding blanket task area. A key feasibility issue for the ITER Vanadium/Lithium breeding blanket is the development of insulator coatings. Design calculations (Hua and Gohar) show that an electrically insulating layer is necessary to maintain an acceptably low magnetohydrodynamic (MHD) pressure drop in the current ITER design. Consequently, the decision was made to convert Argonne's Liquid Metal EXperiment (ALEX) from a 200°C NaK facility to a 350°C lithium facility. The upgraded facility was designed to produce MHD pressure drop data, test section voltage distributions, and heat transfer data for mid-scale test sections and blanket mockups at Hartmann numbers (M) and interaction parameters (N) in the range of 10^3 to 10^5 in lithium at 350°C. Following completion of the upgrade work, a short performance test was conducted, followed by two longer, multiple-hour MHD tests, all at 230°C. The modified ALEX facility performed up to expectations in the testing. MHD pressure drop and test section voltage distributions were collected at Hartmann numbers of 1000. 4 refs., 2 figs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tresemer, K. R.
2015-07-01
ITER is an international project under construction in France that will demonstrate nuclear fusion at a power-plant-relevant scale. The Toroidal Interferometer and Polarimeter (TIP) Diagnostic will be used to measure the plasma electron line density along 5 laser-beam chords. This line-averaged density measurement will be input to the ITER feedback-control system. The TIP is considered the primary diagnostic for these measurements, which are needed for basic ITER machine control; system reliability and accuracy are therefore critical elements of TIP's design. There are two major challenges to the reliability of the TIP system: first, the survivability and performance of in-vessel optics, and second, maintaining optical alignment over long optical paths and large vessel movements. Both of these issues depend greatly on minimizing the overall distortion due to neutron and gamma heating of the Corner Cube Retroreflectors (CCRs). These are small optical mirrors embedded in five first-wall locations around the vacuum vessel, corresponding to certain plasma tangency radii. During the development of the design and location of these CCRs, several iterations of neutronics analyses were performed to determine and minimize the total distortion due to nuclear heating of the CCRs. The CCR corresponding to TIP Channel 2 was chosen for analysis as a representative middle-of-the-road case, being an average distance from the plasma (of the five channels) and having moderate neutron shielding from its blanket shield housing. Results show that Channel 2 meets the requirements of the TIP Diagnostic, but only marginally. These results suggest other CCRs might be at risk of exceeding thermal-deformation limits due to nuclear heating.
NASA Astrophysics Data System (ADS)
Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo
2017-08-01
The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has proven to be an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges remain when MCKD is applied to bearings operating under harsh working conditions. The difficulties come mainly from the strict requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a user-provided prior period. Moreover, the iterative period gradually approaches the true fault period, since it is updated after every iteration step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the shift order, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency-spectrum and envelope-spectrum analysis, since the sampling rate need not be reset. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
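The period-estimation step can be illustrated with a short sketch: take the envelope of the (filtered) signal, remove its mean, and pick the lag of the largest autocorrelation peak beyond a small minimum lag. This is only a plausible reading of the abstract, assuming SciPy's Hilbert transform for the envelope; it is not the authors' implementation.

```python
# Sketch of envelope-autocorrelation period estimation (illustrative, not IMCKD itself).
import numpy as np
from scipy.signal import hilbert

def estimate_period(signal, min_lag=20):
    """Lag of the largest autocorrelation peak of the zero-mean envelope beyond min_lag."""
    env = np.abs(hilbert(signal))            # signal envelope via the analytic signal
    env = env - env.mean()
    acf = np.correlate(env, env, mode="full")[len(env) - 1:]   # non-negative lags
    return int(np.argmax(acf[min_lag:]) + min_lag)

# toy signal: impulses every 100 samples buried in noise
rng = np.random.default_rng(1)
x = 0.2 * rng.standard_normal(4000)
x[::100] += 3.0                               # true fault period = 100 samples
print("estimated period:", estimate_period(x))
```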
Fully Automated Detection of Cloud and Aerosol Layers in the CALIPSO Lidar Measurements
NASA Technical Reports Server (NTRS)
Vaughan, Mark A.; Powell, Kathleen A.; Kuehn, Ralph E.; Young, Stuart A.; Winker, David M.; Hostetler, Chris A.; Hunt, William H.; Liu, Zhaoyan; McGill, Matthew J.; Getzewich, Brian J.
2009-01-01
Accurate knowledge of the vertical and horizontal extent of clouds and aerosols in the earth's atmosphere is critical in assessing the planet's radiation budget and for advancing human understanding of climate change issues. To retrieve this fundamental information from the elastic backscatter lidar data acquired during the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission, a selective, iterated boundary location (SIBYL) algorithm has been developed and deployed. SIBYL accomplishes its goals by integrating an adaptive context-sensitive profile scanner into an iterated multiresolution spatial averaging scheme. This paper provides an in-depth overview of the architecture and performance of the SIBYL algorithm. It begins with a brief review of the theory of target detection in noise-contaminated signals, and an enumeration of the practical constraints levied on the retrieval scheme by the design of the lidar hardware, the geometry of a space-based remote sensing platform, and the spatial variability of the measurement targets. Detailed descriptions are then provided for both the adaptive threshold algorithm used to detect features of interest within individual lidar profiles and the fully automated multiresolution averaging engine within which this profile scanner functions. The resulting fusion of profile scanner and averaging engine is specifically designed to optimize the trade-offs between the widely varying signal-to-noise ratio of the measurements and the disparate spatial resolutions of the detection targets. Throughout the paper, specific algorithm performance details are illustrated using examples drawn from the existing CALIPSO dataset. Overall performance is established by comparisons to existing layer height distributions obtained by other airborne and space-based lidars.
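A threshold-based profile scanner of the general kind described can be sketched in a few lines: flag range bins whose signal exceeds an estimated clear-air background by a multiple of the noise. The synthetic profile, the fixed threshold multiplier, and the assumption that the far-range bins are clear are all illustrative simplifications; the operational SIBYL thresholds are adaptive and considerably more elaborate.

```python
# Toy illustration of threshold-based layer detection in a single lidar profile.
import numpy as np

def detect_layers(profile, background_bins=200, k=4.0):
    """Return a boolean mask of bins whose signal exceeds background mean + k * noise."""
    background = profile[-background_bins:]        # assume the far-range bins are clear air
    mu, sigma = background.mean(), background.std()
    return profile > mu + k * sigma

rng = np.random.default_rng(2)
profile = rng.normal(1.0, 0.1, 1000)               # molecular background plus noise
profile[300:340] += 2.0                            # embedded cloud layer
mask = detect_layers(profile)
idx = np.flatnonzero(mask)
print(f"{idx.size} feature bins detected between bins {idx.min()} and {idx.max()}")
```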
NASA Technical Reports Server (NTRS)
Lachenmayr, Georg
1992-01-01
IABG has been using various servohydraulic test facilities for many years for the reproduction of service loads and environmental loads on all kinds of test objects. For more than 15 years, a multi-axis vibration test facility has been in service; originally designed for earthquake simulation, it has been upgraded to the demands of space testing. First tests with the DFS/STM showed good reproduction accuracy and demonstrated the feasibility of transient vibration testing of space objects on a multi-axis hydraulic shaker. An approach to structural qualification is possible using this test philosophy; it will be outlined, and its obvious advantages over the state-of-the-art single-axis test will be demonstrated by example results. The new test technique places special requirements on the test facility that exceed those of earthquake testing. Most important is the high reproduction accuracy demanded, which calls for a sophisticated control system. The state-of-the-art approach of analog closed-loop control circuits for each actuator, combined with a static decoupling network and an off-line iterative waveform control, is not able to meet all the demands. Therefore, the future overall control system is implemented as a hierarchical, fully digital closed-loop system on a highly parallel transputer network. The innermost layer is the digital actuator controller; the second is the MDOF control of the table movement. The outermost layer would be the off-line iterative waveform control, which is dedicated only to dealing with the interaction of test table and test object, and with nonlinear effects. The outline of the system will be presented.
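Off-line iterative waveform control of the kind assigned to the outermost layer can be sketched as repeated runs in which the drive signal is corrected by the tracking error between runs. The first-order plant and the update gain below are hypothetical stand-ins for the shaker/table dynamics, not IABG's controller.

```python
# Schematic off-line iterative waveform control: correct the drive by the tracking error.
import numpy as np

def plant(drive):
    """Toy stand-in for the shaker/table dynamics: a first-order low-pass response."""
    response = np.zeros_like(drive)
    for i in range(1, len(drive)):
        response[i] = 0.8 * response[i - 1] + 0.2 * drive[i]
    return response

reference = np.sin(np.linspace(0.0, 8.0 * np.pi, 512))   # desired table motion
drive = reference.copy()                                  # initial drive guess
gain = 0.8                                                # update gain, kept below the stability limit
for _ in range(50):
    error = reference - plant(drive)
    drive += gain * error                                 # off-line correction between runs

error = reference - plant(drive)
print(f"final rms tracking error: {np.sqrt(np.mean(error ** 2)):.2e}")
```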
A two-dimensional iterative panel method and boundary layer model for bio-inspired multi-body wings
NASA Astrophysics Data System (ADS)
Blower, Christopher J.; Dhruv, Akash; Wickenheiser, Adam M.
2014-03-01
The increased use of Unmanned Aerial Vehicles (UAVs) has created a continuous demand for improved flight capabilities and range of use. During the last decade, engineers have turned to bio-inspiration for new and innovative flow control methods for gust alleviation, maneuverability, and stability improvement using morphing aircraft wings. The bio-inspired wing design considered in this study mimics the flow manipulation techniques performed by birds to extend the operating envelope of UAVs through the installation of an array of feather-like panels across the airfoil's upper and lower surfaces, replacing the trailing edge flap. Each flap has the ability to deflect into both the airfoil and the inbound airflow using single-degree-of-freedom hinge points situated at 20%, 40%, 60% and 80% of the chord. The installation of the surface flaps offers configurations that enable advantageous maneuvers while alleviating gust disturbances. Because of the number of possible permutations of the flap configurations, an iterative constant-strength doublet/source panel method has been developed with an integrated boundary layer model to calculate the pressure distribution and viscous drag over the wing's surface. As a result, the lift, drag and moment coefficients for each airfoil configuration can be calculated. The flight coefficients of this numerical method are validated using experimental data from a low-speed suction wind tunnel operating at a Reynolds number of 300,000. This method enables the aerodynamic assessment of a morphing wing profile to be performed accurately and efficiently in comparison to Computational Fluid Dynamics methods and experiments, as discussed herein.
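A viscous-inviscid iteration of the general kind described, in which the inviscid solution supplies edge velocities and the boundary-layer model returns a displacement thickness that modifies the effective body, can be sketched as a simple fixed-point loop. The two scalar "solvers" below (a toy edge-velocity relation and a Blasius flat-plate displacement-thickness estimate) are illustrative stand-ins, not the authors' doublet/source panel code.

```python
# Conceptual viscous-inviscid iteration cycled to convergence of the displacement thickness.

def inviscid_edge_velocity(displacement_thickness, u_inf=1.0):
    # thicker effective body -> slightly higher edge velocity (toy relation)
    return u_inf * (1.0 + 2.0 * displacement_thickness)

def boundary_layer_displacement(u_edge, re=3.0e5, chord=1.0):
    # Blasius-like flat-plate estimate: delta* = 1.72 x / sqrt(Re_x), evaluated at x = chord
    re_x = re * u_edge * chord
    return 1.72 * chord / re_x ** 0.5

delta_star = 0.0
for it in range(50):
    u_e = inviscid_edge_velocity(delta_star)
    new_delta = boundary_layer_displacement(u_e)
    if abs(new_delta - delta_star) < 1e-10:
        print(f"converged after {it + 1} iterations: delta* = {new_delta:.5f}, u_e = {u_e:.5f}")
        break
    delta_star = new_delta
```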
NASA Technical Reports Server (NTRS)
Liang, XU; Lettenmaier, Dennis P.; Wood, Eric F.; Burges, Stephen J.
1994-01-01
A generalization of the single soil layer variable infiltration capacity (VIC) land surface hydrological model previously implemented in the Geophysical Fluid Dynamics Laboratory (GFDL) general circulation model (GCM) is described. The new model comprises a two-layer characterization of the soil column and uses an aerodynamic representation of the latent and sensible heat fluxes at the land surface. The infiltration algorithm for the upper layer is essentially the same as for the single-layer VIC model, while the lower-layer drainage formulation is of the form previously implemented in the Max-Planck-Institut GCM. The model partitions the area of interest (e.g., grid cell) into multiple land surface cover types; for each land cover type the fraction of roots in the upper and lower zones is specified. Evapotranspiration consists of three components: canopy evaporation, evaporation from bare soils, and transpiration, which is represented using a canopy and architectural resistance formulation. Once the latent heat flux has been computed, the surface energy balance is iterated to solve for the land surface temperature at each time step. The model was tested using long-term hydrologic and climatological data for Kings Creek, Kansas to estimate and validate the hydrological parameters, and surface flux data from three First International Satellite Land Surface Climatology Project Field Experiment (FIFE) intensive field campaigns in the summer-fall of 1987 to validate the surface energy fluxes.
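Iterating a surface energy balance for the land surface temperature typically means finding the temperature at which net radiation balances the turbulent and ground heat fluxes. The Newton iteration below uses generic textbook-style flux expressions and made-up forcing values; it is not the VIC-2L parameterization itself.

```python
# Sketch: solve the surface energy balance R_net(Ts) - H(Ts) - LE - G = 0 for Ts by Newton iteration.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m-2 K-4

def energy_residual(ts, sw_net=400.0, lw_down=330.0, t_air=293.0,
                    rho_cp=1200.0, r_a=50.0, latent=80.0, ground=20.0):
    lw_up = SIGMA * ts ** 4                       # outgoing longwave (emissivity ~ 1)
    sensible = rho_cp * (ts - t_air) / r_a        # aerodynamic sensible heat flux
    return sw_net + lw_down - lw_up - sensible - latent - ground

ts = 293.0                                        # first guess: air temperature
for _ in range(20):
    f = energy_residual(ts)
    dfdt = -4.0 * SIGMA * ts ** 3 - 1200.0 / 50.0  # analytic derivative for the default rho_cp, r_a
    step = f / dfdt
    ts -= step
    if abs(step) < 1e-6:
        break
print(f"surface temperature: {ts:.2f} K")
```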
Inferring the Presence of Reverse Proxies Through Timing Analysis
2015-06-01
Figure 3.2: The three different instances of timing measurement configurations. Figure 3.3: Permutation of a web request iteration. ... Their data showed that they could detect at least 6 bits of entropy between unlike devices and that it was enough to determine that they are in fact ... depending on the permutation being executed, so that every iteration was conducted under the same distance ...
Nonlinear Analysis of Cavitating Propellers in Nonuniform Flow
1992-10-16
Helmholtz more than a century ago [4]. The method was later extended to treat curved bodies at zero cavitation number by Levi-Civita [4]. The theory was ... 122, 1895. [63] M.P. Tulin. Steady two-dimensional cavity flows about slender bodies. Technical Report 834, DTMB, May 1953. [64] M.P. Tulin ... iterative solution for two-dimensional flows is remarkably fast and that the accuracy of the first iteration solution is sufficient for a wide range of ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gingold, E; Dave, J
2014-06-01
Purpose: The purpose of this study was to compare a new model-based iterative reconstruction with existing reconstruction methods (filtered backprojection and basic iterative reconstruction) using quantitative analysis of standard image quality phantom images. Methods: An ACR accreditation phantom (Gammex 464) and a CATPHAN600 phantom were scanned using 3 routine clinical acquisition protocols (adult axial brain, adult abdomen, and pediatric abdomen) on a Philips iCT system. Each scan was acquired using default conditions and 75%, 50% and 25% dose levels. Images were reconstructed using standard filtered backprojection (FBP), conventional iterative reconstruction (iDose4) and a prototype model-based iterative reconstruction (IMR). Phantom measurements included CT number accuracy, contrast-to-noise ratio (CNR), modulation transfer function (MTF), low contrast detectability (LCD), and noise power spectrum (NPS). Results: The choice of reconstruction method had no effect on CT number accuracy or MTF (p<0.01). The CNR of a 6 HU contrast target was improved by 1–67% with iDose4 relative to FBP, while IMR improved CNR by 145–367% across all protocols and dose levels. Within each scan protocol, the CNR improvement from IMR vs FBP showed a general trend of greater improvement at lower dose levels. NPS magnitude was greatest for FBP and lowest for IMR. The NPS of the IMR reconstruction showed a pronounced decrease with increasing spatial frequency, consistent with the unusual noise texture seen in IMR images. Conclusion: Iterative Model Reconstruction reduces noise and improves contrast-to-noise ratio without sacrificing spatial resolution in CT phantom images. This offers the possibility of radiation dose reduction and improved low contrast detectability compared with filtered backprojection or conventional iterative reconstruction.
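For reference, the CNR figure of merit used throughout this comparison is simply the mean signal difference between a low-contrast insert and the background, divided by the background noise. The sketch below computes it on a synthetic image with an assumed 6 HU insert; it stands in for, but is not, the authors' phantom analysis code.

```python
# Illustrative CNR measurement on a synthetic phantom image.
import numpy as np

def cnr(image, target_mask, background_mask):
    """Contrast-to-noise ratio: (mean target - mean background) / background standard deviation."""
    target = image[target_mask]
    background = image[background_mask]
    return (target.mean() - background.mean()) / background.std()

rng = np.random.default_rng(3)
image = rng.normal(0.0, 10.0, (256, 256))             # noisy uniform background (HU)
yy, xx = np.mgrid[:256, :256]
target_mask = (yy - 128) ** 2 + (xx - 128) ** 2 < 20 ** 2
image[target_mask] += 6.0                              # 6 HU low-contrast insert
background_mask = (yy - 128) ** 2 + (xx - 128) ** 2 > 60 ** 2
print(f"CNR = {cnr(image, target_mask, background_mask):.2f}")
```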
EC assisted start-up experiments reproduction in FTU and AUG for simulations of the ITER case
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granucci, G.; Ricci, D.; Farina, D.
The breakdown and plasma start-up in ITER are well-known issues that have been studied over the last few years in many tokamaks, with the aid of calculations based on simplified modeling. The thickness of the ITER metallic wall and the voltage limits of the Central Solenoid Power Supply strongly limit the maximum achievable toroidal electric field (0.3 V/m), well below the level used in the present generation of tokamaks. In order to have a safe and robust breakdown, the use of Electron Cyclotron (EC) power to assist plasma formation and current ramp-up has been foreseen. This has focused attention on the plasma formation phase in the presence of EC waves, especially in order to predict the power required for a robust breakdown in ITER. Few detailed theoretical studies have been performed to date, owing to the complexity of the problem. A simplified approach, extended from that proposed in ref. [1], has been developed, including a multispecies impurity distribution and EC wave propagation and absorption based on the GRAY code. This integrated model (BK0D) has been benchmarked against ohmic and EC-assisted experiments on FTU and AUG, identifying the key aspects needed for a good reproduction of the data. On this basis, the simulations have been devoted to understanding the best configuration for the ITER case. The dependence on impurity content and the limits on neutral gas pressure have been considered. As a result of the analysis, a reasonable amount of power (1-2 MW) appears to be enough to extend significantly the breakdown and current start-up capability of ITER. The work reports the FTU data reproduction and the ITER case simulations.
Hybrid cloud and cluster computing paradigms for life science applications
2010-01-01
Background: Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing, especially for parallel data-intensive applications. However, they have limited applicability to some areas, such as data mining, because MapReduce performs poorly on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open-source iterative MapReduce system, Twister. Results: Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases. These are linked to MPI applications for the final stages of the data analysis. Further, we have released the open-source Twister iterative MapReduce system and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. Conclusions: The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. Methods: We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments. PMID:21210982
Progress of IRSN R&D on ITER Safety Assessment
NASA Astrophysics Data System (ADS)
Van Dorsselaere, J. P.; Perrault, D.; Barrachin, M.; Bentaib, A.; Gensdarmes, F.; Haeck, W.; Pouvreau, S.; Salat, E.; Seropian, C.; Vendel, J.
2012-08-01
The French "Institut de Radioprotection et de Sûreté Nucléaire" (IRSN), in support of the French "Autorité de Sûreté Nucléaire", is analysing the safety of the ITER fusion installation on the basis of the ITER operator's safety file. IRSN set up a multi-year R&D programme in 2007 to support this safety assessment process. Priority has been given to four technical issues, and the main outcomes of the work done in 2010 and 2011 are summarized in this paper: for simulation of accident scenarios in the vacuum vessel, adaptation of the ASTEC system code; for the risk of explosion of gas-dust mixtures in the vacuum vessel, adaptation of the TONUS-CFD code for gas distribution, development of the DUST code for dust transport, and preparation of IRSN experiments on gas inerting, dust mobilization, and hydrogen-dust mixture explosions; for evaluation of the efficiency of the detritiation systems, thermo-chemical calculations of tritium speciation during transport in the gas phase and preparation of future experiments to evaluate the most influential factors affecting detritiation; for material neutron activation, adaptation of the VESTA Monte Carlo depletion code. The first results of these tasks were used in 2011 for the analysis of the ITER safety file. In the near future, this global R&D programme may be reoriented to account for the feedback from the latter analysis or for new knowledge.
Hybrid cloud and cluster computing paradigms for life science applications.
Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey
2010-12-21
Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing, especially for parallel data-intensive applications. However, they have limited applicability to some areas, such as data mining, because MapReduce performs poorly on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open-source iterative MapReduce system, Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases. These are linked to MPI applications for the final stages of the data analysis. Further, we have released the open-source Twister iterative MapReduce system and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments.
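The iterative structure that plain MapReduce handles poorly, and that Twister targets, is typified by algorithms such as k-means: a map step assigns points to centroids and a reduce step recomputes the centroids, repeated until convergence. The plain-Python/NumPy sketch below shows only that loop structure; it does not use the Twister API, and the data set and parameters are made up for illustration.

```python
# Iterative map-reduce pattern illustrated with k-means (structure only, no Twister API).
import numpy as np

def kmeans(points, k=3, max_iters=100, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(max_iters):
        # "map": assign each point to its nearest centroid
        dists = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # "reduce": recompute each centroid as the mean of its assigned points
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.linalg.norm(new_centroids - centroids) < tol:
            break
        centroids = new_centroids
    return centroids, labels

rng = np.random.default_rng(42)
data = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2)) for c in (0.0, 5.0, 10.0)])
centroids, labels = kmeans(data)
print(np.round(np.sort(centroids[:, 0]), 2))
```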