Sample records for transfer code optim3d

  1. Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A.; Kabel, A.; Lee, L.

    In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.

  2. A Wideband Circularly Polarized Pixelated Dielectric Resonator Antenna.

    PubMed

    Trinh-Van, Son; Yang, Youngoo; Lee, Kang-Yoon; Hwang, Keum Cheol

    2016-08-23

    The design of a wideband circularly polarized pixelated dielectric resonator antenna using a real-coded genetic algorithm (GA) is presented for far-field wireless power transfer applications. The antenna consists of a dielectric resonator (DR) which is discretized into 8 × 8 grid DR bars. The real-coded GA is utilized to estimate the optimal heights of the 64 DR bars to realize circular polarization. The proposed antenna is excited by a narrow rectangular slot etched on the ground plane. A prototype of the proposed antenna is fabricated and tested. The measured -10 dB reflection and 3 dB axial ratio bandwidths are 32.32% (2.62-3.63 GHz) and 14.63% (2.85-3.30 GHz), respectively. A measured peak gain of 6.13 dBic is achieved at 3.2 GHz.
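The real-coded GA described above can be sketched as follows. This is a hedged illustration, not the authors' implementation: the full-wave simulation that scores circular polarization is replaced by a toy fitness, and the bounds, population size, and operators (tournament selection, BLX-α crossover, Gaussian mutation) are all assumptions.

```python
import random

# Minimal real-coded GA sketch: evolve the heights of 64 DR bars.
# The EM evaluation of the axial ratio is replaced by a stand-in fitness.
N_BARS, POP, GENS = 64, 30, 40
H_MIN, H_MAX = 0.0, 10.0   # hypothetical height bounds

def fitness(heights):
    # Stand-in for the electromagnetic evaluation of circular polarization.
    return -sum((h - 5.0) ** 2 for h in heights)

def tournament(pop, scores, k=3):
    picks = random.sample(range(len(pop)), k)
    return pop[max(picks, key=lambda i: scores[i])]

def blend_crossover(a, b, alpha=0.5):
    # BLX-alpha: child genes drawn from an interval around the parents.
    child = []
    for x, y in zip(a, b):
        lo, hi = min(x, y), max(x, y)
        span = hi - lo
        g = random.uniform(lo - alpha * span, hi + alpha * span)
        child.append(min(H_MAX, max(H_MIN, g)))
    return child

def mutate(heights, rate=0.05, sigma=0.5):
    return [min(H_MAX, max(H_MIN, h + random.gauss(0.0, sigma)))
            if random.random() < rate else h for h in heights]

random.seed(0)
pop = [[random.uniform(H_MIN, H_MAX) for _ in range(N_BARS)]
       for _ in range(POP)]
init_score = max(fitness(ind) for ind in pop)
best, best_score = None, float("-inf")
for _ in range(GENS):
    scores = [fitness(ind) for ind in pop]
    for ind, s in zip(pop, scores):
        if s > best_score:
            best, best_score = list(ind), s
    pop = [mutate(blend_crossover(tournament(pop, scores),
                                  tournament(pop, scores)))
           for _ in range(POP)]
```

In the paper's setting the fitness call would launch the full-wave solve, which is why GA evaluation counts, not the Python bookkeeping, dominate the cost.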

  3. Development of 1D Liner Compression Code for IDL

    NASA Astrophysics Data System (ADS)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed in sequence at each time step to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table lookup approach. The commercial low-frequency electromagnetic field solver ANSYS Maxwell 3D is used to solve the magnetic field profile for a static liner at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with results from the commercial explicit dynamics solver ANSYS Explicit Dynamics and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.
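The correction-factor idea might take the following minimal form (an assumption about the approach, not the authors' code): factors derived offline from 3D static solves at discrete liner radii are interpolated at the current radius and applied to the 1D field estimate. The radii and factor values below are invented.

```python
import bisect

# Hypothetical table: ratio of 3D-solver field to 1D estimate at the liner,
# precomputed at a set of static liner radii.
radii   = [0.02, 0.04, 0.06, 0.08, 0.10]   # liner radii (m), invented
factors = [0.62, 0.71, 0.80, 0.88, 0.94]   # B_3d / B_1d ratios, invented

def correction(r):
    """Piecewise-linear interpolation of the tabulated factor, clamped."""
    if r <= radii[0]:
        return factors[0]
    if r >= radii[-1]:
        return factors[-1]
    i = bisect.bisect_right(radii, r)
    t = (r - radii[i - 1]) / (radii[i] - radii[i - 1])
    return factors[i - 1] + t * (factors[i] - factors[i - 1])

def b_field(b_1d, r):
    """Apply the 2D/3D correction to a 1D field estimate at radius r."""
    return b_1d * correction(r)
```

The lookup costs O(log n) per time step, so the coupled circuit/field/dynamics loop keeps 1D speed while inheriting geometry information from the 3D solver.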

  4. Ray-tracing 3D dust radiative transfer with DART-Ray: code upgrade and public release

    NASA Astrophysics Data System (ADS)

    Natale, Giovanni; Popescu, Cristina C.; Tuffs, Richard J.; Clarke, Adam J.; Debattista, Victor P.; Fischera, Jörg; Pasetto, Stefano; Rushton, Mark; Thirlwall, Jordan J.

    2017-11-01

    We present an extensively updated version of the purely ray-tracing 3D dust radiation transfer code DART-Ray. The new version includes five major upgrades: 1) a series of optimizations for the ray-angular density and the scattered radiation source function; 2) the implementation of several data and task parallelizations using hybrid MPI+OpenMP schemes; 3) the inclusion of dust self-heating; 4) the ability to produce surface brightness maps for observers within the models in HEALPix format; 5) the ability to set the expected numerical accuracy at the start of the calculation. We tested the updated code with benchmark models where the dust self-heating is not negligible. Furthermore, using galaxy models, we performed a study of the extent of the source influence volumes, which are critical in determining the efficiency of the DART-Ray algorithm. The new code is publicly available, documented for both users and developers, and accompanied by several programmes to create input grids for different model geometries and to import the results of N-body and SPH simulations. These programmes can be easily adapted to different input geometries, and for different dust models or stellar emission libraries.

  5. Numerical optimization of perturbative coils for tokamaks

    NASA Astrophysics Data System (ADS)

    Lazerson, Samuel; Park, Jong-Kyu; Logan, Nikolas; Boozer, Allen; NSTX-U Research Team

    2014-10-01

    Numerical optimization of coils which apply three-dimensional (3D) perturbative fields to tokamaks is presented. The application of perturbative 3D magnetic fields in tokamaks is now commonplace for control of error fields, resistive wall modes, resonant field drive, and neoclassical toroidal viscosity (NTV) torques. The design of such systems has focused on control of toroidal mode number, with coil shapes based on simple window-pane designs. In this work, a numerical optimization suite based on the STELLOPT 3D equilibrium optimization code is presented. The new code, IPECOPT, replaces the VMEC equilibrium code with the IPEC perturbed equilibrium code, and targets NTV torque by coupling to the PENT code. Fixed boundary optimizations of the 3D fields for the NSTX-U experiment are underway. Initial results suggest NTV torques can be driven by normal field spectra which are not pitch-resonant with the magnetic field lines. Work has focused on driving core torque with n = 1 and edge torques with n = 3 fields. Optimizations of the coil currents for the planned NSTX-U NCC coils highlight the code's free boundary capability. This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy.

  6. Turbomachinery Heat Transfer and Loss Modeling for 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Ameri, Ali

    2005-01-01

    This report focuses on the use of NASA Glenn on-site computational facilities to develop, validate, and apply models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes, enhancing the capability to compute heat transfer and losses in turbomachinery.

  7. Radiative transfer code SHARM-3D for radiance simulations over a non-Lambertian nonhomogeneous surface: intercomparison study.

    PubMed

    Lyapustin, Alexei

    2002-09-20

    Results of an extensive validation study of the new radiative transfer code SHARM-3D are described. The code is designed for modeling of unpolarized monochromatic radiative transfer in the visible and near-IR spectra in the laterally uniform atmosphere over an arbitrarily inhomogeneous anisotropic surface. The surface boundary condition is periodic. The algorithm is based on an exact solution derived with the Green's function method. Several parameterizations were introduced into the algorithm to achieve superior performance. As a result, SHARM-3D is 2-3 orders of magnitude faster than the rigorous code SHDOM. It can model radiances over large surface scenes for a number of incidence-view geometries simultaneously. Extensive comparisons against SHDOM indicate that SHARM-3D has an average accuracy of better than 1%, which along with the high speed of calculations makes it a unique tool for remote-sensing applications in land surface and related atmospheric radiation studies.

  8. Radiative Transfer Code SHARM-3D for Radiance Simulations over a non-Lambertian Nonhomogeneous Surface: Intercomparison Study

    NASA Astrophysics Data System (ADS)

    Lyapustin, Alexei

    2002-09-01

    Results of an extensive validation study of the new radiative transfer code SHARM-3D are described. The code is designed for modeling of unpolarized monochromatic radiative transfer in the visible and near-IR spectra in the laterally uniform atmosphere over an arbitrarily inhomogeneous anisotropic surface. The surface boundary condition is periodic. The algorithm is based on an exact solution derived with the Green's function method. Several parameterizations were introduced into the algorithm to achieve superior performance. As a result, SHARM-3D is 2-3 orders of magnitude faster than the rigorous code SHDOM. It can model radiances over large surface scenes for a number of incidence-view geometries simultaneously. Extensive comparisons against SHDOM indicate that SHARM-3D has an average accuracy of better than 1%, which along with the high speed of calculations makes it a unique tool for remote-sensing applications in land surface and related atmospheric radiation studies.

  9. State of the art in electromagnetic modeling for the Compact Linear Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, Arno; Kabel, Andreas; Lee, Lie-Quan

    SLAC's Advanced Computations Department (ACD) has developed the parallel 3D electromagnetic time-domain code T3P for simulations of wakefields and transients in complex accelerator structures. T3P is based on state-of-the-art Finite Element methods on unstructured grids and features unconditional stability, quadratic surface approximation and up to 6th-order vector basis functions for unprecedented simulation accuracy. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with fast turn-around times, aiding the design of the next generation of accelerator facilities. Applications include simulations of the proposed two-beam accelerator structures for the Compact Linear Collider (CLIC) - wakefield damping in the Power Extraction and Transfer Structure (PETS) and power transfer to the main beam accelerating structures are investigated.

  10. Cirrus Heterogeneity Effects on Cloud Optical Properties Retrieved with an Optimal Estimation Method from MODIS VIS to TIR Channels.

    NASA Technical Reports Server (NTRS)

    Fauchez, T.; Platnick, S.; Meyer, K.; Sourdeval, O.; Cornet, C.; Zhang, Z.; Szczap, F.

    2016-01-01

    This study presents preliminary results on the effect of cirrus heterogeneities on top-of-atmosphere (TOA) simulated radiances or reflectances for MODIS channels centered at 0.86, 2.21, 8.56, 11.01 and 12.03 micrometers, and on cloud optical properties retrieved with a research-level optimal estimation method (OEM). Synthetic cirrus cloud fields are generated using a 3D cloud generator (3DCLOUD) and radiances/reflectances are simulated using a 3D radiative transfer code (3DMCPOL). We find significant differences between the heterogeneity effects on visible and near-infrared (VNIR) radiances and those on thermal infrared (TIR) radiances. However, when both wavelength ranges are combined, heterogeneity effects are dominated by the VNIR horizontal radiative transport effect. As a result, small optical thicknesses are overestimated and large ones are underestimated. Retrieved effective diameters are found to be only slightly affected, in contrast to retrievals using TIR channels only.

  11. 3D-radiative transfer in terrestrial atmosphere: An efficient parallel numerical procedure

    NASA Astrophysics Data System (ADS)

    Bass, L. P.; Germogenova, T. A.; Nikolaeva, O. V.; Kokhanovsky, A. A.; Kuznetsov, V. S.

    2003-04-01

    Light propagation and scattering in the terrestrial atmosphere are usually studied in the framework of 1D radiative transfer theory [1]. In reality, however, particles (e.g., ice crystals, solid and liquid aerosols, cloud droplets) are randomly distributed in 3D space; in particular, their concentrations vary in both the vertical and horizontal directions. Therefore, 3D effects influence modern cloud and aerosol retrieval procedures, which are currently based on 1D radiative transfer theory. It should be pointed out that the standard radiative transfer equation allows these more complex situations to be studied as well [2]. In recent years, a parallel version of the 2D and 3D RADUGA code has been developed. This version has been used successfully for gamma and neutron transport problems [3]. Applications of this code to atmospheric radiative transfer problems are contained in [4], and the capabilities of the RADUGA code are presented in [5]. The RADUGA code system is a universal solver of radiative transfer problems for complicated models, including 2D and 3D aerosol and cloud fields with arbitrary scattering anisotropy, light absorption, an inhomogeneous underlying surface, and topography. Both delta-type and distributed light sources can be accounted for in the framework of the algorithm developed. The numerical procedure is based on the new discrete-ordinate SWDD scheme [6]. The algorithm is specifically designed for parallel supercomputers. The version RADUGA 5.1(P) runs on the MBC1000M [7] (768 processors, each with 10 GB of hard disk memory); its peak performance is 1 Tflops. The corresponding scalar version, RADUGA 5.1, runs on a PC. As a first application of the algorithm, we have studied the shadowing effects of clouds on the neighboring cloudless atmosphere as a function of cloud optical thickness, surface albedo, and illumination conditions. This is of importance for the development of modern satellite aerosol retrieval algorithms.
[1] Sobolev, V. V., 1972: Light scattering in planetary atmosphere, M.: Nauka.
[2] Evans, K. F., 1998: The spherical harmonic discrete ordinate method for three dimensional atmospheric radiative transfer, J. Atmos. Sci., 55, 429-446.
[3] L.P. Bass, T.A. Germogenova, V.S. Kuznetsov, O.V. Nikolaeva. RADUGA 5.1 and RADUGA 5.1(P) codes for stationary transport equation solution in 2D and 3D geometries on one- and multiprocessor computers. Report on seminar “Algorithms and Codes for neutron physical of nuclear reactor calculations” (Neutronica 2001), Obninsk, Russia, 30 October - 2 November 2001.
[4] T.A. Germogenova, L.P. Bass, V.S. Kuznetsov, O.V. Nikolaeva. Mathematical modeling on parallel computers of solar and laser radiation transport in a 3D atmosphere. Report on International Symposium of CIS countries “Atmosphere radiation”, 18-21 June 2002, St. Petersburg, Russia, pp. 15-16.
[5] L.P. Bass, T.A. Germogenova, O.V. Nikolaeva, V.S. Kuznetsov. Radiative Transfer Universal 2D-3D Code RADUGA 5.1(P) for Multiprocessor Computers. Abstract. Poster report on this Meeting.
[6] L.P. Bass, O.V. Nikolaeva. Correct Calculation of Angular Flux Distribution in Strongly Heterogeneous Media and Voids. Proc. of Joint International Conference on Mathematical Methods and Supercomputing for Nuclear Applications, Saratoga Springs, New York, October 5-9, 1997, pp. 995-1004.
[7] http://www/jscc.ru

  12. Analyze and predict VLTI observations: the Role of 2D/3D dust continuum radiative transfer codes

    NASA Astrophysics Data System (ADS)

    Pascucci, I.; Henning, Th.; Steinacker, J.; Wolf, S.

    2003-10-01

    Radiative Transfer (RT) codes with image capability are a fundamental tool for preparing interferometric observations and for interpreting visibility data. In view of the upcoming VLTI facilities, we present the first comparison of images/visibilities coming from two 3D codes that use completely different techniques to solve the problem of self-consistent continuum RT. In addition, we focus on the astrophysical case of a disk distorted by tidal interaction with by-passing stars or internal planets and investigate for which parameters the distortion can be best detected in the mid-infrared using the mid-infrared interferometric device MIDI.

  13. 2D/3D Dust Continuum Radiative Transfer Codes to Analyze and Predict VLTI Observations

    NASA Astrophysics Data System (ADS)

    Pascucci, I.; Henning, Th.; Steinacker, J.; Wolf, S.

    Radiative Transfer (RT) codes with image capability are a fundamental tool for preparing interferometric observations and for interpreting visibility data. In view of the upcoming VLTI facilities, we present the first comparison of images/visibilities coming from two 3D codes that use completely different techniques to solve the problem of self-consistent continuum RT. In addition, we focus on the astrophysical case of a disk distorted by tidal interaction with by-passing stars or internal planets and investigate for which parameters the distortion can be best detected in the mid-infrared using the mid-infrared interferometric device MIDI.

  14. Optimization of 3D Field Design

    NASA Astrophysics Data System (ADS)

    Logan, Nikolas; Zhu, Caoxiang

    2017-10-01

    Recent progress in 3D tokamak modeling is now leveraged to create a conceptual design of new external 3D field coils for the DIII-D tokamak. Using the IPEC dominant mode as a target spectrum, the Finding Optimized Coils Using Space-curves (FOCUS) code optimizes the currents and 3D geometry of multiple coils to maximize the total set's resonant coupling. The optimized coils are individually distorted in space, creating toroidal ``arrays'' containing a variety of shapes that often wrap around a significant poloidal extent of the machine. The generalized perturbed equilibrium code (GPEC) is used to determine optimally efficient spectra for driving total, core, and edge neoclassical toroidal viscosity (NTV) torque and these too provide targets for the optimization of 3D coil designs. These conceptual designs represent a fundamentally new approach to 3D coil design for tokamaks targeting desired plasma physics phenomena. Optimized coil sets based on plasma response theory will be relevant to designs for future reactors or on any active machine. External coils, in particular, must be optimized for reliable and efficient fusion reactor designs. Work supported by the US Department of Energy under DE-AC02-09CH11466.

  15. Hypersonic CFD applications at NASA Langley using CFL3D and CFL3DE

    NASA Technical Reports Server (NTRS)

    Richardson, Pamela F.

    1989-01-01

    The CFL3D/CFL3DE CFD codes and their industrial-use status are outlined. Comparisons of grid density, pressure, heat transfer, and aerodynamic coefficients are presented. Future plans related to the National Aerospace Plane Program are briefly outlined.

  16. Time domain topology optimization of 3D nanophotonic devices

    NASA Astrophysics Data System (ADS)

    Elesin, Y.; Lazarov, B. S.; Jensen, J. S.; Sigmund, O.

    2014-02-01

    We present an efficient parallel topology optimization framework for design of large scale 3D nanophotonic devices. The code shows excellent scalability and is demonstrated for optimization of a broadband frequency splitter, a waveguide intersection, a photonic crystal-based waveguide, and a nanowire-based waveguide. The obtained results are compared to simplified 2D studies and we demonstrate that 3D topology optimization may lead to significant performance improvements.

  17. Assessing 1D Atmospheric Solar Radiative Transfer Models: Interpretation and Handling of Unresolved Clouds.

    NASA Astrophysics Data System (ADS)

    Barker, H. W.; Stephens, G. L.; Partain, P. T.; Bergman, J. W.; Bonnel, B.; Campana, K.; Clothiaux, E. E.; Clough, S.; Cusack, S.; Delamere, J.; Edwards, J.; Evans, K. F.; Fouquart, Y.; Freidenreich, S.; Galin, V.; Hou, Y.; Kato, S.; Li, J.; Mlawer, E.; Morcrette, J.-J.; O'Hirok, W.; Räisänen, P.; Ramaswamy, V.; Ritter, B.; Rozanov, E.; Schlesinger, M.; Shibata, K.; Sporyshev, P.; Sun, Z.; Wendisch, M.; Wood, N.; Yang, F.

    2003-08-01

    The primary purpose of this study is to assess the performance of 1D solar radiative transfer codes that are used currently both for research and in weather and climate models. Emphasis is on interpretation and handling of unresolved clouds. Answers are sought to the following questions: (i) How well do 1D solar codes interpret and handle columns of information pertaining to partly cloudy atmospheres? (ii) Regardless of the adequacy of their assumptions about unresolved clouds, do 1D solar codes perform as intended? One clear-sky and two plane-parallel, homogeneous (PPH) overcast cloud cases serve to elucidate 1D model differences due to varying treatments of gaseous transmittances, cloud optical properties, and basic radiative transfer. The remaining four cases involve 3D distributions of cloud water and water vapor as simulated by cloud-resolving models. Results for 25 1D codes, which included two line-by-line (LBL) models (clear and overcast only) and four 3D Monte Carlo (MC) photon transport algorithms, were submitted by 22 groups. Benchmark, domain-averaged irradiance profiles were computed by the MC codes. For the clear and overcast cases, all MC estimates of top-of-atmosphere albedo, atmospheric absorptance, and surface absorptance agree with one of the LBL codes to within ±2%. Most 1D codes underestimate atmospheric absorptance by typically 15-25 W m-2 at overhead sun for the standard tropical atmosphere regardless of clouds. Depending on assumptions about unresolved clouds, the 1D codes were partitioned into four genres: (i) horizontal variability, (ii) exact overlap of PPH clouds, (iii) maximum/random overlap of PPH clouds, and (iv) random overlap of PPH clouds. A single MC code was used to establish conditional benchmarks applicable to each genre, and all MC codes were used to establish the full 3D benchmarks. 
There is a tendency for 1D codes to cluster near their respective conditional benchmarks, though intragenre variances typically exceed those for the clear and overcast cases. The majority of 1D codes fall into the extreme category of maximum/random overlap of PPH clouds and thus generally disagree with full 3D benchmark values. Given the fairly limited scope of these tests and the inability of any one code to perform well for all cases, a paradigm shift appears to be due in the modeling of 1D solar fluxes for cloudy atmospheres.
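Two of the overlap genres named in this record can be reduced to the total cloud cover they imply for a column of layer cloud fractions. This is a minimal sketch under standard assumptions, not any specific 1D code: random overlap multiplies clear-sky fractions, while maximum/random overlap uses the usual product form in which adjacent cloudy layers overlap maximally and layers separated by clear sky overlap randomly.

```python
def total_cover_random(fracs):
    """Total cloud cover when every layer overlaps randomly."""
    p_clear = 1.0
    for c in fracs:
        p_clear *= 1.0 - c
    return 1.0 - p_clear

def total_cover_max_random(fracs):
    """Total cloud cover under maximum/random overlap (fracs: top to bottom)."""
    p_clear, prev = 1.0, 0.0
    for c in fracs:
        if prev >= 1.0:
            return 1.0   # a fully overcast layer covers the whole column
        p_clear *= (1.0 - max(c, prev)) / (1.0 - prev)
        prev = c
    return 1.0 - p_clear
```

Two identical adjacent layers give a total cover equal to one layer under maximum/random overlap, but a larger cover under random overlap; inserting a clear layer between them makes the two assumptions coincide.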

  18. Memory-efficient decoding of LDPC codes

    NASA Technical Reports Server (NTRS)

    Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon

    2005-01-01

    We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, we have shown in simulation that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
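The optimization criterion above, mutual information between the source bit and the quantized message, can be estimated by Monte Carlo for a candidate threshold set. This is a hedged illustration, not the paper's method: the BPSK/AWGN channel model, noise level, and threshold values are assumptions, and a real optimizer would search over the thresholds.

```python
import math
import random

def mutual_information(thresholds, sigma=0.8, n=200000, seed=1):
    """Monte Carlo estimate of I(bit; quantized LLR) for BPSK over AWGN."""
    rng = random.Random(seed)
    k = len(thresholds) + 1                          # number of quantizer cells
    joint = [[0] * k for _ in range(2)]
    for _ in range(n):
        bit = rng.randrange(2)                       # equiprobable source bit
        y = (1.0 - 2.0 * bit) + rng.gauss(0.0, sigma)  # BPSK symbol + noise
        llr = 2.0 * y / sigma ** 2
        cell = sum(llr > t for t in thresholds)      # non-uniform quantizer
        joint[bit][cell] += 1
    mi = 0.0
    for bit in range(2):
        pb = sum(joint[bit]) / n
        for c in range(k):
            p = joint[bit][c] / n
            if p > 0.0:
                pc = (joint[0][c] + joint[1][c]) / n
                mi += p * math.log2(p / (pb * pc))
    return mi
```

Comparing a 1-bit hard decision (single threshold at zero) against a 3-bit non-uniform quantizer shows the information gained by finer message quantization, which is exactly what the paper's optimizer maximizes.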

  19. A Parallel Numerical Algorithm To Solve Linear Systems Of Equations Emerging From 3D Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Wichert, Viktoria; Arkenberg, Mario; Hauschildt, Peter H.

    2016-10-01

    Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach by introducing especially adapted, parallel numerical methods and correspondingly parallelizing critical code passages. In the following, we present our respective work on PHOENIX/3D. With new parallel numerical algorithms, there is a big opportunity for improvement when iteratively solving the system of equations emerging from the operator splitting of the radiative transfer equation J = ΛS. The narrow-banded approximate Λ-operator Λ*, which is used in PHOENIX/3D, occurs in each iteration step. By implementing a numerical algorithm which takes advantage of its characteristic traits, the parallel code's efficiency is further increased and a speed-up in computational time can be achieved.

  20. FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression

    PubMed Central

    2015-01-01

    A novel optimal structure for implementing the 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. Integer sets with reduced mean squared error (MSE) and high coding efficiency are considered for implementation in FPGA. The proposed method shows that the fewest resources are utilized for the integer set that has shorter bit values. The optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that the direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better than other integer sets in terms of resource utilization and power dissipation. PMID:26601120
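The separability that makes such structures practical can be shown in a short sketch (not the paper's FPGA design): a 3D DCT of an N×N×N block is three passes of a 1D DCT, one along each axis. A floating-point orthonormal DCT-II stands in here for the integer approximation the paper analyzes.

```python
import math

N = 4  # small block for illustration

def dct_1d(v):
    """Orthonormal DCT-II of a length-n sequence."""
    n = len(v)
    out = []
    for k in range(n):
        s = sum(v[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        out.append((math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)) * s)
    return out

def dct_3d(flat):
    """3D DCT of a flat list of N**3 values, index x*N*N + y*N + z,
    computed as one 1D pass per axis (separability)."""
    data = list(flat)
    strides = (N * N, N, 1)
    for axis in range(3):
        step = strides[axis]
        o1, o2 = [strides[a] for a in range(3) if a != axis]
        for i in range(N):
            for j in range(N):
                base = i * o1 + j * o2
                line = [data[base + k * step] for k in range(N)]
                for k, val in enumerate(dct_1d(line)):
                    data[base + k * step] = val
    return data
```

A constant block compacts all energy into the (0,0,0) coefficient, and the orthonormal transform preserves total energy, which is the property the integer approximations trade off against hardware cost.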

  1. Genetic algorithm optimization of a film cooling array on a modern turbine inlet vane

    NASA Astrophysics Data System (ADS)

    Johnson, Jamie J.

    In response to the need for more advanced gas turbine cooling design methods that factor in the 3-D flowfield and heat transfer characteristics, this study involves the computational optimization of a pressure side film cooling array on a modern turbine inlet vane. Latin hypercube sampling, genetic algorithm reproduction, and Reynolds-Averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) as an evaluation step are used to assess a total of 1,800 film cooling designs over 13 generations. The process was efficient due to the Leo CFD code's ability to estimate cooling mass flux at surface grid cells using a transpiration boundary condition, eliminating the need for remeshing between designs. The optimization resulted in a unique cooling design relative to the baseline with new injection angles, compound angles, cooling row patterns, hole sizes, a redistribution of cooling holes away from the over-cooled midspan to hot areas near the shroud, and a lower maximum surface temperature. To experimentally confirm relative design trends between the optimized and baseline designs, flat plate infrared thermography assessments were carried out at design flow conditions. Use of flat plate experiments to model vane pressure side cooling was justified through a conjugate heat transfer CFD comparison of the 3-D vane and flat plate which showed similar cooling performance trends at multiple span locations. The optimized flat plate model exhibited lower minimum surface temperatures at multiple span locations compared to the baseline. Overall, this work shows promise of optimizing film cooling to reduce design cycle time and save cooling mass flow in a gas turbine.
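The Latin hypercube sampling stage that seeds such an optimization can be sketched as follows; the design-variable names and bounds are invented for illustration and are not the study's actual design space.

```python
import random

def latin_hypercube(n, bounds, seed=0):
    """Draw n samples over the given (lo, hi) bounds so that, for each
    variable, exactly one sample falls in each of n equal strata."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        strata = list(range(n))
        rng.shuffle(strata)                 # assign one stratum per sample
        cols.append([lo + (hi - lo) * (s + rng.random()) / n for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n)]

# hypothetical variables: hole diameter (mm), injection angle (deg),
# compound angle (deg)
designs = latin_hypercube(8, [(0.5, 1.5), (20.0, 60.0), (0.0, 90.0)])
```

Stratifying each variable guarantees coverage of its full range with far fewer CFD evaluations than a uniform random seed population would need.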

  2. SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Baes, M.; Camps, P.

    2015-09-01

    The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
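The decorator-based design can be illustrated with a hedged sketch (class and function names are invented, not SKIRT's API): building blocks expose a common density interface, a decorator wraps any component to alter its density, and random positions are drawn here by simple rejection sampling rather than the customised generators the paper describes.

```python
import math
import random

class Plummer:
    """Analytic toy building block: unnormalised Plummer-sphere density."""
    def __init__(self, a=1.0):
        self.a = a
    def density(self, x, y, z):
        r2 = x * x + y * y + z * z
        return (1.0 + r2 / (self.a * self.a)) ** -2.5

class Clumpy:
    """Decorator: wraps any component and modulates its density.
    The sinusoidal modulation is a stand-in for real clumpiness."""
    def __init__(self, base, amplitude=0.5, k=3.0):
        self.base, self.amplitude, self.k = base, amplitude, k
    def density(self, x, y, z):
        mod = 1.0 + self.amplitude * math.sin(self.k * x) * math.sin(self.k * y)
        return self.base.density(x, y, z) * mod

def random_position(comp, half_box=5.0, rho_max=2.0, rng=random):
    """Draw a position with probability proportional to comp's density
    (simple rejection sampling inside a bounding box)."""
    while True:
        x, y, z = (rng.uniform(-half_box, half_box) for _ in range(3))
        if rng.uniform(0.0, rho_max) < comp.density(x, y, z):
            return x, y, z

# decorators chain: a clumpy Plummer sphere
model = Clumpy(Plummer(a=1.0), amplitude=0.5)
```

Because a decorator exposes the same density interface as the component it wraps, decorators compose freely, which is the maintainability argument the abstract makes.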

  3. TRUST. I. A 3D externally illuminated slab benchmark for dust radiative transfer

    NASA Astrophysics Data System (ADS)

    Gordon, K. D.; Baes, M.; Bianchi, S.; Camps, P.; Juvela, M.; Kuiper, R.; Lunttila, T.; Misselt, K. A.; Natale, G.; Robitaille, T.; Steinacker, J.

    2017-07-01

    Context. The radiative transport of photons through arbitrary three-dimensional (3D) structures of dust is a challenging problem due to the anisotropic scattering of dust grains and strong coupling between different spatial regions. The radiative transfer problem in 3D is solved using Monte Carlo or Ray Tracing techniques as no full analytic solution exists for the true 3D structures. Aims: We provide the first 3D dust radiative transfer benchmark composed of a slab of dust with uniform density externally illuminated by a star. This simple 3D benchmark is explicitly formulated to provide tests of the different components of the radiative transfer problem including dust absorption, scattering, and emission. Methods: The details of the external star, the slab itself, and the dust properties are provided. This benchmark includes models with a range of dust optical depths fully probing cases that are optically thin at all wavelengths to optically thick at most wavelengths. The dust properties adopted are characteristic of the diffuse Milky Way interstellar medium. This benchmark includes solutions for the full dust emission including single photon (stochastic) heating as well as two simplifying approximations: One where all grains are considered in equilibrium with the radiation field and one where the emission is from a single effective grain with size-distribution-averaged properties. A total of six Monte Carlo codes and one Ray Tracing code provide solutions to this benchmark. Results: The solution to this benchmark is given as global spectral energy distributions (SEDs) and images at select diagnostic wavelengths from the ultraviolet through the infrared. Comparison of the results revealed that the global SEDs are consistent on average to a few percent for all but the scattered stellar flux at very high optical depths. The image results are consistent within 10%, again except for the stellar scattered flux at very high optical depths. 
The disagreement between codes in the scattered flux at high optical depths is quantified for the first time. Convergence tests using one of the Monte Carlo codes illustrate the sensitivity of the solutions to various model parameters. Conclusions: We provide the first 3D dust radiative transfer benchmark and validate the accuracy of this benchmark through comparisons between multiple independent codes and detailed convergence tests.
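The kind of cross-code SED comparison reported above can be reduced to a fractional-deviation metric. This is a hedged illustration only; the benchmark's actual comparison procedure is not specified in this record, and the values below are invented.

```python
def sed_deviation(sed_a, sed_b):
    """Mean and maximum fractional deviation between two codes' SEDs,
    sampled at the same wavelengths, relative to their mutual mean."""
    devs = []
    for fa, fb in zip(sed_a, sed_b):
        mean = 0.5 * (fa + fb)
        devs.append(abs(fa - fb) / mean if mean > 0.0 else 0.0)
    return sum(devs) / len(devs), max(devs)
```

A "consistent to a few percent on average" result corresponds to a small mean deviation even when the maximum, e.g. scattered flux at high optical depth, is much larger.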

  4. Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Garg, Vijay; Ameri, Ali

    2005-01-01

    The aim of this research project is to make use of NASA Glenn's on-site computational facilities in order to develop, validate, and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include applying the Glenn-HT code to configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects, and validating the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable the design of more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.

  5. Influence of flowfield and vehicle parameters on engineering aerothermal methods

    NASA Technical Reports Server (NTRS)

    Wurster, Kathryn E.; Zoby, E. Vincent; Thompson, Richard A.

    1989-01-01

    The reliability and flexibility of three engineering codes used in the aerospace industry (AEROHEAT, INCHES, and MINIVER) were investigated by comparing the results of these codes with Reentry F flight data and ground-test heat-transfer data for a range of cone angles, and with predictions obtained using the detailed VSL3D code; the engineering solutions were also compared with one another. In particular, the impact of several vehicle and flow-field parameters on the heat transfer, and the capability of the engineering codes to predict these results, were determined. It was found that entropy, pressure gradient, nose bluntness, gas chemistry, and angle of attack all affect heating levels. A comparison of the results of the three engineering codes with Reentry F flight data and with the predictions of the VSL3D code showed very good agreement within the codes' regions of applicability. It is emphasized that the parameters examined in this study can significantly influence actual heating levels and the predictive capability of a code.

  6. Report of the Secretary of Defense Task Force on DoD Nuclear Weapons Management. Phase 1. The Air Force’s Nuclear Mission

    DTIC Science & Technology

    2008-09-01

    under- resourced. • Missile transfer vans /warhead transfer vans require upgrades. • ICBM weapon system test sets under-funded; the coding system...Air Force’s Nuclear Mission D-1 Appendix D. Current B-52 Basing Status Barksdale AFB, LA 64 B-52Hs Minot AFB, ND 27 B-52Hs Edwards AFB, CA 3...Barksdale – 64 B-52s 2 BW (ACC) 15 TF; 24 CC; 7 BAI 53 WG (ACC) 2 Test Coded 917 WG (AFRC) 8 CC; 1 BAI 7 Unfunded AR Edwards - 3 B-52s 412 TW 2 Test

  7. Evaluation of critical distances for energy transfer between Pr3+ and Ce3+ in yttrium aluminium garnet

    NASA Astrophysics Data System (ADS)

    Zeng, Peng; Wei, Xiantao; Zhou, Shaoshuai; Yin, Min; Chen, Yonghu

    2016-09-01

    A series of Pr3+/Ce3+ doped yttrium aluminium garnet (Y3Al5O12, or simply YAG) phosphors was synthesized to investigate the energy transfer between Pr3+ and Ce3+ for potential applications in white light-emitting diodes and in quantum information storage and processing. The excitation and emission spectra of YAG:Pr3+/Ce3+ were measured and analyzed; the analysis revealed that reabsorption between Pr3+ and Ce3+ was weak enough to be ignored and that energy transfer from Pr3+ (5d) to Ce3+ (5d) and from Ce3+ (5d) to Pr3+ (1D2) did occur. By analyzing the excitation and emission spectra, these two transfer processes were examined in detail with an original strategy deduced from fluorescence dynamics and Dexter energy transfer theory, and the critical distances of energy transfer were derived to be 7.9 Å for Pr3+ (5d) to Ce3+ (5d) and 4.0 Å for Ce3+ (5d) to Pr3+ (1D2). The energy transfer rates of the two processes at various concentrations were discussed and evaluated. Furthermore, for the purpose of sensing a single Pr3+ state with a Ce3+ ion, the optimal distance of Ce3+ from Pr3+ was evaluated as 5.60 Å, where the probability of success reaches its maximum value of 78.66%; the probabilities were also evaluated for a series of Y3+ sites in the YAG lattice. These results provide a valuable reference for achieving optimal energy transfer efficiency in Pr3+/Ce3+ doped YAG and similar systems.
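
    The critical distance quoted above has a compact operational meaning: in Dexter theory the exchange-mediated transfer rate falls off exponentially with donor-acceptor separation, and the critical distance is where that rate equals the donor's intrinsic decay rate, i.e. where the transfer efficiency is 50%. A minimal sketch of that definition follows; the parameterization, effective Bohr radius, and rates are illustrative assumptions, not values fitted in the paper.

```python
import math

def dexter_rate(r, r_c, l_eff, k_intrinsic=1.0):
    """Dexter exchange transfer rate at donor-acceptor distance r,
    parameterized so that it equals the intrinsic donor decay rate
    k_intrinsic exactly at the critical distance r_c; l_eff is an
    effective Bohr radius setting the exponential falloff (all in Å)."""
    return k_intrinsic * math.exp(2.0 * (r_c - r) / l_eff)

def transfer_efficiency(r, r_c, l_eff, k_intrinsic=1.0):
    """Fraction of donor decays that proceed by energy transfer."""
    k_t = dexter_rate(r, r_c, l_eff, k_intrinsic)
    return k_t / (k_t + k_intrinsic)

# At the critical distance (7.9 Å for Pr3+ 5d -> Ce3+ 5d) the
# efficiency is 50% by definition; it rises steeply inside it.
eta_crit = transfer_efficiency(7.9, r_c=7.9, l_eff=1.0)
eta_near = transfer_efficiency(4.0, r_c=7.9, l_eff=1.0)
```

    The steep exponential explains why the derived 7.9 Å and 4.0 Å distances translate into such different sensitivities to the local dopant arrangement.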

  8. Comparison of the LLNL ALE3D and AKTS Thermal Safety Computer Codes for Calculating Times to Explosion in ODTX and STEX Thermal Cookoff Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Burnham, A K

    2006-04-05

    Cross-comparison of the results of two computer codes for the same problem provides a mutual validation of their computational methods. This cross-validation exercise was performed for LLNL's ALE3D code and AKTS's Thermal Safety code, using the thermal ignition of HMX in two standard LLNL cookoff experiments: the One-Dimensional Time to Explosion (ODTX) test and the Scaled Thermal Explosion (STEX) test. The chemical kinetics model used in both codes was the extended Prout-Tompkins model, a relatively new addition to ALE3D. This model was applied using ALE3D's new pseudospecies feature. In addition, an advanced isoconversional kinetic approach was used in the AKTS code. The mathematical constants in the Prout-Tompkins model were calibrated using DSC data from hermetically sealed vessels and the LLNL optimization code Kinetics05. The isoconversional kinetic parameters were optimized using the AKTS Thermokinetics code. We found that the Prout-Tompkins calculations agree fairly well between the two codes, and that the isoconversional kinetic model gives results very similar to those of the Prout-Tompkins model. We also found that an autocatalytic approach in the beta-delta phase transition model does affect the times to explosion for some conditions, especially STEX-like simulations at ramp rates above 100 C/hr, and further exploration of that effect is warranted.
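
    For readers unfamiliar with it, the extended Prout-Tompkins model referenced above is an autocatalytic rate law, d(alpha)/dt = k(1-alpha)^n * alpha^m; because the rate vanishes at alpha = 0, integrations start from a small seed extent of reaction. A minimal isothermal sketch with generic constants (not the calibrated HMX parameters):

```python
def prout_tompkins_rate(alpha, k, n=1.0, m=1.0):
    """Extended Prout-Tompkins autocatalytic rate law:
    d(alpha)/dt = k * (1 - alpha)**n * alpha**m."""
    return k * (1.0 - alpha) ** n * alpha ** m

def integrate_isothermal(k, alpha0=1e-4, dt=0.01, t_end=50.0):
    """Forward-Euler integration at constant temperature; alpha0 is a
    small seed, since the pure rate law cannot start from alpha = 0."""
    alpha, t, history = alpha0, 0.0, []
    while t < t_end:
        alpha += dt * prout_tompkins_rate(alpha, k)
        alpha = min(alpha, 1.0)
        history.append(alpha)
        t += dt
    return history

# The characteristic sigmoidal (induction, acceleration, saturation)
# conversion curve of thermal cookoff kinetics:
h = integrate_isothermal(k=1.0)
```

    The long induction period produced by the alpha^m factor is exactly what makes time-to-explosion so sensitive to the calibrated constants and the seed treatment.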

  9. Radiation Coupling with the FUN3D Unstructured-Grid CFD Code

    NASA Technical Reports Server (NTRS)

    Wood, William A.

    2012-01-01

    The HARA radiation code is fully coupled to the FUN3D unstructured-grid CFD code for the purpose of simulating high-energy hypersonic flows. The radiation energy source terms and surface heat transfer, under the tangent slab approximation, are included within the fluid dynamic flow solver. The Fire II flight test, at the Mach-31, 1643-second trajectory point, is used as a demonstration case. Comparisons are made with an existing structured-grid capability, the LAURA/HARA coupling. The radiative surface heat transfer rates from the present approach match the benchmark values within 6%. Although radiation coupling is the focus of the present work, convective surface heat transfer rates are also reported and are seen to vary depending upon the choice of mesh connectivity and FUN3D flux reconstruction algorithm. On a tetrahedral-element mesh the convective heating matches the benchmark at the stagnation point but under-predicts by 15% on the Fire II shoulder. Conversely, on a mixed-element mesh the convective heating over-predicts at the stagnation point by 20% but matches the benchmark away from the stagnation region.

  10. Numerical optimization of three-dimensional coils for NSTX-U

    DOE PAGES

    Lazerson, S. A.; Park, J. -K.; Logan, N.; ...

    2015-09-03

    A tool for the calculation of optimal three-dimensional (3D) perturbative magnetic fields in tokamaks has been developed. The IPECOPT code builds upon the stellarator optimization code STELLOPT to allow for optimization of linear ideal magnetohydrodynamic perturbed equilibria (IPEC). This tool has been applied to NSTX-U equilibria, addressing which fields are the most effective at driving neoclassical toroidal viscosity (NTV) torques. The NTV torque calculation is performed by the PENT code. Optimization of the normal field spectrum shows that fields with n = 1 character can drive a large core torque. It is also shown that fields with n = 3 features are capable of driving edge torque and some core torque. Coil current optimization (using the planned in-vessel and existing RWM coils) on NSTX-U suggests the planned coil set is adequate for core and edge torque control. In conclusion, comparison between error field correction experiments on DIII-D and the optimizer shows good agreement.

  11. Three-dimensional polarization marked multiple-QR code encryption by optimizing a single vectorial beam

    NASA Astrophysics Data System (ADS)

    Lin, Chao; Shen, Xueju; Hua, Binbin; Wang, Zhisong

    2015-10-01

    We demonstrate the feasibility of three-dimensional (3D) polarization multiplexing by optimizing a single vectorial beam using a multiple-signal window multiple-plane (MSW-MP) phase retrieval algorithm. Original messages represented with multiple quick response (QR) codes are first partitioned into a series of subblocks. Each subblock is then marked with a specific polarization state and randomly distributed in 3D space, with both longitudinal and transverse positions adjustable. A generalized 3D polarization mapping protocol is established to generate a 3D polarization key. Finally, the multiple QR codes are encrypted into one phase-only mask and one polarization-only mask based on the modified Gerchberg-Saxton (GS) algorithm. We take the polarization mask as the ciphertext and the phase-only mask as an additional dimension of the key. Only when both the phase key and the 3D polarization key are correct can the original messages be recovered. We verify our proposal with both simulation and experimental evidence.
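
    The projection step at the heart of any GS-type scheme alternates between two planes, keeping the computed phase and replacing the amplitude with the known constraint in each. A plain (unmodified) one-dimensional GS loop conveys the idea; a naive DFT keeps the sketch dependency-free, and the MSW-MP and polarization extensions of the paper are not reproduced here.

```python
import cmath
import math
import random

def dft(x, inverse=False):
    """Naive O(N^2) discrete Fourier transform, adequate for a sketch."""
    n, s = len(x), (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(s * 2j * math.pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def gs_iterate(field, source_amp, target_amp, iters):
    """Gerchberg-Saxton: impose the target amplitude in the Fourier
    plane, transform back, impose the source amplitude in the object
    plane; the phase is the free parameter being retrieved."""
    for _ in range(iters):
        far = [t * cmath.exp(1j * cmath.phase(f))
               for t, f in zip(target_amp, dft(field))]
        field = [s * cmath.exp(1j * cmath.phase(f))
                 for s, f in zip(source_amp, dft(far, inverse=True))]
    return field

def fourier_error(field, target_amp):
    return sum(abs(abs(v) - t) for v, t in zip(dft(field), target_amp)) / len(field)

# Build a consistent problem: the target is the far-field amplitude of
# some phase-only object, so an exact solution exists.
random.seed(0)
n = 16
true_field = [cmath.exp(2j * math.pi * random.random()) for _ in range(n)]
source_amp = [1.0] * n
target_amp = [abs(v) for v in dft(true_field)]

start = [1.0 + 0j] * n                 # flat-phase initial guess
err0 = fourier_error(start, target_amp)
recovered = gs_iterate(start, source_amp, target_amp, iters=300)
err = fourier_error(recovered, target_amp)
```

    GS iterations are error-reducing, so the Fourier-plane amplitude mismatch shrinks from the flat-phase start; the paper's modification adds multiple signal windows and planes on top of this basic loop.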

  12. Coupled Dictionary Learning for the Detail-Enhanced Synthesis of 3-D Facial Expressions.

    PubMed

    Liang, Haoran; Liang, Ronghua; Song, Mingli; He, Xiaofei

    2016-04-01

    The desire to reconstruct 3-D face models with expressions from 2-D face images fosters increasing interest in the problem of face modeling, a task that is important and challenging in the field of computer animation. Facial contours and wrinkles are essential to generate a face with a certain expression; however, these details are generally ignored or not seriously considered in previous studies on face model reconstruction. Thus, we employ coupled radial basis function networks to derive an intermediate 3-D face model from a single 2-D face image. To further optimize the 3-D face model through landmarks, a coupled dictionary relating 3-D face models to their corresponding 3-D landmarks is learned from the given training set through local coordinate coding. Another coupled dictionary is then constructed to bridge the 2-D and 3-D landmarks for the transfer of vertices on the face model. As a result, the final 3-D face can be generated with the appropriate expression. In the testing phase, the 2-D input faces are converted into 3-D models that display different expressions. Experimental results indicate that the proposed approach to facial expression synthesis captures model details more effectively than previous methods.

  13. Optimal design of composite hip implants using NASA technology

    NASA Technical Reports Server (NTRS)

    Blake, T. A.; Saravanos, D. A.; Davy, D. T.; Waters, S. A.; Hopkins, D. A.

    1993-01-01

    Using an adaptation of NASA software, we have investigated the use of numerical optimization techniques for the shape and material optimization of fiber composite hip implants. The NASA in-house codes were originally developed for the optimization of aerospace structures. The adapted code, called OPORIM, couples numerical optimization algorithms with finite element analysis and composite laminate theory to perform design optimization using both shape and material design variables. The external and internal geometry of the implant and the surrounding bone is described with quintic spline curves. This geometric representation is then used to create an equivalent 2-D finite element model of the structure. Using laminate theory and the 3-D geometric information, equivalent stiffnesses are generated for each element of the 2-D finite element model, so that the 3-D stiffness of the structure can be approximated. The geometric information used to construct the model of the femur was obtained from a CT scan. A variety of test cases were examined, incorporating several implant constructions and design variable sets. Typically the code was able to produce optimized shape and/or material parameters that substantially reduced stress concentrations in the bone adjacent to the implant. The results indicate that this technology can provide meaningful insight into the design of fiber composite hip implants.

  14. Users manual for the NASA Lewis three-dimensional ice accretion code (LEWICE 3D)

    NASA Technical Reports Server (NTRS)

    Bidwell, Colin S.; Potapczuk, Mark G.

    1993-01-01

    A description of the methodology, the algorithms, and the input and output data along with an example case for the NASA Lewis 3D ice accretion code (LEWICE3D) has been produced. The manual has been designed to help the user understand the capabilities, the methodologies, and the use of the code. The LEWICE3D code is a conglomeration of several codes for the purpose of calculating ice shapes on three-dimensional external surfaces. A three-dimensional external flow panel code is incorporated which has the capability of calculating flow about arbitrary 3D lifting and nonlifting bodies with external flow. A fourth order Runge-Kutta integration scheme is used to calculate arbitrary streamlines. An Adams type predictor-corrector trajectory integration scheme has been included to calculate arbitrary trajectories. Schemes for calculating tangent trajectories, collection efficiencies, and concentration factors for arbitrary regions of interest for single droplets or droplet distributions have been incorporated. A LEWICE 2D based heat transfer algorithm can be used to calculate ice accretions along surface streamlines. A geometry modification scheme is incorporated which calculates the new geometry based on the ice accretions generated at each section of interest. The three-dimensional ice accretion calculation is based on the LEWICE 2D calculation. Both codes calculate the flow, pressure distribution, and collection efficiency distribution along surface streamlines. For both codes the heat transfer calculation is divided into two regions, one above the stagnation point and one below the stagnation point, and solved for each region assuming a flat plate with pressure distribution. Water is assumed to follow the surface streamlines, hence starting at the stagnation zone any water that is not frozen out at a control volume is assumed to run back into the next control volume. 
After the amount of frozen water at each control volume has been calculated, the geometry is modified by adding the ice at each control volume in the surface normal direction.
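
    The fourth-order Runge-Kutta streamline and trajectory integration named above is standard enough to sketch. This toy traces a streamline through a prescribed 2-D velocity field (a stand-in for the panel-code flow solution; none of the LEWICE3D machinery is reproduced); solid-body rotation makes a convenient accuracy check, since the exact streamline is a circle.

```python
def rk4_step(pos, dt, velocity):
    """One fourth-order Runge-Kutta step of dx/dt = v(x)."""
    x, y = pos
    k1 = velocity(x, y)
    k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
    k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
    k4 = velocity(x + dt * k3[0], y + dt * k3[1])
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

def trace_streamline(start, velocity, dt=0.01, steps=628):
    """March a particle through the velocity field, collecting positions."""
    path = [start]
    for _ in range(steps):
        path.append(rk4_step(path[-1], dt, velocity))
    return path

# Solid-body rotation v = (-y, x): after time ~2*pi the particle should
# return to its start, and the radius should be conserved to high order.
path = trace_streamline((1.0, 0.0), lambda x, y: (-y, x))
end_x, end_y = path[-1]
```

    In an ice accretion code the velocity would come from interpolating the panel solution, and droplet trajectories add a drag-driven momentum equation on top of this integrator.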

  15. Simulation of unsteady state performance of a secondary air system by the 1D-3D-Structure coupled method

    NASA Astrophysics Data System (ADS)

    Wu, Hong; Li, Peng; Li, Yulong

    2016-02-01

    This paper describes the calculation method for unsteady state conditions in the secondary air systems in gas turbines. The 1D-3D-Structure coupled method was applied. A 1D code was used to model the standard components that have typical geometric characteristics. Their flow and heat transfer were described by empirical correlations based on experimental data or CFD calculations. A 3D code was used to model the non-standard components that cannot be described by typical geometric languages, while a finite element analysis was carried out to compute the structural deformation and heat conduction at certain important positions. These codes were coupled through their interfaces. Thus, the changes in heat transfer and structure and their interactions caused by exterior disturbances can be reflected. The results of the coupling method in an unsteady state showed an apparent deviation from the existing data, while the results in the steady state were highly consistent with the existing data. The difference in the results in the unsteady state was caused primarily by structural deformation that cannot be predicted by the 1D method. Thus, in order to obtain the unsteady state performance of a secondary air system more accurately and efficiently, the 1D-3D-Structure coupled method should be used.

  16. Evaluation of critical distances for energy transfer between Pr3+ and Ce3+ in yttrium aluminium garnet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Peng; Wei, Xiantao; Yin, Min

    A series of Pr3+/Ce3+ doped yttrium aluminium garnet (Y3Al5O12, or simply YAG) phosphors was synthesized to investigate the energy transfer between Pr3+ and Ce3+ for potential applications in white light-emitting diodes and in quantum information storage and processing. The excitation and emission spectra of YAG:Pr3+/Ce3+ were measured and analyzed; the analysis revealed that reabsorption between Pr3+ and Ce3+ was weak enough to be ignored and that energy transfer from Pr3+ (5d) to Ce3+ (5d) and from Ce3+ (5d) to Pr3+ (1D2) did occur. By analyzing the excitation and emission spectra, these two transfer processes were examined in detail with an original strategy deduced from fluorescence dynamics and Dexter energy transfer theory, and the critical distances of energy transfer were derived to be 7.9 Å for Pr3+ (5d) to Ce3+ (5d) and 4.0 Å for Ce3+ (5d) to Pr3+ (1D2). The energy transfer rates of the two processes at various concentrations were discussed and evaluated. Furthermore, for the purpose of sensing a single Pr3+ state with a Ce3+ ion, the optimal distance of Ce3+ from Pr3+ was evaluated as 5.60 Å, where the probability of success reaches its maximum value of 78.66%; the probabilities were also evaluated for a series of Y3+ sites in the YAG lattice. These results provide a valuable reference for achieving optimal energy transfer efficiency in Pr3+/Ce3+ doped YAG and similar systems.

  17. Liner Optimization Studies Using the Ducted Fan Noise Prediction Code TBIEM3D

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Farassat, F.

    1998-01-01

    In this paper we demonstrate the usefulness of the ducted fan noise prediction code TBIEM3D as a liner optimization design tool. Boundary conditions on the interior duct wall allow for hard walls or a locally reacting liner with axially segmented, circumferentially uniform impedance. Two liner optimization studies are considered in which the farfield noise attenuation due to the presence of a liner is maximized by adjusting the liner impedance. In the first example, the dependence of the optimal liner impedance on frequency and liner length is examined. Results show that both the optimal impedance and the attenuation levels are significantly influenced by liner length and frequency. In the second example, TBIEM3D is used to compare radiated sound pressure levels between optimal and non-optimal liner cases at conditions designed to simulate take-off. It is shown that significant noise reduction is achieved for most of the sound field by selecting the optimal or near-optimal liner impedance. Our results also indicate that there is a relatively large region of the impedance plane over which optimal or near-optimal liner behavior is attainable. This is an important conclusion for the designer, since there are variations in liner characteristics due to manufacturing imprecision.

  18. Optimization of lightweight structure and supporting bipod flexure for a space mirror.

    PubMed

    Chen, Yi-Cheng; Huang, Bo-Kai; You, Zhen-Ting; Chan, Chia-Yen; Huang, Ting-Ming

    2016-12-20

    This article presents an optimization process for integrated optomechanical design. The proposed optimization process for integrated optomechanical design comprises computer-aided drafting, finite element analysis (FEA), optomechanical transfer codes, and an optimization solver. The FEA was conducted to determine mirror surface deformation; then, deformed surface nodal data were transferred into Zernike polynomials through MATLAB optomechanical transfer codes to calculate the resulting optical path difference (OPD) and optical aberrations. To achieve an optimum design, the optimization iterations of the FEA, optomechanical transfer codes, and optimization solver were automatically connected through a self-developed Tcl script. Two examples of optimization design were illustrated in this research, namely, an optimum lightweight design of a Zerodur primary mirror with an outer diameter of 566 mm that is used in a spaceborne telescope and an optimum bipod flexure design that supports the optimum lightweight primary mirror. Finally, optimum designs were successfully accomplished in both examples, achieving a minimum peak-to-valley (PV) value for the OPD of the deformed optical surface. The simulated optimization results showed that (1) the lightweight ratio of the primary mirror increased from 56% to 66%; and (2) the PV value of the mirror supported by optimum bipod flexures in the horizontal position effectively decreased from 228 to 61 nm.
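
    The FEA-to-Zernike hand-off in this pipeline can be illustrated at its lowest order: least-squares fit and subtract piston and tip/tilt (the first three Zernike terms) from nodal surface displacements, then report the peak-to-valley (PV) of the residual OPD. The sketch below is a generic stand-in for the MATLAB transfer codes, fitting only three terms where the real codes fit many; the node data are synthetic.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    out = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]
        out.append(det(Ai) / d)
    return out

def remove_piston_tilt(nodes):
    """nodes: list of (x, y, w) surface displacements.  Least-squares fit
    and subtract piston + tip/tilt; return (residual, peak-to-valley)."""
    A = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y, w in nodes:
        basis = (1.0, x, y)
        for i in range(3):
            rhs[i] += basis[i] * w
            for j in range(3):
                A[i][j] += basis[i] * basis[j]
    a, b, c = solve3(A, rhs)
    residual = [w - (a + b * x + c * y) for x, y, w in nodes]
    return residual, max(residual) - min(residual)

# A pure plane is removed to numerical precision ...
plane = [(x / 5.0, y / 5.0, 2.0 + 0.1 * x / 5.0 - 0.3 * y / 5.0)
         for x in range(-5, 6) for y in range(-5, 6)]
_, pv_plane = remove_piston_tilt(plane)

# ... while a defocus-like bump leaves a nonzero residual PV.
bumped = [(x, y, w + 0.5 * (x * x + y * y)) for x, y, w in plane]
_, pv_bump = remove_piston_tilt(bumped)
```

    In the real pipeline the surviving residual is expanded in higher Zernike terms to separate focus, astigmatism, and so on, and the PV of the OPD is the quantity the optimizer drives down.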

  19. Evaluation of a GPS Receiver for Code and Carrier-Phase Time and Frequency Transfer

    DTIC Science & Technology

    2010-11-01

    2], and carrier-phase [3]. NIST also employs GPS time transfer as the backup link to Two Way Satellite Time and Frequency Transfer ( TWSTFT ) [4...4] D. Kirchner, 1999, “Two-Way Satellite Time and Frequency Transfer ( TWSTFT ): Principle, Implementation, and Current Performance,” Review of

  20. Multicore-based 3D-DWT video encoder

    NASA Astrophysics Data System (ADS)

    Galiano, Vicente; López-Granado, Otoniel; Malumbres, Manuel P.; Migallón, Hector

    2013-12-01

    Three-dimensional wavelet transform (3D-DWT) encoders are good candidates for applications like professional video editing, video surveillance, multi-spectral satellite imaging, etc. where a frame must be reconstructed as quickly as possible. In this paper, we present a new 3D-DWT video encoder based on a fast run-length coding engine. Furthermore, we present several multicore optimizations to speed-up the 3D-DWT computation. An exhaustive evaluation of the proposed encoder (3D-GOP-RL) has been performed, and we have compared the evaluation results with other video encoders in terms of rate/distortion (R/D), coding/decoding delay, and memory consumption. Results show that the proposed encoder obtains good R/D results for high-resolution video sequences with nearly in-place computation using only the memory needed to store a group of pictures. After applying the multicore optimization strategies over the 3D DWT, the proposed encoder is able to compress a full high-definition video sequence in real-time.
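
    The pairing at the heart of the encoder, a wavelet transform to concentrate energy followed by a fast run-length coder over the (mostly zero) quantized detail coefficients, can be shown in one dimension. This is a generic sketch, not the 3D-GOP-RL implementation:

```python
def haar_1d(signal):
    """One level of the 1-D Haar wavelet transform (averages + details)."""
    avg = [(a + b) / 2.0 for a, b in zip(signal[::2], signal[1::2])]
    det = [(a - b) / 2.0 for a, b in zip(signal[::2], signal[1::2])]
    return avg, det

def haar_1d_inverse(avg, det):
    """Exact inverse of haar_1d."""
    out = []
    for a, d in zip(avg, det):
        out.extend((a + d, a - d))
    return out

def run_length_encode(values):
    """Simple run-length coder; long zero runs compress to one pair."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def run_length_decode(runs):
    return [v for v, count in runs for _ in range(count)]

# Smooth signals yield all-zero details, which the run-length
# coder collapses into a single (value, count) pair.
sig = [10, 10, 10, 10, 8, 8, 2, 2]
avg, det = haar_1d(sig)
q = [int(round(d)) for d in det]
```

    In the 3-D case the same transform is applied along x, y, and time within a group of pictures, which is why the encoder can work nearly in place frame by frame.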

  1. TOPAZ2D heat transfer code users manual and thermal property data base

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.B.; Edwards, A.L.

    1990-05-01

    TOPAZ2D is a two-dimensional implicit finite element computer code for heat transfer analysis. This user's manual provides information on the structure of a TOPAZ2D input file. Also included is a material thermal property data base. This manual is supplemented by the TOPAZ2D Theoretical Manual and the TOPAZ2D Verification Manual. TOPAZ2D has been implemented on the CRAY, SUN, and VAX computers. TOPAZ2D can be used to solve for the steady-state or transient temperature field on two-dimensional planar or axisymmetric geometries. Material properties may be temperature dependent and either isotropic or orthotropic. A variety of time- and temperature-dependent boundary conditions can be specified, including temperature, flux, convection, and radiation. Time- or temperature-dependent internal heat generation can be defined locally by element or globally by material. TOPAZ2D can solve problems of diffuse and specular band radiation in an enclosure coupled with conduction in the material surrounding the enclosure. Additional features include thermally controlled reactive chemical mixtures, thermal contact resistance across an interface, bulk fluid flow, phase change, and energy balances. Thermal stresses can be calculated using the solid mechanics code NIKE2D, which reads the temperature state data calculated by TOPAZ2D. A three-dimensional version of the code, TOPAZ3D, is available. The material thermal property data base, Chapter 4, included in this manual was originally published in 1969 by Art Edwards for use with his TRUMP finite difference heat transfer code. The format of the data has been altered to be compatible with TOPAZ2D. Bob Bailey is responsible for adding the high explosive thermal property data.
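
    The implicit time stepping TOPAZ2D performs in two dimensions is easiest to see in a 1-D finite-difference analogue: backward Euler on the heat equation gives one tridiagonal solve per step, and unconditional stability is what "implicit" buys. This is a schematic sketch, not TOPAZ2D's finite element formulation:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def step_heat_implicit(T, alpha, dx, dt, t_left, t_right):
    """One backward-Euler step of T_t = alpha * T_xx with fixed
    (Dirichlet) boundary temperatures; unconditionally stable."""
    n = len(T)
    r = alpha * dt / dx ** 2
    a = [-r] * n
    b = [1.0 + 2.0 * r] * n
    c = [-r] * n
    d = T[:]
    a[0] = c[0] = 0.0; b[0] = 1.0; d[0] = t_left      # pin left end
    a[-1] = c[-1] = 0.0; b[-1] = 1.0; d[-1] = t_right # pin right end
    return thomas_solve(a, b, c, d)

# March a uniform cold bar toward the linear steady-state profile.
T = [0.0] * 11
for _ in range(2000):
    T = step_heat_implicit(T, alpha=1.0, dx=0.1, dt=0.01,
                           t_left=0.0, t_right=100.0)
```

    A finite element discretization on an unstructured 2-D mesh replaces the tridiagonal matrix with a sparse one, but the solve-per-time-step structure is the same.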

  2. Forward Monte Carlo Computations of Polarized Microwave Radiation

    NASA Technical Reports Server (NTRS)

    Battaglia, A.; Kummerow, C.

    2000-01-01

    Microwave radiative transfer computations continue to acquire greater importance as the emphasis in remote sensing shifts towards understanding the microphysical properties of clouds and, with these, the nonlinear relation between rainfall rates and satellite-observed radiance. A first step toward realistic radiative simulations has been the introduction of techniques capable of treating the 3-dimensional geometry generated by ever more sophisticated cloud-resolving models. To date, a series of numerical codes have been developed to treat spherical and randomly oriented axisymmetric particles. Backward and backward-forward Monte Carlo methods are, indeed, efficient in this field. These methods, however, cannot deal properly with oriented particles, which seem to play an important role in polarization signatures over stratiform precipitation. Moreover, beyond the polarization channel, the next generation of fully polarimetric radiometers challenges us to better understand the behavior of the last two Stokes parameters as well. To solve the vector radiative transfer equation, one-dimensional numerical models have been developed. These codes, unfortunately, treat the atmosphere as horizontally homogeneous, with horizontally infinite plane-parallel layers. The next development step for microwave radiative transfer codes must be fully polarized 3-D methods. Recently a 3-D polarized radiative transfer model based on the discrete ordinate method was presented. A forward MC code was developed that treats oriented nonspherical hydrometeors, but only for plane-parallel situations.

  3. Design of an Experimental Facility for Passive Heat Removal in Advanced Nuclear Reactors

    NASA Astrophysics Data System (ADS)

    Bersano, Andrea

    With reference to innovative heat exchangers to be used in the passive safety systems of Generation IV nuclear reactors and Small Modular Reactors, it is necessary to study natural circulation and the efficiency of heat removal systems. Especially in safety systems, such as the decay heat removal systems of many reactors, the use of passive components is increasing in order to improve availability and reliability during possible accident scenarios and to reduce the need for human intervention. Many of these systems are based on natural circulation, so they require intensive analysis due to the possible instability of the related phenomena. The aim of this thesis work is to build a scaled facility which can reproduce, in a simplified way, the decay heat removal system (DHR2) of the lead-cooled fast reactor ALFRED and, in particular, the bayonet heat exchanger, which transfers heat from lead to water. Given the thermal power to be removed, the natural circulation flow rate and the pressure drops will be studied both experimentally and numerically using the code RELAP5-3D. The first phase of preliminary analysis and design includes the calculations to size the heat source and heat sink, the choice of materials and components, and CAD drawings of the facility. After that, a numerical study is performed using the thermal-hydraulic code RELAP5-3D in order to simulate the behavior of the system. The purpose is to run pretest simulations of the facility to optimize the dimensioning by setting the operating parameters (temperature, pressure, etc.) and to choose the most suitable measurement devices. The model of the system is continually developed to better simulate the system studied. Close attention is dedicated to the control logic of the system to obtain acceptable results. The initial experimental test phase consists of cold, zero-power tests of the facility in order to characterize and calibrate the pressure drops. In future work the experimental results will be compared to the values predicted by the system code, and differences will be discussed with the ultimate goal of qualifying RELAP5-3D for the analysis of decay heat removal systems in natural circulation. The numerical data will also be used to understand the key parameters related to heat transfer in natural circulation and to optimize the operation of the system.

  4. Comparison of the results of several heat transfer computer codes when applied to a hypothetical nuclear waste repository

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Claiborne, H.C.; Wagner, R.S.; Just, R.A.

    1979-12-01

    A direct comparison of transient thermal calculations was made with the heat transfer codes HEATING5, THAC-SIP-3D, ADINAT, SINDA, TRUMP, and TRANCO for a hypothetical nuclear waste repository. With the exception of TRUMP and SINDA (actually closer to the earlier CINDA3G version), the codes agreed to within ±5% for the temperature rises as a function of time. The TRUMP results agreed within ±5% up to about 50 years, where the maximum temperature occurs, and then began an oscillatory behavior with up to 25% deviations at longer times. This could have resulted from time steps that were too large or from some unknown system problem. The available version of the SINDA code was not compatible with the IBM compiler without using an alternative method for handling a variable thermal conductivity. The results were about 40% low, but reasonable agreement was obtained by assuming a uniform thermal conductivity; however, a programming error was later discovered in the alternative method. Some work is required on the IBM version to make it compatible with the system while still using the recommended method of handling variable thermal conductivity. TRANCO can only be run as a 2-D model, and TRUMP and CINDA apparently required longer running times and did not agree in the 2-D case; therefore, only HEATING5, THAC-SIP-3D, and ADINAT were used for the 3-D model calculations. These codes agreed within ±5%; at distances of about 1 ft from the waste canister edge, temperature rises were also close to those predicted by the 3-D model.

  5. Parameterized code SHARM-3D for radiative transfer over inhomogeneous surfaces.

    PubMed

    Lyapustin, Alexei; Wang, Yujie

    2005-12-10

    The code SHARM-3D, developed for fast and accurate simulations of the monochromatic radiance at the top of the atmosphere over spatially variable surfaces with Lambertian or anisotropic reflectance, is described. The atmosphere is assumed to be laterally uniform across the image and to consist of two layers with aerosols contained in the bottom layer. The SHARM-3D code performs simultaneous calculations for all specified incidence-view geometries and multiple wavelengths in one run. The numerical efficiency of the current version of code is close to its potential limit and is achieved by means of two innovations. The first is the development of a comprehensive precomputed lookup table of the three-dimensional atmospheric optical transfer function for various atmospheric conditions. The second is the use of a linear kernel model of the land surface bidirectional reflectance factor (BRF) in our algorithm that has led to a fully parameterized solution in terms of the surface BRF parameters. The code is also able to model inland lakes and rivers. The water pixels are described with the Nakajima-Tanaka BRF model of wind-roughened water surface with a Lambertian offset, which is designed to model approximately the reflectance of suspended matter and of a shallow lake or river bottom.

  6. Parameterized code SHARM-3D for radiative transfer over inhomogeneous surfaces

    NASA Astrophysics Data System (ADS)

    Lyapustin, Alexei; Wang, Yujie

    2005-12-01

    The code SHARM-3D, developed for fast and accurate simulations of the monochromatic radiance at the top of the atmosphere over spatially variable surfaces with Lambertian or anisotropic reflectance, is described. The atmosphere is assumed to be laterally uniform across the image and to consist of two layers with aerosols contained in the bottom layer. The SHARM-3D code performs simultaneous calculations for all specified incidence-view geometries and multiple wavelengths in one run. The numerical efficiency of the current version of the code is close to its potential limit and is achieved by means of two innovations. The first is the development of a comprehensive precomputed lookup table of the three-dimensional atmospheric optical transfer function for various atmospheric conditions. The second is the use of a linear kernel model of the land surface bidirectional reflectance factor (BRF) in our algorithm that has led to a fully parameterized solution in terms of the surface BRF parameters. The code is also able to model inland lakes and rivers. The water pixels are described with the Nakajima-Tanaka BRF model of wind-roughened water surface with a Lambertian offset, which is designed to approximately model the reflectance of suspended matter and of a shallow lake or river bottom.

  7. Transfer reaction code with nonlocal interactions

    DOE PAGES

    Titus, L. J.; Ross, A.; Nunes, F. M.

    2016-07-14

    We present a suite of codes (NLAT for nonlocal adiabatic transfer) to calculate the transfer cross section for single-nucleon transfer reactions, (d,N) or (N,d), including nonlocal nucleon–target interactions, within the adiabatic distorted wave approximation. For this purpose, we implement an iterative method for solving the second-order nonlocal differential equation, for both scattering and bound states. The final observables that can be obtained with NLAT are differential angular distributions for the cross sections of A(d,N)B or B(N,d)A. Details on the implementation of the T-matrix to obtain the final cross sections within the adiabatic distorted wave approximation method are also provided. This code is suitable for deuteron-induced reactions in the range Ed = 10–70 MeV, and provides cross sections with 4% accuracy.
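The iterative strategy for a nonlocal equation can be illustrated generically: lag the nonlocal integral term one iteration behind, so each pass reduces to a local boundary-value problem that is solved exactly. This is only a sketch of the idea on a toy 1-D problem (exponential kernel, constant source, Dirichlet boundaries), not NLAT's actual equations:

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def solve_nonlocal(n=64, lam=0.2, tol=1e-10, max_iter=200):
    """Lagged-kernel iteration for u''(x) = f(x) + lam * INT K(x,y) u(y) dy
    on (0,1) with u(0) = u(1) = 0: the nonlocal integral uses the previous
    iterate, leaving a local tridiagonal problem solved exactly each pass."""
    h = 1.0 / (n + 1)
    xs = [h * (i + 1) for i in range(n)]
    f = [1.0] * n                                     # toy source term
    K = [[math.exp(-abs(x - y)) for y in xs] for x in xs]  # toy kernel
    u = [0.0] * n
    delta = float('inf')
    for _ in range(max_iter):
        d = [h * h * (f[i] + lam * h * sum(K[i][j] * u[j] for j in range(n)))
             for i in range(n)]
        new = thomas([1.0] * n, [-2.0] * n, [1.0] * n, d)
        delta = max(abs(p - q) for p, q in zip(new, u))
        u = new
        if delta < tol:
            break
    return xs, u, delta
```

For a sufficiently weak nonlocal coupling the map is a contraction and the iterates converge rapidly; with `lam = 0` the scheme reduces to a single exact local solve.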

  8. Numerical Analysis of 2-D and 3-D MHD Flows Relevant to Fusion Applications

    DOE PAGES

    Khodak, Andrei

    2017-08-21

    Here, the analysis of many fusion applications such as liquid-metal blankets requires application of computational fluid dynamics (CFD) methods for electrically conductive liquids in geometrically complex regions and in the presence of a strong magnetic field. A current state-of-the-art general-purpose CFD code allows modeling of the flow in complex geometric regions, with simultaneous conjugate heat transfer analysis in liquid and surrounding solid parts. Together with a magnetohydrodynamics (MHD) capability, the general-purpose CFD code will be a valuable tool for the design and optimization of fusion devices. This paper describes the introduction of MHD capability into the general-purpose CFD code CFX, part of the ANSYS Workbench. The code was adapted for MHD problems using a magnetic induction approach. CFX allows the introduction of user-defined variables using transport or Poisson equations. For MHD adaptation of the code, three additional transport equations were introduced for the components of the magnetic field, in addition to the Poisson equation for the electric potential. The Lorentz force is included in the momentum transport equation as a source term. Fusion applications usually involve very strong magnetic fields, with values of the Hartmann number of up to tens of thousands. In this situation the system of MHD equations becomes very stiff, with very large source terms and very strong variable gradients. To increase robustness, special measures were introduced during the iterative convergence process, such as linearization using a source coefficient for the momentum equations. The MHD implementation in the general-purpose CFD code was tested against benchmarks specifically selected for liquid-metal blanket applications. Results of numerical simulations using the present implementation closely match analytical solutions for a Hartmann number of up to 1500 for a 2-D laminar flow in a duct of square cross section, with conducting and nonconducting walls. Results for a 3-D test case are also included.
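The benchmark comparisons mentioned above rely on analytic MHD duct solutions. The simplest member of that family is the 1-D Hartmann profile for laminar flow between two insulating plates (the 2-D square-duct solutions cited in the abstract, e.g. Shercliff's, are considerably more involved). A minimal sketch of the normalized profile, assuming insulating walls at y = ±1:

```python
import math

def hartmann_profile(y, ha):
    """Normalized streamwise velocity for laminar MHD (Hartmann) flow
    between insulating plates at y = +/-1; ha is the Hartmann number.
    The profile equals 1 at the centerline and 0 at the walls."""
    return (math.cosh(ha) - math.cosh(ha * y)) / (math.cosh(ha) - 1.0)
```

As the Hartmann number grows, the core flattens and the velocity drops to zero in thin Hartmann layers of thickness ~1/Ha at the walls, which is exactly the behavior a CFD implementation must resolve to pass this benchmark.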

  9. Numerical Analysis of 2-D and 3-D MHD Flows Relevant to Fusion Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khodak, Andrei

    Here, the analysis of many fusion applications such as liquid-metal blankets requires application of computational fluid dynamics (CFD) methods for electrically conductive liquids in geometrically complex regions and in the presence of a strong magnetic field. A current state-of-the-art general-purpose CFD code allows modeling of the flow in complex geometric regions, with simultaneous conjugate heat transfer analysis in liquid and surrounding solid parts. Together with a magnetohydrodynamics (MHD) capability, the general-purpose CFD code will be a valuable tool for the design and optimization of fusion devices. This paper describes the introduction of MHD capability into the general-purpose CFD code CFX, part of the ANSYS Workbench. The code was adapted for MHD problems using a magnetic induction approach. CFX allows the introduction of user-defined variables using transport or Poisson equations. For MHD adaptation of the code, three additional transport equations were introduced for the components of the magnetic field, in addition to the Poisson equation for the electric potential. The Lorentz force is included in the momentum transport equation as a source term. Fusion applications usually involve very strong magnetic fields, with values of the Hartmann number of up to tens of thousands. In this situation the system of MHD equations becomes very stiff, with very large source terms and very strong variable gradients. To increase robustness, special measures were introduced during the iterative convergence process, such as linearization using a source coefficient for the momentum equations. The MHD implementation in the general-purpose CFD code was tested against benchmarks specifically selected for liquid-metal blanket applications. Results of numerical simulations using the present implementation closely match analytical solutions for a Hartmann number of up to 1500 for a 2-D laminar flow in a duct of square cross section, with conducting and nonconducting walls. Results for a 3-D test case are also included.

  10. Evolutionary computation applied to the reconstruction of 3-D surface topography in the SEM.

    PubMed

    Kodama, Tetsuji; Li, Xiaoyuan; Nakahira, Kenji; Ito, Dai

    2005-10-01

    A genetic algorithm has been applied to the line profile reconstruction from the signals of the standard secondary electron (SE) and/or backscattered electron detectors in a scanning electron microscope. This method solves the topographical surface reconstruction problem as one of combinatorial optimization. To extend this optimization approach for three-dimensional (3-D) surface topography, this paper considers the use of a string coding where a 3-D surface topography is represented by a set of coordinates of vertices. We introduce the Delaunay triangulation, which attains the minimum roughness for any set of height data to capture the fundamental features of the surface being probed by an electron beam. With this coding, the strings are processed with a class of hybrid optimization algorithms that combine genetic algorithms and simulated annealing algorithms. Experimental results on SE images are presented.
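The hybrid GA/SA idea described above, strings of vertex heights evolved under a fitness function, with simulated-annealing acceptance of mutations, can be sketched generically. The fitness below (squared error against a target height string) is a hypothetical stand-in for the paper's SE-signal matching objective, and all parameter values are illustrative:

```python
import math
import random

def evolve(observed, n=16, pop_size=20, gens=60, t0=1.0, seed=1):
    """Hybrid GA/SA sketch: strings of vertex heights are evolved with
    elitism, one-point crossover, Gaussian mutation, and a Metropolis
    (simulated-annealing) acceptance test on each mutation."""
    rng = random.Random(seed)
    def err(s):  # hypothetical fitness: squared error vs. target heights
        return sum((a - b) ** 2 for a, b in zip(s, observed))
    pop = [[rng.uniform(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for g in range(gens):
        temp = t0 * (1 - g / gens) + 1e-6      # annealing schedule
        pop.sort(key=err)
        nxt = pop[:2]                          # elitism: keep best two
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:10], 2)     # parents from the best half
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]          # one-point crossover
            trial = child[:]
            trial[rng.randrange(n)] += rng.gauss(0, 0.1)  # mutation
            d = err(trial) - err(child)
            # Metropolis acceptance: always take improvements, sometimes worse
            if d < 0 or rng.random() < math.exp(-d / temp):
                child = trial
            nxt.append(child)
        pop = nxt
    return min(pop, key=err)
```

The annealing temperature lets early generations accept uphill mutations (escaping local optima of the reconstruction problem) while late generations behave like a greedy GA.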

  11. RELAP5 Model of the First Wall/Blanket Primary Heat Transfer System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popov, Emilian L; Yoder Jr, Graydon L; Kim, Seokho H

    2010-06-01

    ITER inductive power operation is modeled and simulated using a system-level computer code to evaluate the behavior of the Primary Heat Transfer System (PHTS) and predict parameter operational ranges. The control algorithm strategy and derivation are summarized in this report as well. A major feature of ITER is pulsed operation. The plasma does not burn continuously; the power is pulsed, with large periods of zero power between pulses. This feature requires active temperature control to maintain a constant blanket inlet temperature and requires accommodation of coolant thermal expansion during the pulse. In view of the transient nature of the power (plasma) operation state, a transient system thermal-hydraulics code was selected: RELAP5. The code has a well-documented history for nuclear reactor transient analyses, it has been benchmarked against numerous experiments, and a large user database of commonly accepted modeling practices exists. The process of heat deposition and transfer in the blanket modules is multi-dimensional and cannot be accurately captured by a one-dimensional code such as RELAP5. To resolve this, a separate CFD calculation of blanket thermal power evolution was performed using the 3-D SC/Tetra thermofluid code. A 1D-3D co-simulation more realistically models FW/blanket internal time-dependent thermal inertia while eliminating uncertainties in the time constant assumed in a 1-D system code. Blanket water outlet temperature and heat release histories for any given ITER pulse operation scenario are calculated. These results provide the basis for developing time-dependent power forcing functions, which are used as input in the RELAP5 calculations.

  12. The application of nonlinear programming and collocation to optimal aeroassisted orbital transfers

    NASA Astrophysics Data System (ADS)

    Shi, Y. Y.; Nelson, R. L.; Young, D. H.; Gill, P. E.; Murray, W.; Saunders, M. A.

    1992-01-01

    Sequential quadratic programming (SQP) and collocation of the differential equations of motion were applied to optimal aeroassisted orbital transfers. The Optimal Trajectory by Implicit Simulation (OTIS) computer program codes with updated nonlinear programming code (NZSOL) were used as a testbed for the SQP nonlinear programming (NLP) algorithms. The state-of-the-art sparse SQP method is considered to be effective for solving large problems with a sparse matrix. Sparse optimizers are characterized in terms of memory requirements and computational efficiency. For the OTIS problems, less than 10 percent of the Jacobian matrix elements are nonzero. The SQP method encompasses two phases: finding an initial feasible point by minimizing the sum of infeasibilities and minimizing the quadratic objective function within the feasible region. The orbital transfer problem under consideration involves the transfer from a high energy orbit to a low energy orbit.
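The two-phase structure described above, first minimize the sum of infeasibilities to reach a feasible point, then minimize the objective within the feasible region, can be illustrated on a toy problem. This sketch is not SQP proper (no QP subproblems or sparse Jacobians, which are the substance of the OTIS/NZSOL work); it only demonstrates the two phases on a quadratic objective with one linear equality constraint:

```python
def solve_two_phase(c, a, b):
    """Two-phase sketch for: minimize ||x - c||^2 subject to a.x = b.
    Phase 1 projects an initial point onto the feasible set (driving the
    infeasibility |a.x - b| to zero); phase 2 does projected gradient
    descent, stepping only within the constraint's null space."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    n = len(c)
    x = [0.0] * n
    # Phase 1: closed-form projection onto {x : a.x = b}
    r = (b - dot(a, x)) / dot(a, a)
    x = [xi + r * ai for xi, ai in zip(x, a)]
    # Phase 2: minimize the quadratic while staying feasible
    for _ in range(200):
        g = [2 * (xi - ci) for xi, ci in zip(x, c)]
        coef = dot(g, a) / dot(a, a)           # component of g along a
        g = [gi - coef * ai for gi, ai in zip(g, a)]  # tangent gradient
        x = [xi - 0.1 * gi for xi, gi in zip(x, g)]
    return x
```

Real sparse SQP replaces the projected-gradient phase with a sequence of quadratic programming subproblems whose sparse Jacobian structure (under 10 percent nonzeros for OTIS, per the abstract) is exploited by the linear algebra.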

  13. Multidisciplinary Modeling Software for Analysis, Design, and Optimization of HRRLS Vehicles

    NASA Technical Reports Server (NTRS)

    Spradley, Lawrence W.; Lohner, Rainald; Hunt, James L.

    2011-01-01

    The concept for Highly Reliable Reusable Launch Systems (HRRLS) under the NASA Hypersonics project is a two-stage-to-orbit, horizontal-take-off / horizontal-landing (HTHL) architecture with an air-breathing first stage. The first-stage vehicle is a slender body with an air-breathing propulsion system that is highly integrated with the airframe. The lightweight slender body will deflect significantly during flight. This global deflection affects the flow over the vehicle and into the engine, and thus the loads and moments on the vehicle. High-fidelity multidisciplinary analyses that account for these fluid-structure-thermal interactions are required to accurately predict the vehicle loads and resultant response. These predictions of vehicle response to multiphysics loads, calculated with fluid-structure-thermal interaction, are required in order to optimize the vehicle design over its full operating range. This contract with ResearchSouth addresses one of the primary objectives of the Vehicle Technology Integration (VTI) discipline: the development of high-fidelity multidisciplinary analysis and optimization methods and tools for HRRLS vehicles. The primary goal of this effort is the development of an integrated software system that can be used for full-vehicle optimization.
This goal was accomplished by: 1) integrating the master code, FEMAP, into the multidiscipline software network to direct the coupling and assure accurate fluid-structure-thermal interaction solutions; 2) loosely coupling the Euler flow solver FEFLO to the available and proven aeroelasticity and large-deformation (FEAP) code; 3) providing a coupled Euler-boundary layer capability for rapid viscous flow simulation; 4) developing and implementing improved Euler/RANS algorithms in the FEFLO CFD code to provide accurate shock capturing, skin friction, and heat-transfer predictions for HRRLS vehicles in hypersonic flow; 5) performing a Reynolds-averaged Navier-Stokes computation on an HRRLS configuration; 6) integrating the RANS solver with the FEAP code for coupled fluid-structure-thermal capability; 7) integrating the existing NASA SRGULL propulsion flow path prediction software with the FEFLO software for quasi-3D propulsion flow path predictions; and 8) improving, and integrating into the network, an existing adjoint-based design optimization code.

  14. MHD Code Optimizations and Jets in Dense Gaseous Halos

    NASA Astrophysics Data System (ADS)

    Gaibler, Volker; Vigelius, Matthias; Krause, Martin; Camenzind, Max

    We have further optimized and extended the 3D MHD code NIRVANA. The magnetized part runs in parallel, reaching 19 Gflops per SX-6 node, and has a passively advected particle population. In addition, the code is now MPI-parallel, on top of the shared-memory parallelization. On a 512^3 grid, we reach 561 Gflops with 32 nodes on the SX-8. Also, we have successfully used FLASH on the Opteron cluster. Scientific results are preliminary so far. We report one computation of highly resolved cocoon turbulence. While we find some similarities to earlier 2D work by us and others, we note a strange reluctance of cold material to enter the low-density cocoon, which has to be investigated further.

  15. Atmospheric Radiative Transfer for Satellite Remote Sensing: Validation and Uncertainty

    NASA Technical Reports Server (NTRS)

    Marshak, Alexander

    2007-01-01

    My presentation will begin with a discussion of the Intercomparison of three-dimensional (3D) Radiative Codes (I3RC) project, which started in 1997. I will highlight the question of how well the atmospheric science community can solve the 3D radiative transfer equation. Initially I3RC was focused only on algorithm intercomparison; now it has acquired a broader identity, providing new insights and creating new community resources for 3D radiative transfer calculations. Then I will switch to satellite remote sensing. Almost all radiative transfer calculations for satellite remote sensing are one-dimensional (1D), assuming (i) no variability inside a satellite pixel and (ii) no radiative interactions between pixels. The assumptions behind the 1D approach will be checked using cloud and aerosol data measured by the MODerate Resolution Imaging Spectroradiometer (MODIS) on board the two NASA satellites Terra and Aqua. In the discussion, I will use both analysis techniques: statistical analysis over large areas and time intervals, and single-scene analysis, to validate how well the 1D radiative transfer equation describes the radiative regime in cloudy atmospheres.

  16. Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier-Stokes Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.; Lee, Chi-Miag (Technical Monitor)

    2001-01-01

    For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this paper, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery for space launch vehicle propulsion systems.

  17. Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier-Stokes Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.

    2002-01-01

    For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this presentation, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery.

  18. Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier Stokes Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.

    2002-01-01

    For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this presentation, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery.

  19. [Modeling and analysis of volume conduction based on field-circuit coupling].

    PubMed

    Tang, Zhide; Liu, Hailong; Xie, Xiaohui; Chen, Xiufa; Hou, Deming

    2012-08-01

    Numerical simulations of volume conduction can be used to analyze the process of energy transfer and explore the effects of some physical factors on energy transfer efficiency. We analyzed the 3D quasi-static electric field by the finite element method and developed a 3D coupled field-circuit model of volume conduction based on the coupling between the circuit and the electric field. The model includes a circuit simulation of the volume conduction to provide direct theoretical guidance for energy transfer optimization design. A field-circuit coupling model with circular cylinder electrodes was established on the platform of the software FEM3.5. Based on this, the effects of electrode cross-section area, electrode distance, and circuit parameters on the performance of the volume conduction system were obtained, which provides a basis for the optimized design of energy transfer efficiency.

  20. Transfers between libration-point orbits in the elliptic restricted problem

    NASA Astrophysics Data System (ADS)

    Hiday, L. A.; Howell, K. C.

    The present time-fixed impulsive transfers between 3D libration point orbits in the vicinity of the interior L(1) libration point of the sun-earth-moon barycenter system are 'optimal' in that the total characteristic velocity required for implementation of the transfer exhibits a local minimum. The conditions necessary for a time-fixed, two-impulse transfer trajectory to be optimal are stated in terms of the primer vector, and the conditions necessary for satisfying the local optimality of a transfer trajectory containing additional impulses are addressed by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses.
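The primer vector conditions invoked above are Lawden's classical necessary conditions for optimal impulsive transfers; stated schematically in standard form (general theory, not the authors' notation):

```latex
\ddot{\mathbf{p}} = G(\mathbf{r})\,\mathbf{p}, \qquad
\|\mathbf{p}(t)\| \le 1 \quad \text{for all } t \in [t_0, t_f], \qquad
\mathbf{p}(t_k) = \frac{\Delta\mathbf{v}_k}{\|\Delta\mathbf{v}_k\|}, \quad
\|\mathbf{p}(t_k)\| = 1,
```

where $G(\mathbf{r})$ is the gravity-gradient matrix and $t_k$ are the impulse times. At interior impulses, $\mathbf{p}$ and $\dot{\mathbf{p}}$ must additionally be continuous, so $\tfrac{d}{dt}\|\mathbf{p}\| = 0$ there; these are the continuity conditions the abstract refers to.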

  1. Preliminary Assessment of Turbomachinery Codes

    NASA Technical Reports Server (NTRS)

    Mazumder, Quamrul H.

    2007-01-01

    This report assesses different CFD codes developed and currently being used at Glenn Research Center to predict turbomachinery fluid flow and heat transfer behavior. The following codes are considered: APNASA, TURBO, GlennHT, H3D, and SWIFT. Each code is described separately in the following sections, with its current modeling capabilities, level of validation, pre/post processing, and future development and validation requirements. This report addresses only previously published work and validations of the codes; however, the codes have since been further developed to extend their capabilities.

  2. Novel microscopy-based screening method reveals regulators of contact-dependent intercellular transfer

    PubMed Central

    Michael Frei, Dominik; Hodneland, Erlend; Rios-Mondragon, Ivan; Burtey, Anne; Neumann, Beate; Bulkescher, Jutta; Schölermann, Julia; Pepperkok, Rainer; Gerdes, Hans-Hermann; Kögel, Tanja

    2015-01-01

    Contact-dependent intercellular transfer (codeIT) of cellular constituents can have functional consequences for recipient cells, such as enhanced survival and drug resistance. Pathogenic viruses, prions and bacteria can also utilize this mechanism to spread to adjacent cells and potentially evade immune detection. However, little is known about the molecular mechanism underlying this intercellular transfer process. Here, we present a novel microscopy-based screening method to identify regulators and cargo of codeIT. Single donor cells, carrying fluorescently labelled endocytic organelles or proteins, are co-cultured with excess acceptor cells. CodeIT is quantified by confocal microscopy and image analysis in 3D, preserving spatial information. An siRNA-based screening using this method revealed the involvement of several myosins and small GTPases as codeIT regulators. Our data indicates that cellular protrusions and tubular recycling endosomes are important for codeIT. We automated image acquisition and analysis to facilitate large-scale chemical and genetic screening efforts to identify key regulators of codeIT. PMID:26271723

  3. Validation of Heat Transfer and Film Cooling Capabilities of the 3-D RANS Code TURBO

    NASA Technical Reports Server (NTRS)

    Shyam, Vikram; Ameri, Ali; Chen, Jen-Ping

    2010-01-01

    The capabilities of the 3-D unsteady RANS code TURBO have been extended to include heat transfer and film cooling applications. The results of simulations performed with the modified code are compared to experiment and to theory, where applicable. Wilcox's k-ω turbulence model has been implemented to close the RANS equations. Two simulations are conducted: (1) flow over a flat plate and (2) flow over an adiabatic flat plate cooled by one hole inclined at 35° to the free stream. For (1), agreement with theory is found to be excellent for heat transfer, represented by the local Nusselt number, and quite good for momentum, as represented by the local skin friction coefficient. This report compares the local skin friction coefficients and Nusselt numbers on a flat plate obtained using Wilcox's k-ω model with the theory of Blasius. The study looks at laminar and turbulent flows over an adiabatic flat plate and over an isothermal flat plate for two different wall temperatures. It is shown that TURBO is able to accurately predict heat transfer on a flat plate. For (2), TURBO shows good qualitative agreement with film cooling experiments performed on a flat plate with one cooling hole. Quantitatively, film effectiveness is underpredicted downstream of the hole.
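The flat-plate theory used for case (1) has closed-form laminar results, which is what makes it a good code-verification target. A minimal sketch of the standard correlations (Blasius skin friction; the Pohlhausen-type local Nusselt number for an isothermal laminar plate), as one would script a code-versus-theory comparison:

```python
import math

def blasius_cf(re_x):
    """Blasius local skin-friction coefficient, laminar flat-plate flow:
    Cf = 0.664 / sqrt(Re_x)."""
    return 0.664 / math.sqrt(re_x)

def laminar_nu(re_x, pr):
    """Local Nusselt number for a laminar isothermal flat plate:
    Nu_x = 0.332 * Re_x**0.5 * Pr**(1/3)."""
    return 0.332 * math.sqrt(re_x) * pr ** (1.0 / 3.0)
```

A CFD validation then plots the computed Cf(x) and Nu(x) against these curves; the abstract reports excellent agreement for the Nusselt number and good agreement for skin friction.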

  4. Dust emission in simulated dwarf galaxies using GRASIL-3D

    NASA Astrophysics Data System (ADS)

    Santos-Santos, I. M.; Domínguez-Tenreiro, R.; Granato, G. L.; Brook, C. B.; Obreja, A.

    2017-03-01

    Recent Herschel observations of dwarf galaxies have shown a wide diversity in the shapes of their IR-submm spectral energy distributions as compared to more massive galaxies, presenting features that cannot be explained with the current models. In order to understand the physics driving these differences, we have computed the emission of a sample of simulated dwarf galaxies using the radiative transfer code GRASIL-3D. This code separately treats the radiative transfer in dust grains from molecular clouds and cirri. The simulated galaxies have masses ranging from 10^6-10^9 M_⊙ and have evolved within a Local Group environment by using CLUES initial conditions. We show that their IR band luminosities are in agreement with observations, with their SEDs reproducing naturally the particular spectral features observed. We conclude that the GRASIL-3D two-component model gives a physical interpretation to the emission of dwarf galaxies, with molecular clouds (cirri) as the warm (cold) dust components needed to recover observational data.

  5. A fast code for channel limb radiances with gas absorption and scattering in a spherical atmosphere

    NASA Astrophysics Data System (ADS)

    Eluszkiewicz, Janusz; Uymin, Gennady; Flittner, David; Cady-Pereira, Karen; Mlawer, Eli; Henderson, John; Moncet, Jean-Luc; Nehrkorn, Thomas; Wolff, Michael

    2017-05-01

    We present a radiative transfer code capable of accurately and rapidly computing channel limb radiances in the presence of gaseous absorption and scattering in a spherical atmosphere. The code has been prototyped for the Mars Climate Sounder measuring limb radiances in the thermal part of the spectrum (200-900 cm-1) where absorption by carbon dioxide and water vapor and absorption and scattering by dust and water ice particles are important. The code relies on three main components: 1) The Gauss Seidel Spherical Radiative Transfer Model (GSSRTM) for scattering, 2) The Planetary Line-By-Line Radiative Transfer Model (P-LBLRTM) for gas opacity, and 3) The Optimal Spectral Sampling (OSS) for selecting a limited number of spectral points to simulate channel radiances and thus achieving a substantial increase in speed. The accuracy of the code has been evaluated against brute-force line-by-line calculations performed on the NASA Pleiades supercomputer, with satisfactory results. Additional improvements in both accuracy and speed are attainable through incremental changes to the basic approach presented in this paper, which would further support the use of this code for real-time retrievals and data assimilation. Both newly developed codes, GSSRTM/OSS for MCS and P-LBLRTM, are available for additional testing and user feedback.
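The OSS idea, approximate a channel-integrated radiance by a small weighted set of monochromatic points, can be shown in miniature. The real method trains a sequential selection over large ensembles of atmospheres; the toy below does an exhaustive search over tiny subsets with least-squares weights, purely to illustrate the principle:

```python
import itertools

def gauss_solve(a, b):
    """Solve a small k x k linear system by Gauss-Jordan elimination."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))  # partial pivot
        m[i], m[p] = m[p], m[i]
        for r in range(n):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [x - f * y for x, y in zip(m[r], m[i])]
    return [m[i][n] / m[i][i] for i in range(n)]

def oss_select(mono, channel, k):
    """Pick k spectral points and least-squares weights so the weighted
    monochromatic radiances reproduce the channel radiance over training
    scenes. `mono`: scenes x wavelengths matrix; `channel`: per-scene value."""
    nw = len(mono[0])
    best = None
    for idx in itertools.combinations(range(nw), k):
        cols = [[row[j] for j in idx] for row in mono]   # scenes x k
        # normal equations A^T A w = A^T b for the subset's weights
        ata = [[sum(c[i] * c[j] for c in cols) for j in range(k)]
               for i in range(k)]
        atb = [sum(c[i] * y for c, y in zip(cols, channel)) for i in range(k)]
        w = gauss_solve(ata, atb)
        err = sum((sum(wi * ci for wi, ci in zip(w, c)) - y) ** 2
                  for c, y in zip(cols, channel))
        if best is None or err < best[0]:
            best = (err, idx, w)
    return best[1], best[2]
```

Once the points and weights are fixed offline, each channel radiance costs only k monochromatic evaluations at run time, which is the source of OSS's speedup over full line-by-line integration.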

  6. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    NASA Technical Reports Server (NTRS)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large-aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments, including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and processes, and optical mode generation.

  7. ASTRORAY: General relativistic polarized radiative transfer code

    NASA Astrophysics Data System (ADS)

    Shcherbakov, Roman V.

    2014-07-01

    ASTRORAY employs a method of ray tracing and performs polarized radiative transfer of (cyclo-)synchrotron radiation. The radiative transfer is conducted in curved space-time near rotating black holes described by the Kerr-Schild metric. Three-dimensional general relativistic magnetohydrodynamic (3D GRMHD) simulations, in particular those performed with variations of the HARM code, serve as input to ASTRORAY. The code has been applied to reproduce the sub-mm synchrotron bump in the spectrum of Sgr A*, and to test the detectability of quasi-periodic oscillations in its light curve. ASTRORAY can be readily applied to model radio/sub-mm polarized spectra of jets and cores of other low-luminosity active galactic nuclei. For example, ASTRORAY is uniquely suitable to self-consistently model the Faraday rotation measure and circular polarization fraction in jets.

  8. Reference View Selection in DIBR-Based Multiview Coding.

    PubMed

    Maugey, Thomas; Petrazzuoli, Giovanni; Frossard, Pascal; Cagnazzo, Marco; Pesquet-Popescu, Beatrice

    2016-04-01

    Augmented reality, interactive navigation in 3D scenes, multiview video, and other emerging multimedia applications require large sets of images, hence larger data volumes and increased resources compared with traditional video services. The significant increase in the number of images in multiview systems leads to new challenging problems in data representation and data transmission to provide high quality of experience in resource-constrained environments. In order to reduce the size of the data, different multiview video compression strategies have been proposed recently. Most of them use the concept of reference or key views that are used to estimate other images when there is high correlation in the data set. In such coding schemes, the two following questions become fundamental: 1) how many reference views have to be chosen to keep good reconstruction quality under coding cost constraints? And 2) where should these key views be placed in the multiview data set? As these questions are largely overlooked in the literature, we study the reference view selection problem and propose an algorithm for the optimal selection of reference views in multiview coding systems. Based on a novel metric that measures the similarity between the views, we formulate an optimization problem for the positioning of the reference views, such that both the distortion of the view reconstruction and the coding rate cost are minimized. We solve this new problem with a shortest path algorithm that determines both the optimal number of reference views and their positions in the image set. We experimentally validate our solution in a practical multiview distributed coding system and in the standardized 3D-HEVC multiview coding scheme. We show that considering the 3D scene geometry in the reference view positioning problem brings significant rate-distortion improvements and outperforms the traditional coding strategy that simply selects key frames based on the distance between cameras.
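Casting reference-view selection as a shortest path works because, for views ordered along a line, the total cost decomposes over segments between consecutive references. The sketch below runs the corresponding DAG dynamic program; `ref_rate` and `synth_cost` are hypothetical cost models standing in for the paper's rate and synthesis-distortion terms:

```python
def select_references(n, ref_rate, synth_cost):
    """Shortest-path DP over view indices 0..n-1. Node j means 'view j is
    a reference'; edge (i, j) adds the rate of reference j plus the
    distortion of synthesizing the views strictly between i and j.
    Views 0 and n-1 are forced to be references (path endpoints)."""
    INF = float('inf')
    cost = [INF] * n
    back = [-1] * n
    cost[0] = ref_rate(0)
    for j in range(1, n):
        for i in range(j):                 # DAG relaxation, i < j
            c = cost[i] + ref_rate(j) + synth_cost(i, j)
            if c < cost[j]:
                cost[j], back[j] = c, i
    refs, v = [], n - 1                    # recover the optimal path
    while v != -1:
        refs.append(v)
        v = back[v]
    return refs[::-1], cost[n - 1]
```

Because the path length is not fixed, the same pass jointly finds the optimal *number* of references and their positions, which is exactly the property the abstract highlights.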

  9. Visibility of Prominences Using the He i D3 Line Filter on the PROBA-3/ASPIICS Coronagraph

    NASA Astrophysics Data System (ADS)

    Jejčič, S.; Heinzel, P.; Labrosse, N.; Zhukov, A. N.; Bemporad, A.; Fineschi, S.; Gunár, S.

    2018-02-01

    We determine the optimal width and shape of the narrow-band filter centered on the He i D3 line for prominence and coronal mass ejection (CME) observations with the ASPIICS (Association of Spacecraft for Polarimetric and Imaging Investigation of the Corona of the Sun) coronagraph onboard the PROBA-3 (Project for On-board Autonomy) satellite, to be launched in 2020. We analyze He i D3 line intensities for three representative non-local thermal equilibrium prominence models at temperatures 8, 30, and 100 kK computed with a radiative transfer code and the prominence visible-light (VL) emission due to Thomson scattering on the prominence electrons. We compute various useful relations at prominence line-of-sight velocities of 0, 100, and 300 km s⁻¹ for a 20 Å wide flat filter and three Gaussian filters with a full-width at half-maximum (FWHM) equal to 5, 10, and 20 Å to show the relative brightness contribution of the He i D3 line and the prominence VL to the visibility in a given narrow-band filter. We also discuss possible signal contamination by the Na i D1 and D2 lines, which otherwise may be useful to detect comets. Our results mainly show that i) an optimal narrow-band filter should be flat or somewhere between flat and Gaussian with an FWHM of 20 Å in order to detect fast-moving prominence structures, ii) the maximum emission in the He i D3 line occurs at 30 kK and the minimum at 100 kK, and iii) the ratio of emission in the He i D3 line to the VL emission can provide a useful diagnostic for the temperature of prominence structures. This ratio is up to 10 for hot prominence structures, up to 100 for cool structures, and up to 1000 for warm structures.

  10. Integration of Libration Point Orbit Dynamics into a Universal 3-D Autonomous Formation Flying Algorithm

    NASA Technical Reports Server (NTRS)

    Folta, David; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    The autonomous formation flying control algorithm developed by the Goddard Space Flight Center (GSFC) for the New Millennium Program (NMP) Earth Observing-1 (EO-1) mission is investigated for applicability to libration point orbit formations. In the EO-1 formation-flying algorithm, control is accomplished via linearization about a reference transfer orbit with a state transition matrix (STM) computed from state inputs. The effect of libration point orbit dynamics on this algorithm architecture is explored via computation of STMs using the flight-proven code, a monodromy matrix developed from an N-body model of a libration orbit, and a standard STM developed from the gravitational and Coriolis effects as measured at the libration point. A comparison of formation flying Delta-Vs calculated from these methods is made to a standard linear quadratic regulator (LQR) method. The universal 3-D approach is optimal in the sense that it can be accommodated as an open-loop or closed-loop control using only state information.
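
    The STM-based control idea above can be illustrated with a minimal targeting calculation: given the state transition matrix over a maneuver horizon, solve for the velocity correction that nulls the relative position error. The double-integrator STM below is an invented stand-in for the flight-code or monodromy-matrix STMs discussed in the abstract.

```python
# Toy sketch of STM targeting (one axis, force-free motion). A real
# implementation would use a numerically computed or monodromy-based STM;
# the double-integrator form here is a simplifying assumption.

def stm_double_integrator(T):
    # [[phi_rr, phi_rv], [phi_vr, phi_vv]] for one axis of force-free motion
    return [[1.0, T], [0.0, 1.0]]

def delta_v(dr0, dv0, T):
    """Velocity correction so the relative position error is zero after T."""
    phi = stm_double_integrator(T)
    phi_rr, phi_rv = phi[0]
    # 0 = phi_rr*dr0 + phi_rv*(dv0 + dv)  =>  dv = -phi_rr*dr0/phi_rv - dv0
    return -phi_rr * dr0 / phi_rv - dv0
```

The same linear-solve structure carries over when the STM comes from a monodromy matrix or an N-body propagation; only the matrix entries change.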

  11. Numerical optimization of three-dimensional coils for NSTX-U

    NASA Astrophysics Data System (ADS)

    Lazerson, S. A.; Park, J.-K.; Logan, N.; Boozer, A.

    2015-10-01

    A tool for the calculation of optimal three-dimensional (3D) perturbative magnetic fields in tokamaks has been developed. The IPECOPT code builds upon the stellarator optimization code STELLOPT to allow for optimization of linear ideal magnetohydrodynamic perturbed equilibrium (IPEC). This tool has been applied to NSTX-U equilibria, addressing which fields are the most effective at driving NTV torques. The NTV torque calculation is performed by the PENT code. Optimization of the normal field spectrum shows that fields with n = 1 character can drive a large core torque. It is also shown that fields with n = 3 features are capable of driving edge torque and some core torque. Coil current optimization (using the planned in-vessel and existing RWM coils) on NSTX-U suggests that the planned coil set is adequate for core and edge torque control. Comparison between error field correction experiments on DIII-D and the optimizer shows good agreement. Notice: This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy. The publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.

  12. Two Perspectives on the Origin of the Standard Genetic Code

    NASA Astrophysics Data System (ADS)

    Sengupta, Supratim; Aggarwal, Neha; Bandhu, Ashutosh Vishwa

    2014-12-01

    The origin of a genetic code made it possible to create ordered sequences of amino acids. In this article we provide two perspectives on code origin by carrying out simulations of code-sequence coevolution in finite populations with the aim of examining how the standard genetic code may have evolved from more primitive code(s) encoding a small number of amino acids. We determine the efficacy of the physico-chemical hypothesis of code origin in the absence and presence of horizontal gene transfer (HGT) by allowing a diverse collection of code-sequence sets to compete with each other. We find that in the absence of horizontal gene transfer, natural selection between competing codes distinguished by differences in the degree of physico-chemical optimization is unable to explain the structure of the standard genetic code. However, for certain probabilities of the horizontal transfer events, a universal code emerges having a structure that is consistent with the standard genetic code.

  13. The physics of volume rendering

    NASA Astrophysics Data System (ADS)

    Peters, Thomas

    2014-11-01

    Radiation transfer is an important topic in several physical disciplines, probably most prominently in astrophysics. Computer scientists use radiation transfer, among other things, for the visualization of complex data sets with direct volume rendering. In this article, I point out the connection between physical radiation transfer and volume rendering, and I describe an implementation of direct volume rendering in the astrophysical radiation transfer code RADMC-3D. I show examples for the use of this module on analytical models and simulation data.
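
    The connection mentioned above can be made concrete: direct volume rendering's front-to-back compositing is a discretization of the emission-absorption radiative transfer integral, with accumulated transmittance playing the role of exp(-τ). The sample data in this sketch are arbitrary, not RADMC-3D output.

```python
# Sketch of the emission-absorption radiative transfer integral behind
# direct volume rendering: front-to-back compositing of per-sample
# color (source function) and opacity (extinction).

def composite(samples):
    """Front-to-back compositing along one ray.

    samples: list of (color, alpha) pairs ordered from the eye outward.
    Returns the accumulated intensity; the running transmittance is the
    discrete analogue of exp(-tau) in the transfer equation.
    """
    intensity, transmittance = 0.0, 1.0
    for color, alpha in samples:
        intensity += transmittance * alpha * color
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-6:      # early ray termination
            break
    return intensity
```

A fully opaque sample (alpha = 1) hides everything behind it, exactly as an optically thick cell saturates the emergent intensity in radiative transfer.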

  14. Tomo3D 2.0--exploitation of advanced vector extensions (AVX) for 3D reconstruction.

    PubMed

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-02-01

    Tomo3D is a program for fast tomographic reconstruction on multicore computers. Its high speed stems from code optimization, vectorization with Streaming SIMD Extensions (SSE), multithreading and optimization of disk access. Recently, Advanced Vector eXtensions (AVX) have been introduced in the x86 processor architecture. Compared to SSE, AVX doubles the number of simultaneous operations, thus pointing to a potential twofold gain in speed. However, in practice, achieving this potential is extremely difficult. Here, we provide a technical description and an assessment of the optimizations included in Tomo3D to take advantage of AVX instructions. Tomo3D 2.0 allows huge reconstructions to be calculated on standard computers in a matter of minutes. Thus, it will be a valuable tool for electron tomography studies with increasing resolution needs. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. AirShow 1.0 CFD Software Users' Guide

    NASA Technical Reports Server (NTRS)

    Mohler, Stanley R., Jr.

    2005-01-01

    AirShow is visualization post-processing software for Computational Fluid Dynamics (CFD). Upon reading binary PLOT3D grid and solution files into AirShow, the engineer can quickly see how hundreds of complex 3-D structured blocks are arranged and numbered. Additionally, chosen grid planes can be displayed and colored according to various aerodynamic flow quantities such as Mach number and pressure. The user may interactively rotate and translate the graphical objects using the mouse. The software source code was written in cross-platform Java, C++, and OpenGL, and runs on Unix, Linux, and Windows. The graphical user interface (GUI) was written using Java Swing. Java also provides multiple synchronized threads. The Java Native Interface (JNI) provides a bridge between the Java code and the C++ code where the PLOT3D files are read, the OpenGL graphics are rendered, and numerical calculations are performed. AirShow is easy to learn and simple to use. The source code is available for free from the NASA Technology Transfer and Partnership Office.

  16. Proceeding On: Parallelisation Of Critical Code Passages In PHOENIX/3D

    NASA Astrophysics Data System (ADS)

    Arkenberg, Mario; Wichert, Viktoria; Hauschildt, Peter H.

    2016-10-01

    Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach here, by introducing especially adapted, parallel numerical methods and correspondingly parallelising time-critical code passages. In the following, we present our work on PHOENIX/3D. While parallelisation is generally worthwhile, it requires revision of time-consuming subroutines with respect to separability of localised data and variables in order to determine the optimal approach. Of course, the same applies to the code structure. The importance of this ongoing work can be showcased by recently derived benchmark results, which were generated utilising MPI and OpenMP. Furthermore, the need for a careful and thorough choice of an adequate, machine-dependent setup is discussed.

  17. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we propose a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPU). Then, we design an imaging point parallel strategy to achieve optimal parallel computing performance. Afterward, we adopt an asynchronous double-buffering scheme for multi-stream GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.

  18. Novel Scalable 3-D MT Inverse Solver

    NASA Astrophysics Data System (ADS)

    Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.

    2016-12-01

    We present a new, robust and fast, three-dimensional (3-D) magnetotelluric (MT) inverse solver. As a forward modelling engine, the highly scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits an adjoint-sources approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT (single-site and/or inter-site) responses, and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of available computational resources for a given problem setup. To parameterize an inverse domain, a mask approach is implemented, which means that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments carried out on different platforms ranging from modern laptops to high-performance clusters demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.

  19. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    PubMed

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

    In this paper, we first describe a 9-symbol non-uniform signaling scheme based on Huffman coding, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as the conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM, combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB.
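
    The Huffman construction underlying such a non-uniform signaling scheme can be sketched as follows; the probabilities in the usage note are illustrative, not the paper's optimized 9-symbol distribution.

```python
import heapq

# Sketch of standard Huffman coding, the mechanism behind the 9-symbol
# non-uniform signaling scheme: more probable symbols get shorter
# codewords, so they are transmitted more often.

def huffman(probs):
    """Return {symbol: codeword} for a dict of symbol probabilities."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)     # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1                          # tie-breaker, avoids dict compares
    return heap[0][2]
```

For example, probabilities {0.5, 0.25, 0.25} yield codeword lengths 1, 2, 2; the prefix property guarantees unique decodability of the resulting non-uniform symbol stream.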

  20. Validation of hydrogen gas stratification and mixing models

    DOE PAGES

    Wu, Hsingtzu; Zhao, Haihua

    2015-05-26

    Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling-based one-dimensional method to achieve a large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreements are observed. The entrainment coefficients of 0.09 and 0.08 are found to best fit the experimental data for hydrogen leaks with Froude numbers of 99 and 268, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. In conclusion, computing time for each BMIX++ model with a normal desktop computer is less than 5 min.

  1. Discrete diffusion Lyman α radiative transfer

    NASA Astrophysics Data System (ADS)

    Smith, Aaron; Tsang, Benny T.-H.; Bromm, Volker; Milosavljević, Miloš

    2018-06-01

    Due to its accuracy and generality, Monte Carlo radiative transfer (MCRT) has emerged as the prevalent method for Lyα radiative transfer in arbitrary geometries. The standard MCRT encounters a significant efficiency barrier in the high optical depth, diffusion regime. Multiple acceleration schemes have been developed to improve the efficiency of MCRT but the noise from photon packet discretization remains a challenge. The discrete diffusion Monte Carlo (DDMC) scheme has been successfully applied in state-of-the-art radiation hydrodynamics (RHD) simulations. Still, the established framework is not optimal for resonant line transfer. Inspired by the DDMC paradigm, we present a novel extension to resonant DDMC (rDDMC) in which diffusion in space and frequency are treated on equal footing. We explore the robustness of our new method and demonstrate a level of performance that justifies incorporating the method into existing Lyα codes. We present computational speedups of ~10²-10⁶ relative to contemporary MCRT implementations with schemes that skip scattering in the core of the line profile. This is because the rDDMC runtime scales with the spatial and frequency resolution rather than the number of scatterings—the latter is typically ∝ τ0 for static media, or ∝ (aτ0)^(2/3) with core-skipping. We anticipate new frontiers in which on-the-fly Lyα radiative transfer calculations are feasible in 3D RHD. More generally, rDDMC is transferable to any computationally demanding problem amenable to a Fokker-Planck approximation of frequency redistribution.
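
    A toy Monte Carlo model illustrates why standard MCRT stalls in the diffusion regime: a photon undergoing coherent isotropic scattering in a 1-D slab of optical half-depth τ0 scatters roughly τ0² times before escaping. The milder ∝ τ0 scaling quoted above for resonant Lyα transfer comes from frequency excursions, which this sketch deliberately omits; all numbers here are illustrative.

```python
import math
import random

# Toy 1-D random walk in optical depth: count coherent scatterings
# before a photon escapes a slab of optical half-depth tau0. The
# scattering count grows ~tau0^2, which is the efficiency barrier
# that DDMC-type schemes are designed to avoid.

def scatterings_to_escape(tau0, rng):
    tau, n = 0.0, 0                        # start at the slab midplane
    while abs(tau) < tau0:
        step = -math.log(rng.random())     # optical path to next event
        tau += step * rng.choice((-1.0, 1.0))
        n += 1
    return n

def mean_scatterings(tau0, trials=2000, seed=1):
    rng = random.Random(seed)
    return sum(scatterings_to_escape(tau0, rng) for _ in range(trials)) / trials
```

Doubling tau0 roughly quadruples the mean scattering count in this toy, so runtimes blow up quickly once the medium is optically thick.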

  2. Efficient Radiative Transfer for Dynamically Evolving Stratified Atmospheres

    NASA Astrophysics Data System (ADS)

    Judge, Philip G.

    2017-12-01

    We present a fast multi-level and multi-atom non-local thermodynamic equilibrium radiative transfer method for dynamically evolving stratified atmospheres, such as the solar atmosphere. The preconditioning method of Rybicki & Hummer (RH92) is adopted. But, pressed by the need for speed and stability, a “second-order escape probability” scheme is implemented within the framework of the RH92 method, in which frequency- and angle-integrals are carried out analytically. This minimizes the computational work needed, though at some expense of numerical accuracy. The iteration scheme is local; the formal solutions for the intensities are the only non-local component. At present the methods have been coded for vertical transport, applicable to atmospheres that are highly stratified. The probabilistic method seems adequately fast, stable, and sufficiently accurate for exploring dynamical interactions between the evolving MHD atmosphere and radiation using current computer hardware. Current 2D and 3D dynamics codes do not include this interaction as consistently as the current method does. The solutions generated may ultimately serve as initial conditions for dynamical calculations including full 3D radiative transfer. The National Center for Atmospheric Research is sponsored by the National Science Foundation.
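
    The escape-probability idea can be illustrated with a two-level-atom toy model (a stand-in, not the multi-level RH92-based scheme of the paper): the non-local scattering integral is replaced by a local term (1 - P_esc)·S, so the source function iteration becomes purely local and converges to a closed-form fixed point.

```python
# Two-level-atom escape-probability toy. The source function is
# S = (1 - eps)*J + eps*B; approximating the mean intensity locally as
# J = (1 - p_esc)*S turns the global transfer problem into a local
# fixed-point iteration with contraction factor (1 - eps)*(1 - p_esc).

def escape_prob_source(eps, B, p_esc, iters=200):
    """Iterate the local source-function update to convergence."""
    S = B                                  # start from the Planck function
    for _ in range(iters):
        J = (1.0 - p_esc) * S              # local stand-in for the scattering integral
        S = (1.0 - eps) * J + eps * B
    return S
```

The fixed point is S = eps·B / (1 - (1 - eps)(1 - p_esc)): for small collisional coupling eps and small escape probability, the source function is strongly scattering-dominated, which is the regime where such local schemes save the most work.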

  3. Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D

    NASA Technical Reports Server (NTRS)

    Carle, Alan; Fagan, Mike; Green, Lawrence L.

    1998-01-01

    This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.

  4. Design of 28 GHz, 200 kW Gyrotron for ECRH Applications

    NASA Astrophysics Data System (ADS)

    Yadav, Vivek; Singh, Udaybir; Kumar, Nitin; Kumar, Anil; Deorani, S. C.; Sinha, A. K.

    2013-01-01

    This paper presents the design of a 28 GHz, 200 kW gyrotron for the Indian TOKAMAK system. The paper reports the designs of the interaction cavity, magnetron injection gun and RF window. The EGUN code is used for the optimization of electron gun parameters. The TE03 mode is selected as the operating mode by using the in-house developed code GCOMS. The simulation and optimization of the cavity parameters are carried out by using the three-dimensional (3-D) particle-in-cell electromagnetic simulation code MAGIC. An output power of more than 250 kW is achieved.

  5. Exciton management in organic photovoltaic multidonor energy cascades.

    PubMed

    Griffith, Olga L; Forrest, Stephen R

    2014-05-14

    Multilayer donor regions in organic photovoltaics show improved power conversion efficiency when arranged in decreasing exciton energy order from the anode to the acceptor interface. These so-called "energy cascades" drive exciton transfer from the anode to the dissociating interface while reducing exciton quenching and allowing improved overlap with the solar spectrum. Here we investigate the relative importance of exciton transfer and blocking in a donor cascade employing diphenyltetracene (D1), rubrene (D2), and tetraphenyldibenzoperiflanthene (D3) whose optical gaps monotonically decrease from D1 to D3. In this structure, D1 blocks excitons from quenching at the anode, D2 accepts transfer of excitons from D1 and blocks excitons at the interface between D2 and D3, and D3 contributes the most to the photocurrent due to its strong absorption at visible wavelengths, while also determining the open circuit voltage. We observe singlet exciton Förster transfer from D1 to D2 to D3 consistent with cascade operation. The power conversion efficiency of the optimized cascade OPV with a C60 acceptor layer is 7.1 ± 0.4%, which is significantly higher than bilayer devices made with only the individual donors. We develop a quantitative model to identify the dominant exciton processes that govern the photocurrent generation in multilayer organic structures.

  6. Optimized iterative decoding method for TPC coded CPM

    NASA Astrophysics Data System (ADS)

    Ma, Yanmin; Lai, Penghui; Wang, Shilian; Xie, Shunqin; Zhang, Wei

    2018-05-01

    Turbo Product Code (TPC) coded Continuous Phase Modulation (CPM) systems (TPC-CPM) have been widely used in aeronautical telemetry and satellite communication. This paper mainly investigates improvements and optimizations of the TPC-CPM system. We first add an interleaver and deinterleaver to the TPC-CPM system, and then establish an iterative decoding scheme. However, the improved system converges poorly. To overcome this issue, we use Extrinsic Information Transfer (EXIT) analysis to find the optimal factors for the system. Experiments show that our method effectively improves convergence performance.

  7. FitSKIRT: genetic algorithms to automatically fit dusty galaxies with a Monte Carlo radiative transfer code

    NASA Astrophysics Data System (ADS)

    De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.

    2013-02-01

    We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage that they have better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi-dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies in the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need for human intervention and biasing. The high level of automation makes it an ideal tool to use on larger sets of observed data.
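
    A minimal real-coded genetic algorithm in the spirit described above might look like the following sketch; the quadratic "model image" and fitness function are invented stand-ins for a SKIRT radiative transfer model and a chi-squared image comparison, and the GA operators are generic, not GAlib's.

```python
import random

# Toy real-coded GA: evolve two model parameters against synthetic data.
# Elitist selection, blend crossover, and Gaussian mutation stand in for
# the operators a library such as GAlib would provide.

def fitness(params, data):
    # negative sum of squared residuals of a 1-D quadratic "model image"
    a, b = params
    return -sum((a * x * x + b * x - y) ** 2 for x, y in data)

def ga_fit(data, pop_size=40, generations=80, seed=0):
    rng = random.Random(seed)
    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, data), reverse=True)
        elite = pop[: pop_size // 4]       # keep the fittest quarter
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            w = rng.random()               # blend crossover + small mutation
            child = tuple(w * a + (1 - w) * b + rng.gauss(0, 0.1)
                          for a, b in zip(p1, p2))
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda p: fitness(p, data))
```

Because the fitness only requires forward evaluations of the model, the same loop works unchanged when each evaluation is a noisy Monte Carlo radiative transfer run, which is precisely why GAs suit this problem.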

  8. Recent Progress and Future Plans for Fusion Plasma Synthetic Diagnostics Platform

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Kramer, Gerrit; Tang, William; Tobias, Benjamin; Valeo, Ernest; Churchill, Randy; Hausammann, Loic

    2015-11-01

    The Fusion Plasma Synthetic Diagnostics Platform (FPSDP) is a Python package developed at the Princeton Plasma Physics Laboratory. It is dedicated to providing an integrated programmable environment for applying a modern ensemble of synthetic diagnostics to the experimental validation of fusion plasma simulation codes. The FPSDP will allow physicists to directly compare key laboratory measurements to simulation results. This enables deeper understanding of experimental data, more realistic validation of simulation codes, quantitative assessment of existing diagnostics, and new capabilities for the design and optimization of future diagnostics. The Fusion Plasma Synthetic Diagnostics Platform now has data interfaces for the GTS and XGC-1 global particle-in-cell simulation codes with synthetic diagnostic modules including: (i) 2D and 3D Reflectometry; (ii) Beam Emission Spectroscopy; and (iii) 1D Electron Cyclotron Emission. Results will be reported on the delivery of interfaces for the global electromagnetic PIC code GTC, the extended MHD M3D-C1 code, and the electromagnetic hybrid NOVA-K eigenmode code. Progress toward development of a more comprehensive 2D Electron Cyclotron Emission module will also be discussed. This work is supported by DOE contract #DEAC02-09CH11466.

  9. Three-dimensional Navier-Stokes analysis of turbine passage heat transfer

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.; Arnone, Andrea

    1991-01-01

    The three-dimensional Reynolds-averaged Navier-Stokes equations are numerically solved to obtain the pressure distribution and heat transfer rates on the endwalls and the blades of two linear turbine cascades. The TRAF3D code which has recently been developed in a joint project between researchers from the University of Florence and NASA Lewis Research Center is used. The effect of turbulence is taken into account by using the eddy viscosity hypothesis and the two-layer mixing length model of Baldwin and Lomax. Predictions of surface heat transfer are made for Langston's cascade and compared with the data obtained for that cascade by Graziani. The comparison was found to be favorable. The code is also applied to a linear transonic rotor cascade to predict the pressure distributions and heat transfer rates.

  10. Retrieving the Molecular Composition of Planet-Forming Material: An Accurate Non-LTE Radiative Transfer Code for JWST

    NASA Astrophysics Data System (ADS)

    Pontoppidan, Klaus

    Based on the observed distributions of exoplanets and dynamical models of their evolution, the primary planet-forming regions of protoplanetary disks are thought to span distances of 1-20 AU from typical stars. A key observational challenge of the next decade will be to understand the links between the formation of planets in protoplanetary disks and the chemical composition of exoplanets. Potentially habitable planets in particular are likely formed by solids growing within radii of a few AU, augmented by unknown contributions from volatiles formed at larger radii of 10-50 AU. The basic chemical composition of these inner disk regions is characterized by near- to far-infrared (2-200 micron) emission lines from molecular gas at temperatures of 50-1500 K. A critical step toward measuring the chemical composition of planet-forming regions is therefore to convert observed infrared molecular line fluxes, profiles and images to gas temperatures, densities and molecular abundances. However, current techniques typically employ approximate radiative transfer methods and assumptions of local thermodynamic equilibrium (LTE) to retrieve abundances, leading to uncertainties of orders of magnitude and inconclusive comparisons to chemical models. Ultimately, the scientific impact of the high quality spectroscopic data expected from the James Webb Space Telescope (JWST) will be limited by the availability of radiative transfer tools for infrared molecular lines. We propose to develop a numerically accurate, non-LTE 3D line radiative transfer code, needed to interpret mid-infrared molecular line observations of protoplanetary and debris disks in preparation for the James Webb Space Telescope (JWST). This will be accomplished by adding critical functionality to the existing Monte Carlo code LIME, which was originally developed to support (sub)millimeter interferometric observations. 
In contrast to existing infrared codes, LIME calculates the exact statistical balance of arbitrary collections of molecular lines, and does not use large velocity gradient (LVG) or escape probability approximations. However, to use LIME for infrared line radiative transfer, new functionality must be added and tested, such as dust scattering, UV fluorescence, and interfaces with public state-of-the-art 3D dust radiative transfer codes (e.g., RADMC3D) and thermo-chemical codes (e.g., ProDiMo). Infrared transitions of molecules expected to be ubiquitous in JWST spectra, including water, OH, CO2, NH3, CH4, HCN, etc., currently do not have good databases applicable to astrophysical modeling and protoplanetary disks. Obtaining accurate solutions of the non-LTE line transfer problem in 3D in the infrared is computationally intensive. We propose to benchmark the new code relative to existing, approximate methods to determine whether they are accurate, and under what conditions. We will also create conversion tables between mid-infrared line strengths of water, OH, CH4, NH3, CH3OH, CO2 and other species expected to be observed with JWST, and their relative abundances in planet-forming regions. We propose to apply the new IR-LIME to retrieve molecular abundances from archival and new spectroscopic observations with Spitzer/Herschel/Keck/VLT of CO, water, OH and organic molecules, and to publish comprehensive tables of retrieved molecular abundances in protoplanetary disks. The proposed research is relevant to the XRP call, since it addresses a critical step in inferring the chemical abundances of planet-forming material, which in turn can be compared to the observed compositions of exoplanets, thereby improving our understanding of the origins of exoplanetary systems. The proposed research is particularly timely as the first JWST science data are expected to become available toward the end of the three-year duration of the project.

  11. Observations on computational methodologies for use in large-scale, gradient-based, multidisciplinary design incorporating advanced CFD codes

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Hou, G. J.-W.; Jones, H. E.; Taylor, A. C., III; Korivi, V. M.

    1992-01-01

    How a combination of various computational methodologies could reduce the enormous computational costs envisioned in using advanced CFD codes in gradient-based, optimized multidisciplinary design (MdD) procedures is briefly outlined. Implications of these MdD requirements upon advanced CFD codes are somewhat different from those imposed by a single-discipline design. A means for satisfying these MdD requirements for gradient information is presented which appears to permit: (1) some leeway in the CFD solution algorithms which can be used; (2) an extension to 3-D problems; and (3) straightforward use of other computational methodologies. Many of these observations have previously been discussed as possibilities for doing parts of the problem more efficiently; the contribution here is observing how they fit together in a mutually beneficial way.

  12. Scalable video transmission over Rayleigh fading channels using LDPC codes

    NASA Astrophysics Data System (ADS)

    Bansal, Manu; Kondi, Lisimachos P.

    2005-03-01

    In this paper, we investigate an important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of the video codec based on 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of the constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. Cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use the product code structure consisting of a constant rate LDPC/CRC code across the rows of the `blocks' of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both the schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform the more conventional schemes such as those employing RCPC/CRC.
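The Lagrangian rate-distortion selection step the paper describes can be sketched as follows; this is a toy illustration, not the authors' implementation, and the (rate, distortion) operating points are invented.

```python
# Hypothetical sketch of Lagrangian rate-distortion optimization: pick the
# (rate, distortion) operating point minimizing D + lambda * R. The
# operating points below are illustrative (rate in kbps, distortion in MSE).

def lagrangian_select(options, lam):
    """options: list of (rate, distortion) pairs; minimize D + lam * R."""
    return min(options, key=lambda rd: rd[1] + lam * rd[0])

options = [(100, 40.0), (200, 18.0), (400, 9.0), (800, 5.0)]
for lam in (0.001, 0.05, 0.2):
    r, d = lagrangian_select(options, lam)
    print(lam, r, d)   # higher lambda penalizes rate more heavily
```

Sweeping the multiplier trades rate against distortion; in the paper this selection runs jointly over source coding and channel coding rates under the given channel conditions.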

  13. Progress report on PIXIE3D, a fully implicit 3D extended MHD solver

    NASA Astrophysics Data System (ADS)

    Chacon, Luis

    2008-11-01

In a recent invited talk at DPP07, an optimal, massively parallel implicit algorithm for 3D resistive magnetohydrodynamics (PIXIE3D) was demonstrated. Excellent algorithmic and parallel results were obtained with up to 4096 processors and 138 million unknowns. While this is a remarkable result, further developments are still needed for PIXIE3D to become a 3D extended MHD production code in general geometries. In this poster, we present an update on the status of PIXIE3D on several fronts. On the physics side, we will describe our progress towards the full Braginskii model, including electron Hall terms, anisotropic heat conduction, and gyroviscous corrections. Algorithmically, we will discuss progress towards a robust, optimal, nonlinear solver for arbitrary geometries, including preconditioning for the new physical effects described, the implementation of a coarse processor-grid solver (to maintain optimal algorithmic performance for an arbitrarily large number of processors in massively parallel computations), and of a multiblock capability to deal with complicated geometries. L. Chacón, Phys. Plasmas 15, 056103 (2008).

  14. Predicting radiative heat transfer in thermochemical nonequilibrium flow fields. Theory and user's manual for the LORAN code

    NASA Technical Reports Server (NTRS)

    Chambers, Lin Hartung

    1994-01-01

    The theory for radiation emission, absorption, and transfer in a thermochemical nonequilibrium flow is presented. The expressions developed reduce correctly to the limit at equilibrium. To implement the theory in a practical computer code, some approximations are used, particularly the smearing of molecular radiation. Details of these approximations are presented and helpful information is included concerning the use of the computer code. This user's manual should benefit both occasional users of the Langley Optimized Radiative Nonequilibrium (LORAN) code and those who wish to use it to experiment with improved models or properties.

  15. An Advanced simulation Code for Modeling Inductive Output Tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thuc Bui; R. Lawrence Ives

    2012-04-27

During the Phase I program, CCR completed several major building blocks for a 3D large signal, inductive output tube (IOT) code using modern computer language and programming techniques. These included a 3D, Helmholtz, time-harmonic, field solver with a fully functional graphical user interface (GUI), automeshing and adaptivity. Other building blocks included the improved electrostatic Poisson solver with temporal boundary conditions to provide temporal fields for the time-stepping particle pusher as well as the self electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self magnetic field caused by time-changing current density in the output cavity gap. The goal function to optimize an IOT cavity was also formulated, and the optimization methodologies were investigated.

  16. TRO-2D - A code for rational transonic aerodynamic optimization

    NASA Technical Reports Server (NTRS)

    Davis, W. H., Jr.

    1985-01-01

Features and sample applications of the transonic rational optimization (TRO-2D) code are outlined. TRO-2D includes the airfoil analysis code FLO-36, the CONMIN optimization code and a rational approach to defining aero-function shapes for geometry modification. The program is part of an effort to develop an aerodynamically smart optimizer that will simplify and shorten the design process. The user can select drag minimization with associated minimum lift, moment, and pressure distribution requirements, choose among 14 resident aero-function shapes, and set options on aerodynamic and geometric constraints. Design variables such as the angle of attack, leading edge radius and camber, shock strength and movement, supersonic pressure plateau control, etc., are discussed. The results of calculations for a transonic airfoil with reduced leading-edge camber and for a natural-laminar-flow airfoil are provided, showing that only four design variables need be specified to obtain satisfactory results.

  17. Computational microscopy: illumination coding and nonlinear optimization enables gigapixel 3D phase imaging

    NASA Astrophysics Data System (ADS)

    Tian, Lei; Waller, Laura

    2017-05-01

Microscope lenses can have either large field of view (FOV) or high resolution, not both. Computational microscopy based on illumination coding circumvents this limit by fusing images from different illumination angles using nonlinear optimization algorithms. The result is a Gigapixel-scale image having both wide FOV and high resolution. We demonstrate an experimentally robust reconstruction algorithm based on a second-order quasi-Newton method, combined with a novel phase initialization scheme. To further extend the Gigapixel imaging capability to 3D, we develop a reconstruction method to process the 4D light field measurements from sequential illumination scanning. The algorithm is based on a 'multislice' forward model that incorporates both 3D phase and diffraction effects, as well as multiple forward scatterings. To solve the inverse problem, an iterative update procedure that combines both phase retrieval and 'error back-propagation' is developed. To avoid local minimum solutions, we further develop a novel physical model-based initialization technique that accounts for both the geometric-optic and first-order phase effects. The result is robust reconstructions of Gigapixel 3D phase images having both wide FOV and super resolution in all three dimensions. Experimental results from an LED array microscope are demonstrated.
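The paper's multislice solver with error back-propagation is far more elaborate, but the alternating-projection core that phase retrieval methods build on can be sketched on synthetic data. The object, constraints, and iteration count below are assumptions for illustration only.

```python
import numpy as np

# Alternating-projection phase retrieval sketch (Gerchberg-Saxton style):
# enforce the "measured" Fourier magnitudes, then positivity in real space.
# Object and measurement are synthetic; not the paper's multislice method.

rng = np.random.default_rng(1)
obj = np.zeros((32, 32))
obj[12:20, 12:20] = rng.random((8, 8))     # synthetic compact object
meas_mag = np.abs(np.fft.fft2(obj))        # "measured" Fourier magnitudes

est = rng.random(obj.shape)                # random initial guess
for _ in range(200):
    F = np.fft.fft2(est)
    F = meas_mag * np.exp(1j * np.angle(F))   # Fourier-magnitude constraint
    est = np.fft.ifft2(F).real
    est = np.clip(est, 0.0, None)             # positivity constraint
residual = np.linalg.norm(np.abs(np.fft.fft2(est)) - meas_mag)
print(residual)   # data mismatch remaining after the iterations
```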

  18. Investigation of electronic band structure and charge transfer mechanism of oxidized three-dimensional graphene as metal-free anodes material for dye sensitized solar cell application

    NASA Astrophysics Data System (ADS)

    Loeblein, Manuela; Bruno, Annalisa; Loh, G. C.; Bolker, Asaf; Saguy, Cecile; Antila, Liisa; Tsang, Siu Hon; Teo, Edwin Hang Tong

    2017-10-01

Dye-sensitized solar cells (DSSCs) offer an optimal trade-off between conversion efficiency and low-cost fabrication. However, since all of their electrodes must fulfill stringent work-function requirements, DSSC materials have remained unchanged since the first report in the early 1990s. Here we describe a new material, oxidized three-dimensional graphene (o-3D-C), with a band gap of 0.2 eV and an electronic band structure suitable as an alternative metal-free material for DSSC anodes. The o-3D-C/dye complex exhibits strong chemical bonding via carboxylic-group chemisorption, with full saturation after 12 s at a capacity of ∼450 mg/g (600x faster and 7x higher than optimized metal surfaces). Furthermore, a 28-35% quenching of the fluorescence lifetime was measured, demonstrating charge transfer from the dye to o-3D-C.

  19. Glenn-ht/bem Conjugate Heat Transfer Solver for Large-scale Turbomachinery Models

    NASA Technical Reports Server (NTRS)

    Divo, E.; Steinthorsson, E.; Rodriquez, F.; Kassab, A. J.; Kapat, J. S.; Heidmann, James D. (Technical Monitor)

    2003-01-01

A coupled Boundary Element/Finite Volume Method temperature-forward/flux-back algorithm is developed for conjugate heat transfer (CHT) applications. A loosely coupled strategy is adopted with each field solution providing boundary conditions for the other in an iteration seeking continuity of temperature and heat flux at the fluid-solid interface. The NASA Glenn Navier-Stokes code Glenn-HT is coupled to a 3-D BEM steady state heat conduction code developed at the University of Central Florida. Results from CHT simulation of a 3-D film-cooled blade section are presented and compared with those computed by a two-temperature approach. Also presented are current developments of an iterative domain decomposition strategy accommodating large numbers of unknowns in the BEM. The blade is artificially sub-sectioned in the span-wise direction, 3-D BEM solutions are obtained in the subdomains, and interface temperatures are averaged symmetrically when the flux is updated while the fluxes are averaged anti-symmetrically to maintain continuity of heat flux when the temperatures are updated. An initial guess for interface temperatures uses a physically-based 1-D conduction argument to provide an effective starting point and significantly reduce iteration. 2-D and 3-D results show the process converges efficiently and offers substantial computational and storage savings. Future developments include a parallel multi-grid implementation of the approach under MPI for computation on PC clusters.
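The loosely coupled temperature-forward/flux-back iteration can be reduced to a one-dimensional toy: a "fluid" solver returns wall heat flux for a given wall temperature, a "solid" solver returns wall temperature for a given flux, and the loop seeks continuity of both with under-relaxation. All property values below are illustrative.

```python
# Toy sketch of the loosely coupled CHT iteration: the "fluid" side stands
# in for the CFD solver, the "solid" side for the BEM conduction solver.
# All property values are illustrative.

h, T_fluid = 500.0, 1400.0          # convection coeff [W/m^2K], gas temp [K]
k, L, T_base = 20.0, 0.005, 600.0   # conductivity, thickness, back-face temp

def fluid_flux(T_wall):             # "CFD" side: convective flux into wall
    return h * (T_fluid - T_wall)

def solid_temperature(q):           # "BEM" side: 1-D conduction through slab
    return T_base + q * L / k

T_wall, relax = 1000.0, 0.5
for _ in range(50):
    q = fluid_flux(T_wall)                  # temperature-forward step
    T_new = solid_temperature(q)            # flux-back step
    T_wall += relax * (T_new - T_wall)      # under-relaxed update
print(round(T_wall, 2), round(fluid_flux(T_wall), 1))
```

At convergence the convective flux equals the conductive flux through the slab, which is exactly the interface continuity the BEM/FVM coupling enforces.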

  20. Computer codes developed and under development at Lewis

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1992-01-01

    The objective of this summary is to provide a brief description of: (1) codes developed or under development at LeRC; and (2) the development status of IPACS with some typical early results. The computer codes that have been developed and/or are under development at LeRC are listed in the accompanying charts. This list includes: (1) the code acronym; (2) select physics descriptors; (3) current enhancements; and (4) present (9/91) code status with respect to its availability and documentation. The computer codes list is grouped by related functions such as: (1) composite mechanics; (2) composite structures; (3) integrated and 3-D analysis; (4) structural tailoring; and (5) probabilistic structural analysis. These codes provide a broad computational simulation infrastructure (technology base-readiness) for assessing the structural integrity/durability/reliability of propulsion systems. These codes serve two other very important functions: they provide an effective means of technology transfer; and they constitute a depository of corporate memory.

  1. MINIVER: Miniature version of real/ideal gas aero-heating and ablation computer program

    NASA Technical Reports Server (NTRS)

    Hendler, D. R.

    1976-01-01

The computer code provides heat transfer multiplication factors, special flow-field simulation techniques, a choice of heat transfer methods and transition criteria, crossflow simulation, and a more efficient thin-skin thickness optimization procedure.

  2. Py4CAtS - Python tools for line-by-line modelling of infrared atmospheric radiative transfer

    NASA Astrophysics Data System (ADS)

    Schreier, Franz; García, Sebastián Gimeno

    2013-05-01

    Py4CAtS — Python scripts for Computational ATmospheric Spectroscopy is a Python re-implementation of the Fortran infrared radiative transfer code GARLIC, where compute-intensive code sections utilize the Numeric/Scientific Python modules for highly optimized array-processing. The individual steps of an infrared or microwave radiative transfer computation are implemented in separate scripts to extract lines of relevant molecules in the spectral range of interest, to compute line-by-line cross sections for given pressure(s) and temperature(s), to combine cross sections to absorption coefficients and optical depths, and to integrate along the line-of-sight to transmission and radiance/intensity. The basic design of the package, numerical and computational aspects relevant for optimization, and a sketch of the typical workflow are presented.
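The workflow Py4CAtS modularizes can be sketched end-to-end for a single line. The sketch below uses a Gaussian (Doppler) profile instead of the Voigt function and invented line parameters, so it illustrates the pipeline rather than the package's API.

```python
import numpy as np

# Minimal line-by-line sketch (not the Py4CAtS API): Doppler (Gaussian)
# line profile -> absorption cross section -> optical depth -> transmission.
# The line parameters below are illustrative, not from a spectroscopic
# database.

def gauss_profile(nu, nu0, gamma_d):
    """Area-normalized Gaussian line shape centred at nu0 [cm^-1]."""
    return np.exp(-((nu - nu0) / gamma_d) ** 2) / (gamma_d * np.sqrt(np.pi))

nu = np.linspace(1990.0, 2010.0, 2001)       # wavenumber grid [cm^-1]
S, nu0, gamma_d = 1e-19, 2000.0, 0.05        # line strength, centre, width
xs = S * gauss_profile(nu, nu0, gamma_d)     # cross section [cm^2]
column = 1e21                                # column density [cm^-2]
tau = xs * column                            # optical depth
transmission = np.exp(-tau)                  # Beer-Lambert law
print(transmission.min())                    # saturated at line centre
```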

  3. Three-dimensional integral imaging displays using a quick-response encoded elemental image array: an overview

    NASA Astrophysics Data System (ADS)

    Markman, A.; Javidi, B.

    2016-06-01

    Quick-response (QR) codes are barcodes that can store information such as numeric data and hyperlinks. The QR code can be scanned using a QR code reader, such as those built into smartphone devices, revealing the information stored in the code. Moreover, the QR code is robust to noise, rotation, and illumination when scanning due to error correction built in the QR code design. Integral imaging is an imaging technique used to generate a three-dimensional (3D) scene by combining the information from two-dimensional (2D) elemental images (EIs) each with a different perspective of a scene. Transferring these 2D images in a secure manner can be difficult. In this work, we overview two methods to store and encrypt EIs in multiple QR codes. The first method uses run-length encoding with Huffman coding and the double-random-phase encryption (DRPE) to compress and encrypt an EI. This information is then stored in a QR code. An alternative compression scheme is to perform photon-counting on the EI prior to compression. Photon-counting is a non-linear transformation of data that creates redundant information thus improving image compression. The compressed data is encrypted using the DRPE. Once information is stored in the QR codes, it is scanned using a smartphone device. The information scanned is decompressed and decrypted and an EI is recovered. Once all EIs have been recovered, a 3D optical reconstruction is generated.
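The double-random-phase encryption (DRPE) step can be sketched with NumPy on a toy array standing in for an elemental image; the two random phase masks act as the keys.

```python
import numpy as np

# Sketch of double-random-phase encryption (DRPE) on a toy 2-D array
# standing in for an elemental image. The random phase masks are the keys;
# decryption reverses the two Fourier steps.

rng = np.random.default_rng(0)
img = rng.random((8, 8))                           # toy elemental image
phi1 = np.exp(2j * np.pi * rng.random(img.shape))  # input-plane mask (key 1)
phi2 = np.exp(2j * np.pi * rng.random(img.shape))  # Fourier-plane mask (key 2)

cipher = np.fft.ifft2(np.fft.fft2(img * phi1) * phi2)       # encrypt
recovered = np.fft.ifft2(np.fft.fft2(cipher) / phi2) / phi1  # decrypt
print(np.allclose(recovered.real, img))            # exact with correct keys
```

In the paper this encryption is combined with compression before the result is stored across multiple QR codes.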

  4. From GCode to STL: Reconstruct Models from 3D Printing as a Service

    NASA Astrophysics Data System (ADS)

    Baumann, Felix W.; Schuermann, Martin; Odefey, Ulrich; Pfeil, Markus

    2017-12-01

The authors present a method to reverse engineer 3D printer specific machine instructions (GCode) to a point cloud representation and then an STL (Stereolithography) file format. GCode is a machine code that is used for 3D printing among other applications, such as CNC routers. Such code files contain instructions for the 3D printer to move and control its actuator, in the case of Fused Deposition Modeling (FDM), the printhead that extrudes semi-molten plastics. The reverse engineering method presented here is based on the digital simulation of the extrusion process of FDM type 3D printing. The reconstructed models and point clouds do not account for hollow structures, such as holes or cavities. The implementation is performed in Python and relies on open source software and libraries, such as Matplotlib and OpenCV. The reconstruction is performed on the model’s extrusion boundary and considers mechanical imprecision. The complete reconstruction mechanism is available as a RESTful (Representational State Transfer) Web service.
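A minimal version of the GCode-to-point-cloud step can be sketched as below. This is a hypothetical simplification (the authors' tool also simulates the extrusion boundary and mechanical imprecision), and the demo GCode lines are invented.

```python
import re

# Hypothetical minimal GCode-to-point-cloud sketch (not the authors' tool):
# collect the XYZ endpoints of extruding G1 moves into a point cloud.

def gcode_to_points(lines):
    pos = {"X": 0.0, "Y": 0.0, "Z": 0.0}
    last_e, points = 0.0, []
    for line in lines:
        if not line.startswith(("G0", "G1")):
            continue                                  # skip non-move lines
        words = dict(re.findall(r"([XYZE])([-+]?\d*\.?\d+)", line))
        e = float(words.get("E", last_e))
        pos.update({k: float(v) for k, v in words.items() if k in pos})
        if line.startswith("G1") and e > last_e:      # extruding move
            points.append((pos["X"], pos["Y"], pos["Z"]))
        last_e = e
    return points

demo = ["G0 X0 Y0 Z0.2", "G1 X10 Y0 E1.0", "G1 X10 Y10 E2.0", "G0 X0 Y0"]
print(gcode_to_points(demo))   # endpoints of the two extruded segments
```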

  5. A new 3D maser code applied to flaring events

    NASA Astrophysics Data System (ADS)

    Gray, M. D.; Mason, L.; Etoka, S.

    2018-06-01

    We set out the theory and discretization scheme for a new finite-element computer code, written specifically for the simulation of maser sources. The code was used to compute fractional inversions at each node of a 3D domain for a range of optical thicknesses. Saturation behaviour of the nodes with regard to location and optical depth was broadly as expected. We have demonstrated via formal solutions of the radiative transfer equation that the apparent size of the model maser cloud decreases as expected with optical depth as viewed by a distant observer. Simulations of rotation of the cloud allowed the construction of light curves for a number of observable quantities. Rotation of the model cloud may be a reasonable model for quasi-periodic variability, but cannot explain periodic flaring.
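The formal solution used to image the model cloud can be sketched for a single ray. Emissivity and opacity below are constant and illustrative; a negative opacity would represent maser amplification.

```python
import numpy as np

# Sketch of the formal solution of the radiative transfer equation,
# dI/ds = j - kappa * I, integrated cell-by-cell along one ray, as a 3-D
# code does when imaging the model for a distant observer. The values of
# j and kappa are illustrative.

def formal_solution(j, kappa, ds, I0=0.0):
    I = I0
    for ji, ki in zip(j, kappa):
        tau = ki * ds                  # optical depth of this cell
        S = ji / ki                    # source function
        # exact solution across a cell with constant source function
        I = I * np.exp(-tau) + S * (1.0 - np.exp(-tau))
    return I

n = 100
I_thin = formal_solution(np.full(n, 1e-8), np.full(n, 1e-3), ds=1.0)
I_thick = formal_solution(np.full(n, 1e-8), np.full(n, 1e3), ds=1.0)
print(I_thin, I_thick)   # the optically thick ray saturates at S = j/kappa
```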

  6. Hierarchical meso/macro-porous carbon fabricated from dual MgO templates for direct electron transfer enzymatic electrodes.

    PubMed

    Funabashi, Hiroto; Takeuchi, Satoshi; Tsujimura, Seiya

    2017-03-23

We designed a three-dimensional (3D) hierarchical pore structure to improve the current production efficiency and stability of direct electron transfer-type biocathodes. The 3D hierarchical electrode structure was fabricated using a MgO-templated porous carbon framework produced from two MgO templates with sizes of 40 and 150 nm. The results revealed that the optimal pore composition for a bilirubin oxidase-catalysed oxygen reduction cathode was a mixture of 33% macropores and 67% mesopores (MgOC33). The macropores improve mass transfer inside the carbon material, and the mesopores improve the electron transfer efficiency of the enzyme by surrounding the enzyme with carbon.

  7. Hierarchical meso/macro-porous carbon fabricated from dual MgO templates for direct electron transfer enzymatic electrodes

    NASA Astrophysics Data System (ADS)

    Funabashi, Hiroto; Takeuchi, Satoshi; Tsujimura, Seiya

    2017-03-01

    We designed a three-dimensional (3D) hierarchical pore structure to improve the current production efficiency and stability of direct electron transfer-type biocathodes. The 3D hierarchical electrode structure was fabricated using a MgO-templated porous carbon framework produced from two MgO templates with sizes of 40 and 150 nm. The results revealed that the optimal pore composition for a bilirubin oxidase-catalysed oxygen reduction cathode was a mixture of 33% macropores and 67% mesopores (MgOC33). The macropores improve mass transfer inside the carbon material, and the mesopores improve the electron transfer efficiency of the enzyme by surrounding the enzyme with carbon.

  8. SOAP and the Interstellar Froth

    NASA Astrophysics Data System (ADS)

    Tüllmann, R.; Rosa, M. R.; Dettmar, R.-J.

    2005-06-01

We investigate whether the alleged failure of standard photoionization codes to match the Diffuse Ionized Gas (DIG) is simply caused by geometrical effects and the insufficient treatment of the radiative transfer. Standard photoionization models are applicable only to homogeneous and spherically symmetric nebulae with central ionizing stars, whereas the geometry of disk galaxies requires a 3D distribution of ionizing sources in the disk which illuminate the halo. This change in geometry together with a proper radiative transfer model is expected to substantially influence ionization conditions. Therefore, we developed a new and sophisticated 3D Monte Carlo photoionization code, called SOAP (Simulations Of Astrophysical Plasmas), by adapting an existing 1D code for HII regions such that it self-consistently models a 3D disk galaxy with a gaseous DIG halo. First results from a simple (dust-free) model with exponentially decreasing gas densities are presented and the predicted ionization structure of disk and halo are discussed. Theoretical line ratios agree well with observed ones, e.g., for the halo of NGC 891. Moreover, the fraction of ionizing photons leaving the halo of the galaxy is plotted as a function of varying gas densities. This quantity will be of particular importance for forthcoming studies, because rough estimates indicate that about 7% of ionizing photons escape from the halo and contribute to the ionization of the IGM. Given the relatively large number density of normal spiral galaxies, OB-stars could have a much stronger impact on the ionization of the IGM than AGN or QSOs.

  9. User's manual for three dimensional boundary layer (BL3-D) code

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Caplin, B.

    1985-01-01

An assessment has been made of the applicability of a 3-D boundary layer analysis to the calculation of heat transfer, total pressure losses, and streamline flow patterns on the surface of both stationary and rotating turbine passages. In support of this effort, an analysis has been developed to calculate a general nonorthogonal surface coordinate system for arbitrary 3-D surfaces and also to calculate the boundary layer edge conditions for compressible flow using the surface Euler equations. Using experimental data to calibrate the method, calculations are presented for the pressure, endwall, and suction surfaces of a stationary cascade and for the pressure surface of a rotating turbine blade. The results strongly indicate that the 3-D boundary layer analysis can give good predictions of the flow field, loss, and heat transfer on the pressure, suction, and endwall surfaces of a gas turbine passage.

  10. Leading edge film cooling effects on turbine blade heat transfer

    NASA Technical Reports Server (NTRS)

    Garg, Vijay K.; Gaugler, Raymond E.

    1995-01-01

    An existing three dimensional Navier-Stokes code, modified to include film cooling considerations, has been used to study the effect of spanwise pitch of shower-head holes and coolant to mainstream mass flow ratio on the adiabatic effectiveness and heat transfer coefficient on a film-cooled turbine vane. The mainstream is akin to that under real engine conditions with stagnation temperature = 1900 K and stagnation pressure = 3 MPa. It is found that with the coolant to mainstream mass flow ratio fixed, reducing P, the spanwise pitch for shower-head holes, from 7.5 d to 3.0 d, where d is the hole diameter, increases the average effectiveness considerably over the blade surface. However, when P/d= 7.5, increasing the coolant mass flow increases the effectiveness on the pressure surface but reduces it on the suction surface due to coolant jet lift-off. For P/d = 4.5 or 3.0, such an anomaly does not occur within the range of coolant to mainstream mass flow ratios analyzed. In all cases, adiabatic effectiveness and heat transfer coefficient are highly three-dimensional.

  11. A Rocket Engine Design Expert System

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth J.

    1989-01-01

    The overall structure and capabilities of an expert system designed to evaluate rocket engine performance are described. The expert system incorporates a JANNAF standard reference computer code to determine rocket engine performance and a state of the art finite element computer code to calculate the interactions between propellant injection, energy release in the combustion chamber, and regenerative cooling heat transfer. Rule-of-thumb heuristics were incorporated for the H2-O2 coaxial injector design, including a minimum gap size constraint on the total number of injector elements. One dimensional equilibrium chemistry was used in the energy release analysis of the combustion chamber. A 3-D conduction and/or 1-D advection analysis is used to predict heat transfer and coolant channel wall temperature distributions, in addition to coolant temperature and pressure drop. Inputting values to describe the geometry and state properties of the entire system is done directly from the computer keyboard. Graphical display of all output results from the computer code analyses is facilitated by menu selection of up to five dependent variables per plot.
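The 1-D advection part of the coolant analysis amounts to marching an energy balance along the channel. The sketch below uses invented property values and an imposed wall heat flux, not the expert system's correlations.

```python
# Sketch of a 1-D advection coolant-channel model of the kind used in
# regenerative cooling analyses: march the coolant temperature along the
# channel under an imposed wall heat flux. All values are illustrative.

q = 2.0e6                   # wall heat flux [W/m^2]
perim = 0.01                # heated perimeter per channel [m]
mdot, cp = 0.05, 14300.0    # coolant mass flow [kg/s], specific heat [J/kgK]
dx, nsteps = 0.01, 30       # 30 cm channel marched in 1 cm steps

T = 40.0                    # inlet coolant temperature [K]
for _ in range(nsteps):
    T += q * perim * dx / (mdot * cp)   # energy balance per step
print(round(T, 2))          # coolant outlet temperature [K]
```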

  12. SU-E-T-590: Optimizing Magnetic Field Strengths with Matlab for An Ion-Optic System in Particle Therapy Consisting of Two Quadrupole Magnets for Subsequent Simulations with the Monte-Carlo Code FLUKA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumann, K; Weber, U; Simeonov, Y

Purpose: Aim of this study was to optimize the magnetic field strengths of two quadrupole magnets in a particle therapy facility in order to obtain a beam quality suitable for spot beam scanning. Methods: The particle transport through an ion-optic system of a particle therapy facility consisting of the beam tube, two quadrupole magnets and a beam monitor system was calculated with the help of Matlab by using matrices that solve the equation of motion of a charged particle in a magnetic field and field-free region, respectively. The magnetic field strengths were optimized in order to obtain a circular and thin beam spot at the iso-center of the therapy facility. These optimized field strengths were subsequently transferred to the Monte-Carlo code FLUKA and the transport of 80 MeV/u C12-ions through this ion-optic system was calculated by using a user-routine to implement magnetic fields. The fluence along the beam-axis and at the iso-center was evaluated. Results: The magnetic field strengths could be optimized by using Matlab and transferred to the Monte-Carlo code FLUKA. The implementation via a user-routine was successful. Analyzing the fluence-pattern along the beam-axis the characteristic focusing and de-focusing effects of the quadrupole magnets could be reproduced. Furthermore the beam spot at the iso-center was circular and significantly thinner compared to an unfocused beam. Conclusion: In this study a Matlab tool was developed to optimize magnetic field strengths for an ion-optic system consisting of two quadrupole magnets as part of a particle therapy facility. These magnetic field strengths could subsequently be transferred to and implemented in the Monte-Carlo code FLUKA to simulate the particle transport through this optimized ion-optic system.
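The matrix transport the Methods section describes can be sketched in a few lines (shown here in Python rather than Matlab, for a single transverse plane; the drift lengths and quadrupole strength are invented).

```python
import numpy as np

# Sketch of linear beam transport with 2x2 matrices for one transverse
# plane: drift and focusing-quadrupole matrices multiplied in beamline
# order map the phase-space vector (x, x'). Lengths and the quadrupole
# strength k are illustrative.

def drift(L):
    """Field-free drift of length L [m]."""
    return np.array([[1.0, L], [0.0, 1.0]])

def quad_focusing(k, L):
    """Focusing quadrupole, strength k [m^-2], length L [m]."""
    w = np.sqrt(k)
    return np.array([[np.cos(w * L),      np.sin(w * L) / w],
                     [-w * np.sin(w * L), np.cos(w * L)]])

# Transport: drift -> quadrupole -> drift (matrices multiply right-to-left)
M = drift(1.0) @ quad_focusing(2.5, 0.3) @ drift(0.5)
x0 = np.array([1e-3, 0.0])     # 1 mm offset, zero divergence at the source
print(M @ x0)                  # phase-space vector at the iso-center
```

The 2x2 matrices have unit determinant; optimizing the two quadrupole strengths then amounts to tuning the entries of M so the spot at the iso-center is small in both planes.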

  13. Entanglement entropy from tensor network states for stabilizer codes

    NASA Astrophysics Data System (ADS)

    He, Huan; Zheng, Yunqin; Bernevig, B. Andrei; Regnault, Nicolas

    2018-03-01

    In this paper, we present the construction of tensor network states (TNS) for some of the degenerate ground states of three-dimensional (3D) stabilizer codes. We then use the TNS formalism to obtain the entanglement spectrum and entropy of these ground states for some special cuts. In particular, we work out examples of the 3D toric code, the X-cube model, and the Haah code. The latter two models belong to the category of "fracton" models proposed recently, while the first one belongs to the conventional topological phases. We mention the cases for which the entanglement entropy and spectrum can be calculated exactly: For these, the constructed TNS is a singular value decomposition (SVD) of the ground states with respect to particular entanglement cuts. Apart from the area law, the entanglement entropies also have constant and linear corrections for the fracton models, while the entanglement entropies for the toric code models only have constant corrections. For the cuts we consider, the entanglement spectra of these three models are completely flat. We also conjecture that the negative linear correction to the area law is a signature of extensive ground-state degeneracy. Moreover, the transfer matrices of these TNSs can be constructed. We show that the transfer matrices are projectors whose eigenvalues are either 1 or 0. The number of nonzero eigenvalues is tightly related to the ground-state degeneracy.

  14. Space Radiation Transport Code Development: 3DHZETRN

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2015-01-01

The space radiation transport code, HZETRN, has been used extensively for research, vehicle design optimization, risk analysis, and related applications. One of the simplifying features of the HZETRN transport formalism is the straight-ahead approximation, wherein all particles are assumed to travel along a common axis. This reduces the governing equation to one spatial dimension allowing enormous simplification and highly efficient computational procedures to be implemented. Despite the physical simplifications, the HZETRN code is widely used for space applications and has been found to agree well with fully 3D Monte Carlo simulations in many circumstances. Recent work has focused on the development of 3D transport corrections for neutrons and light ions (Z < 2) for which the straight-ahead approximation is known to be less accurate. Within the development of 3D corrections, well-defined convergence criteria have been considered, allowing approximation errors at each stage in model development to be quantified. The present level of development assumes the neutron cross sections have an isotropic component treated within N explicit angular directions and a forward component represented by the straight-ahead approximation. The N = 1 solution refers to the straight-ahead treatment, while N = 2 represents the bi-directional model in current use for engineering design. The figure below shows neutrons, protons, and alphas for various values of N at locations in an aluminum sphere exposed to a solar particle event (SPE) spectrum. The neutron fluence converges quickly in simple geometry with N > 14 directions. The improved code, 3DHZETRN, transports neutrons, light ions, and heavy ions under space-like boundary conditions through general geometry while maintaining a high degree of computational efficiency. A brief overview of the 3D transport formalism for neutrons and light ions is given, and extensive benchmarking results with the Monte Carlo codes Geant4, FLUKA, and PHITS are provided for a variety of boundary conditions and geometries. Improvements provided by the 3D corrections are made clear in the comparisons. Developments needed to connect 3DHZETRN to vehicle design and optimization studies will be discussed. Future theoretical development will relax the forward plus isotropic interaction assumption to more general angular dependence.

  15. Wakefield Simulation of CLIC PETS Structure Using Parallel 3D Finite Element Time-Domain Solver T3P

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A.; Kabel, A.; Lee, L.

    In recent years, SLAC's Advanced Computations Department (ACD) has developed the parallel 3D Finite Element electromagnetic time-domain code T3P. Higher-order Finite Element methods on conformal unstructured meshes and massively parallel processing allow unprecedented simulation accuracy for wakefield computations and simulations of transient effects in realistic accelerator structures. Applications include simulation of wakefield damping in the Compact Linear Collider (CLIC) power extraction and transfer structure (PETS).

  16. Optimization technique of wavefront coding system based on ZEMAX externally compiled programs

    NASA Astrophysics Data System (ADS)

    Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2016-10-01

Wavefront coding is a means of athermalization for infrared imaging systems, and the design of the phase plate is the key to system performance. This paper applies ZEMAX's externally compiled programs to the optimization of the phase mask within the normal optical design process: an evaluation function for the wavefront coding system is defined based on the consistency of the modulation transfer function (MTF), and optimization speed is improved by introducing mathematical software. The user writes an external program that computes the evaluation function, exploiting the computational power of the mathematical software to find the optimal parameters of the phase mask; convergence is accelerated with a genetic algorithm (GA), and a dynamic data exchange (DDE) interface between ZEMAX and the mathematical software provides high-speed data exchange. The optimization of a rotationally symmetric phase mask and of a cubic phase mask has been completed with this method: inserting the rotationally symmetric phase mask increases the depth of focus nearly 3 times, the cubic phase mask increases it up to 10 times, the consistency of the MTF improves markedly, and the optimized systems operate between -40°C and 60°C. Results show that, owing to the externally compiled functions and DDE, this method makes it convenient to define unconventional optimization goals and to rapidly optimize optical systems with special properties, which is of particular significance for the optimization of unconventional optical systems.
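The MTF-consistency evaluation function can be sketched numerically. The pupil sampling, cubic mask strength alpha, and defocus values below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

# Sketch of an MTF-consistency merit for wavefront coding (not the paper's
# implementation): compute the MTF of a pupil carrying a cubic phase mask
# at several defocus values and score how much the curves vary. The mask
# strength alpha and defocus range are assumed for illustration.

N = 128
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1.0).astype(float)   # circular aperture

def mtf(alpha, defocus):
    phase = alpha * (X**3 + Y**3) + defocus * (X**2 + Y**2)  # waves
    field = pupil * np.exp(2j * np.pi * phase)
    psf = np.abs(np.fft.fft2(field))**2          # incoherent PSF
    otf = np.abs(np.fft.fft2(psf))               # MTF = |FT of PSF|
    return otf / otf[0, 0]                       # normalize to DC

def merit(alpha):
    curves = np.stack([mtf(alpha, d) for d in (-2.0, 0.0, 2.0)])
    return float(np.mean(np.std(curves, axis=0)))  # lower = more consistent

m_plain, m_cubic = merit(0.0), merit(20.0)
print(m_plain, m_cubic)   # the cubic mask should reduce MTF variation
```

An optimizer, such as the GA the paper uses, would minimize this merit over the mask parameters.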

  17. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829

  18. Inner hydrogen atom transfer in benzo-fused low symmetrical metal-free tetraazaporphyrin and phthalocyanine analogues: density functional theory studies.

    PubMed

    Qi, Dongdong; Zhang, Yuexing; Cai, Xue; Jiang, Jianzhuang; Bai, Ming

    2009-02-01

    Density functional theory (DFT) calculations were carried out to study the inner hydrogen atom transfer in low symmetrical metal-free tetrapyrrole analogues ranging from tetraazaporphyrin H(2)TAP (A(0)B(0)C(0)D(0)) to naphthalocyanine H(2)Nc (A(2)B(2)C(2)D(2)) via phthalocyanine H(2)Pc (A(1)B(1)C(1)D(1)). All the transition paths of sixteen different compounds (A(0)B(0)C(0)D(0)-A(2)B(2)C(2)D(2) and A(0)B(0)C(m)D(n), m

  19. Influence of temperature fluctuations on infrared limb radiance: a new simulation code

    NASA Astrophysics Data System (ADS)

    Rialland, Valérie; Chervet, Patrick

    2006-08-01

    Airborne infrared limb-viewing detectors may be used as surveillance sensors to detect dim military targets. The performance of these systems is limited by the inhomogeneous background in the sensor field of view, which strongly affects the target detection probability. This background clutter, which results from small-scale fluctuations of temperature, density, or pressure, must therefore be analyzed and modeled. Few existing codes are able to model atmospheric structures and their impact on limb-observed radiance. SAMM-2 (SHARC-4 and MODTRAN4 Merged), the Air Force Research Laboratory (AFRL) background radiance code, can be used to predict the radiance fluctuation resulting from a normalized temperature fluctuation, as a function of the line-of-sight. Various realizations of cluttered backgrounds can then be computed, based on these transfer functions and on a stochastic temperature field. The existing SIG (SHARC Image Generator) code was designed to compute the cluttered background that would be observed from a space-based sensor. Unfortunately, this code was not able to compute accurate scenes as seen by an airborne sensor, especially for lines-of-sight close to the horizon. Recently, we developed a new code, BRUTE3D, adapted to our configuration; the approach is based on a method originally developed in the SIG model. The BRUTE3D code makes use of a three-dimensional grid of temperature fluctuations and of the SAMM-2 transfer functions to synthesize an image of radiance fluctuations according to the sensor characteristics. This paper details the working principles of the code and presents some output results. The effects of small-scale temperature fluctuations on infrared limb radiance as seen by an airborne sensor are highlighted.

  20. On FAST3D simulations of directly-driven inertial-fusion targets with high-Z layers for reducing laser imprint and surface non-uniformity growth

    NASA Astrophysics Data System (ADS)

    Bates, Jason; Schmitt, Andrew; Klapisch, Marcel; Karasik, Max; Obenschain, Steve

    2013-10-01

    Modifications to the FAST3D code have been made to enhance its ability to simulate the dynamics of plastic ICF targets with high-Z overcoats. This class of problems is challenging computationally due in part to plasma conditions that are not in a state of local thermodynamic equilibrium and to the presence of mixed computational cells containing more than one material. Recently, new opacity tables for gold, palladium and plastic have been generated with an improved version of the STA code. These improved tables provide smoother, higher-fidelity opacity data over a wider range of temperature and density states than before, and contribute to a more accurate treatment of radiative transfer processes in FAST3D simulations. Furthermore, a new, more efficient subroutine known as ``MMEOS'' has been installed in the FAST3D code for determining pressure and temperature equilibrium conditions within cells containing multiple materials. We will discuss these topics, and present new simulation results for high-Z planar-target experiments performed recently on the NIKE Laser Facility. Work supported by DOE/NNSA.

  1. Optimization of CO2 Storage in Saline Aquifers Using Water-Alternating Gas (WAG) Scheme - Case Study for Utsira Formation

    NASA Astrophysics Data System (ADS)

    Agarwal, R. K.; Zhang, Z.; Zhu, C.

    2013-12-01

    For optimization of CO2 storage and reduced CO2 plume migration in saline aquifers, a genetic algorithm (GA) based optimizer has been developed and combined with the DOE multi-phase flow and heat transfer numerical simulation code TOUGH2. Designated GA-TOUGH2, this combined solver/optimizer has been verified by performing optimization studies on a number of model problems and comparing the results with brute-force optimization, which requires a large number of simulations. Using GA-TOUGH2, an innovative reservoir engineering technique known as water-alternating-gas (WAG) injection has been investigated to determine the optimal WAG operation for enhanced CO2 storage capacity. The topmost layer (layer #9) of the Utsira formation at the Sleipner Project, Norway, is considered as a case study. A cylindrical domain is used that possesses characteristics identical to the detailed 3D Utsira Layer #9 model except for the absence of 3D topography. Topographical details are known to be important in determining the CO2 migration at Sleipner, and are considered in our companion model for history matching of the CO2 plume migration at Sleipner. However, the simplified topography used here, which does not compromise accuracy, is necessary to analyze the effectiveness of WAG operation on CO2 migration without incurring excessive computational cost; the selected WAG operation can then be simulated with full topographic detail later. We consider a cylindrical domain with a thickness of 35 m and a horizontal, flat caprock. All hydrogeological properties are retained from the detailed 3D Utsira Layer #9 model, the most important being the horizontal-to-vertical permeability ratio of 10. Constant Gas Injection (CGI) operation with a nine-year average CO2 injection rate of 2.7 kg/s is considered as the baseline case for comparison. The 30-day, 15-day, and 5-day WAG cycle durations are considered for the WAG optimization design.
Our computations show that for the simplified Utsira Layer #9 model, the WAG operation with a 5-day cycle leads to the most noticeable reduction in plume migration. For the 5-day WAG cycle, the values of the design variables corresponding to the optimal WAG operation are found to be an optimal CO2 injection rate ICO2,optimal = 11.56 kg/s and an optimal water injection rate Iwater,optimal = 7.62 kg/s. The durations of CO2 and water injection in one WAG cycle are 11 and 19 days, respectively. Identical WAG cycles are repeated 20 times to complete a two-year operation. A significant reduction (22%) in CO2 migration is achieved compared to CGI operation after only two years of WAG operation. In addition, CO2 dissolution is significantly enhanced, from about 9% to 22% of the total injected CO2. The results obtained from this and other optimization studies suggest that over 50% reduction of the in situ CO2 footprint, greatly enhanced CO2 dissolution, and significantly improved well injectivity can be achieved by employing GA-TOUGH2. The optimization code has also been employed to determine the optimal well placement in a multi-well injection operation. GA-TOUGH2 appears to hold great promise for studying a host of other optimization problems related to carbon storage.

  2. RF Simulation of the 187 MHz CW Photo-RF Gun Cavity at LBNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Tong-Ming

    2008-12-01

    A 187 MHz normal-conducting photo-RF gun cavity is designed for next generation light sources. The cavity is capable of operating in CW mode. A gap voltage as high as 750 kV can be achieved with a 20 MV/m acceleration gradient. The original cavity optimization was conducted using the 2D Superfish code by Staples. 104 vacuum pumping slots are added, evenly spaced over the cavity equator, in order to achieve a vacuum better than 10^-10 Torr. Two loop couplers will be used to feed RF power into the cavity. 3D simulations are necessary to study effects from the vacuum pumping slots, the couplers, and possible multipacting. The cavity geometry is optimized to minimize the power density and avoid multipacting at the operating field level. The vacuum slot dimensions are carefully chosen in consideration of the vacuum conduction, the local power density enhancement, and the power attenuation at the getter pumps. This technical note gives a summary of 3D RF simulation results, 2D multipacting simulations, and a preliminary electromagnetic-thermal analysis using the ANSYS code.

  3. Specification and Prediction of the Radiation Environment Using Data Assimilative VERB code

    NASA Astrophysics Data System (ADS)

    Shprits, Yuri; Kellerman, Adam

    2016-07-01

    We discuss how data assimilation can be used for the reconstruction of long-term evolution, benchmarking of physics-based codes, and improved nowcasting and forecasting of the radiation belts and ring current. We also discuss advanced data assimilation methods such as parameter estimation and smoothing, and present a number of data assimilation applications using the VERB 3D code. The 3D data-assimilative VERB allows us to blend together data from GOES, RBSP A, and RBSP B. 1) The model with data assimilation allows us to propagate data to different pitch angles, energies, and L-shells and blends them together with the physics-based VERB code in an optimal way. We illustrate how to use this capability for the analysis of previous events and for obtaining a global and statistical view of the system. 2) The model predictions strongly depend on the initial conditions that are set up for the model; the model is only as good as the initial conditions it uses. To produce the best possible initial conditions, data from different sources (GOES, RBSP A and B, and our empirical model predictions based on ACE) are all blended together in an optimal way by means of data assimilation, as described above. The resulting initial conditions have no gaps, which allows us to make more accurate predictions. A real-time prediction framework operating on our website, based on GOES, RBSP A and B, and ACE data and on 3D VERB, is presented and discussed.
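    The "optimal blend" of a model forecast with spacecraft observations that this record describes is the core idea of Kalman-type data assimilation. A minimal scalar analysis step, purely illustrative and not the actual VERB assimilation scheme, weights the two sources by their error variances:

```python
def kalman_update(x_model, p_model, obs, r_obs):
    """Blend a model forecast (error variance p_model) with an observation
    (error variance r_obs); the gain favors whichever is more certain."""
    k = p_model / (p_model + r_obs)          # Kalman gain in [0, 1]
    x_analysis = x_model + k * (obs - x_model)
    p_analysis = (1 - k) * p_model           # analysis is more certain than either input
    return x_analysis, p_analysis

# Forecast of 10.0 (variance 4) blended with an observation of 12.0 (variance 1):
x, p = kalman_update(10.0, 4.0, 12.0, 1.0)   # x -> 11.6, p -> 0.8
```

    With several satellites, the same update is applied sequentially for each observation, which is how gaps between GOES and RBSP coverage get filled in an assimilative framework.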

  4. Finite Element Flow Code Optimization on the Cray T3D,

    DTIC Science & Technology

    1997-04-01

    At the present time, the system is configured with 512 processing elements and 32.8 Gigabytes of memory. Through a gift of time from MSCI and other arrangements, the AHPCRC has limited access to this system.

  5. Constructing Episodes of Inpatient Care: How to Define Hospital Transfer in Hospital Administrative Health Data?

    PubMed

    Peng, Mingkai; Li, Bing; Southern, Danielle A; Eastwood, Cathy A; Quan, Hude

    2017-01-01

    Hospital administrative health data create separate records for each hospital stay of a patient, so treating a hospital transfer as a readmission could lead to biased results in health services research. This is a cross-sectional study. We used the hospital discharge abstract database for 2013 from Alberta, Canada. Transfer cases were defined by the transfer institution code and were used as the reference standard. Four time gaps between 2 hospitalizations (6, 9, 12, and 24 h) and 2 day gaps between hospitalizations [same day (up to 24 h), ≤1 d (up to 48 h)] were used to identify transfer cases. We compared the sensitivity and positive predictive value (PPV) of the 6 definitions across different categories of sex, age, and location of residence. Readmission rates within 30 days were compared after episodes of care were defined at the different time gaps. Among the 6 definitions, sensitivity ranged from 93.3% to 98.7% and PPV ranged from 86.4% to 96%. The time gap of 9 hours had the optimal balance of sensitivity and PPV. The time gaps of same day (up to 24 h) and 9 hours had 30-day readmission rates comparable to those based on the transfer indicator after defining episodes of care. We recommend the use of a time gap of 9 hours between 2 hospitalizations to define hospital transfer in inpatient databases. When admission or discharge time is not available in the database, a time gap of same day (up to 24 h) can be used to define hospital transfer.
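    Scoring a candidate time-gap definition against the transfer-institution-code reference standard reduces to computing sensitivity and PPV over hospitalization pairs. A minimal sketch on toy data (the function name, gaps, and labels below are illustrative, not from the study):

```python
def classify_transfers(gaps_hours, is_transfer_ref, max_gap):
    """Flag hospitalization pairs whose discharge-to-admission gap is within
    max_gap hours, then score the flags against the reference standard."""
    flagged = [g <= max_gap for g in gaps_hours]
    tp = sum(f and r for f, r in zip(flagged, is_transfer_ref))
    fp = sum(f and not r for f, r in zip(flagged, is_transfer_ref))
    fn = sum((not f) and r for f, r in zip(flagged, is_transfer_ref))
    sensitivity = tp / (tp + fn)   # share of true transfers captured
    ppv = tp / (tp + fp)           # share of flagged pairs that are true transfers
    return sensitivity, ppv

# Toy pairs: gap in hours, and whether the transfer-institution code
# marks the pair as a true transfer.
gaps = [2, 5, 8, 11, 30, 3]
ref  = [True, True, False, True, False, True]
sens, ppv = classify_transfers(gaps, ref, max_gap=9)   # -> 0.75, 0.75
```

    Sweeping `max_gap` over 6, 9, 12, and 24 h and comparing the resulting sensitivity/PPV pairs is the trade-off the study reports.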

  6. Potential Projective Material on the Rorschach: Comparing Comprehensive System Protocols to Their Modeled R-Optimized Administration Counterparts.

    PubMed

    Pianowski, Giselle; Meyer, Gregory J; Villemor-Amaral, Anna Elisa de

    2016-01-01

    Exner (1989) and Weiner (2003) identified 3 types of Rorschach codes that are most likely to contain personally relevant projective material: Distortions, Movement, and Embellishments. We examine how often these types of codes occur in normative data and whether their frequency changes for the 1st, 2nd, 3rd, 4th, or last response to a card. We also examine the impact on these variables of the Rorschach Performance Assessment System's (R-PAS) statistical modeling procedures that convert the distribution of responses (R) from Comprehensive System (CS) administered protocols to match the distribution of R found in protocols obtained using R-optimized administration guidelines. In 2 normative reference databases, the results indicated that about 40% of responses (M = 39.25) have 1 type of code, 15% have 2 types, and 1.5% have all 3 types, with frequencies not changing by response number. In addition, there were no mean differences in the original CS and R-optimized modeled records (M Cohen's d = -0.04 in both databases). When considered alongside findings showing minimal differences between the protocols of people randomly assigned to CS or R-optimized administration, the data suggest R-optimized administration should not alter the extent to which potential projective material is present in a Rorschach protocol.

  7. Optimization of beam shaping assembly based on D-T neutron generator and dose evaluation for BNCT

    NASA Astrophysics Data System (ADS)

    Naeem, Hamza; Chen, Chaobin; Zheng, Huaqing; Song, Jing

    2017-04-01

    The feasibility of developing an epithermal neutron beam for a boron neutron capture therapy (BNCT) facility based on a high intensity D-T fusion neutron generator (HINEG) is studied using the Monte Carlo code SuperMC (Super Monte Carlo simulation program for nuclear and radiation processes). The SuperMC code is used to determine and optimize the final configuration of the beam shaping assembly (BSA). The optimal BSA design is a cylindrical geometry consisting of a natural uranium sphere (14 cm) as a neutron multiplier, AlF3 and TiF3 as moderators (20 cm each), Cd (1 mm) as a thermal neutron filter, Bi (5 cm) as a gamma shield, and Pb as a reflector and collimator guiding neutrons towards the exit window. The epithermal neutron beam flux of the proposed model is 5.73 × 10^9 n/cm^2·s, and the other dosimetric parameters for BNCT reported in IAEA-TECDOC-1223 have been verified. The phantom dose analysis shows that the designed BSA is accurate, efficient, and suitable for BNCT applications. Thus, the Monte Carlo code SuperMC is capable of simulating the BSA and the dose calculation for BNCT, and a high epithermal flux can be achieved using the proposed BSA.

  8. Arcjet thruster research and technology, phase 1

    NASA Technical Reports Server (NTRS)

    Knowles, Steven C.

    1987-01-01

    The objectives of Phase 1 were to evaluate analytically and experimentally the operation, performance, and lifetime of arcjet thrusters operating between 0.5 and 3.0 kW with catalytically decomposed hydrazine (N2H4) and to begin development of the requisite power control unit (PCU) technology. Fundamental analyses were performed of the arcjet nozzle, the gas kinetic reaction effects, the thermal environment, and the arc stabilizing vortex. The VNAP2 flow code was used to analyze arcjet nozzle performance with non-uniform entrance profiles. Viscous losses become dominant beyond expansion ratios of 50:1 because of the low Reynolds numbers. A survey of vortex phenomena and analysis techniques identified viscous dissipation and vortex breakdown as two flow instabilities that could affect arcjet operation. The gas kinetics code CREK1D was used to study the gas kinetics of high temperature N2H4 decomposition products. The arc/gas energy transfer is a non-equilibrium process because of the reaction rate constants and the short gas residence times. A thermal analysis code was used to guide design work and to provide a means to back out power losses at the anode fall based on test thermocouple data. The low flow rate and large thermal masses made optimization of a regenerative heating scheme unnecessary.

  9. The effect of total noise on two-dimension OCDMA codes

    NASA Astrophysics Data System (ADS)

    Dulaimi, Layth A. Khalil Al; Badlishah Ahmed, R.; Yaakob, Naimah; Aljunid, Syed A.; Matem, Rima

    2017-11-01

    In this research, we evaluate the effect of total noise on the performance of two-dimensional (2-D) optical code-division multiple access (OCDMA) systems using the 2-D modified double weight (MDW) code under various link parameters, considering the impact of multiple-access interference (MAI) and other noise sources on system performance. The 2-D MDW code is compared mathematically with other codes that use similar techniques. We analyzed and optimized the data rate and effective received power. The performance and optimization of the MDW code in an OCDMA system are reported; the bit error rate (BER) can be significantly improved when the desired 2-D MDW code parameters are selected, especially the cross-correlation properties, which reduces the MAI in the system and compensates for BER degradation and phase-induced intensity noise (PIIN) in incoherent OCDMA. The analysis permits a thorough understanding of the impact of PIIN, shot, and thermal noise on 2-D MDW OCDMA system performance. PIIN is the main noise factor in the OCDMA network.

  10. The three-dimensional Multi-Block Advanced Grid Generation System (3DMAGGS)

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Weilmuenster, Kenneth J.

    1993-01-01

    As the size and complexity of three-dimensional volume grids increase, there is a growing need for fast and efficient 3D volumetric elliptic grid solvers. Present-day solvers are limited by computational speed, and no single code offers all the capabilities such as interior volume grid clustering control, viscous grid clustering at the wall of a configuration, truncation error limiters, and convergence optimization. A new volume grid generator, 3DMAGGS (Three-Dimensional Multi-Block Advanced Grid Generation System), which is based on the 3DGRAPE code, has evolved to meet these needs. This is a manual for the usage of 3DMAGGS and contains five sections, covering the motivations and usage, a GRIDGEN interface, a grid quality analysis tool, a sample case for verifying correct operation of the code, and a comparison to both 3DGRAPE and GRIDGEN3D. Since it was derived from 3DGRAPE, this technical memorandum should be used in conjunction with the 3DGRAPE manual (NASA TM-102224).

  11. Computational Simulation of Thermal and Spattering Phenomena and Microstructure in Selective Laser Melting of Inconel 625

    NASA Astrophysics Data System (ADS)

    Özel, Tuğrul; Arısoy, Yiğit M.; Criales, Luis E.

    Computational modelling of Laser Powder Bed Fusion (L-PBF) processes such as Selective Laser Melting (SLM) can reveal information that is hard or impossible to obtain by in-situ experimental measurements. A 3D thermal field that is not visible to a thermal camera can be obtained by solving the 3D heat transfer problem. Furthermore, microstructural modelling can be used to predict the quality and mechanical properties of the product. In this paper, a nonlinear 3D Finite Element Method based computational code is developed to simulate the SLM process with different process parameters such as laser power and scan velocity. The code is further improved by utilizing an in-situ thermal camera recording to predict spattering, which is in turn included as a stochastic heat loss. Thermal gradients extracted from the simulations are then applied to predict growth directions in the resulting microstructure.

  12. Upscaling of Solar Induced Chlorophyll Fluorescence from Leaf to Canopy Using the Dart Model and a Realistic 3d Forest Scene

    NASA Astrophysics Data System (ADS)

    Liu, W.; Atherton, J.; Mõttus, M.; MacArthur, A.; Teemu, H.; Maseyk, K.; Robinson, I.; Honkavaara, E.; Porcar-Castell, A.

    2017-10-01

    Solar induced chlorophyll a fluorescence (SIF) has been shown to be an excellent proxy for photosynthesis at multiple scales. However, the mechanistic linkages between fluorescence and photosynthesis at the leaf level cannot be directly applied at canopy or field scales, as the larger-scale SIF emission depends on canopy structure. This is especially true for forest canopies, which are characterized by high horizontal and vertical heterogeneity. While most current studies of SIF radiative transfer in plant canopies are based on the assumption of a homogeneous canopy, codes have recently been developed that are capable of simulating the fluorescence signal in explicit 3-D forest canopies. Here we present a canopy SIF upscaling method that integrates the 3-D radiative transfer model DART with a 3-D object model built in BLENDER. Our aim was to better understand the effect of boreal forest canopy structure on SIF for a spatially explicit forest canopy.

  13. Electro-Thermal-Mechanical Simulation Capability Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, D

    This is the Final Report for LDRD 04-ERD-086, 'Electro-Thermal-Mechanical Simulation Capability'. The accomplishments are well documented in five peer-reviewed publications and six conference presentations and hence will not be detailed here. The purpose of this LDRD was to research and develop numerical algorithms for three-dimensional (3D) Electro-Thermal-Mechanical simulations. LLNL has long been a world leader in the area of computational mechanics, and recently several mechanics codes have become 'multiphysics' codes with the addition of fluid dynamics, heat transfer, and chemistry. However, these multiphysics codes do not incorporate the electromagnetics that is required for a coupled Electro-Thermal-Mechanical (ETM) simulation. There are numerous applications for an ETM simulation capability, such as explosively-driven magnetic flux compressors, electromagnetic launchers, inductive heating and mixing of metals, and MEMS. A robust ETM simulation capability will enable LLNL physicists and engineers to better support current DOE programs, and will prepare LLNL for some very exciting long-term DoD opportunities. We define a coupled Electro-Thermal-Mechanical (ETM) simulation as a simulation that solves, in a self-consistent manner, the equations of electromagnetics (primarily statics and diffusion), heat transfer (primarily conduction), and non-linear mechanics (elastic-plastic deformation, and contact with friction). There is no existing parallel 3D code for simulating ETM systems at LLNL or elsewhere. While there are numerous magnetohydrodynamic codes, these codes are designed for astrophysics, magnetic fusion energy, laser-plasma interaction, etc. and do not attempt to accurately model electromagnetically driven solid mechanics.
This project responds to the Engineering R&D Focus Areas of Simulation and Energy Manipulation, and addresses the specific problem of Electro-Thermal-Mechanical simulation for the design and analysis of energy manipulation systems such as magnetic flux compression generators and railguns. This project complements ongoing DNT projects that have an experimental emphasis. Our research efforts have been encapsulated in the Diablo and ALE3D simulation codes. This new ETM capability already has both internal and external users, and has spawned additional research in plasma railgun technology. By developing this capability, Engineering has become a world leader in ETM design, analysis, and simulation. This research has positioned LLNL to compete for new business opportunities with the DoD in the area of railgun design. We currently have a three-year $1.5M project with the Office of Naval Research to apply our ETM simulation capability to railgun bore life issues, and we expect to be a key player in the railgun community.

  14. Modelling crystal plasticity by 3D dislocation dynamics and the finite element method: The Discrete-Continuous Model revisited

    NASA Astrophysics Data System (ADS)

    Vattré, A.; Devincre, B.; Feyel, F.; Gatti, R.; Groh, S.; Jamond, O.; Roos, A.

    2014-02-01

    A unified model coupling 3D dislocation dynamics (DD) simulations with the finite element (FE) method is revisited. The so-called Discrete-Continuous Model (DCM) aims to predict plastic flow at the (sub-)micron length scale of materials with complex boundary conditions. The evolution of the dislocation microstructure and the short-range dislocation-dislocation interactions are calculated with a DD code. The long-range mechanical fields due to the dislocations are calculated by a FE code, taking into account the boundary conditions. The coupling procedure is based on eigenstrain theory, and the precise manner in which the plastic slip, i.e. the dislocation glide as calculated by the DD code, is transferred to the integration points of the FE mesh is described in full detail. Several test cases are presented, and the DCM is applied to plastic flow in a single-crystal Nickel-based superalloy.

  15. Prediction of the 21-cm signal from reionization: comparison between 3D and 1D radiative transfer schemes

    NASA Astrophysics Data System (ADS)

    Ghara, Raghunath; Mellema, Garrelt; Giri, Sambit K.; Choudhury, T. Roy; Datta, Kanan K.; Majumdar, Suman

    2018-05-01

    Three-dimensional radiative transfer simulations of the epoch of reionization can produce realistic results but are computationally expensive. On the other hand, simulations relying on one-dimensional radiative transfer solutions are faster but limited in accuracy due to their more approximate nature. Here, we compare the performance of the reionization simulation codes GRIZZLY and C2-RAY, which use 1D and 3D radiative transfer schemes, respectively. The comparison is performed using the same cosmological density fields, halo catalogues, and source properties. We find that the ionization maps, as well as the 21-cm signal maps, from these two simulations are very similar even for complex scenarios that include thermal feedback on low-mass haloes. Statistical quantities such as the power spectrum of the brightness temperature fluctuations agree within 10 per cent throughout the entire reionization history. GRIZZLY seems to perform slightly better than the seminumerical approaches considered in Majumdar et al., which are based on the excursion set principle. We argue that GRIZZLY can be used efficiently for exploring parameter space, establishing observation strategies, and estimating parameters from 21-cm observations.
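    The power-spectrum comparison between two codes' output boxes can be illustrated with a spherically averaged 3-D power spectrum. In the sketch below the boxes are random stand-ins for simulated 21-cm brightness-temperature cubes, and wavenumbers are in grid units rather than physical Mpc^-1; it shows only the binning-and-compare mechanics, not the actual GRIZZLY/C2-RAY pipeline.

```python
import numpy as np

def spherical_power_spectrum(field, n_bins=8):
    """Spherically averaged power spectrum of a cubic field (grid units)."""
    n = field.shape[0]
    delta = field - field.mean()
    power = (np.abs(np.fft.fftn(delta)) ** 2 / delta.size).ravel()
    k = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    mask = k_mag > 0                                  # drop the DC mode
    bins = np.linspace(0.0, k_mag.max(), n_bins + 1)
    idx = np.clip(np.digitize(k_mag[mask], bins) - 1, 0, n_bins - 1)
    return np.array([power[mask][idx == i].mean() for i in range(n_bins)])

rng = np.random.default_rng(0)
box_a = rng.normal(size=(16, 16, 16))                 # stand-in for one code's 21-cm box
box_b = box_a + 0.05 * rng.normal(size=box_a.shape)   # slightly different second box
ps_a = spherical_power_spectrum(box_a)
ps_b = spherical_power_spectrum(box_b)
frac_diff = np.abs(ps_b - ps_a) / ps_a                # per-bin fractional disagreement
```

    A per-cent-level agreement criterion like the one quoted in the record corresponds to checking `frac_diff` bin by bin over the reionization history.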

  16. HPF Implementation of ARC3D

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry

    1999-01-01

    We present an HPF (High Performance Fortran) implementation of the ARC3D code, along with profiling and performance data on the SGI Origin 2000. Advantages and limitations of HPF as a parallel programming language for CFD applications are discussed. To achieve good performance we used data distributions optimized for the implementation of the implicit and explicit operators of the solver and the boundary conditions. We compare the results with MPI and directive-based implementations.

  17. Theoretical insights of proton transfer and hydrogen bonded charge transfer complex of 1,2-dimethylimidazolium-3,5-dinitrobenzoate crystal

    NASA Astrophysics Data System (ADS)

    Afroz, Ziya; Faizan, Mohd.; Alam, Mohammad Jane; Ahmad, Shabbir; Ahmad, Afaq

    2018-04-01

    Proton transfer (PT) and hydrogen bonded charge transfer (HBCT) 1:1 complexes of 1,2-dimethylimidazole (DMI) and 3,5-dinitrobenzoic acid (DNBA) have been theoretically analyzed and compared with reported experimental results. Both structures in the isolated gaseous state have been optimized at the DFT/B3LYP/6-311G(d,p) level of theory, and the PT energy barrier has been calculated from a potential energy surface scan. Along with the structural investigations, theoretical vibrational spectra have been inspected and compared with the FTIR spectrum. Moreover, frontier molecular orbital analysis has also been carried out.

  18. Incorporation of Electrical Systems Models Into an Existing Thermodynamic Cycle Code

    NASA Technical Reports Server (NTRS)

    Freeh, Josh

    2003-01-01

    Integration of the entire system includes: fuel cells, motors, propulsors, thermal/power management, compressors, etc. Use of existing, pre-developed NPSS capabilities includes: 1) Optimization tools; 2) Gas turbine models for hybrid systems; 3) Increased interplay between subsystems; 4) Off-design modeling capabilities; 5) Altitude effects; and 6) Existing transient modeling architecture. Other factors include: 1) Easier transfer between users and groups of users; 2) General aerospace industry acceptance and familiarity; and 3) A flexible analysis tool that can also be used for ground power applications.

  19. Analytical and Experimental Evaluation of the Heat Transfer Distribution over the Surfaces of Turbine Vanes

    NASA Technical Reports Server (NTRS)

    Hylton, L. D.; Mihelc, M. S.; Turner, E. R.; Nealy, D. A.; York, R. E.

    1983-01-01

    Three airfoil data sets were selected for use in evaluating currently available analytical models for predicting airfoil surface heat transfer distributions in a 2-D flow field. Two additional airfoils, representative of highly loaded, low solidity airfoils currently being designed, were selected for cascade testing at simulated engine conditions. Several 2-D analytical methods were examined, and a version of the STAN5 boundary layer code was chosen for modification. The final form of the method utilized a time dependent, transonic inviscid cascade code coupled to a modified version of the STAN5 boundary layer code featuring zero order turbulence modeling. The boundary layer code is structured to accommodate a full spectrum of empirical correlations addressing the coupled influences of pressure gradient, airfoil curvature, and free-stream turbulence on airfoil surface heat transfer distribution and boundary layer transitional behavior. Comparison of predictions made with the model to the database indicates a significant improvement in predictive capability.

  20. Analytical and experimental evaluation of the heat transfer distribution over the surfaces of turbine vanes

    NASA Astrophysics Data System (ADS)

    Hylton, L. D.; Mihelc, M. S.; Turner, E. R.; Nealy, D. A.; York, R. E.

    1983-05-01

    Three airfoil data sets were selected for use in evaluating currently available analytical models for predicting airfoil surface heat transfer distributions in a 2-D flow field. Two additional airfoils, representative of highly loaded, low solidity airfoils currently being designed, were selected for cascade testing at simulated engine conditions. Some 2-D analytical methods were examined and a version of the STAN5 boundary layer code was chosen for modification. The final form of the method utilized a time dependent, transonic inviscid cascade code coupled to a modified version of the STAN5 boundary layer code featuring zero order turbulence modeling. The boundary layer code is structured to accommodate a full spectrum of empirical correlations addressing the coupled influences of pressure gradient, airfoil curvature, and free-stream turbulence on airfoil surface heat transfer distribution and boundary layer transitional behavior. Comparison of predictions made with the model to the database indicates a significant improvement in predictive capability.

  1. The novel high-performance 3-D MT inverse solver

    NASA Astrophysics Data System (ADS)

    Kruglyakov, Mikhail; Geraskin, Alexey; Kuvshinov, Alexey

    2016-04-01

    We present a novel, robust, scalable, and fast 3-D magnetotelluric (MT) inverse solver. The solver is written in a multi-language paradigm to make it as efficient, readable and maintainable as possible. The separation-of-concerns and single-responsibility principles guide the implementation of the solver. As a forward modelling engine, extrEMe, a modern scalable solver based on the contracting integral equation approach, is used. An iterative gradient-type (quasi-Newton) optimization scheme is invoked to search for the (regularized) inverse problem solution, and an adjoint-source approach is used to calculate the gradient of the misfit efficiently. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT responses, and supports massive parallelization. Moreover, different parallelization strategies implemented in the code allow optimal usage of available computational resources for a given problem statement. To parameterize the inverse domain, so-called mask parameterization is implemented, meaning that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on platforms ranging from modern laptops to the Piz Daint HPC system (the 6th-ranked supercomputer in the world), demonstrate practically linear scalability of the code up to thousands of nodes.
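    The adjoint approach the abstract relies on computes the misfit gradient at the cost of one extra (adjoint) forward solve, rather than one solve per model parameter. A toy sketch on a linear problem illustrates the idea; this is not the extrEMe engine, and the forward operator, step size, and regularization weight are all illustrative stand-ins:

    ```python
    # Minimize phi(m) = ||G m - d||^2 + lam*||m||^2 by gradient descent,
    # with the gradient obtained via the adjoint: grad = 2 G^T (G m - d) + 2 lam m.
    def matvec(G, m):
        return [sum(g * x for g, x in zip(row, m)) for row in G]

    def rmatvec(G, r):
        # Adjoint (transpose) apply -- stand-in for the adjoint forward simulation.
        return [sum(G[i][j] * r[i] for i in range(len(G))) for j in range(len(G[0]))]

    def invert(G, d, lam=1e-3, step=0.1, iters=200):
        m = [0.0] * len(G[0])
        for _ in range(iters):
            r = [a - b for a, b in zip(matvec(G, m), d)]       # residual G m - d
            g = [2 * gj + 2 * lam * mj for gj, mj in zip(rmatvec(G, r), m)]
            m = [mj - step * gj for mj, gj in zip(m, g)]       # gradient step
        return m
    ```

    In the real nonlinear MT setting the transpose apply becomes an adjoint forward simulation and the plain gradient step a quasi-Newton (e.g. L-BFGS) update, but the cost structure is the same.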

  2. Quantum Mechanical Modeling of Ballistic MOSFETs

    NASA Technical Reports Server (NTRS)

    Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The objective of this project was to develop theory, approximations, and computer code to model quasi 1D structures such as nanotubes, DNA, and MOSFETs: (1) Nanotubes: Influence of defects on ballistic transport, electro-mechanical properties, and metal-nanotube coupling; (2) DNA: Model electron transfer (biochemistry) and transport experiments, and sequence dependence of conductance; and (3) MOSFETs: 2D doping profiles, polysilicon depletion, source to drain and gate tunneling, understand ballistic limit.

  3. SPECT3D - A multi-dimensional collisional-radiative code for generating diagnostic signatures based on hydrodynamics and PIC simulation output

    NASA Astrophysics Data System (ADS)

    MacFarlane, J. J.; Golovkin, I. E.; Wang, P.; Woodruff, P. R.; Pereyra, N. A.

    2007-05-01

    SPECT3D is a multi-dimensional collisional-radiative code used to post-process the output from radiation-hydrodynamics (RH) and particle-in-cell (PIC) codes to generate diagnostic signatures (e.g. images, spectra) that can be compared directly with experimental measurements. This ability to post-process simulation code output plays a pivotal role in assessing the reliability of RH and PIC simulation codes and their physics models. SPECT3D has the capability to operate on plasmas in 1D, 2D, and 3D geometries. It computes a variety of diagnostic signatures that can be compared with experimental measurements, including: time-resolved and time-integrated spectra, space-resolved spectra and streaked spectra; filtered and monochromatic images; and X-ray diode signals. Simulated images and spectra can include the effects of backlighters, as well as the effects of instrumental broadening and time-gating. SPECT3D also includes a drilldown capability that shows where frequency-dependent radiation is emitted and absorbed as it propagates through the plasma towards the detector, thereby providing insights on where the radiation seen by a detector originates within the plasma. SPECT3D has the capability to model a variety of complex atomic and radiative processes that affect the radiation seen by imaging and spectral detectors in high energy density physics (HEDP) experiments. LTE (local thermodynamic equilibrium) or non-LTE atomic level populations can be computed for plasmas. Photoabsorption rates can be computed using either escape probability models or, for selected 1D and 2D geometries, multi-angle radiative transfer models. The effects of non-thermal (i.e. non-Maxwellian) electron distributions can also be included. To study the influence of energetic particles on spectra and images recorded in intense short-pulse laser experiments, the effects of both relativistic electrons and energetic proton beams can be simulated. 
SPECT3D is a user-friendly software package that runs on Windows, Linux, and Mac platforms. A parallel version of SPECT3D is supported for Linux clusters for large-scale calculations. We will discuss the major features of SPECT3D, and present example results from simulations and comparisons with experimental data.

  4. Studies of Heat Transfer in Complex Internal Flows.

    DTIC Science & Technology

    1982-01-01

    D.C. 20362 (Tel 202-692-6874) Mr. Richard S. Carlton Director, Engines Division, Code 523 NC #4 Naval Sea Systems Command Washington, D.C. 20362...Walter Ritz Code 033C Naval Ships Systems Engineering Station Philadelphia, Pennsylvania 19112 (Tel. 215-755-3841) Dr. Simion Kuo United Tech. Res

  5. A study of the 3D radiative transfer effect in cloudy atmospheres

    NASA Astrophysics Data System (ADS)

    Okata, M.; Teruyuki, N.; Suzuki, K.

    2015-12-01

    Evaluating the effect of clouds in the atmosphere is a significant problem in studies of the Earth's radiation budget, given the large uncertainties in cloud microphysics and optical properties. In this situation, we still need more investigation of 3D cloud radiative transfer problems using not only models but also satellite observational data. For this purpose, we have developed a 3D Monte Carlo radiative transfer code that is implemented with various functions compatible with the OpenCLASTR R-Star radiation code for radiance and flux computation, i.e. forward and backward tracing routines, a non-linear k-distribution parameterization (Sekiguchi and Nakajima, 2008) for broadband solar flux calculation, and the DM method for flux and the TMS method for upward radiance (Nakajima and Tanaka, 1998). We also developed a Minimum cloud Information Deviation Profiling Method (MIDPM) for constructing a 3D cloud field from MODIS/AQUA and CPR/CloudSat data. We then selected a best-matched radar reflectivity factor profile from the library for each off-nadir MODIS pixel where a CPR profile is not available, by minimizing the deviation between the library MODIS parameters and those at the pixel. In this study, we used three cloud microphysical parameters as key parameters for the MIDPM, i.e. effective particle radius, cloud optical thickness and cloud-top temperature, and estimated the 3D cloud radiation budget. We examined the discrepancies between satellite-observed and model-simulated radiances and the patterns of the three cloud microphysical parameters to study the effects of cloud optical and microphysical properties on the radiation budget of cloud-laden atmospheres.
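    The MIDPM matching step described above — choosing, for each off-nadir MODIS pixel, the library profile whose MODIS parameters deviate least — reduces to a nearest-neighbour search over the three key parameters. A minimal sketch (the per-parameter scale weights are an assumption for illustration, not taken from the paper):

    ```python
    def best_match(pixel, library, scales=(1.0, 1.0, 1.0)):
        # pixel and library entries are (r_eff, tau, T_top) triples; return the
        # index of the library profile minimizing the scaled squared deviation.
        def dev(entry):
            return sum(((p - e) / s) ** 2 for p, e, s in zip(pixel, entry, scales))
        return min(range(len(library)), key=lambda i: dev(library[i]))
    ```

    The CPR reflectivity profile attached to the winning library entry is then assigned to that pixel, building up the 3D field column by column.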

  6. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
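    A minimal version of the MST clustering described above can be written with Prim's algorithm on a Euclidean catalog, then cutting edges longer than a chosen threshold so the remaining components form clusters. The threshold rule is one common MST-clustering convention, not necessarily the authors' exact criterion:

    ```python
    import math

    def mst_edges(points):
        # Prim's algorithm on the complete Euclidean graph; returns (w, i, j) edges.
        n = len(points)
        in_tree = [False] * n
        in_tree[0] = True
        best = [(math.dist(points[0], p), 0) for p in points]
        edges = []
        for _ in range(n - 1):
            j = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i][0])
            w, parent = best[j]
            edges.append((w, parent, j))
            in_tree[j] = True
            for i in range(n):
                if not in_tree[i] and math.dist(points[j], points[i]) < best[i][0]:
                    best[i] = (math.dist(points[j], points[i]), j)
        return edges

    def cluster(points, cut):
        # Drop MST edges longer than `cut`; count the remaining components.
        parent = list(range(len(points)))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for w, i, j in mst_edges(points):
            if w <= cut:
                parent[find(i)] = find(j)
        return len({find(i) for i in range(len(points))})
    ```

    For catalog-scale data one would use a k-d tree or `scipy.sparse.csgraph` rather than the O(n²) complete graph, but the clustering logic is the same.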

  7. A Scalable Architecture of a Structured LDPC Decoder

    NASA Technical Reports Server (NTRS)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2dB implementation loss relative to a floating point decoder.
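    The replication ("lifting") step described above — expanding an (n, r) protograph into a (Z x n, Z x r) code — can be sketched by replacing each protograph edge with a Z x Z cyclic-permutation block. The shift values here are illustrative; practical designs choose them (e.g. to maximize girth), and repeated protograph edges need permutation sums rather than single blocks:

    ```python
    def lift(proto, Z, shifts):
        # proto: r x n base matrix of 0/1 entries; shifts: r x n circulant shifts.
        # Each 1 becomes a Z x Z cyclic permutation block, each 0 a zero block,
        # yielding the (Z*r) x (Z*n) parity-check matrix of the lifted code.
        r, n = len(proto), len(proto[0])
        H = [[0] * (n * Z) for _ in range(r * Z)]
        for a in range(r):
            for b in range(n):
                if proto[a][b]:
                    s = shifts[a][b] % Z
                    for k in range(Z):
                        H[a * Z + k][b * Z + (k + s) % Z] = 1
        return H
    ```

    The row/column weights of the protograph are preserved under lifting, which is what lets a decoder process the Z copies with identical, parallel hardware units.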

  8. An Analysis of Elliptic Grid Generation Techniques Using an Implicit Euler Solver.

    DTIC Science & Technology

    1986-06-09

    automatic determination of the control function, elements of the covariant metric tensor in the elliptic grid generation system, from the Cm = 1,2,3...computational fluid dynamics code. The code includes a three-dimensional current research is aimed primarily at algebraic generation system based on transfinite...start the iterative solution of the flow, heat transfer, and combustion problems. elliptic generation system. This feature also ...

  9. Acceleration of MCNP calculations for small pipes configurations by using Weight Windows Importance cards created by the SN-3D ATTILA

    NASA Astrophysics Data System (ADS)

    Castanier, Eric; Paterne, Loic; Louis, Céline

    2017-09-01

    In nuclear engineering, one has to manage both time and precision. Especially in shielding design, you have to be accurate and efficient to reduce cost (shielding thickness optimization), and for this you use 3D codes. In this paper, we examine whether the CADIS method can easily be applied to the shielding design of small pipes that pass through large concrete walls. We assess the impact of the weight windows (WW) generated by the 3D deterministic code ATTILA versus WW generated directly by MCNP (an iterative, manual process). The comparison is based on the quality of convergence (estimated relative error (σ), variance of variance (VOV) and figure of merit (FOM)), on time (computing time plus modelling) and on the implementation effort for the engineer.
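    The convergence diagnostics compared above are standard Monte Carlo tally checks: the relative error R is the estimated standard deviation of the mean divided by the mean, and the figure of merit FOM = 1/(R²T) stays roughly constant for a well-behaved tally as the run lengthens. A quick sketch of these two quantities (the VOV is omitted for brevity):

    ```python
    import math

    def rel_error(scores):
        # Estimated relative error of the mean tally: R = s_mean / xbar.
        n = len(scores)
        xbar = sum(scores) / n
        var = sum((x - xbar) ** 2 for x in scores) / (n - 1)
        return math.sqrt(var / n) / xbar

    def figure_of_merit(R, minutes):
        # FOM = 1 / (R^2 * T): higher means the variance-reduction scheme
        # (e.g. ATTILA-generated weight windows) buys precision faster.
        return 1.0 / (R * R * minutes)
    ```

    Comparing the FOM of an ATTILA-weighted run against a manually weighted MCNP run is exactly the kind of comparison the abstract describes.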

  10. 3D FEM Geometry and Material Flow Optimization of Porthole-Die Extrusion

    NASA Astrophysics Data System (ADS)

    Ceretti, Elisabetta; Mazzoni, Luca; Giardini, Claudio

    2007-05-01

    The aim of this work is to design and improve the geometry of a porthole-die for the production of aluminum components by means of 3D FEM simulations. The use of finite element models allows investigation of the effects of the die geometry (webs, extrusion cavity) on the material flow and on the stresses acting on the die, so as to reduce die wear and improve tool life. The software used to perform the simulations was a commercial FEM code, Deform 3D. The technological data introduced in the FE model were furnished by the METRA S.p.A. Company, a partner in this research. The results obtained have been considered valid and helpful by the Company for building a new optimized extrusion porthole-die.

  11. Human operator identification model and related computer programs

    NASA Technical Reports Server (NTRS)

    Kessler, K. M.; Mohr, J. N.

    1978-01-01

    Four computer programs which provide computational assistance in the analysis of man/machine systems are reported. The programs are: (1) Modified Transfer Function Program (TF); (2) Time Varying Response Program (TVSR); (3) Optimal Simulation Program (TVOPT); and (4) Linear Identification Program (SCIDNT). The TF program converts the time-domain state-variable system representation to a frequency-domain transfer-function system representation. The TVSR program computes time histories of the input/output responses of the human operator model. The TVOPT program is an optimal simulation program and is similar to TVSR in that it produces time histories of system states associated with an operator-in-the-loop system. The differences between the two programs are presented. The SCIDNT program is an open-loop identification code which operates on the simulated data from TVOPT (or TVSR) or real operator data from motion simulators.
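    The conversion the TF program performs — from a state-variable model (A, B, C, D) to a frequency-domain transfer function — rests on the identity H(s) = C(sI − A)⁻¹B + D. A small sketch (illustrative code, not the reported programs) that evaluates H(s) pointwise via one linear solve:

    ```python
    def tf_eval(A, B, C, D, s):
        # Evaluate H(s) = C (sI - A)^(-1) B + D for a SISO state-space model
        # (A: n x n, B: length-n vector, C: length-n vector, D: scalar).
        n = len(A)
        # Augmented system [sI - A | B], solved by Gaussian elimination.
        M = [[(s if i == j else 0) - A[i][j] for j in range(n)] + [B[i]]
             for i in range(n)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))  # partial pivoting
            M[c], M[p] = M[p], M[c]
            for r in range(c + 1, n):
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
        return sum(ci * xi for ci, xi in zip(C, x)) + D
    ```

    Passing a complex s = jω gives the frequency response directly; e.g. the one-state system ẋ = −x + u, y = x has H(0) = 1 and H(1) = 0.5.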

  12. DESTINY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-03-10

    DESTINY is a comprehensive tool for modeling 3D and 2D cache designs using SRAM, embedded DRAM (eDRAM), spin transfer torque RAM (STT-RAM), resistive RAM (ReRAM), and phase change RAM (PCM). In its purpose, it is similar to CACTI, CACTI-3DD or NVSim. DESTINY is very useful for performing design-space exploration across several dimensions, such as optimizing for a target (e.g. latency, area or energy-delay product) for a given memory technology, or choosing the suitable memory technology or fabrication method (i.e. 2D vs. 3D) for a given optimization target. DESTINY has been validated against several cache prototypes. DESTINY is expected to boost studies of next-generation memory architectures used in systems ranging from mobile devices to extreme-scale supercomputers.
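    The kind of design-space exploration DESTINY automates can be illustrated by exhaustively scoring candidate (technology, fabrication) points against an optimization target such as energy-delay product. The candidate table below is entirely made up for illustration; it is not DESTINY output or its API:

    ```python
    # Hypothetical candidates: (latency ns, energy nJ, area mm^2) per design point.
    CANDIDATES = {
        ("SRAM",    "2D"): (1.2, 0.50, 4.0),
        ("eDRAM",   "3D"): (2.0, 0.30, 2.5),
        ("STT-RAM", "3D"): (3.1, 0.10, 1.8),
    }

    def best_design():
        # Exhaustive search for the minimum energy-delay product (EDP).
        def edp(v):
            return v[0] * v[1]   # latency * energy
        return min(CANDIDATES, key=lambda k: edp(CANDIDATES[k]))
    ```

    Swapping the scoring function (latency, area, EDP, ...) changes which technology wins, which is precisely why a single tool covering all the technologies is useful.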

  13. Heat-transfer optimization of a high-spin thermal battery

    NASA Astrophysics Data System (ADS)

    Krieger, Frank C.

    Recent advancements in thermal battery technology have produced batteries incorporating a fusible-material heat reservoir for operating temperature control that operate reliably under the high spin rates often encountered in ordnance applications. Attention is presently given to the heat-transfer optimization of a high-spin thermal battery employing a nonfusible steel heat reservoir, on the basis of a computer code that simulated the effect of an actual fusible-material heat reservoir on battery performance. Both heat-paper and heat-pellet thermal battery configurations were considered.

  14. Electromagnetic plasma simulation in realistic geometries

    NASA Astrophysics Data System (ADS)

    Brandon, S.; Ambrosiano, J. J.; Nielsen, D.

    1991-08-01

    Particle-in-Cell (PIC) calculations have become an indispensable tool to model the nonlinear collective behavior of charged particle species in electromagnetic fields. Traditional finite difference codes, such as CONDOR (2-D) and ARGUS (3-D), are used extensively to design experiments and develop new concepts. A wide variety of physical processes can be modeled simply and efficiently by these codes. However, experiments have become more complex. Geometrical shapes and length scales are becoming increasingly difficult to model. Spatial resolution requirements for the electromagnetic calculation force large grids and small time steps. Many hours of CRAY YMP time may be required to complete a 2-D calculation, and many more for 3-D calculations. In principle, the number of mesh points and particles need only be increased until all relevant physical processes are resolved. In practice, the size of a calculation is limited by the computer budget. As a result, experimental design is being limited by the ability to calculate, not by the experimenter's ingenuity or understanding of the physical processes involved. Several approaches to meet these computational demands are being pursued. Traditional PIC codes continue to be the major design tools. These codes are being actively maintained, optimized, and extended to handle larger and more complex problems. Two new formulations are being explored to relax the geometrical constraints of the finite difference codes. A modified finite volume test code, TALUS, uses a data structure compatible with that of standard finite difference meshes. This allows a basic conformal-boundary/variable-grid capability to be retrofitted to CONDOR. We are also pursuing an unstructured grid finite element code, MadMax. The unstructured mesh approach provides maximum flexibility in the geometrical model while also allowing local mesh refinement.

  15. Improved inter-layer prediction for light field content coding with display scalability

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Ducla Soares, Luís.; Nunes, Paulo

    2016-09-01

    Light field imaging based on microlens arrays - also known as plenoptic, holoscopic and integral imaging - has recently emerged as a feasible and promising technology due to its ability to support functionalities not straightforwardly available in conventional imaging systems, such as post-production refocusing and depth-of-field changing. However, to gradually reach the consumer market and to provide interoperability with current 2D and 3D representations, a display scalable coding solution is essential. In this context, this paper proposes an improved display scalable light field codec comprising a three-layer hierarchical coding architecture (previously proposed by the authors) that provides interoperability with 2D (Base Layer) and 3D stereo and multiview (First Layer) representations, while the Second Layer supports the complete light field content. To further improve the compression performance, novel exemplar-based inter-layer coding tools are proposed here for the Second Layer, namely: (i) an inter-layer reference picture construction relying on an exemplar-based optimization algorithm for texture synthesis, and (ii) a direct prediction mode based on exemplar texture samples from lower layers. Experimental results show that the proposed solution performs better than the tested benchmark solutions, including the authors' previous scalable codec.

  16. The ITER ICRF Antenna Design with TOPICA

    NASA Astrophysics Data System (ADS)

    Milanesio, Daniele; Maggiora, Riccardo; Meneghini, Orso; Vecchi, Giuseppe

    2007-11-01

    The TOPICA (Torino Polytechnic Ion Cyclotron Antenna) code is an innovative tool for the 3D/1D simulation of Ion Cyclotron Radio Frequency (ICRF) antennas, i.e. accounting for antennas in a realistic 3D geometry and with an accurate 1D plasma model [1]. The TOPICA code has been deeply parallelized and has already proved to be a reliable tool for antenna design and performance prediction. A detailed analysis of the 24-strap ITER ICRF antenna geometry has been carried out, underlining the strong dependence and asymmetries of the antenna input parameters due to the ITER plasma response. We optimized the antenna array geometry dimensions to maximize loading, lower mutual couplings and mitigate sheath effects. The calculated antenna input impedance matrices are TOPICA results of paramount importance for the tuning and matching system design. Electric field distributions have also been calculated and are used as the main input for the power flux estimation tool. The optimized antenna design is capable of coupling 20 MW of power to the plasma in the 40-55 MHz frequency range with a maximum voltage of 45 kV in the feeding coaxial cables. [1] V. Lancellotti et al., Nuclear Fusion, 46 (2006) S476-S499

  17. Optimisation d'un systeme d'antigivrage a air chaud pour aile d'avion basee sur la methode du krigeage dual

    NASA Astrophysics Data System (ADS)

    Hannat, Ridha

    The aim of this thesis is to apply a new optimization methodology based on the dual kriging method to a hot-air anti-icing system for airplane wings. The anti-icing system consists of a piccolo tube placed along the span of the wing, in the leading-edge area. Hot air is injected through small nozzles and impinges on the inner wall of the wing. The objective function targeted by the optimization is the heat transfer effectiveness of the anti-icing system, defined as the ratio of the heat flux at the wing inner wall to the sum of the heat flows from all the nozzles of the anti-icing system. The methodology adopted to optimize the anti-icing system consists of three steps. The first step is to build a database according to the Box-Behnken design of experiments. The objective function is then modeled by the dual kriging method, and finally the SQP optimization method is applied. One of the advantages of dual kriging is that the model passes exactly through all measurement points, but it can also take numerical errors into account and deviate from these points. Moreover, the kriged model can be updated with each new numerical simulation. These features make dual kriging a good tool for building the response surfaces necessary for anti-icing system optimization. The first chapter presents a literature review and the optimization problem related to the anti-icing system. Chapters two, three and four present the three articles submitted. Chapter two is devoted to the validation of the CFD codes used to perform the numerical simulations of an anti-icing system and to compute the conjugate heat transfer (CHT). The CHT is calculated by taking into account the external flow around the airfoil, the internal flow in the anti-icing system, and the conduction in the wing. The heat transfer coefficient at the external skin of the airfoil is almost the same whether or not the external flow is taken into account.
Therefore, only the internal flow is considered in the subsequent articles. Chapter three concerns the design of experiments (DoE) matrix and the construction of a second-order parametric model. The objective function model is based on the Box-Behnken DoE. The parametric model that results from the numerical simulations serves as a basis for comparison with the kriged model of the third article. Chapter four applies the dual kriging method to model the heat transfer effectiveness of the anti-icing system and uses the model for optimization. The possibility of including the numerical error in the results is explored; for the test cases studied, introducing the numerical error into the optimization process does not improve the results. The dual kriging method is also used to model the distribution of the local heat flux and to interpolate the local heat flux corresponding to the optimal design of the anti-icing system.
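    The defining property of the kriging surrogate used in the thesis is that it interpolates the sampled objective exactly. A stripped-down sketch with a Gaussian correlation model shows this property; it omits the nugget term and the drift polynomial, so it is simple-kriging-style interpolation rather than full dual kriging:

    ```python
    import math

    def _solve(K, y):
        # Gaussian elimination with partial pivoting (small dense systems only).
        n = len(K)
        M = [row[:] + [yi] for row, yi in zip(K, y)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(c + 1, n):
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
        w = [0.0] * n
        for r in range(n - 1, -1, -1):
            w[r] = (M[r][n] - sum(M[r][k] * w[k] for k in range(r + 1, n))) / M[r][r]
        return w

    def krige_fit(X, y, theta=1.0):
        # Weights w solve K w = y with K[i][j] = exp(-theta * ||x_i - x_j||^2).
        K = [[math.exp(-theta * sum((a - b) ** 2 for a, b in zip(xi, xj)))
              for xj in X] for xi in X]
        return X, _solve(K, y), theta

    def krige_predict(model, x):
        X, w, theta = model
        return sum(wi * math.exp(-theta * sum((a - b) ** 2 for a, b in zip(x, xi)))
                   for wi, xi in zip(w, X))
    ```

    An SQP (or any gradient-based) optimizer is then run on `krige_predict` instead of the expensive CFD model, and the surrogate is refit each time a new CFD sample is added.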

  18. Two-dimensional viscous flow computations of hypersonic scramjet nozzle flowfields at design and off-design conditions

    NASA Technical Reports Server (NTRS)

    Harloff, G. J.; Lai, H. T.; Nelson, E. S.

    1988-01-01

    The PARC2D code has been selected to analyze the flowfields of a representative hypersonic scramjet nozzle over a range of flight conditions from Mach 3 to 20. The flowfields, wall pressures, wall skin friction values, heat transfer values and overall nozzle performance are presented.

  19. Laser-induced forward transfer (LIFT) of congruent voxels

    NASA Astrophysics Data System (ADS)

    Piqué, Alberto; Kim, Heungsoo; Auyeung, Raymond C. Y.; Beniam, Iyoel; Breckenfeld, Eric

    2016-06-01

    Laser-induced forward transfer (LIFT) of functional materials offers unique advantages and capabilities for the rapid prototyping of electronic, optical and sensor elements. The use of LIFT for printing high viscosity metallic nano-inks and nano-pastes can be optimized for the transfer of voxels congruent with the shape of the laser pulse, forming thin film-like structures non-lithographically. These processes are capable of printing patterns with excellent lateral resolution and thickness uniformity typically found in 3-dimensional stacked assemblies, MEMS-like structures and free-standing interconnects. However, in order to achieve congruent voxel transfer with LIFT, the particle size and viscosity of the ink or paste suspensions must be adjusted to minimize variations due to wetting and drying effects. When LIFT is carried out with high-viscosity nano-suspensions, the printed voxel size and shape become controllable parameters, allowing the printing of thin-film like structures whose shape is determined by the spatial distribution of the laser pulse. The result is a new level of parallelization beyond current serial direct-write processes whereby the geometry of each printed voxel can be optimized according to the pattern design. This work shows how LIFT of congruent voxels can be applied to the fabrication of 2D and 3D microstructures by adjusting the viscosity of the nano-suspension and laser transfer parameters.

  20. Accretion Structures in Algol-Type Interacting Binary Systems

    NASA Astrophysics Data System (ADS)

    Peters, Geraldine

    The physics of mass transfer in interacting binaries of the Algol type will be investigated through an analysis of an extensive collection of FUV spectra from the FUSE spacecraft, Kepler photometry, and FUV spectra from IUE and ORFEUS-SPAS II. The Algols range from close direct impact systems to wider systems that contain prominent accretion disks. Several components of the circumstellar (CS) material have been identified, including the gas stream, splash/outflow domains, a high temperature accretion region (HTAR), accretion disk, and magnetically-controlled flows (cf. Peters 2001, 2007, Richards et al. 2010). Hot spots are sometimes seen at the site where the gas stream impacts the mass gainer's photosphere. Collectively we call these components of mass transfer "accretion structures". The CS material will be studied from an analysis of both line-of-sight FUV absorption features and emission lines. The emission line regions will be mapped in and above/below the orbital plane with 2D and 3D Doppler tomography techniques. We will look for the presence of hot accretion spots in both the Kepler photometry of Algols in the Kepler fields and phase-dependent flux variability in the FUSE spectra. We will also search for evidence of microflaring at the impact site of the gas stream. An abundance study of the mass gainer will reveal the extent to which CNO-processed material from the core of the mass loser is being deposited on the primary. Analysis codes that will be used include 2D and 3D tomography codes, SHELLSPEC, light curve analysis programs such as PHOEBE and Wilson-Devinney, and the NLTE codes TLUSTY/SYNSPEC. This project will transform our understanding of the mass transfer process from a generic to a hydrodynamical one and provide important information on the degree of mass loss from the system which is needed for calculations of the evolution of Algol binaries.

  1. Using RADMC-3D to model the radiative transfer of spectral lines in protoplanetary disks and envelopes

    NASA Astrophysics Data System (ADS)

    DeVries, John; Terebey, Susan

    2018-06-01

    Protoplanetary disks are the birthplaces of planets in our universe. Observations of these disks with radio telescopes like the Atacama Large Millimeter Array (ALMA) offer great insight into the star and planet formation process. Comparing theories of formation with observations requires tracing the energy transfer via electromagnetic radiation, known as radiative transfer. To determine the temperature distribution of circumstellar material, a Monte Carlo code (Whitney et al. [1]) was used to perform the radiative transfer through dust. The goal of this research is to utilize RADMC-3D [2] to handle the spectral line radiative transfer computations. An existing model of a rotating ring was expanded to include emission from the C18O isotopologue of carbon monoxide using data from the Leiden Atomic and Molecular Database (LAMDA). This feature of our model complements ALMA's ability to measure C18O line emission, a proxy for disk rotation. In addition to modeling gas in the protoplanetary disk, dust also plays an important role. The generic description of absorption and scattering for dust provided by RADMC-3D was changed in favor of a more physically realistic description with OH5 grains. This description is more appropriate in high-density regions of the envelope around a protostar. Further improvements, such as consideration of the finite resolution of observations, have been implemented. The task at present is to compare our model with observations of protoplanetary systems like L1527. Some results of these comparisons will be presented. [1] Whitney et al. 2013, ApJS, 207:30 [2] RADMC-3D: http://www.ita.uni-heidelberg.de/~dullemond/software/radmc-3d/
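    The ray tracing that codes like RADMC-3D perform is, at its core, repeated application of the formal solution of the transfer equation along each ray; for a slab with constant source function S and total optical depth τ it reduces to I = I₀e^(−τ) + S(1 − e^(−τ)). A minimal illustration of that limit (this is not RADMC-3D code):

    ```python
    import math

    def emergent_intensity(I0, S, tau):
        # Formal solution of dI/dtau = -I + S for constant S over depth tau:
        # the incident beam is attenuated while the slab's own emission builds up.
        return I0 * math.exp(-tau) + S * (1.0 - math.exp(-tau))
    ```

    The two limits behave as expected: at τ = 0 the slab is transparent (I = I₀), and at large τ the intensity saturates to the source function S, which for a line in LTE is the Planck function at the local temperature.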

  2. Theoretical Investigation Leading to Energy Storage in Atomic and Molecular Systems

    DTIC Science & Technology

    1990-12-01

    can be calculated in a single run. j) Non-gradient optimization of basis function exponents is possible. The source code can be modified to carry...basis. The 10s3p/5s3p basis consists of the 9s/4s contraction of Siegbahn and Liu (Reference 91) augmented by a diffuse s-type function (exponent ...vibrational modes. Introduction of diffuse basis functions and optimization of the d-orbital exponents have a small but important effect on the

  3. Next-generation acceleration and code optimization for light transport in turbid media using GPUs

    PubMed Central

    Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar

    2010-01-01

    A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA - the Fermi GPU. In biomedical optics, the MC method is the gold standard approach for simulating light transport in biological tissue, both due to its accuracy and its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for PDT, is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands the frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml). PMID:21258498
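    The physics kernel that GPU-MCML parallelizes across threads is a photon random walk whose free paths are sampled from the Beer-Lambert law, s = −ln(ξ)/μt. A CPU-side sketch of that sampling step for a purely absorbing slab, where the transmitted fraction should recover exp(−μt·d) (the GPU-specific shared-memory optimization discussed in the paper has no analogue in this serial toy):

    ```python
    import math
    import random

    def transmittance(mu_t, d, n=50000, seed=42):
        # A photon crosses a purely absorbing slab of thickness d iff its
        # sampled free path s = -ln(xi)/mu_t exceeds d; count the survivors.
        rng = random.Random(seed)
        hits = sum(1 for _ in range(n) if -math.log(rng.random()) / mu_t > d)
        return hits / n
    ```

    Full MCML adds scattering (Henyey-Greenstein phase function), weight drop by the albedo at each interaction, and layered boundaries; it is the tallying of those dropped weights into a shared absorption grid that creates the atomic-access bottleneck the paper targets.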

  4. Cooperative optimization and its application in LDPC codes

    NASA Astrophysics Data System (ADS)

    Chen, Ke; Rong, Jian; Zhong, Xiaochun

    2008-10-01

    Cooperative optimization is a new approach to finding global optima of complicated functions of many variables. The proposed algorithm belongs to the class of message-passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For (6561, 4096) LDPC codes, the proposed algorithm achieves a 2.0 dB gain over the sum-product algorithm at a BER of 4×10^-7. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it achieves a much lower error floor once Eb/N0 exceeds 1.8 dB.
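
    For contrast with these message-passing decoders, the classical hard-decision baseline for parity-check codes can be sketched in a few lines of Python (the tiny Hamming(7,4) parity-check matrix below is purely illustrative; practical LDPC matrices such as the (6561, 4096) code are large and sparse, and the sum-product and cooperative algorithms pass soft messages rather than flipping bits):

```python
import numpy as np

# Toy parity-check matrix (Hamming(7,4)); real LDPC matrices are large/sparse.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def bit_flip_decode(r, H, max_iters=10):
    """Gallager-style hard-decision decoding: repeatedly flip the bit that
    participates in the largest number of unsatisfied parity checks."""
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2          # 1 wherever a parity check fails
        if not syndrome.any():
            return r                  # valid codeword reached
        votes = H.T @ syndrome        # per-bit count of failed checks
        r[np.argmax(votes)] ^= 1      # flip the worst offender
    return r

# All-zeros is a codeword; corrupt one bit and decode.
received = np.zeros(7, dtype=int)
received[6] ^= 1
decoded = bit_flip_decode(received, H)
```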

  5. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    PubMed

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model; the learning-based R-D model overcomes the legacy "chicken-and-egg" dilemma in video coding. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted to give inter frames more bit resources, maintaining smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method achieves much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limits of the FixedQP method.

  6. Crashworthiness: Planes, trains, and automobiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Logan, R.W.; Tokarz, F.J.; Whirley, R.G.

    A powerful DYNA3D computer code simulates the dynamic effects of stress traveling through structures. It is the most advanced modeling tool available to study crashworthiness problems and to analyze impacts. Now used by some 1000 companies, government research laboratories, and universities in the U.S. and abroad, DYNA3D is also a preeminent example of successful technology transfer. The initial interest in such a code was to simulate the structural response of weapons systems. The need was to model not the explosive or nuclear events themselves but rather the impacts of weapons systems with the ground, tracking the stress waves as they move through the object. This type of computer simulation augmented or, in certain cases, reduced the need for expensive and time-consuming crash testing.

  7. A demonstration of adjoint methods for multi-dimensional remote sensing of the atmosphere and surface

    NASA Astrophysics Data System (ADS)

    Martin, William G. K.; Hasekamp, Otto P.

    2018-01-01

    In previous work, we derived the adjoint method as a computationally efficient path to three-dimensional (3D) retrievals of clouds and aerosols. In this paper we will demonstrate the use of adjoint methods for retrieving two-dimensional (2D) fields of cloud extinction. The demonstration uses a new 2D radiative transfer solver (FSDOM). This radiation code was augmented with adjoint methods to allow efficient derivative calculations needed to retrieve cloud and surface properties from multi-angle reflectance measurements. The code was then used in three synthetic retrieval studies. Our retrieval algorithm adjusts the cloud extinction field and surface albedo to minimize the measurement misfit function with a gradient-based, quasi-Newton approach. At each step we compute the value of the misfit function and its gradient with two calls to the solver FSDOM. First we solve the forward radiative transfer equation to compute the residual misfit with measurements, and second we solve the adjoint radiative transfer equation to compute the gradient of the misfit function with respect to all unknowns. The synthetic retrieval studies verify that adjoint methods are scalable to retrieval problems with many measurements and unknowns. We can retrieve the vertically-integrated optical depth of moderately thick clouds as a function of the horizontal coordinate. It is also possible to retrieve the vertical profile of clouds that are separated by clear regions. The vertical profile retrievals improve for smaller cloud fractions. This leads to the conclusion that cloud edges actually increase the amount of information that is available for retrieving the vertical profile of clouds. However, to exploit this information one must retrieve the horizontally heterogeneous cloud properties with a 2D (or 3D) model. This prototype shows that adjoint methods can efficiently compute the gradient of the misfit function. 
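
    The two-solve structure described above (one forward solve for the residual, one adjoint solve for the gradient with respect to all unknowns) can be illustrated with a toy linear forward model in Python; the matrices are synthetic stand-ins for FSDOM, and the adjoint gradient is checked against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((20, 5))   # synthetic linear forward operator
d = rng.standard_normal(20)        # synthetic measurements
m = rng.standard_normal(5)         # current estimate of the unknowns

# "Forward solve": residual between modeled and measured radiances.
r = F @ m - d
J = 0.5 * r @ r                    # measurement misfit function

# "Adjoint solve": gradient of J with respect to ALL unknowns at once.
grad = F.T @ r

# Finite-difference check of one gradient component.
eps = 1e-6
m2 = m.copy(); m2[0] += eps
r2 = F @ m2 - d
fd = (0.5 * r2 @ r2 - J) / eps
```

    The key scaling property the abstract emphasizes is visible here: `grad` delivers the derivative with respect to every unknown from a single adjoint application, regardless of how many unknowns there are.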
This work paves the way for the application of similar methods to 3D remote sensing problems.

  8. Surface 3D nanostructuring by tightly focused laser pulse: simulations by Lagrangian code and molecular dynamics

    NASA Astrophysics Data System (ADS)

    Inogamov, Nail A.; Zhakhovsky, Vasily V.

    2016-02-01

    There are many important applications in which ultrashort, diffraction-limited and therefore tightly focused laser pulses irradiate metal films mounted on a dielectric substrate. Here we present a detailed picture of laser peeling and 3D structure formation of thin (relative to the depth of the heat-affected zone in bulk targets) gold films on a glass substrate. The underlying physics of such diffraction-limited laser peeling was not well understood previously. Our approach is based on a physical model which takes into consideration new calculations of the two-temperature (2T) equation of state (2T EoS) and the two-temperature transport coefficients, together with the coupling parameter between the electron and ion subsystems. The usage of the 2T EoS and the kinetic coefficients is required because absorption of an ultrashort pulse with a duration of 10-1000 fs excites the electron subsystem of the metal and transfers the substance into the 2T state, with hot electrons (typical electron temperatures 1-3 eV) and much colder ions. It is shown that formation of submicrometer-sized 3D structures is a result of the electron-ion energy transfer, melting, and delamination of the film from the substrate under the combined action of electron and ion pressures, capillary deceleration of the delaminated liquid metal or semiconductor, and ultrafast freezing of the molten material. We found that the freezing proceeds in a non-equilibrium regime with a strongly overcooled liquid phase. In this case the Stefan approximation is inapplicable because the solidification front speed is limited by the diffusion rate of atoms in the molten material. To solve the problem we have developed a 2T Lagrangian code incorporating all of this rich physics. We also used a high-performance combined Monte Carlo and molecular dynamics code for simulation of surface 3D nanostructuring at later times, after completion of the electron-ion relaxation.

  9. Simulation on an optimal combustion control strategy for 3-D temperature distributions in tangentially pc-fired utility boiler furnaces.

    PubMed

    Wang, Xi-fen; Zhou, Huai-chun

    2005-01-01

    The control of the 3-D temperature distribution in a utility boiler furnace is essential for the safe, economic and clean operation of a pc-fired furnace with a multi-burner system. The development of visualization of 3-D temperature distributions in pc-fired furnaces makes possible a new combustion control strategy that takes the furnace temperature directly as its goal, to improve the control quality of the combustion processes. Studied in this paper is such a strategy, in which the whole furnace is divided into several parts in the vertical direction, and the average temperature and its bias from the center of every cross section are extracted from the visualization results of the 3-D temperature distributions. In the simulation stage, a computational fluid dynamics (CFD) code served to calculate the 3-D temperature distributions in a furnace; then a linear model was set up to relate the features of the temperature distributions to the inputs of the combustion processes, such as the flow rates of fuel and air fed into the furnace through all the burners. An adaptive genetic algorithm was adopted to find the optimal combination of the input parameters that forms the optimal 3-D temperature field desired for boiler operation. Simulation results showed that the strategy could quickly find the factors driving the temperature distribution away from the optimal state and give correct adjusting suggestions.
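
    A real-coded genetic loop of the kind described can be sketched in Python; the matrix B standing in for the CFD-derived linear furnace model, the target temperatures, and all dimensions below are synthetic (the actual study used an adaptive GA on visualization-derived features):

```python
import numpy as np

rng = np.random.default_rng(42)
B = rng.uniform(0.5, 1.5, (6, 8))      # toy linear model: section temps = B @ u
T_target = np.full(6, 1200.0)          # desired average temperature per section

def fitness(u):
    """Negative squared misfit between modeled and desired temperatures."""
    return -float(np.sum((B @ u - T_target) ** 2))

def ga(pop_size=40, gens=200, lo=0.0, hi=300.0):
    """Real-coded GA: truncation selection, arithmetic crossover,
    Gaussian mutation, and elitism."""
    pop = rng.uniform(lo, hi, (pop_size, 8))       # fuel/air flow settings
    for _ in range(gens):
        scores = np.array([fitness(u) for u in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        i = rng.integers(0, len(parents), (pop_size, 2))
        a = rng.uniform(0, 1, (pop_size, 1))
        pop = a * parents[i[:, 0]] + (1 - a) * parents[i[:, 1]]  # crossover
        pop += rng.normal(0, 3.0, pop.shape)                     # mutation
        pop = np.clip(pop, lo, hi)
        pop[0] = parents[0]                                      # elitism
    return pop[0]

best = ga()
```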

  10. Bayesian Atmospheric Radiative Transfer (BART): Thermochemical Equilibrium Abundance (TEA) Code and Application to WASP-43b

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Harrington, Joseph; Bowman, Matthew O.; Cubillos, Patricio E.; Stemm, Madison; Foster, Andrew

    2014-11-01

    We present a new, open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. TEA uses the Gibbs-free-energy minimization method with an iterative Lagrangian optimization scheme. It initializes the radiative-transfer calculation in our Bayesian Atmospheric Radiative Transfer (BART) code. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. The code is tested against the original method developed by White et al. (1958), the analytic method developed by Burrows and Sharp (1999), and the Newton-Raphson method implemented in the open-source Chemical Equilibrium with Applications (CEA) code. TEA is written in Python and is available to the community via the open-source development site GitHub.com. We also present BART applied to eclipse depths of the exoplanet WASP-43b, constraining atmospheric thermal and chemical parameters. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
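
    The Gibbs-minimization idea behind TEA can be sketched for a toy three-species H/O system using SciPy's SLSQP solver. The free energies below are illustrative placeholders in units of RT, not TEA's thermodynamic data, and TEA's own iterative Lagrangian scheme differs in detail:

```python
import numpy as np
from scipy.optimize import minimize

# Species: H2, O2, H2O.  g[i] = standard Gibbs free energy in units of RT
# (illustrative values only; a real calculation uses tabulated data).
g = np.array([0.0, 0.0, -30.0])
A = np.array([[2, 0, 2],        # H atoms per molecule
              [0, 2, 1]])       # O atoms per molecule
b = np.array([2.0, 1.0])        # total elemental abundances (2 H, 1 O)

def gibbs(n):
    """Total Gibbs free energy (in RT) of mole numbers n, ideal-gas mixing."""
    n = np.maximum(n, 1e-12)
    return float(np.sum(n * (g + np.log(n / n.sum()))))

# Minimize G subject to elemental conservation A @ n = b.
res = minimize(gibbs, x0=np.array([0.5, 0.3, 0.5]),
               bounds=[(1e-10, None)] * 3,
               constraints=[{"type": "eq", "fun": lambda n: A @ n - b}],
               method="SLSQP")
n_eq = res.x   # equilibrium mole numbers; H2O should dominate here
```

    With the strongly negative placeholder free energy for H2O, nearly all hydrogen and oxygen end up as water, while the equality constraint keeps the elemental budget exactly balanced.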

  11. Intercomparison of three microwave/infrared high resolution line-by-line radiative transfer codes

    NASA Astrophysics Data System (ADS)

    Schreier, Franz; Milz, Mathias; Buehler, Stefan A.; von Clarmann, Thomas

    2018-05-01

    An intercomparison of three line-by-line (lbl) codes developed independently for atmospheric radiative transfer and remote sensing - ARTS, GARLIC, and KOPRA - has been performed for a thermal infrared nadir sounding application assuming a HIRS-like (High resolution Infrared Radiation Sounder) setup. Radiances for the 19 HIRS infrared channels and a set of 42 atmospheric profiles from the "Garand dataset" have been computed. The mutual differences of the equivalent brightness temperatures are presented and possible causes of disagreement are discussed. In particular, the impact of path integration schemes and atmospheric layer discretization is assessed. When the continuum absorption contribution is ignored because of the different implementations, residuals are generally in the sub-Kelvin range and smaller than 0.1 K for some window channels (and all atmospheric models and lbl codes). None of the three codes turned out to be perfect for all channels and atmospheres. Remaining discrepancies are attributed to different lbl optimization techniques. Lbl codes seem to have reached such maturity in the implementation of radiative transfer that the choice of the underlying physical models (line shape models, continua, etc.) becomes increasingly relevant.

  12. Rocket injector anomalies study. Volume 1: Description of the mathematical model and solution procedure

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Singhal, A. K.; Tam, L. T.

    1984-01-01

    The capability of simulating three-dimensional two-phase reactive flows with combustion in liquid-fuelled rocket engines is demonstrated. This was accomplished by modifying an existing three-dimensional computer program (REFLAN3D) with an Eulerian-Lagrangian approach to simulate two-phase spray flow, evaporation and combustion. The modified code is referred to as REFLAN3D-SPRAY. The mathematical formulation of the fluid flow, heat transfer, combustion and two-phase flow interaction is described, together with the numerical solution procedure, boundary conditions and their treatment.

  13. Lifting scheme-based method for joint coding of 3D stereo digital cinema with luminance correction and optimized prediction

    NASA Astrophysics Data System (ADS)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing natural, real scenes as we see them in the real world every day is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by designing an efficient transform that reduces the existing redundancy in the stereo image pair. The approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
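
    The split/predict/update skeleton of a lifting scheme is easy to demonstrate in Python with the Haar wavelet on a 1-D signal; in the paper's stereo scheme the predict step is replaced by disparity compensation plus luminance correction, but the perfect-reconstruction mechanics are the same:

```python
import numpy as np

def haar_lift_forward(x):
    """One level of the Haar wavelet via lifting: split, predict, update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even          # predict: odd samples predicted by even neighbors
    s = even + d / 2        # update: preserve the running average
    return s, d

def haar_lift_inverse(s, d):
    """Undo the lifting steps in reverse order for perfect reconstruction."""
    even = s - d / 2
    odd = d + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([5.0, 7.0, 3.0, 1.0, 4.0, 4.0])
s, d = haar_lift_forward(x)
x_rec = haar_lift_inverse(s, d)   # reconstructs x exactly
```

    Because each lifting step is trivially invertible regardless of what predictor is plugged in, the hybrid disparity/luminance predictor can replace the simple difference above without sacrificing lossless reconstruction.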

  14. FPGA-based LDPC-coded APSK for optical communication systems.

    PubMed

    Zou, Ding; Lin, Changyu; Djordjevic, Ivan B

    2017-02-20

    In this paper, with the aid of mutual information and generalized mutual information (GMI) capacity analyses, it is shown that geometrically shaped APSK that mimics an optimal Gaussian distribution with equiprobable signaling, together with the corresponding Gray-mapping rules, can approach the Shannon limit more closely than conventional quadrature amplitude modulation (QAM) over a certain range of FEC overhead for both 16-APSK and 64-APSK. The field programmable gate array (FPGA) based LDPC-coded APSK emulation is conducted on block-interleaver-based and bit-interleaver-based systems; the results verify a significant improvement in hardware-efficient bit-interleaver-based systems. In bit-interleaver-based emulation, the LDPC-coded 64-APSK outperforms 64-QAM, in terms of symbol signal-to-noise ratio (SNR), by 0.1 dB, 0.2 dB, and 0.3 dB at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz, respectively. It is found by emulation that LDPC-coded 64-APSK for spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz is 1.6 dB, 1.7 dB, and 2.2 dB away from the GMI capacity.
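
    The ring-structured constellations involved can be sketched in Python; the 4+12 16-APSK below with an illustrative radius ratio shows the basic geometry (the exact geometric shaping and Gray mapping used in the paper are not reproduced here):

```python
import numpy as np

def apsk_16(gamma=2.57):
    """4+12 16-APSK: 4 points on an inner ring and 12 on an outer ring of
    radius ratio gamma (illustrative value), normalized to unit mean power."""
    inner = np.exp(1j * (2 * np.pi * np.arange(4) / 4 + np.pi / 4))
    outer = gamma * np.exp(1j * (2 * np.pi * np.arange(12) / 12 + np.pi / 12))
    pts = np.concatenate([inner, outer])
    return pts / np.sqrt(np.mean(np.abs(pts) ** 2))   # unit average power

constellation = apsk_16()
```

    Geometric shaping toward a Gaussian-like amplitude distribution is obtained by choosing the ring radii and point counts; the sketch fixes them to the common 4+12 layout for clarity.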

  15. Active Structural Acoustic Control of Interior Noise on a Raytheon 1900D

    NASA Technical Reports Server (NTRS)

    Palumbo, Dan; Cabell, Ran; Sullivan, Brenda; Cline, John

    2000-01-01

    An active structural acoustic control system has been demonstrated on a Raytheon Aircraft Company 1900D turboprop airliner. Both single-frequency and multi-frequency control of the blade passage frequency and its harmonics was accomplished. The control algorithm was a variant of the popular filtered-x LMS, implemented in the principal component domain. The control system consisted of 21 inertial actuators and 32 microphones. The actuators were mounted to the aircraft's ring frames. The microphones were distributed uniformly throughout the interior at head height, both seated and standing. Actuator locations were selected using a combinatorial search optimization algorithm. The control system achieved a 14 dB noise reduction of the blade passage frequency during single-frequency tests. Multi-frequency control of the 1st, 2nd and 3rd harmonics resulted in 10.2 dB, 3.3 dB and 1.6 dB noise reductions, respectively. These results fall short of the predictions produced by the optimization algorithm (13.5 dB, 8.6 dB and 6.3 dB). The optimization was based on actuator transfer functions measured on the ground, and it is postulated that cabin pressurization at flight altitude was a factor in this discrepancy.
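
    The actuator-placement step can be sketched as a brute-force combinatorial search in Python, scoring each candidate actuator subset by the least-squares residual noise it leaves at the microphones. The matrices below are random stand-ins for the measured actuator-to-microphone transfer functions, and the real 21-actuator selection used a smarter search than exhaustive enumeration:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
# Complex transfer functions: 32 microphones x 10 candidate actuator sites.
H = rng.standard_normal((32, 10)) + 1j * rng.standard_normal((32, 10))
d = rng.standard_normal(32) + 1j * rng.standard_normal(32)  # disturbance field

def residual(cols):
    """Residual noise norm with only the actuators in `cols` driven,
    using the least-squares-optimal control inputs."""
    Hs = H[:, list(cols)]
    u, *_ = np.linalg.lstsq(Hs, -d, rcond=None)
    return float(np.linalg.norm(d + Hs @ u))

k = 3  # actuators to place
best = min(itertools.combinations(range(H.shape[1]), k), key=residual)
```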

  16. The Intercomparison of 3D Radiation Codes (I3RC): Showcasing Mathematical and Computational Physics in a Critical Atmospheric Application

    NASA Astrophysics Data System (ADS)

    Davis, A. B.; Cahalan, R. F.

    2001-05-01

    The Intercomparison of 3D Radiation Codes (I3RC) is an on-going initiative involving an international group of over 30 researchers engaged in the numerical modeling of three-dimensional radiative transfer as applied to clouds. Because of their strong variability and extreme opacity, clouds are indeed a major source of uncertainty in the Earth's local radiation budget (at GCM grid scales). Also, 3D effects (at satellite pixel scales) invalidate the standard plane-parallel assumption made in the routine of cloud-property remote sensing at NASA and NOAA. Accordingly, the test-cases used in I3RC are based on inputs and outputs which relate to cloud effects in atmospheric heating rates and in real-world remote sensing geometries. The main objectives of I3RC are to (1) enable participants to improve their models, (2) publish results as a community, (3) archive source code, and (4) educate. We will survey the status of I3RC and its plans for the near future with a special emphasis on the mathematical models and computational approaches. We will also describe some of the prime applications of I3RC's efforts in climate models, cloud-resolving models, and remote-sensing observations of clouds, or that of the surface in their presence. In all these application areas, computational efficiency is the main concern and not accuracy. One of I3RC's main goals is to document the performance of as wide a variety as possible of three-dimensional radiative transfer models for a small but representative number of "cases." However, it is dominated by modelers working at the level of linear transport theory (i.e., they solve the radiative transfer equation), and an overwhelming majority of these participants use slow-but-robust Monte Carlo techniques. This means that only a small portion of the efficiency vs. accuracy vs. flexibility domain is currently populated by I3RC participants. To balance this natural clustering, the present authors have organized systematic outreach toward modelers who have used approximate methods in radiation transport. In this context, different, presumably simpler, equations (such as diffusion) are used in order to make a significant gain on the efficiency axis. We will describe in some detail the most promising approaches to approximate 3D radiative transfer in clouds. Somewhat paradoxically, and in spite of its importance in the above-mentioned applications, approximate radiative transfer modeling lags significantly behind its exact counterpart because the required mathematical and computational culture is essentially alien to the native atmospheric radiation community. I3RC is receiving enough funding from NASA/HQ and DOE/ARM for its essential operations out of NASA/GSFC. However, this does not cover the time and effort of any of the participants, so only existing models were entered. At present, none of the inherently approximate methods are represented, only severe truncations of some exact methods. We therefore welcome the Math/Geo initiative at NSF, which should enable the proper consortia of experts in atmospheric radiation and in applied mathematics to fill an important niche.

  17. Survey of computer programs for heat transfer analysis

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1986-01-01

    An overview is given of the current capabilities of thirty-three computer programs used to solve heat transfer problems. The programs considered range from large general-purpose codes with a broad spectrum of capabilities, a large user community, and comprehensive user support (e.g., ABAQUS, ANSYS, EAL, MARC, MITAS II, MSC/NASTRAN, and SAMCEF) to small, special-purpose codes with limited user communities, such as ANDES, NTEMP, TAC2D, TAC3D, TEPSA and TRUMP. The majority of the programs use either finite elements or finite differences for the spatial discretization. The capabilities of the programs are listed in tabular form, followed by a summary of the major features of each program. The information presented herein is based on a questionnaire sent to the developers of each program and is preceded by brief background material needed for effective evaluation and use of computer programs for heat transfer analysis. The present survey is useful in the initial selection of the programs most suitable for a particular application. The final selection of the program to be used should, however, be based on a detailed examination of the documentation and the literature about the program.

  18. TEMPEST: A computer code for three-dimensional analysis of transient fluid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fort, J.A.

    TEMPEST (Transient Energy Momentum and Pressure Equations Solutions in Three dimensions) is a powerful tool for solving engineering problems in nuclear energy, waste processing, chemical processing, and environmental restoration because it performs 3-D time-dependent computational fluid dynamics and heat transfer analysis. It is a family of codes with two primary versions: an N-Version (available to the public) and a T-Version (not currently available to the public). This handout discusses its capabilities, applications, numerical algorithms, development status, and availability and assistance.

  19. Analysis and Description of HOLTIN Service Provision for AECG monitoring in Complex Indoor Environments

    PubMed Central

    Led, Santiago; Azpilicueta, Leire; Aguirre, Erik; de Espronceda, Miguel Martínez; Serrano, Luis; Falcone, Francisco

    2013-01-01

    In this work, a novel ambulatory ECG monitoring device developed in-house, called HOLTIN, is analyzed when operating in complex indoor scenarios. The HOLTIN system is described from the technological platform level to its functional model. In addition, an analysis of the wireless channel behavior, which enables ubiquitous operation, is performed using an in-house 3D ray-launching simulation code. The effect of human body presence is taken into account by a novel simplified model embedded within the 3D ray-launching code. Simulation as well as measurement results are presented, showing good agreement. These results may aid the adequate deployment of this novel device to automate conventional medical processes, increasing the coverage radius and optimizing energy consumption. PMID:23584122

  20. Coded excitation with spectrum inversion (CEXSI) for ultrasound array imaging.

    PubMed

    Wang, Yao; Metzger, Kurt; Stephens, Douglas N; Williams, Gregory; Brownlie, Scott; O'Donnell, Matthew

    2003-07-01

    In this paper, a scheme called coded excitation with spectrum inversion (CEXSI) is presented. An established optimal binary code, whose spectrum has no nulls and possesses the least variation, is encoded as a burst for transmission. Using this optimal code, the decoding filter can be derived directly from its inverse spectrum. Various transmission techniques can be used to improve energy coupling within the system pass-band. We demonstrate its potential to achieve excellent decoding with very low (< -80 dB) side-lobes. For a 2.6 μs code and an array element with a center frequency of 10 MHz and a fractional bandwidth of 38%, range side-lobes of about -40 dB have been achieved experimentally with little compromise in range resolution. The signal-to-noise ratio (SNR) improvement has also been characterized, at about 14 dB. Along with simulations and experimental data, we present a formulation of the scheme, according to which CEXSI can be extended to improve SNR in sparse-array imaging in general.
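
    The decode-by-inverse-spectrum idea can be illustrated in Python with the length-4 binary code [1, 1, 1, -1], whose DFT magnitude is perfectly flat, so there are no nulls to invert through. Circular convolution is used for simplicity, whereas CEXSI operates on real transmit bursts with band-limited transducers:

```python
import numpy as np

code = np.array([1.0, 1.0, 1.0, -1.0])
# This code's DFT magnitude is flat (all bins equal 2): no spectral nulls,
# so its inverse spectrum is a well-conditioned decoding filter.
assert np.allclose(np.abs(np.fft.fft(code)), 2.0)

# Received echo: the code circularly delayed (toy single-target model).
delay = 2
received = np.roll(code, delay)

# Decoding filter = inverse of the code spectrum; the output collapses the
# coded burst back to a single spike at the target delay.
decoded = np.fft.ifft(np.fft.fft(received) / np.fft.fft(code)).real
```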

  1. Optimization of algorithm of coding of genetic information of Chlamydia

    NASA Astrophysics Data System (ADS)

    Feodorova, Valentina A.; Ulyanov, Sergey S.; Zaytsev, Sergey S.; Saltykov, Yury V.; Ulianova, Onega V.

    2018-04-01

    A new method of coding genetic information using coherent optical fields is developed. A universal technique for transforming nucleotide sequences of a bacterial gene into a laser speckle pattern is suggested. Reference speckle patterns of the nucleotide sequences of the omp1 gene of typical wild strains of Chlamydia trachomatis of genovars D, E, F, G, J and K, as well as Chlamydia psittaci serovar I, are generated. The algorithm for coding gene information into a speckle pattern is optimized. Fully developed speckles with Gaussian statistics have been used as the criterion of optimization for the gene-based speckles.

  2. Full-dimensional quantum calculations of ground-state tunneling splitting of malonaldehyde using an accurate ab initio potential energy surface

    NASA Astrophysics Data System (ADS)

    Wang, Yimin; Braams, Bastiaan J.; Bowman, Joel M.; Carter, Stuart; Tew, David P.

    2008-06-01

    Quantum calculations of the ground vibrational state tunneling splitting of H-atom and D-atom transfer in malonaldehyde are performed on a full-dimensional ab initio potential energy surface (PES). The PES is a fit to 11 147 near basis-set-limit frozen-core CCSD(T) electronic energies. This surface properly describes the invariance of the potential with respect to all permutations of identical atoms. The saddle-point barrier for the H-atom transfer on the PES is 4.1 kcal/mol, in excellent agreement with the reported ab initio value. Model one-dimensional and ``exact'' full-dimensional calculations of the splitting for H- and D-atom transfer are done using this PES. The tunneling splittings in full dimensionality are calculated using the unbiased ``fixed-node'' diffusion Monte Carlo (DMC) method in Cartesian and saddle-point normal coordinates. The ground-state tunneling splitting is found to be 21.6 cm-1 in Cartesian coordinates and 22.6 cm-1 in normal coordinates, with an uncertainty of 2-3 cm-1. This splitting is also calculated based on a model which makes use of the exact single-well zero-point energy (ZPE) obtained with the MULTIMODE code and DMC ZPE and this calculation gives a tunneling splitting of 21-22 cm-1. The corresponding computed splittings for the D-atom transfer are 3.0, 3.1, and 2-3 cm-1. These calculated tunneling splittings agree with each other to within less than the standard uncertainties obtained with the DMC method used, which are between 2 and 3 cm-1, and agree well with the experimental values of 21.6 and 2.9 cm-1 for the H and D transfer, respectively.
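
    The diffusion Monte Carlo machinery used for these splittings can be sketched in Python for the simplest nodeless case, a 1-D harmonic oscillator whose exact zero-point energy is 0.5 in units hbar = m = omega = 1; the real calculation adds many dimensions, the fitted PES, and fixed nodes:

```python
import numpy as np

def dmc_zpe(n_walkers=2000, dt=0.01, n_steps=3000, seed=7):
    """Minimal diffusion Monte Carlo for V(x) = x^2/2 (hbar = m = omega = 1).
    Walkers diffuse freely, branch according to the local potential, and the
    reference energy is steered to hold the population steady; its late-time
    average estimates the ground-state energy (exact value: 0.5)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_walkers)
    e_ref = 0.0
    history = []
    for step in range(n_steps):
        x = x + np.sqrt(dt) * rng.standard_normal(x.size)  # free diffusion
        v = 0.5 * x**2
        w = np.exp(-(v - e_ref) * dt)                      # branching weights
        copies = (w + rng.random(x.size)).astype(int)      # stochastic rounding
        x = np.repeat(x, copies)                           # birth/death
        # mixed potential estimate plus population-control feedback
        e_ref = float(np.mean(0.5 * x**2)) + (1.0 - x.size / n_walkers)
        if step > n_steps // 2:
            history.append(e_ref)
    return float(np.mean(history))

e0 = dmc_zpe()   # should land near the exact ZPE of 0.5
```

    For the tunneling-splitting application, two such ground-state energies are computed (symmetric and antisymmetric, the latter enforced by a fixed node at the barrier), and the splitting is their difference.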

  3. Full-dimensional quantum calculations of ground-state tunneling splitting of malonaldehyde using an accurate ab initio potential energy surface.

    PubMed

    Wang, Yimin; Braams, Bastiaan J; Bowman, Joel M; Carter, Stuart; Tew, David P

    2008-06-14

    Quantum calculations of the ground vibrational state tunneling splitting of H-atom and D-atom transfer in malonaldehyde are performed on a full-dimensional ab initio potential energy surface (PES). The PES is a fit to 11 147 near basis-set-limit frozen-core CCSD(T) electronic energies. This surface properly describes the invariance of the potential with respect to all permutations of identical atoms. The saddle-point barrier for the H-atom transfer on the PES is 4.1 kcal/mol, in excellent agreement with the reported ab initio value. Model one-dimensional and "exact" full-dimensional calculations of the splitting for H- and D-atom transfer are done using this PES. The tunneling splittings in full dimensionality are calculated using the unbiased "fixed-node" diffusion Monte Carlo (DMC) method in Cartesian and saddle-point normal coordinates. The ground-state tunneling splitting is found to be 21.6 cm(-1) in Cartesian coordinates and 22.6 cm(-1) in normal coordinates, with an uncertainty of 2-3 cm(-1). This splitting is also calculated based on a model which makes use of the exact single-well zero-point energy (ZPE) obtained with the MULTIMODE code and DMC ZPE and this calculation gives a tunneling splitting of 21-22 cm(-1). The corresponding computed splittings for the D-atom transfer are 3.0, 3.1, and 2-3 cm(-1). These calculated tunneling splittings agree with each other to within less than the standard uncertainties obtained with the DMC method used, which are between 2 and 3 cm(-1), and agree well with the experimental values of 21.6 and 2.9 cm(-1) for the H and D transfer, respectively.

  4. Optimal high- and low-thrust geocentric transfer

    NASA Technical Reports Server (NTRS)

    Sackett, L. L.; Edelbaum, T. N.

    1974-01-01

    A computer code which rapidly calculates time optimal combined high- and low-thrust transfers between two geocentric orbits in the presence of a strong gravitational field has been developed as a mission analysis tool. The low-thrust portion of the transfer can be between any two arbitrary ellipses. There is an option for including the effect of two initial high-thrust impulses which would raise the spacecraft from a low, initially circular orbit to the initial orbit for the low-thrust portion of the transfer. In addition, the effect of a single final impulse after the low-thrust portion of the transfer may be included. The total Delta V for the initial two impulses must be specified as well as the Delta V for the final impulse. Either solar electric or nuclear electric propulsion can be assumed for the low-thrust phase of the transfer.
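
    For the low-thrust leg between circular orbits, Edelbaum's classic closed-form Δv for combined altitude and plane change gives a feel for the quantities such a code optimizes (a textbook formula, not the paper's own algorithm; the speeds below are approximate circular-orbit values):

```python
import math

def edelbaum_dv(v0, v1, delta_i_deg):
    """Edelbaum's low-thrust delta-v between circular orbits with circular
    speeds v0 and v1 [km/s] and a total plane change of delta_i [deg]."""
    di = math.radians(delta_i_deg)
    return math.sqrt(v0**2 - 2.0 * v0 * v1 * math.cos(math.pi * di / 2.0)
                     + v1**2)

# Example: LEO (~7.73 km/s) to GEO (~3.07 km/s) with a 28.5 deg plane change;
# the combined low-thrust delta-v comes out near 6 km/s.
dv = edelbaum_dv(7.73, 3.07, 28.5)
```

    With zero plane change the formula collapses to |v0 - v1|, the familiar coplanar low-thrust spiral result.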

  5. GPU Optimizations for a Production Molecular Docking Code*

    PubMed Central

    Landaverde, Raphael; Herbordt, Martin C.

    2015-01-01

    Modeling molecular docking is critical to both understanding life processes and designing new drugs. In previous work we created the first published GPU-accelerated docking code (PIPER) which achieved a roughly 5× speed-up over a contemporaneous 4 core CPU. Advances in GPU architecture and in the CPU code, however, have since reduced this relative performance by a factor of 10. In this paper we describe the upgrade of GPU PIPER. This required an entire rewrite, including algorithm changes and moving most remaining non-accelerated CPU code onto the GPU. The result is a 7× improvement in GPU performance and a 3.3× speedup over the CPU-only code. We find that this difference in time is almost entirely due to the difference in run times of the 3D FFT library functions on CPU (MKL) and GPU (cuFFT), respectively. The GPU code has been integrated into the ClusPro docking server which has over 4000 active users. PMID:26594667

  6. GPU Optimizations for a Production Molecular Docking Code.

    PubMed

    Landaverde, Raphael; Herbordt, Martin C

    2014-09-01

    Modeling molecular docking is critical to both understanding life processes and designing new drugs. In previous work we created the first published GPU-accelerated docking code (PIPER) which achieved a roughly 5× speed-up over a contemporaneous 4 core CPU. Advances in GPU architecture and in the CPU code, however, have since reduced this relative performance by a factor of 10. In this paper we describe the upgrade of GPU PIPER. This required an entire rewrite, including algorithm changes and moving most remaining non-accelerated CPU code onto the GPU. The result is a 7× improvement in GPU performance and a 3.3× speedup over the CPU-only code. We find that this difference in time is almost entirely due to the difference in run times of the 3D FFT library functions on CPU (MKL) and GPU (cuFFT), respectively. The GPU code has been integrated into the ClusPro docking server which has over 4000 active users.

  7. The complete mitogenome of Ginkgo-toothed beaked whale (Mesoplodon ginkgodens) (Chordata: Ziphiidae).

    PubMed

    Yao, Chiou-Ju; Chen, Ching-Hung; Hsiao, Chung-Der

    2016-07-01

    In this study, we used a next-generation sequencing method to deduce the complete mitogenome of the Ginkgo-toothed beaked whale (Mesoplodon ginkgodens) for the first time. The nucleotide composition was asymmetric (33.3% A, 25.3% C, 12.6% G, and 28.7% T) with an overall GC content of 37.9%. The length of the assembled mitogenome was 16,339 bp and follows the typical vertebrate arrangement, including 13 protein-coding genes, 22 transfer RNAs, 2 ribosomal RNA genes, and a non-coding control region (D-loop). The D-loop contains 870 bp and is located between tRNA-Pro and tRNA-Phe. The complete mitogenome of the Ginkgo-toothed beaked whale deduced in this study provides essential molecular data for further phylogenetic and evolutionary analyses of cetaceans.
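
    The reported GC content follows directly from the base composition; a small check (the helper names and sample sequence below are illustrative, not from the paper):

```python
def gc_content(composition):
    """GC content (%) from per-base percentages."""
    return composition['G'] + composition['C']

# Base composition reported for the M. ginkgodens mitogenome
comp = {'A': 33.3, 'C': 25.3, 'G': 12.6, 'T': 28.7}
print(gc_content(comp))    # ~37.9, matching the reported GC content
print(sum(comp.values()))  # ~99.9: the four rounded percentages

def gc_from_sequence(seq):
    """Same quantity computed from a raw sequence string."""
    return 100.0 * sum(1 for b in seq.upper() if b in 'GC') / len(seq)

print(gc_from_sequence('ATGCGC'))  # toy 6-bp example, 4 of 6 bases are G/C
```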

  8. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courau, T.; Plagne, L.; Ponicot, A.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, the need for three-dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best-estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the k_eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unkelbach, Jan, E-mail: junkelbach@mgh.harvard.edu; Botas, Pablo; Faculty of Physics, Ruprecht-Karls-Universität Heidelberg, Heidelberg

    Purpose: We describe a treatment plan optimization method for intensity modulated proton therapy (IMPT) that avoids high values of linear energy transfer (LET) in critical structures located within or near the target volume while limiting degradation of the best possible physical dose distribution. Methods and Materials: To allow fast optimization based on dose and LET, a GPU-based Monte Carlo code was extended to provide dose-averaged LET in addition to dose for all pencil beams. After optimizing an initial IMPT plan based on physical dose, a prioritized optimization scheme is used to modify the LET distribution while constraining the physical dose objectives to values close to the initial plan. The LET optimization step is performed based on objective functions evaluated for the product of LET and physical dose (LET×D). To first approximation, LET×D represents a measure of the additional biological dose that is caused by high LET. Results: The method is effective for treatments where serial critical structures with maximum dose constraints are located within or near the target. We report on 5 patients with intracranial tumors (high-grade meningiomas, base-of-skull chordomas, ependymomas) in whom the target volume overlaps with the brainstem and optic structures. In all cases, high LET×D in critical structures could be avoided while minimally compromising physical dose planning objectives. Conclusion: LET-based reoptimization of IMPT plans represents a pragmatic approach to bridge the gap between purely physical dose-based and relative biological effectiveness (RBE)-based planning. The method makes IMPT treatments safer by mitigating a potentially increased risk of side effects resulting from elevated RBE of proton beams near the end of range.

  10. Magnus: A New Resistive MHD Code with Heat Flow Terms

    NASA Astrophysics Data System (ADS)

    Navarro, Anamaría; Lora-Clavijo, F. D.; González, Guillermo A.

    2017-07-01

    We present a new magnetohydrodynamic (MHD) code for the simulation of wave propagation in the solar atmosphere under the effects of electrical resistivity (non-dominant) and heat transfer on a uniform 3D grid. The code is based on the finite-volume method combined with the HLLE and HLLC approximate Riemann solvers, which use different slope limiters such as MINMOD, MC, and WENO5. In order to control the growth of the divergence of the magnetic field due to numerical errors, we apply the Flux Constrained Transport method, which is described in detail to show how the resistive terms are included in the algorithm. In our results, it is verified that this method preserves the divergence of the magnetic field within the machine round-off error (~1×10^-12). To validate the accuracy and efficiency of the schemes implemented in the code, we present numerical tests in 1D and 2D for ideal MHD. Later, we show one test for the resistivity in a magnetic reconnection process and one for thermal conduction, where the temperature is advected by the magnetic field lines. Moreover, we display two numerical problems associated with MHD wave propagation. The first corresponds to a 3D evolution of a vertical velocity pulse at the photosphere-transition-corona region, while the second consists of a 2D simulation of a transverse velocity pulse in a coronal loop.
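
    The MINMOD limiter named above is a standard ingredient of such finite-volume schemes: it zeroes the reconstruction slope at local extrema and otherwise takes the smaller-magnitude one-sided slope, keeping the reconstruction monotone. A minimal sketch of the commonly used definition (an illustration, not the Magnus source):

```python
def minmod(a, b):
    """MINMOD slope limiter: 0 across extrema (opposite-sign slopes),
    otherwise the smaller-magnitude slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Cell-centered limited slopes from one-sided differences."""
    return [minmod(u[i] - u[i - 1], u[i + 1] - u[i]) for i in range(1, len(u) - 1)]

u = [0.0, 0.5, 2.0, 2.5, 2.0]   # a rise followed by a local maximum
print(limited_slopes(u))        # the slope is zeroed at the extremum
```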

  11. User's manual for the BNW-I optimization code for dry-cooled power plants. Volume I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun, D.J.; Daniel, D.J.; De Mier, W.V.

    1977-01-01

    This User's Manual provides information on the use and operation of three versions of BNW-I, a computer code developed by Battelle, Pacific Northwest Laboratory (PNL) as a part of its activities under the ERDA Dry Cooling Tower Program. These three versions of BNW-I were used as reported elsewhere to obtain comparative incremental costs of electrical power production by two advanced concepts (one using plastic heat exchangers and one using ammonia as an intermediate heat transfer fluid) and a state-of-the-art system. The computer program offers a comprehensive method of evaluating the cost savings potential of dry-cooled heat rejection systems and components for power plants. This method goes beyond simple ''figure-of-merit'' optimization of the cooling tower and includes such items as the cost of replacement capacity needed on an annual basis and the optimum split between plant scale-up and replacement capacity, as well as the purchase and operating costs of all major heat rejection components. Hence, the BNW-I code is a useful tool for determining potential cost savings of new heat transfer surfaces, new piping or other components as part of an optimized system for a dry-cooled power plant.

  12. Performance of an Optimized Eta Model Code on the Cray T3E and a Network of PCs

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Rancic, Miodrag; Geiger, Jim

    2000-01-01

    In the year 2001, NASA will launch the satellite TRIANA that will be the first Earth observing mission to provide a continuous, full disk view of the sunlit Earth. As a part of the HPCC Program at NASA GSFC, we have started a project whose objectives are to develop and implement a 3D cloud data assimilation system, by combining TRIANA measurements with model simulation, and to produce accurate statistics of global cloud coverage as an important element of the Earth's climate. For simulation of the atmosphere within this project we are using the NCEP/NOAA operational Eta model. In order to compare TRIANA and the Eta model data on approximately the same grid without significant downscaling, the Eta model will be integrated at a resolution of about 15 km. The integration domain (from -70 to +70 deg in latitude and 150 deg in longitude) will cover most of the sunlit Earth disc and will continuously rotate around the globe following TRIANA. The cloud data assimilation is supposed to run and produce 3D clouds on a near real-time basis. Such a numerical setup and integration design is very ambitious and computationally demanding. Thus, though the Eta model code has been very carefully developed and its computational efficiency has been systematically polished during the years of operational implementation at NCEP, the current MPI version may still have problems with memory and efficiency for the TRIANA simulations. Within this work, we optimize a parallel version of the Eta model code on a Cray T3E and a network of PCs (theHIVE) in order to improve its overall efficiency. Our optimization procedure consists of introducing dynamically allocated arrays to reduce the size of static memory, and optimizing on a single processor by splitting loops to limit the number of streams. All the presented results are derived using an integration domain centered at the equator, with a size of 60 x 60 deg, and with horizontal resolutions of 1/2 and 1/3 deg, respectively. 
In accompanying charts we report the elapsed time, the speedup, and the Mflops as a function of the number of processors for the non-optimized version of the code on the T3E and theHIVE. The large amount of communication required for model integration explains its poor performance on theHIVE. Our initial implementation of the dynamic memory allocation has contributed to about a 12% reduction of memory but has introduced a 3% overhead in computing time. This overhead was removed by performing loop splitting in some of the most demanding subroutines. When the Eta code is fully optimized to meet the memory requirement for TRIANA simulations, a non-negligible overhead may appear that could seriously affect the efficiency of the code. To alleviate this problem, we are considering implementation of a new algorithm for the horizontal advection that is computationally less expensive, and also a new approach for marching in time.

  13. 3D Radiative Transfer Code for Polarized Scattered Light with Aligned Grains

    NASA Astrophysics Data System (ADS)

    Pelkonen, V. M.; Penttilä, A.; Juvela, M.; Muinonen, K.

    2017-12-01

    Polarized scattered light has been observed in cometary comae and in circumstellar disks. It carries information about the grains from which the light scattered. However, modeling polarized scattered light is a complicated problem. We are working on a 3D Monte Carlo radiative transfer code which incorporates a hierarchical grid structure (octree) and the full Stokes vector for both the incoming radiation and the radiation scattered by dust grains. In the octree grid format, an upper-level cell can be divided into 8 subcells by halving the cell along each of the three axes. Levels of further refinement of the grid may be added until the desired resolution is reached. The radiation field is calculated with Monte Carlo methods. The path of the model ray is traced in the cloud: absorbed intensity is counted in each cell, and from time to time the model ray is scattered towards a new direction as determined by the dust model. Due to the non-spherical grains and the polarization, the scattering problem is the main issue for the code and the most time-consuming. The scattering parameters will be taken from the models for individual grains. We can introduce populations of different grain shapes into the dust model and randomly select, based on their amounts, from which shape the model ray scatters. Similarly, we can include aligned and non-aligned subpopulations of these grains, based on the grain alignment calculations, to see which grains should be oriented with the magnetic field, or, in the absence of a magnetic field close to the comet nucleus, with another axis of alignment (e.g., the radiation direction). The 3D nature of the grid allows us to assign these values, as well as density, for each computational cell, to model phenomena such as cometary jets. The code will record polarized scattered light towards one or more observer directions within a single simulation run. 
These results can then be compared with the observations of comets at different phase angles, or, in the case of other star systems, of circumstellar disks, to help us study these objects. We will present tests of the code in development with simple models.
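
    The octree refinement described (one parent cell split into 8 subcells by halving along each axis) can be sketched as follows; the (center, half-width) cell representation is an assumption made for illustration, not the code's actual data structure:

```python
def subdivide(center, half_size):
    """Split a cubic cell into its 8 octree children by halving each axis."""
    cx, cy, cz = center
    h = half_size / 2.0                      # children have half the parent's extent
    return [((cx + sx * h, cy + sy * h, cz + sz * h), h)
            for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]

root = ((0.0, 0.0, 0.0), 1.0)                # cube of half-width 1 at the origin
kids = subdivide(*root)
print(len(kids))                             # 8 subcells
grandkids = [c for kid in kids for c in subdivide(*kid)]
print(len(grandkids))                        # 64: one more level of refinement
```

    Refinement stops when a cell reaches the desired resolution, exactly as the abstract describes.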

  14. Planetary Torque in 3D Isentropic Disks

    NASA Astrophysics Data System (ADS)

    Fung, Jeffrey; Masset, Frédéric; Lega, Elena; Velasco, David

    2017-03-01

    Planetary migration is inherently a three-dimensional (3D) problem, because Earth-size planetary cores are deeply embedded in protoplanetary disks. Simulations of these 3D disks remain challenging due to the steep resolution requirements. Using two different hydrodynamics codes, FARGO3D and PEnGUIn, we simulate disk-planet interaction for a one to five Earth-mass planet embedded in an isentropic disk. We measure the torque on the planet and ensure that the measurements are converged both in resolution and between the two codes. We find that the torque is independent of the smoothing length of the planet's potential (r_s), and that it has a weak dependence on the adiabatic index of the gaseous disk (γ). The torque values correspond to an inward migration rate qualitatively similar to previous linear calculations. We perform additional simulations with explicit radiative transfer using FARGOCA, and again find agreement between 3D simulations and existing torque formulae. We also present the flow pattern around the planets, which shows that active flow is present within the planet's Hill sphere and that meridional vortices are shed downstream. The vertical flow speed near the planet is faster for a smaller r_s or γ, up to supersonic speeds for the smallest r_s and γ in our study.

  15. Study of information transfer optimization for communication satellites

    NASA Technical Reports Server (NTRS)

    Odenwalder, J. P.; Viterbi, A. J.; Jacobs, I. M.; Heller, J. A.

    1973-01-01

    The results are presented of a study of source coding, modulation/channel coding, and systems techniques for application to teleconferencing over high data rate digital communication satellite links. Simultaneous transmission of video, voice, data, and/or graphics is possible in various teleconferencing modes and one-way, two-way, and broadcast modes are considered. A satellite channel model including filters, limiter, a TWT, detectors, and an optimized equalizer is treated in detail. A complete analysis is presented for one set of system assumptions which exclude nonlinear gain and phase distortion in the TWT. Modulation, demodulation, and channel coding are considered, based on an additive white Gaussian noise channel model which is an idealization of an equalized channel. Source coding with emphasis on video data compression is reviewed, and the experimental facility utilized to test promising techniques is fully described.

  16. Modeling 3D conjugate heat and mass transfer for turbulent air drying of Chilean papaya in a direct contact dryer

    NASA Astrophysics Data System (ADS)

    Lemus-Mondaca, Roberto A.; Vega-Gálvez, Antonio; Zambra, Carlos E.; Moraga, Nelson O.

    2017-01-01

    A 3D model considering heat and mass transfer for food dehydration inside a direct-contact dryer is studied. The k-ε model is used to describe the turbulent air flow. The sample's thermophysical properties, such as density, specific heat, and thermal conductivity, are assumed to vary non-linearly with temperature. The FVM and the SIMPLE algorithm, implemented in a FORTRAN code, are used. Results for the unsteady velocity, temperature, moisture, kinetic energy, and dissipation rate of the air flow are presented, along with temperature and moisture values for the food. The validation procedure includes a comparison with experimental and numerical temperature and moisture content results obtained from experimental data, reaching deviations of 7-10%. In addition, this turbulent k-ε model provided a better understanding of the transport phenomena inside the dryer and sample.

  17. LETTER: Study of combined NBI and ICRF enhancement of the D-3He fusion yield with a Fokker-Planck code

    NASA Astrophysics Data System (ADS)

    Azoulay, M.; George, M. A.; Burger, A.; Collins, W. E.; Silberman, E.

    A two-dimensional bounce-averaged Fokker-Planck code is used to study the fusion yield and the wave absorption by residual hydrogen ions in higher-harmonic ICRF heating of D (120 keV) and 3He (80 keV) beams in the JT-60U tokamak. Both for the fourth harmonic resonance of 3He (ω = 4ω_c3He(0)), which is accompanied by the third harmonic resonance of hydrogen (ω = 3ω_cH) at the low-field side, and for the third harmonic resonance of 3He (ω = 4ω_cD(0) = 3ω_c3He(0) = 2ω_cH(0)), a few per cent of hydrogen ions are found to absorb a large fraction of the ICRF power and to degrade the fusion output power. In the latter case, D beam acceleration due to the fourth harmonic resonance in the 3He(D) regime can enhance the fusion yield more effectively. A discussion is given of the effect of D beam acceleration due to the fifth harmonic resonance (ω = 5ω_cD) at the high-field side in the case of ω = 4ω_c3He(0), and of the optimization of the fusion yield in the case of lower electron density and higher electron temperature.
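
    The stacked resonance conditions can be checked arithmetically: the cyclotron frequency ω_c = ZeB/(Am_u) scales as Z/A, so in units of eB0/m_u the third harmonic of 3He, the fourth of D, and the second of H coincide at the same field, while the third H harmonic accompanies ω = 4ω_c3He(0) at a reduced field (the low-field side). A small check (the unit convention is an assumption made for illustration):

```python
# Charge-to-mass ratios Z/A set the cyclotron frequency: omega_c = (Z/A) * e*B/m_u
z_over_a = {'H': 1.0 / 1.0, 'D': 1.0 / 2.0, 'He3': 2.0 / 3.0}

def harmonic(species, n, b=1.0):
    """n-th cyclotron harmonic in units of e*B0/m_u, at field b (in units of B0)."""
    return n * z_over_a[species] * b

# omega = 4*omega_cD(0) = 3*omega_c3He(0) = 2*omega_cH(0): all coincide (~2.0 here)
print(harmonic('D', 4), harmonic('He3', 3), harmonic('H', 2))

# For omega = 4*omega_c3He(0), the third H harmonic resonates where
# 3 * (Z/A)_H * b = 4 * (Z/A)_He3, i.e. at b = 8/9 < 1: the low-field side.
b_res = harmonic('He3', 4) / (3 * z_over_a['H'])
print(b_res)
```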

  18. Analysis of Coupled Seals, Secondary and Powerstream Flow Fields in Aircraft and Aerospace Turbomachines

    NASA Technical Reports Server (NTRS)

    Athavale, M. M.; Ho, Y. H.; Prezekwas, A. J.

    2005-01-01

    Higher-power, high-efficiency gas turbine engines require optimization of the seals and secondary flow systems, as well as their impact on the powerstream. This work focuses on two aspects: (1) applying present-day CFD tools (SCISEAL) to real-life secondary flow applications from different original equipment manufacturers (OEMs) to provide design feedback, and (2) developing a computational methodology for coupled, time-accurate simulation of the powerstream and secondary flow, with emphasis on the interaction of the disk-cavity and rim-seal flows with the powerstream (SCISEAL-MS-TURBO). One OEM simulation, of the Allison Engine Company T-56 turbine drum cavities including conjugate heat transfer, showed good agreement with data and provided design feedback. Another was the GE aspirating seal, where 3-D CFD simulations played a major role in the analysis and modification of that seal configuration. The second major objective, development of a coupled flow simulation capability, was achieved by using two codes, MS-TURBO for the powerstream and SCISEAL for the secondary flows, with an interface coupling algorithm. The coupled code was tested against data from three different configurations: (1) a bladeless rotor-stator-cavity turbine test rig, (2) the UTRC high-pressure turbine test rig, and (3) the NASA Low-Speed Air Compressor rig (LSAC), with results and limitations discussed herein.

  19. ScintSim1: A new Monte Carlo simulation code for transport of optical photons in 2D arrays of scintillation detectors

    PubMed Central

    Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali

    2014-01-01

    Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization. PMID:24600168

  20. ScintSim1: A new Monte Carlo simulation code for transport of optical photons in 2D arrays of scintillation detectors.

    PubMed

    Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali

    2014-01-01

    Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization.
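
    The heart of an optical Monte Carlo code of this kind is the photon-history loop: sample a free path from the attenuation coefficient and decide whether the photon reaches the receptor or is attenuated. A deliberately minimal 1D slab sketch of that loop (geometry and parameter names are assumptions, not ScintSim1's):

```python
import math
import random

def fraction_transmitted(mu, thickness, n_photons, seed=0):
    """Estimate the fraction of photons crossing a slab without interaction.
    Free paths are sampled from the exponential distribution s = -ln(u)/mu."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        s = -math.log(1.0 - rng.random()) / mu   # sampled free path
        if s > thickness:                        # photon reaches the far face
            transmitted += 1
    return transmitted / n_photons

est = fraction_transmitted(mu=1.0, thickness=1.0, n_photons=20000)
print(est, math.exp(-1.0))   # Monte Carlo estimate vs analytic exp(-mu*t)
```

    A full code adds scattering direction sampling, boundary reflection, and cross-talk between elements on top of this loop.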

  1. An efficient dictionary learning algorithm and its application to 3-D medical image denoising.

    PubMed

    Li, Shutao; Fang, Leyuan; Yin, Haitao

    2012-02-01

    In this paper, we propose an efficient dictionary learning algorithm for sparse representation of given data and suggest a way to apply this algorithm to 3-D medical image denoising. Our learning approach is composed of two main parts: sparse coding and dictionary updating. In the sparse coding stage, an efficient algorithm named multiple clusters pursuit (MCP) is proposed. The MCP first applies a dictionary structuring strategy to cluster the atoms with high coherence together, and then employs a multiple-selection strategy to select several competitive atoms at each iteration. These two strategies can greatly reduce the computational complexity of the MCP and help it obtain a better sparse solution. In the dictionary updating stage, an alternating optimization that efficiently approximates the singular value decomposition is introduced. Furthermore, in the 3-D medical image denoising application, a joint 3-D operation is proposed to exploit the learning capabilities of the presented algorithm to simultaneously capture the correlations within each slice and the correlations across nearby slices, thereby obtaining better denoising results. The experiments on both synthetically generated data and real 3-D medical images demonstrate that the proposed approach has superior performance compared to some well-known methods. © 2011 IEEE
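
    MCP's clustering and multiple-selection strategies extend classic greedy pursuit, which selects one best-correlated atom per iteration and subtracts its contribution from the residual. A textbook matching-pursuit sketch for contrast (an illustration of the baseline idea, not the authors' MCP):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, atoms, n_iter):
    """Greedy sparse coding: at each step select the unit-norm atom most
    correlated with the residual and subtract its contribution."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iter):
        j = max(range(len(atoms)), key=lambda k: abs(dot(residual, atoms[k])))
        c = dot(residual, atoms[j])                  # projection on atom j
        coeffs[j] += c
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
    return coeffs, residual

atoms = [(1.0, 0.0), (0.0, 1.0), (0.6, 0.8)]         # unit-norm toy dictionary
coeffs, residual = matching_pursuit((2.0, 0.5), atoms, n_iter=2)
print(coeffs)    # sparse representation of the signal
print(residual)  # shrinks toward zero as atoms are selected
```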

  2. Adaptation and optimization of a line-by-line radiative transfer program for the STAR-100 (STARSMART)

    NASA Technical Reports Server (NTRS)

    Rarig, P. L.

    1980-01-01

    A program to calculate upwelling infrared radiation was modified to operate efficiently on the STAR-100. The modified software processes specific test cases significantly faster than the initial STAR-100 code. For example, a midlatitude summer atmospheric model is executed in less than 2% of the time originally required on the STAR-100. Furthermore, the optimized program performs extra operations to save the calculated absorption coefficients. Some of the advantages and pitfalls of virtual memory and vector processing are discussed along with strategies used to avoid loss of accuracy and computing power. Results from the vectorized code, in terms of speed, cost, and relative error with respect to serial code solutions are encouraging.

  3. Optimizing LX-17 Thermal Decomposition Model Parameters with Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Moore, Jason; McClelland, Matthew; Tarver, Craig; Hsu, Peter; Springer, H. Keo

    2017-06-01

    We investigate and model the cook-off behavior of LX-17 because this knowledge is critical to understanding system response in abnormal thermal environments. Thermal decomposition of LX-17 has been explored in conventional ODTX (One-Dimensional Time-to-eXplosion), PODTX (ODTX with pressure-measurement), TGA (thermogravimetric analysis), and DSC (differential scanning calorimetry) experiments using varied temperature profiles. These experimental data are the basis for developing multiple reaction schemes with coupled mechanics in LLNL's multi-physics hydrocode, ALE3D (Arbitrary Lagrangian-Eulerian code in 2D and 3D). We employ evolutionary algorithms to optimize reaction rate parameters on high performance computing clusters. Once experimentally validated, this model will be scalable to a number of applications involving LX-17 and can be used to develop more sophisticated experimental methods. Furthermore, the optimization methodology developed herein should be applicable to other high explosive materials. This work was performed under the auspices of the U.S. DOE by LLNL under contract DE-AC52-07NA27344. LLNS, LLC.
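
    The fitting loop the abstract describes (evolutionary optimization of reaction-rate parameters against cook-off data) reduces, in its simplest form, to mutate-and-select. A minimal (1+1) evolution-strategy sketch on a stand-in objective (the real work optimizes multi-parameter kinetics coupled to ALE3D on HPC clusters, not this toy function):

```python
import random

def evolve(objective, x0, sigma=0.5, n_iter=200, seed=1):
    """(1+1) evolution strategy: Gaussian mutation, keep the child if better."""
    rng = random.Random(seed)
    best_x, best_f = x0, objective(x0)
    for _ in range(n_iter):
        child = best_x + rng.gauss(0.0, sigma)   # mutate the current parameter
        f = objective(child)
        if f < best_f:                           # select: keep improvements only
            best_x, best_f = child, f
    return best_x, best_f

# Stand-in for the misfit between simulated and measured cook-off data
misfit = lambda k: (k - 3.0) ** 2

k_opt, f_opt = evolve(misfit, x0=0.0)
print(k_opt, f_opt)   # drifts toward k = 3 as accepted mutations accumulate
```

    Production evolutionary algorithms add populations, crossover, and parallel fitness evaluation, but the mutate-and-select core is the same.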

  4. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. 
The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop limit is reached, or no further design improvement is possible due to active design variable bounds and/or constraints. The resulting shape parameters are then used by the grid generation code to define a new wing surface and computational grid. The lift-to-drag ratio and its gradient are computed for the new design by the automatically-generated adjoint codes. Several optimization iterations may be required to find an optimum wing shape. Results from two sample cases will be discussed. The reader should note that this work primarily represents a demonstration of the use of automatically-generated adjoint code within an aerodynamic shape optimization. As such, little significance is placed upon the actual optimization results, relative to the method for obtaining the results.
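
    ADIFOR's forward mode propagates derivatives alongside values through the chain rule; ADJIFOR implements the reverse-mode counterpart. The forward-mode idea in miniature, using dual numbers (a generic Python illustration of the principle, not the FORTRAN tooling or CFL3D):

```python
class Dual:
    """A value paired with its derivative; arithmetic applies the chain rule."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)  # product rule
    __rmul__ = __mul__

def f(x):
    return x * x + 3 * x        # toy objective; df/dx = 2x + 3

y = f(Dual(2.0, 1.0))           # seed the input's derivative with 1
print(y.value, y.deriv)         # 10.0 7.0: the exact gradient, no finite differences
```

    Reverse mode (the adjoint) computes the same exact derivatives but scales with the number of outputs rather than inputs, which is why it suits shape optimization with thousands of design variables.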

  5. 28 CFR 522.12 - Relationship between existing criminal sentences imposed under the U.S. or D.C. Code and new...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Relationship between existing criminal sentences imposed under the U.S. or D.C. Code and new civil contempt commitment orders. 522.12 Section 522.12 Judicial Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INMATE ADMISSION, CLASSIFICATION, AND TRANSFER ADMISSION TO INSTITUTION Civi...

  6. Analysis of LH Launcher Arrays (Like the ITER One) Using the TOPLHA Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maggiora, R.; Milanesio, D.; Vecchi, G.

    2009-11-26

    TOPLHA (Torino Polytechnic Lower Hybrid Antenna) code is an innovative tool for the 3D/1D simulation of Lower Hybrid (LH) antennas, i.e. accounting for realistic 3D waveguide geometry and for accurate 1D plasma models, without restrictions on waveguide shape, including curvature. This tool provides a detailed performance prediction of any LH launcher, by computing the antenna scattering parameters, the current distribution, electric field maps, and power spectra for any user-specified waveguide excitation. In addition, a fully parallelized and multi-cavity version of TOPLHA permits the analysis of large and complex waveguide arrays in a reasonable simulation time. A detailed analysis of the performances of the proposed ITER LH antenna geometry has been carried out, underlining the strong dependence of the antenna input parameters on plasma conditions. A preliminary optimization of the antenna dimensions has also been accomplished. Electric current distribution on conductors, electric field distribution at the interface with plasma, and power spectra have been calculated as well. The analysis shows the strong capabilities of the TOPLHA code as a predictive tool and its usefulness for the detailed design of LH launcher arrays.

  7. Seals Code Development Workshop

    NASA Technical Reports Server (NTRS)

    Hendricks, Robert C. (Compiler); Liang, Anita D. (Compiler)

    1996-01-01

    The 1995 Seals Workshop industrial code (INDSEAL) release includes ICYL, GCYLT, IFACE, GFACE, SPIRALG, SPIRALI, DYSEAL, and KTK. The scientific code (SCISEAL) release includes conjugate heat transfer and multidomain with rotordynamic capability. Several seals and bearings codes (e.g., HYDROFLEX, HYDROTRAN, HYDROB3D, FLOWCON1, FLOWCON2) are presented and their results compared. Current computational and experimental emphasis includes multiple connected cavity flows, with goals of reducing parasitic losses and gas ingestion. Labyrinth seals continue to play a significant role in sealing, with face, honeycomb, and new sealing concepts under investigation for advanced engine concepts in view of strict environmental constraints. The clean-sheet approach to engine design is advocated, with program directions and anticipated percentage SFC reductions cited. Future activities center on engine applications with coupled seal/power/secondary flow streams.

  8. Jet fuel toxicity: skin damage measured by 900-MHz MRI skin microscopy and visualization by 3D MR image processing.

    PubMed

    Sharma, Rakesh; Locke, Bruce R

    2010-09-01

    The toxicity of jet fuels was measured using noninvasive magnetic resonance microimaging (MRM) at a 900-MHz magnetic field. The hypothesis was that MRM can visualize and measure the epidermis exfoliation and hair follicle size of rat skin tissue due to toxic skin irritation after skin exposure to jet fuels. High-resolution 900-MHz MRM was used to measure the change in hair follicle size, epidermis thickening and dermis in the skin after jet fuel exposure. The imaging techniques utilized included magnetization transfer contrast (MTC), spin-lattice relaxation constant (T1-weighting), combination of T2-weighting with magnetic field inhomogeneity (T2*-weighting), magnetization transfer weighting, diffusion tensor weighting and chemical shift weighting. These techniques were used to obtain 2D slices and 3D multislice-multiecho images with high-contrast resolution and high magnetic resonance signal with better skin detail. The segmented color-coded feature spaces after image processing of the epidermis and hair follicle structures were used to compare the toxic exposure to tetradecane, dodecane, hexadecane and JP-8 jet fuels. Jet fuel exposure caused skin damage (erythema) at high temperature in addition to chemical intoxication. Erythema scores of the skin were distinct for the jet fuels. The multicontrast enhancement at optimized TE and TR parameters generated a high MRM signal from different skin structures. The multiple-contrast approach made details of skin structures visible by combining the specific information achieved from each of the microimaging techniques. At short echo time, MRM images and digitized histological sections confirmed exfoliated epidermis, dermis thickening and hair follicle atrophy after exposure to jet fuels. MRM data showed correlation with the histopathology data for epidermis thickness (R^2=0.9052, P<.0002) and hair root area (R^2=0.88, P<.0002). 
The toxicity of jet fuels on skin structures was in the order of tetradecane>hexadecane>dodecane. The method showed a sensitivity of 87.5% and a specificity of 75%. By MR image processing, different color-coded skin structures were extracted and 3D shapes of the epidermis and hair follicle size were compared. In conclusion, high-resolution MRM measured the change in skin epidermis and hair follicle size due to toxicity of jet fuels. MRM offers a three-dimensional spatial visualization of the change in skin structures as a method of toxicity evaluation and for comparison of jet fuels.

  9. Rate-distortion optimized tree-structured compression algorithms for piecewise polynomial images.

    PubMed

    Shukla, Rahul; Dragotti, Pier Luigi; Do, Minh N; Vetterli, Martin

    2005-03-01

    This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune and join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior (D(R) ~ c0·2^(-c1·R)), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is of O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree-coding scheme also achieves an exponentially decaying R-D behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree-coding scheme outperforms JPEG2000 by about 1 dB for real images, like cameraman, at low rates of around 0.15 bpp.
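The prune-or-join decision at the heart of such schemes is a standard Lagrangian rate-distortion comparison: code a parent segment with one polynomial model, or keep its two children, whichever has the lower cost J = D + λR. A minimal sketch, with hypothetical (distortion, rate) pairs per tree node (not the authors' implementation):

```python
# Lagrangian prune-or-keep decision for an R-D optimized binary tree.
# Each node is represented as a (distortion, rate) tuple; the values
# and interface here are illustrative assumptions.

def lagrangian_cost(distortion, rate, lam):
    """J = D + lambda * R, the standard R-D Lagrangian cost."""
    return distortion + lam * rate

def prune(parent, left, right, lam):
    """Keep the two children only if their combined Lagrangian cost
    beats coding the parent segment with a single model."""
    j_parent = lagrangian_cost(*parent, lam)
    j_children = lagrangian_cost(*left, lam) + lagrangian_cost(*right, lam)
    return "split" if j_children < j_parent else "prune"
```

At small λ (rate is cheap) the tree splits to reduce distortion; at large λ the children's extra rate outweighs their distortion gain and the subtree is pruned.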

  10. 3D neutronic codes coupled with thermal-hydraulic system codes for PWR, BWR and VVER reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langenbuch, S.; Velkov, K.; Lizorkin, M.

    1997-07-01

    This paper describes the objectives of code development for coupling 3D neutronics codes with thermal-hydraulic system codes. The present status of coupling ATHLET with three 3D neutronics codes for VVER and LWR reactors is presented. After describing the basic features of the 3D neutronics codes BIPR-8 from the Kurchatov Institute, DYN3D from the Research Center Rossendorf and QUABOX/CUBBOX from GRS, first applications of the coupled codes for different transient and accident scenarios are presented. The need for further investigations is discussed.

  11. Theoretical and experimental investigation of millimeter-wave TED's in cross-waveguide oscillators

    NASA Astrophysics Data System (ADS)

    Rydberg, A.

    1985-07-01

    Theoretical and experimental investigations of millimeter-wave GaAs second-harmonic transferred electron device (TED) oscillators using separate circuits for frequency and power optimization are described. The theory predicts the oscillation frequency with less than 2 percent error for the second harmonic. Apart from the 2nd and 3rd harmonics, a 4th harmonic from the TED was observed up to 130 GHz.

  12. Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.

    PubMed

    Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward

    2006-08-01

    Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D Meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
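The interslice allocation problem can be illustrated with a greedy marginal-analysis allocator: repeatedly give the next rate increment to whichever slice currently buys the largest distortion reduction. The exponential slice model D_i(R) = a_i · 2^(−2R) below is an assumed stand-in for the paper's measured rate-distortion curves, not its actual MM model:

```python
# Greedy marginal-analysis bit allocation across slices (illustrative;
# the per-slice model D_i(R) = a_i * 2**(-2R) is an assumption).

def allocate_bits(a, total_units, step=0.5):
    """Distribute `total_units` rate increments of size `step` among
    slices with distortion scale factors `a`, always funding the slice
    with the largest marginal distortion reduction."""
    rates = [0.0] * len(a)

    def d(i, r):  # modeled distortion of slice i at rate r
        return a[i] * 2 ** (-2 * r)

    for _ in range(total_units):
        gains = [d(i, rates[i]) - d(i, rates[i] + step) for i in range(len(a))]
        best = max(range(len(a)), key=gains.__getitem__)
        rates[best] += step
    return rates
```

With convex per-slice curves, this greedy rule reproduces the equal-slope (Lagrangian) optimum at the granularity of one increment, which is the intuition behind postcompression rate-distortion optimization.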

  13. MuSim, a Graphical User Interface for Multiple Simulation Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, Thomas; Cummings, Mary Anne; Johnson, Rolland

    2016-06-01

    MuSim is a new user-friendly program designed to interface to many different particle simulation codes, regardless of their data formats or geometry descriptions. It presents the user with a compelling graphical user interface that includes a flexible 3-D view of the simulated world plus powerful editing and drag-and-drop capabilities. All aspects of the design can be parametrized so that parameter scans and optimizations are easy. It is simple to create plots and display events in the 3-D viewer (with a slider to vary the transparency of solids), allowing for an effortless comparison of different simulation codes. Simulation codes: G4beamline, MAD-X, and MCNP; more coming. Many accelerator design tools and beam optics codes were written long ago, with primitive user interfaces by today's standards. MuSim is specifically designed to make it easy to interface to such codes, providing a common user experience for all, and permitting the construction and exploration of models with very little overhead. For today's technology-driven students, graphical interfaces meet their expectations far better than text-based tools, and education in accelerator physics is one of our primary goals.

  14. Spectroscopic diagnostics of tungsten-doped CH plasmas

    NASA Astrophysics Data System (ADS)

    Klapisch, M.; Colombant, D.; Lehecka, T.

    1998-11-01

    Spectra of CH with different concentrations of W dopant and laser intensities (2.5-10 × 10^12 W/cm^2) were obtained at NRL with the Nike Laser. They were recorded in the 100-500 eV range with an XUV grating spectrometer. The hydrodynamic simulations are performed with the 1D code FAST1D (J. H. Gardner et al., Phys. Plasmas, 5, May (1998)), where non-LTE effects are introduced by Busquet's model (M. Busquet, Phys. Fluids B, 5, 4191 (1993); M. Klapisch, A. Bar-Shalom, J. Oreg and D. Colombant, Phys. Plasmas, 5, May (1998)). They are then post-processed with TRANSPEC (O. Peyrusse, J. Quant. Spectrosc. Radiat. Transfer, 51, 281 (1994)), a time-dependent collisional-radiative code with radiation coupling. The necessary atomic data are obtained from the HULLAC code (M. Klapisch and A. Bar-Shalom, J. Quant. Spectrosc. Radiat. Transfer, 58, 687 (1997)). The post-processing and diagnostics were performed on carbon lines and the results are compared with the experimental data.

  15. Accelerating next generation sequencing data analysis with system level optimizations.

    PubMed

    Kathiresan, Nagarajan; Temanni, Ramzi; Almabrazi, Hakeem; Syed, Najeeb; Jithesh, Puthen V; Al-Ali, Rashid

    2017-08-22

    Next generation sequencing (NGS) data analysis is highly compute intensive. In-memory computing, vectorization, bulk data transfer and CPU frequency scaling are some of the hardware features in modern computing architectures. To get the best execution time and utilize these hardware features, it is necessary to tune the system-level parameters before running the application. We studied GATK HaplotypeCaller, which is part of common NGS workflows and consumes more than 43% of the total execution time. Multiple GATK 3.x versions were benchmarked and the execution time of HaplotypeCaller was optimized by various system-level parameters, which included: (i) tuning the parallel garbage collection and kernel shared memory to simulate in-memory computing; (ii) architecture-specific tuning in the PairHMM library for vectorization; (iii) including Java 1.8 features through GATK source code compilation and building a runtime environment for parallel sorting and bulk data transfer; and (iv) overclocking the default 'on-demand' CPU frequency mode by using 'performance' mode to accelerate the Java multi-threads. As a result, the HaplotypeCaller execution time was reduced by 82.66% in GATK 3.3 and 42.61% in GATK 3.7. Overall, the execution time of the NGS pipeline was reduced to 70.60% and 34.14% for GATK 3.3 and GATK 3.7, respectively.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    BRISC is a developmental prototype for a next-generation “systems-level” integrated performance and safety code (IPSC) for nuclear reactors. Its development served to demonstrate how a lightweight multi-physics coupling approach can be used to tightly couple the physics models in several different physics codes (written in a variety of languages) into one integrated package for simulating accident scenarios in a liquid-sodium-cooled “burner” nuclear reactor. For example, the RIO fluid flow and heat transfer code developed at Sandia (SNL: Chris Moen, Dept. 08005) is used in BRISC to model fluid flow and heat transfer, as well as conduction heat transfer in solids. Because BRISC is a prototype, its most practical application is as a foundation or starting point for developing a true production code. The sub-codes and the associated models and correlations currently employed within BRISC were chosen to cover the required application space and demonstrate feasibility, but were not optimized or validated against experimental data within the context of their use in BRISC.

  17. HERO - A 3D general relativistic radiative post-processor for accretion discs around black holes

    NASA Astrophysics Data System (ADS)

    Zhu, Yucong; Narayan, Ramesh; Sadowski, Aleksander; Psaltis, Dimitrios

    2015-08-01

    HERO (Hybrid Evaluator for Radiative Objects) is a 3D general relativistic radiative transfer code which has been tailored to the problem of analysing radiation from simulations of relativistic accretion discs around black holes. HERO is designed to be used as a post-processor. Given some fixed fluid structure for the disc (i.e. density and velocity as a function of position from a hydrodynamic or magnetohydrodynamic simulation), the code obtains a self-consistent solution for the radiation field and for the gas temperatures using the condition of radiative equilibrium. The novel aspect of HERO is that it combines two techniques: (1) a short-characteristics (SC) solver that quickly converges to a self-consistent disc temperature and radiation field, with (2) a long-characteristics (LC) solver that provides a more accurate solution for the radiation near the photosphere and in the optically thin regions. By combining these two techniques, we gain both the computational speed of SC and the high accuracy of LC. We present tests of HERO on a range of 1D, 2D, and 3D problems in flat space and show that the results agree well with both analytical and benchmark solutions. We also test the ability of the code to handle relativistic problems in curved space. Finally, we discuss the important topic of ray defects, a major limitation of the SC method, and describe our strategy for minimizing the induced error.
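The short-characteristics idea can be illustrated with the textbook formal solution of the transfer equation over a ray segment with constant source function: the incoming intensity is attenuated by e^(−Δτ) and emission adds S·(1 − e^(−Δτ)). The function names below are hypothetical and this is a pedagogical sketch, not HERO's actual kernel:

```python
import math

def sc_step(i_in, source, dtau):
    """One short-characteristics update over a segment of optical depth
    dtau, assuming a constant source function on the segment:
    I_out = I_in * exp(-dtau) + S * (1 - exp(-dtau))."""
    t = math.exp(-dtau)
    return i_in * t + source * (1.0 - t)

def integrate_ray(i0, sources, dtaus):
    """Sweep a ray through consecutive cells (formal solution)."""
    i = i0
    for s, dt in zip(sources, dtaus):
        i = sc_step(i, s, dt)
    return i
```

Two sanity checks follow directly: a zero-optical-depth segment leaves the intensity unchanged, and in the optically thick limit the emergent intensity saturates to the (uniform) source function, which is the behavior a radiative-equilibrium iteration relies on.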

  18. Hydrodynamic Studies of Turbulent AGN Tori

    NASA Astrophysics Data System (ADS)

    Schartmann, M.; Meisenheimer, K.; Klahr, H.; Camenzind, M.; Wolf, S.; Henning, Th.; Burkert, A.; Krause, M.

    Recently, the MID-infrared Interferometric instrument (MIDI) at the VLTI has shown that dust tori in the two nearby Seyfert galaxies NGC 1068 and the Circinus galaxy are geometrically thick and can be well described by a thin, warm central disk, surrounded by a colder and fluffy torus component. By carrying out hydrodynamical simulations with the help of the TRAMP code (Klahr et al. 1999), we follow the evolution of a young nuclear star cluster in terms of discrete mass-loss and energy injection from stellar processes. This naturally leads to a filamentary large scale torus component, where cold gas is able to flow radially inwards. The filaments join into a dense and very turbulent disk structure. In a post-processing step, we calculate spectral energy distributions and images with the 3D radiative transfer code MC3D (Wolf 2003) and compare them to observations. Turbulence in the dense disk component is investigated in a separate project.

  19. 3D Ultrasonic Wave Simulations for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Campbell Leckey, Cara A.; Miller, Corey A.; Hinders, Mark K.

    2011-01-01

    Structural health monitoring (SHM) for the detection of damage in aerospace materials is an important area of research at NASA. Ultrasonic guided Lamb waves are a promising SHM damage detection technique since the waves can propagate long distances. For complicated flaw geometries experimental signals can be difficult to interpret. High performance computing can now handle full 3-dimensional (3D) simulations of elastic wave propagation in materials. We have developed and implemented parallel 3D elastodynamic finite integration technique (3D EFIT) code to investigate ultrasound scattering from flaws in materials. EFIT results have been compared to experimental data and the simulations provide unique insight into details of the wave behavior. This type of insight is useful for developing optimized experimental SHM techniques. 3D EFIT can also be expanded to model wave propagation and scattering in anisotropic composite materials.

  20. Modeling of photon migration in the human lung using a finite volume solver

    NASA Astrophysics Data System (ADS)

    Sikorski, Zbigniew; Furmanczyk, Michal; Przekwas, Andrzej J.

    2006-02-01

    The application of the frequency domain and steady-state diffusive optical spectroscopy (DOS) and steady-state near infrared spectroscopy (NIRS) to diagnosis of the human lung injury challenges many elements of these techniques. These include the DOS/NIRS instrument performance and accurate models of light transport in heterogeneous thorax tissue. The thorax tissue not only consists of different media (e.g. chest wall with ribs, lungs) but its optical properties also vary with time due to respiration and changes in thorax geometry with contusion (e.g. pneumothorax or hemothorax). This paper presents a finite volume solver developed to model photon migration in the diffusion approximation in heterogeneous complex 3D tissues. The code applies boundary conditions that account for Fresnel reflections. We propose an effective diffusion coefficient for the void volumes (pneumothorax) based on the assumption of the Lambertian diffusion of photons entering the pleural cavity and accounting for the local pleural cavity thickness. The code has been validated using the MCML Monte Carlo code as a benchmark. The code environment enables a semi-automatic preparation of 3D computational geometry from medical images and its rapid automatic meshing. We present the application of the code to analysis/optimization of the hybrid DOS/NIRS/ultrasound technique in which ultrasound provides data on the localization of thorax tissue boundaries. The code effectiveness (3D complex case computation takes 1 second) enables its use to quantitatively relate detected light signal to absorption and reduced scattering coefficients that are indicators of the pulmonary physiologic state (hemoglobin concentration and oxygenation).
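In the diffusion approximation the solver rests on, the transport problem reduces to a diffusion coefficient D = 1/(3(μa + μs')) and an effective attenuation coefficient μ_eff = sqrt(μa/D) built from the absorption coefficient μa and reduced scattering coefficient μs'. A minimal sketch (illustrative helper functions, not the solver's code; coefficients in reciprocal length units):

```python
import math

def diffusion_coefficient(mu_a, mu_s_prime):
    """Photon-diffusion coefficient D = 1 / (3 * (mu_a + mu_s'))."""
    return 1.0 / (3.0 * (mu_a + mu_s_prime))

def mu_eff(mu_a, mu_s_prime):
    """Effective attenuation coefficient mu_eff = sqrt(mu_a / D),
    governing the exponential decay of fluence far from the source."""
    return math.sqrt(mu_a / diffusion_coefficient(mu_a, mu_s_prime))
```

These two quantities are what a finite volume discretization of the diffusion equation actually carries per cell, which is why recovering μa and μs' from the detected signal gives access to hemoglobin concentration and oxygenation.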

  1. Application of Adjoint Method and Spectral-Element Method to Tomographic Inversion of Regional Seismological Structure Beneath Japanese Islands

    NASA Astrophysics Data System (ADS)

    Tsuboi, S.; Miyoshi, T.; Obayashi, M.; Tono, Y.; Ando, K.

    2014-12-01

    Recent progress in large-scale computing using waveform modeling techniques and high-performance computing facilities has demonstrated the possibility of performing full-waveform inversion of three-dimensional (3D) seismological structure inside the Earth. We apply the adjoint method (Liu and Tromp, 2006) to obtain 3D structure beneath the Japanese Islands. First we implemented the Spectral-Element Method on the K computer in Kobe, Japan. We have optimized SPECFEM3D_GLOBE (Komatitsch and Tromp, 2002) by using OpenMP so that the code fits the hybrid architecture of the K computer. Now we can use 82,134 nodes of the K computer (657,072 cores) to compute synthetic waveforms with about 1 sec accuracy for a realistic 3D Earth model, and its performance was 1.2 PFLOPS. We use this optimized SPECFEM3D_GLOBE code, take one chunk around the Japanese Islands from the global mesh, and compute synthetic seismograms with an accuracy of about 10 seconds. We use the GAP-P2 mantle tomography model (Obayashi et al., 2009) as an initial 3D model and use as many broadband seismic stations available in this region as possible to perform the inversion. We then use the time windows for body waves and surface waves to compute adjoint sources and calculate adjoint kernels for seismic structure. We have performed several iterations and obtained an improved 3D structure beneath the Japanese Islands. The result demonstrates that waveform misfits between observed and theoretical seismograms improve as the iteration proceeds. We are now preparing to use much shorter periods in our synthetic waveform computation and to obtain seismic structure for basin-scale models, such as the Kanto basin, where there is a dense seismic network and high seismic activity. Acknowledgements: This research was partly supported by the MEXT Strategic Program for Innovative Research. We used F-net seismograms of the National Research Institute for Earth Science and Disaster Prevention.

  2. Evolutionary Models of Cold, Magnetized, Interstellar Clouds

    NASA Technical Reports Server (NTRS)

    Gammie, Charles F.; Ostriker, Eve; Stone, James M.

    2004-01-01

    We modeled the long-term and small-scale evolution of molecular clouds using direct 2D and 3D magnetohydrodynamic (MHD) simulations. This work followed up on previous research by our group under the auspices of the ATP in which we studied the energetics of turbulent, magnetized clouds and their internal structure on intermediate scales. Our new work focused on both global and small-scale aspects of the evolution of turbulent, magnetized clouds, and in particular studied the response of turbulent proto-cloud material to passage through the Galactic spiral potential, and the dynamical collapse of turbulent, magnetized (supercritical) clouds into fragments to initiate the formation of a stellar cluster. Technical advances under this program include developing an adaptive-mesh MHD code as a successor to ZEUS (ATHENA) in order to follow cloud fragmentation, developing a shearing-sheet MHD code which includes self-gravity and externally-imposed gravity to follow the evolution of clouds in the Galactic potential, and developing radiative transfer models to evaluate the internal ionization of clumpy clouds exposed to external photoionizing UV and CR radiation. Gammie's work at UIUC focused on the radiative transfer aspects of this program.

  3. Atmospheric Retrievals from Exoplanet Observations and Simulations with BART

    NASA Astrophysics Data System (ADS)

    Harrington, Joseph

    This project will determine the observing plans needed to retrieve exoplanet atmospheric composition and thermal profiles over a broad range of planets, stars, instruments, and observing modes. Characterizing exoplanets is hard. The dim planets orbit bright stars, giving orders of magnitude more relative noise than for solar-system planets. Advanced statistical techniques are needed to determine what the data can - and more importantly cannot - say. We therefore developed Bayesian Atmospheric Radiative Transfer (BART). BART explores the parameter space of atmospheric chemical abundances and thermal profiles using Differential-Evolution Markov-Chain Monte Carlo. It generates thousands of candidate spectra, integrates over observational bandpasses, and compares to data, generating a statistical model for an atmosphere's composition and thermal structure. At best, it gives abundances and thermal profiles with uncertainties. At worst, it shows what kinds of planets the data allow. It also gives parameter correlations. BART is open-source, designed for community use and extension (http://github.com/exosports/BART). Three arXived PhD theses (papers in publication) provide technical documentation, tests, and application to Spitzer and HST data. There are detailed user and programmer manuals and community support forums. Exoplanet analysis techniques must be tested against synthetic data, where the answer is known, and vetted by statisticians. Unfortunately, this has rarely been done, and never sufficiently. Several recent papers question the entire body of Spitzer exoplanet observations, because different analyses of the same data give different results. The latest method, pixel-level decorrelation, produces results that diverge from an emerging consensus. We do not know the retrieval problem's strengths and weaknesses relative to low SNR, red noise, low resolution, instrument systematics, or incomplete spectral line lists. 
In observing eclipses and transits, we assume the planet has uniform composition and the same temperature profile everywhere. We do not know this assumption's impact. While Spitzer and HST have few exoplanet observing modes, JWST will have over 20. Given the signal challenges and the complexity of retrieval, modeling the observations and data analysis is the best way to optimize an observing plan. Our project solves all of these problems. Using only open-source codes, with tools available to the community for their immediate application in JWST and HST proposals and analyses, we will produce a faithful simulator of 2D spectral and photometric frames from each JWST exoplanet mode (WFC3 spatial scan mode works already), including jitter and intrapixel effects. We will extract and calibrate data, analyzing them with BART. Given planetary input spectra for terrestrial, super-Earth, Neptune, and Jupiter-class planets, and a variety of stellar spectra, we will determine the best combination of observations to recover each atmosphere, and the limits where low SNR or spectral coverage produce deceptive results. To facilitate these analyses, we will adapt an existing cloud model to BART, add condensate code now being written to its thermochemical model, include scattering, add a 3D atmosphere module (for dayside occultation mapping and the 1D vs. 3D question), and improve performance and documentation, among other improvements. We will host a web site and community discussions online and at conferences about retrieval issues. We will develop validation tests for radiative-transfer and BART-style retrieval codes, and provide examples to validate others' codes. We will engage the retrieval community in data challenges. We will provide web-enabled tools to specify planets easily for modeling. 
We will make all of these tools, tests, and comparisons available online so everyone can use them to maximize NASA's investment in high-end observing capabilities to characterize exoplanets.

  4. Regulation control and energy management scheme for wireless power transfer

    DOEpatents

    Miller, John M.

    2015-12-29

    Power transfer rate at a charging facility can be maximized by employing a feedback scheme. The state of charge (SOC) and temperature of the regenerative energy storage system (RESS) pack of a vehicle are monitored to determine the load due to the RESS pack. An optimal frequency that cancels the imaginary component of the input impedance for the output signal from a grid converter is calculated from the load of the RESS pack, and a frequency offset f* is applied to the nominal frequency f_0 of the grid converter output based on the resonance frequency of the magnetically coupled circuit. The optimal frequency can maximize the efficiency of the power transfer. Further, an optimal grid converter duty ratio d* can be derived from the charge rate of the RESS pack. The grid converter duty ratio d* regulates the wireless power transfer (WPT) power level.
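The frequency-selection idea, finding the drive frequency at which the imaginary part of the input impedance seen by the grid converter vanishes, can be sketched for a textbook series-series compensated two-coil model. All component values and function names below are illustrative assumptions, not taken from the patent:

```python
import math

def z_in(omega, L1, C1, R1, L2, C2, RL, M):
    """Input impedance of a series-series compensated two-coil WPT link:
    primary series branch plus the reflected secondary impedance
    (omega*M)^2 / Z_secondary (standard mutual-inductance model)."""
    zs = complex(RL, omega * L2 - 1.0 / (omega * C2))  # secondary loop
    zr = (omega * M) ** 2 / zs                          # reflected impedance
    return complex(R1, omega * L1 - 1.0 / (omega * C1)) + zr

def find_resonant_omega(params, lo, hi, n=20000):
    """Grid search for the angular frequency in [lo, hi] that minimizes
    |Im(Z_in)|, i.e. where the converter sees a purely resistive load."""
    return min(
        (lo + (hi - lo) * k / n for k in range(n + 1)),
        key=lambda w: abs(z_in(w, *params).imag),
    )
```

When both loops are tuned to the same resonance (C = 1/(ω0²L) on each side), the reactances cancel at ω0 and the search recovers it; under detuned loads the minimizer shifts, which is the feedback signal the scheme exploits.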

  5. Cognitive/emotional models for human behavior representation in 3D avatar simulations

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-08-01

    Simplified models of human cognition and emotional response are presented which are based on models of auditory/visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex which is based on new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher level abstract (meta level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python, etc.) and efficient use of legacy code.

  6. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)

  7. Integration of GIS and Bim for Indoor Geovisual Analytics

    NASA Astrophysics Data System (ADS)

    Wu, B.; Zhang, S.

    2016-06-01

    This paper presents an endeavour of integrating GIS (Geographical Information System) and BIM (Building Information Modelling) for indoor geovisual analytics. The merits of the two technologies, GIS and BIM, are first analysed in the context of indoor environments. GIS has well-developed capabilities for spatial analysis such as network analysis, while BIM has advantages for indoor 3D modelling and dynamic simulation. The paper then investigates the important aspects of integrating GIS and BIM. Different data standards and formats such as IFC (Industry Foundation Classes) and GML (Geography Markup Language) are discussed. Their merits and limitations in data transformation between GIS and BIM are analysed in terms of semantic and geometric information. An optimized approach for data exchange between GIS and BIM datasets is then proposed. After that, a strategy of using BIM for 3D indoor modelling, GIS for spatial analysis, and BIM again for visualization and dynamic simulation of the analysis results is presented. Based on these developments, this paper selects a typical problem, optimized indoor emergency evacuation, to demonstrate the integration of GIS and BIM for indoor geovisual analytics. Block Z of the Hong Kong Polytechnic University is selected as a test site. Detailed indoor and outdoor 3D models of block Z are created using the BIM software Revit. The 3D models are transferred to the GIS software ArcGIS to carry out spatial analysis. Optimized evacuation plans considering dynamic constraints are generated based on network analysis in ArcGIS, assuming there is a fire accident inside the building. The analysis results are then transferred back to the BIM software for visualization and dynamic simulation. The developed methods and results are of significance for facilitating the future development of integrated GIS and BIM solutions in various applications.
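The network-analysis step behind evacuation routing is, at its core, a shortest-path search over a corridor graph. A plain Dijkstra sketch follows; the graph, node names, and edge weights are made up for illustration, and ArcGIS's actual network analyst handles far more (turn restrictions, capacities, dynamic constraints):

```python
import heapq

def shortest_route(graph, start, exit_node):
    """Dijkstra shortest path on a weighted adjacency-list graph
    {node: [(neighbor, cost), ...]}; returns (path, total cost)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    seen = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == exit_node:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk predecessors back from the exit to recover the route.
    path, node = [], exit_node
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[exit_node]
```

Dynamic constraints (e.g. a corridor blocked by fire) can be modeled by raising the affected edge weights and re-running the search.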

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    H.E. Mynick, N. Pomphrey and P. Xanthopoulos

    Recent progress in reducing turbulent transport in stellarators and tokamaks by 3D shaping using a stellarator optimization code in conjunction with a gyrokinetic code is presented. The original applications of the method focussed on ion temperature gradient transport in a quasi-axisymmetric stellarator design. Here, an examination of both other turbulence channels and other starting configurations is initiated. It is found that the designs evolved for transport from ion temperature gradient turbulence also display reduced transport from other transport channels whose modes are also stabilized by improved curvature, such as electron temperature gradient and ballooning modes. The optimizer is also applied to evolving from a tokamak, finding appreciable turbulence reduction for these devices as well. From these studies, improved understanding is obtained of why the deformations found by the optimizer are beneficial, and these deformations are related to earlier theoretical work in both stellarators and tokamaks.

  9. A one-dimensional heat transfer model for parallel-plate thermoacoustic heat exchangers.

    PubMed

    de Jong, J A; Wijnant, Y H; de Boer, A

    2014-03-01

    A one-dimensional (1D) laminar oscillating flow heat transfer model is derived and applied to parallel-plate thermoacoustic heat exchangers. The model can be used to estimate the heat transfer from the solid wall to the acoustic medium, which is required for the heat input/output of thermoacoustic systems. The model is implementable in existing (quasi-)1D thermoacoustic codes, such as DeltaEC. Examples of generated results show good agreement with literature results. The model allows for arbitrary wave phasing; however, it is shown that the wave phasing does not significantly influence the heat transfer.

  10. Numerical simulation of multicellular natural convection in air-filled vertical cavities

    NASA Astrophysics Data System (ADS)

    Kunaeva, A. I.; Ivanov, N. G.

    2017-11-01

    The paper deals with 2D laminar natural convection in vertical air-filled cavities of aspect ratio 20, 30 and 40 with differentially heated sidewalls. The airflow and heat transfer were simulated numerically with the in-house Navier-Stokes code SINF. The focus is on the appearance of stationary vortex structures, “cat’s eyes”, and their transition to an unsteady regime in the Rayleigh number range from 4.8×10³ to 1.3×10⁴. The dependence of the predicted flow features and the local and integral heat transfer on the aspect ratio value is analysed.

  11. Postflight aerothermodynamic analysis of Pegasus(tm) using computational fluid dynamic techniques

    NASA Technical Reports Server (NTRS)

    Kuhn, Gary D.

    1992-01-01

    The objective was to validate the computational capability of the NASA Ames Navier-Stokes code, F3D, for flows at high Mach numbers by comparison with flight test data from the Pegasus (tm) air-launched, winged space booster. Comparisons were made with temperatures and heat fluxes estimated from measurements on the wing surfaces and wing-fuselage fairings. Tests were conducted for solution convergence, sensitivity to grid density, and the effects of distributing grid points to provide high density near temperature and heat flux sensors. The measured temperatures were from sensors embedded in the ablating thermal protection system. Surface heat fluxes were from plugs fabricated of highly insulative, nonablating material and mounted level with the surface of the surrounding ablative material. As a preflight design tool, the F3D code produces accurate predictions of heat transfer and other aerodynamic properties, and it can provide detailed data for assessment of boundary layer separation, shock waves, and vortex formation. As a postflight analysis tool, the code provides a way to clarify and interpret the measured results.

  12. Recent advances in the development and transfer of machine vision technologies for space

    NASA Technical Reports Server (NTRS)

    Defigueiredo, Rui J. P.; Pendleton, Thomas

    1991-01-01

    Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.

  13. Next generation sequencing yields the complete mitochondrial genome of the Hornlip mullet Plicomugil labiosus (Teleostei: Mugilidae).

    PubMed

    Shen, Kang-Ning; Chen, Ching-Hung; Hsiao, Chung-Der

    2016-05-01

    In this study, the complete mitogenome sequence of the hornlip mullet Plicomugil labiosus (Teleostei: Mugilidae) was determined by a next-generation sequencing method. The assembled mitogenome, consisting of 16,829 bp, had the typical vertebrate mitochondrial gene arrangement, including 13 protein-coding genes, 22 transfer RNA genes, 2 ribosomal RNA genes and a non-coding control region (D-loop). The D-loop is 1057 bp in length and is located between tRNA-Pro and tRNA-Phe. The overall base composition of P. labiosus is 28.0% A, 29.3% C, 15.5% G and 27.2% T. The complete mitogenome may provide essential DNA molecular data for further population, phylogenetic and evolutionary analyses of Mugilidae.

  14. Free surface deformation and heat transfer by thermocapillary convection

    NASA Astrophysics Data System (ADS)

    Fuhrmann, Eckart; Dreyer, Michael; Basting, Steffen; Bänsch, Eberhard

    2016-04-01

    Knowing the location of the free liquid/gas surface and the heat transfer from the wall towards the fluid is of paramount importance in the design and the optimization of cryogenic upper stage tanks for launchers with ballistic phases, where residual accelerations are smaller by up to four orders of magnitude compared to the gravity acceleration on Earth. This changes the driving forces drastically: free surfaces become capillary dominated and natural or free convection is replaced by thermocapillary convection if a non-condensable gas is present. In this paper we report on a sounding rocket experiment that provided data of a liquid free surface with a nonisothermal boundary condition, i.e. a preheated test cell was filled with a cold but storable liquid in low gravity. The corresponding thermocapillary convection (driven by the temperature dependence of the surface tension) created a velocity field directed away from the hot wall towards the colder liquid and then in turn back at the bottom towards the wall. A deformation of the free surface resulting in an apparent contact angle rather different from the microscopic one could be observed. The thermocapillary flow convected the heat from the wall to the liquid and increased the heat transfer significantly compared to pure conduction. The paper presents results of the apparent contact angle as a function of the dimensionless numbers (Weber-Marangoni and Reynolds-Marangoni number) as well as heat transfer data in the form of a Nusselt number. Experimental results are complemented by corresponding numerical simulations with the commercial software Flow3D and the in-house code Navier.

  15. Assessment of the antireflection property of moth wings by three-dimensional transfer-matrix optical simulations

    NASA Astrophysics Data System (ADS)

    Deparis, Olivier; Khuzayim, Nadia; Parker, Andrew; Vigneron, Jean Pol

    2009-04-01

    The wings of the moth Cacostatia ossa (Ctenuchinae) are covered on both sides by non-close-packed nipple arrays which are known to act as broadband antireflection coatings. Experimental evaluation of the antireflection property of these biological structures is problematic because of the lack of a proper reference for reflectance measurements, i.e., a smooth surface made of the same material as the wing. Theoretical evaluation, on the other hand, is much more reliable provided that optical simulations are carried out on a realistic structural model of the wing. Based on detailed morphological characterizations, we established a three-dimensional (3D) model of the wing and used 3D transfer-matrix optical simulations in order to demonstrate the broadband antireflection property of the wings of Cacostatia ossa. Differences between hemispherical and specular reflectance spectra revealed that diffraction effects were not negligible for this structure although they did not jeopardize the antireflection efficiency. The influences of the backside corrugation and of the material’s absorption on the reflectance spectrum were also studied. In addition, simulations based on an effective-medium model of the wing were carried out using a multilayer thin-film code. In comparison with the latter simulations, the 3D transfer-matrix simulations were found to be more accurate for evaluating the antireflection property.
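
The multilayer thin-film comparison mentioned at the end is conventionally computed with 2×2 characteristic matrices. The following is a generic normal-incidence transfer-matrix sketch, not the authors' code; the chitin-like substrate index n ≈ 1.56 and the ideal quarter-wave layer are illustrative assumptions used to show the antireflection effect:

```python
import cmath
import math

def reflectance(n0, ns, layers, wavelength):
    """Normal-incidence reflectance of a thin-film stack computed with
    2x2 characteristic (transfer) matrices; layers is a list of
    (refractive_index, thickness) pairs from the incidence side."""
    m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0  # start from the identity
    for n, d in layers:
        delta = 2 * math.pi * n * d / wavelength  # phase thickness
        c, s = cmath.cos(delta), cmath.sin(delta)
        a11, a12, a21, a22 = c, 1j * s / n, 1j * n * s, c
        m11, m12, m21, m22 = (m11 * a11 + m12 * a21,
                              m11 * a12 + m12 * a22,
                              m21 * a11 + m22 * a21,
                              m21 * a12 + m22 * a22)
    num = n0 * m11 + n0 * ns * m12 - m21 - ns * m22
    den = n0 * m11 + n0 * ns * m12 + m21 + ns * m22
    return abs(num / den) ** 2

# bare chitin-like substrate (n ~ 1.56) in air: ~4.8% reflectance
R_bare = reflectance(1.0, 1.56, [], 550e-9)

# ideal quarter-wave antireflection layer: n1 = sqrt(n0*ns), d = lambda/(4*n1)
n1 = math.sqrt(1.56)
R_ar = reflectance(1.0, 1.56, [(n1, 550e-9 / (4 * n1))], 550e-9)
```

A graded nipple array behaves roughly like a stack of many such layers with indices ramping from 1 to the substrate value, which is the effective-medium picture the abstract compares against.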

  16. Improvement of quality of 3D printed objects by elimination of microscopic structural defects in fused deposition modeling.

    PubMed

    Gordeev, Evgeniy G; Galushko, Alexey S; Ananikov, Valentine P

    2018-01-01

    Additive manufacturing with fused deposition modeling (FDM) is currently being optimized for a wide range of research and commercial applications. The major disadvantage of FDM-created products is their low quality and structural defects (porosity), which are an obstacle to using them in functional prototyping and direct digital manufacturing of objects intended to contact gases and liquids. This article describes a simple and efficient approach for assessing the quality of 3D printed objects. Using this approach, it was shown that the wall permeability of a printed object depends on its geometric shape and decreases in the following series: cylinder > cube > pyramid > sphere > cone. Filament feed rate, wall geometry and G-code-defined wall structure were found to be the primary parameters that influence the quality of 3D-printed products. Optimization of these parameters led to an overall increase in quality and improvement of sealing properties. It was demonstrated that high quality of 3D printed objects can be achieved using routinely available printers and standard filaments.

  17. Code Optimization Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MAGEE,GLEN I.

    Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an Ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
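
Although the report's specific optimizations are not reproduced here, a classic technique in fast Reed-Solomon encoders is replacing bitwise Galois-field multiplication with log/antilog table lookups, exploiting the fact that the multiplicative group of GF(2^8) is cyclic. A minimal sketch:

```python
# Log/antilog tables for GF(2^8) with primitive polynomial
# x^8 + x^4 + x^3 + x^2 + 1 (0x11d), widely used in Reed-Solomon codes.
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:          # reduce modulo the primitive polynomial
        x ^= 0x11d
for i in range(255, 512):  # duplicate so LOG[a] + LOG[b] never overflows
    EXP[i] = EXP[i - 255]

def gf_mul_slow(a, b):
    """Bitwise carry-less multiply with reduction (the 'obvious' way)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return r

def gf_mul_fast(a, b):
    """Table-lookup multiply: two loads and an add replace the bit loop."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]
```

Encoding a block is then a sequence of such multiplies and XORs, so swapping the bit loop for table lookups shrinks the inner-loop cost by roughly an order of magnitude on a scalar processor.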

  18. Interface requirements to couple thermal-hydraulic codes to 3D neutronic codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langenbuch, S.; Austregesilo, H.; Velkov, K.

    1997-07-01

    The present situation of thermal-hydraulic codes and 3D neutronics codes is briefly described, and general considerations for coupling these codes are discussed. Two different basic approaches to coupling are identified and their relative advantages and disadvantages are discussed. The implementation of the coupling for 3D neutronics codes in the system ATHLET is presented. Meanwhile, this interface is used for coupling three different 3D neutronics codes.
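
Of the two basic coupling approaches, the external (loose) one exchanges fields between the codes and iterates to a self-consistent state. The toy sketch below illustrates only that fixed-point idea; the scalar models and every coefficient are invented for illustration and are unrelated to ATHLET:

```python
def neutronics(T_fuel, P0=100.0, T_ref=900.0, alpha=5e-4):
    """Toy neutronics stand-in: power (MW) drops as fuel temperature
    rises (Doppler-like feedback). Coefficients are invented."""
    return P0 * (1.0 - alpha * (T_fuel - T_ref))

def thermal_hydraulics(P, T_cool=560.0, R_th=4.0):
    """Toy thermal-hydraulics stand-in: fuel temperature (K) is coolant
    temperature plus a lumped thermal resistance times power."""
    return T_cool + R_th * P

def couple(T_guess=900.0, tol=1e-8, max_iter=100):
    """Loose (external) coupling: exchange power and temperature until
    the two single-physics solutions are mutually consistent."""
    T = T_guess
    for _ in range(max_iter):
        P = neutronics(T)
        T_new = thermal_hydraulics(P)
        if abs(T_new - T) < tol:
            return P, T_new
        T = T_new
    raise RuntimeError("coupling iteration did not converge")

P, T = couple()  # converges to T = 950 K, P = 97.5 MW for these numbers
```

The iteration converges here because the combined feedback loop is a contraction (gain 0.2 per pass); the internal (tight) coupling alternative instead solves both field equations in one system.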

  19. HITEMP Material and Structural Optimization Technology Transfer

    NASA Technical Reports Server (NTRS)

    Collier, Craig S.; Arnold, Steve (Technical Monitor)

    2001-01-01

    The feasibility of adding viscoelasticity and the Generalized Method of Cells (GMC) for micromechanical viscoelastic behavior to the commercial HyperSizer structural analysis and optimization code was investigated. The viscoelasticity methodology was developed in four steps. First, a simplified algorithm was devised to test the iterative time-stepping method for simple one-dimensional multiple-ply structures. Second, the GMC code was made into a callable subroutine and incorporated into the one-dimensional code to test its accuracy and usability. Third, the viscoelastic time-stepping and iterative scheme was incorporated into HyperSizer for homogeneous, isotropic viscoelastic materials. Finally, GMC was included in a version of HyperSizer. MS Windows executable files implementing each of these steps are delivered with this report, along with source code. The findings of this research are that both viscoelasticity and GMC are feasible and valuable additions to HyperSizer and that the door is open for more advanced nonlinear capability, such as viscoplasticity.

  20. A new version of code Java for 3D simulation of the CCA model

    NASA Astrophysics Data System (ADS)

    Zhang, Kebo; Xiong, Hailing; Li, Chao

    2016-07-01

    In this paper we present a new version of the program for the CCA model. To benefit from the advantages of the latest technologies, we migrated the running environment from JDK 1.6 to JDK 1.7, and the old program was restructured into a new framework, improving its extensibility.
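
For readers unfamiliar with the model, cluster-cluster aggregation (CCA) lets clusters perform rigid random walks on a lattice and merge irreversibly on contact. The sketch below is a minimal on-lattice 3D version written in Python (the paper's program is in Java and is not reproduced here):

```python
import random

MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def touching(a, b, size):
    """True if any site of cluster a occupies or neighbours (6-connectivity,
    periodic boundaries) a site of cluster b."""
    for x, y, z in a:
        for dx, dy, dz in MOVES + [(0, 0, 0)]:
            if ((x + dx) % size, (y + dy) % size, (z + dz) % size) in b:
                return True
    return False

def cca_step(clusters, size, rng=random):
    """One sweep of the model: every cluster takes a rigid random-walk
    step, then clusters in contact merge irreversibly."""
    moved = []
    for cl in clusters:
        dx, dy, dz = rng.choice(MOVES)
        moved.append({((x + dx) % size, (y + dy) % size, (z + dz) % size)
                      for x, y, z in cl})
    # merge until no pair of clusters is in contact
    merging = True
    while merging:
        merging = False
        out = []
        for cl in moved:
            for i, other in enumerate(out):
                if touching(cl, other, size):
                    out[i] = other | cl
                    merging = True
                    break
            else:
                out.append(cl)
        moved = out
    return moved

# dilute gas of single-particle clusters that aggregates over time
random.seed(0)
SIZE = 12
clusters = [{(random.randrange(SIZE), random.randrange(SIZE),
              random.randrange(SIZE))} for _ in range(30)]
for _ in range(80):
    clusters = cca_step(clusters, SIZE)
```

Production codes differ mainly in bookkeeping (neighbour lists, mass-dependent mobility), but this is the core loop whose data structures the JDK migration and refactoring would affect.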

  1. A 3DHZETRN Code in a Spherical Uniform Sphere with Monte Carlo Verification

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2014-01-01

    The computationally efficient HZETRN code has been used in recent trade studies for lunar and Martian exploration and is currently being used in the engineering development of the next generation of space vehicles, habitats, and extravehicular activity equipment. A new version (3DHZETRN), capable of transporting high charge (Z) and energy (HZE) ions and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation, is under development. In the present report, new algorithms for light ion and neutron propagation with well-defined convergence criteria in 3D objects are developed and tested against Monte Carlo simulations to verify the solution methodology. The code will be available through the software system OLTARIS for shield design and validation, and provides a basis for personal computer software capable of space shield analysis and optimization.

  2. A Cost-Effectiveness Analysis of Alternative Guided Media for the Backbone Cable Plant Portion of the Base Information Transfer System

    DTIC Science & Technology

    1991-03-01

    [Only OCR residue of the title and distribution-list pages is recoverable: a Naval Postgraduate School thesis (Monterey, California; report AD-A242 688), with the standard disclaimer that the views expressed are those of the author and do not reflect the official policy or position of the Department of Defense or the US Government. No abstract survives.]

  3. A mixed valence zinc dithiolene system with spectator metal and reactor ligands.

    PubMed

    Ratvasky, Stephen C; Mogesa, Benjamin; van Stipdonk, Michael J; Basu, Partha

    2016-08-16

    Neutral complexes of zinc with N,N'-diisopropylpiperazine-2,3-dithione (iPr2Dt0) and N,N'-dimethylpiperazine-2,3-dithione (Me2Dt0) with chloride or maleonitriledithiolate (mnt2-) as coligands have been synthesized and characterized. The molecular structures of these zinc complexes have been determined using single-crystal X-ray diffractometry. Complexes recrystallize in monoclinic P-type systems with zinc adopting a distorted tetrahedral geometry. Two zinc complexes with mixed-valent dithiolene ligands exhibit ligand-to-ligand charge transfer bands. Optimized geometries, molecular vibrations and electronic structures of the charge-transfer complexes were calculated using density functional theory (B3LYP/6-311+G(d,p) level). Redox orbitals are shown to be almost exclusively ligand in nature, with a HOMO based heavily on the electron-rich maleonitriledithiolate ligand and a LUMO comprised mostly of the electron-deficient dithione ligand. Charge transfer is thus believed to proceed from the dithiolate HOMO to the dithione LUMO, showing ligand-to-ligand redox interplay across a d10 metal.

  4. 3D equilibrium reconstruction with islands

    NASA Astrophysics Data System (ADS)

    Cianciosa, M.; Hirshman, S. P.; Seal, S. K.; Shafer, M. W.

    2018-04-01

    This paper presents the development of a 3D equilibrium reconstruction tool and the results of the first-ever reconstruction of an island equilibrium. The SIESTA non-nested equilibrium solver has been coupled to the V3FIT 3D equilibrium reconstruction code. Computed from a coupled VMEC and SIESTA model, synthetic signals are matched to measured signals by finding an optimal set of equilibrium parameters. By using the normalized pressure in place of normalized flux, non-equilibrium quantities needed by diagnostic signals can be efficiently mapped to the equilibrium. The effectiveness of this tool is demonstrated by reconstructing an island equilibrium of a DIII-D inner-wall-limited L-mode case with an n = 1 error field applied. Flat spots in Thomson and ECE temperature diagnostics show the reconstructed islands have the correct size and phase.

  5. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurosu, K; Department of Medical Physics ' Engineering, Osaka University Graduate School of Medicine, Osaka; Takashina, M

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) for the GATE and PHITS codes have not been reported; these are studied here for PDD and proton range in comparison with the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physical and transport models. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physical model, particle transport mechanics and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health, Labor and Welfare of Japan, Grants-in-Aid for Scientific Research (No. 23791419), and JSPS Core-to-Core program (No. 23003). The authors have no conflict of interest.

  6. Aerodynamic and heat transfer analysis of the low aspect ratio turbine using a 3D Navier-Stokes code

    NASA Astrophysics Data System (ADS)

    Choi, D.; Knight, C. J.

    1991-06-01

    The single-stage, high-pressure-ratio Garrett Low Aspect Ratio Turbine (LART) test data obtained in a shock tunnel are employed as a basis for evaluating a new three-dimensional Navier-Stokes code based on the O-H grid system. It uses Coakley's two-equation turbulence modeling with viscous sublayer resolution. For the nozzle guide vanes, calculations were made with two grid zones: an O-grid zone wrapping around the airfoil and an H-grid zone outside of it, including the regions upstream of the leading edge and downstream of the trailing edge. For the rotor blade row, a third O-grid zone was added for the tip-gap leakage flow. The computational results compare well with experiment, including heat transfer distributions on the airfoils and end-walls. The leakage flow through the tip-gap clearance is well resolved.

  7. Numerical investigation of heat transfer on film-cooled turbine blades.

    PubMed

    Ginibre, P; Lefebvre, M; Liamis, N

    2001-05-01

    The accurate heat transfer prediction of film-cooled blades is a key issue for aerothermal turbine design. For this purpose, advanced numerical methods have been developed at Snecma Moteurs. The goal of this paper is the assessment of a three-dimensional Navier-Stokes solver, based on the ONERA CANARI-COMET code, devoted to steady aerothermal computations of film-cooled blades. The code uses a multidomain approach to discretize the blade-to-blade channel, with overlapping structured meshes for the injection holes. The turbulence closure is achieved by means of either the Michel mixing-length model or the Spalart-Allmaras one-equation transport model. Computations of thin 3D slices of three film-cooled nozzle guide vane blades with multiple injections are performed. Aerothermal predictions are compared to experiments carried out at the von Karman Institute. The behavior of the turbulence models is discussed, and velocity and temperature injection profiles are investigated.

  8. Applications Performance Under MPL and MPI on NAS IBM SP2

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    On July 5, 1994, an IBM Scalable POWERparallel System (IBM SP2) with 64 nodes was installed at the Numerical Aerodynamic Simulation (NAS) Facility. Each node of the NAS IBM SP2 is a "wide node" consisting of an RS/6000 590 workstation module with a 66.5 MHz clock that can perform four floating point operations per cycle, for a peak performance of 266 Mflop/s. By the end of 1994, the 64-node IBM SP2 will be upgraded to 160 nodes with a peak performance of 42.5 Gflop/s. An overview of the IBM SP2 hardware is presented. A basic understanding of the architectural details of the RS/6000 590 will help application scientists port, optimize, and tune codes from machines such as the CRAY C90 and the Paragon to the NAS SP2. Optimization techniques such as quad-word loading, effective utilization of the two floating point units, and data cache optimization on the RS/6000 590 are illustrated, with examples giving performance gains at each optimization step. The conversion of codes using Intel's message passing library NX to codes using the native Message Passing Library (MPL) and the Message Passing Interface (MPI) library available on the IBM SP2 is illustrated. In particular, we present the performance of the Fast Fourier Transform (FFT) kernel from the NAS Parallel Benchmarks (NPB) under MPL and MPI. We have also optimized some of the Fortran BLAS 2 and BLAS 3 routines; e.g., the optimized Fortran DAXPY runs at 175 Mflop/s and the optimized Fortran DGEMM at 230 Mflop/s per node. The performance of the NPB (Class B) on the IBM SP2 is compared with the CRAY C90, Intel Paragon, TMC CM-5E, and CRAY T3D.

  9. High Temperature Gas Energy Transfer.

    DTIC Science & Technology

    1980-08-12

    [Only OCR residue of the report's distribution list (Office of Naval Research; AFRPL, Edwards AFB) is recoverable; no abstract survives.]

  10. Simulation of underwater explosion benchmark experiments with ALE3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Couch, R.; Faux, D.

    1997-05-19

    Some code improvements have been made during the course of this study. One immediately obvious need was for more flexibility in the constitutive representation for materials in shell elements. To remedy this situation, a model with a tabular representation of stress versus strain and rate dependent effects was implemented. This was required in order to obtain reasonable results in the IED cylinder simulation. Another deficiency was in the ability to extract and plot variables associated with shell elements. The pipe whip analysis required the development of a scheme to tally and plot time dependent shell quantities such as stresses and strains. This capability had previously existed only for solid elements. Work was initiated to provide the same range of plotting capability for structural elements that exists with the DYNA3D/TAURUS tools. One of the characteristics of these problems is the disparity in zoning required in the vicinity of the charge and bubble compared to that needed in the far field. This disparity can cause the equipotential relaxation logic to provide a less than optimal solution. Various approaches were utilized to bias the relaxation to obtain more optimal meshing during relaxation. Extensions of these techniques have been developed to provide more powerful options, but more work still needs to be done. The results presented here are representative of what can be produced with an ALE code structured like ALE3D. They are not necessarily the best results that could have been obtained. More experience in assessing sensitivities to meshing and boundary conditions would be very useful. A number of code deficiencies discovered in the course of this work have been corrected and are available for any future investigations.

  11. Modifications to MacCormack’s 2-D Navier-Stokes Compression Ramp Code for Application to Flows with Axes of Symmetry and Wall Mass Transfer

    DTIC Science & Technology

    1981-01-01

    [Fragmentary OCR only. The recoverable text reads: "... in two dimensions have been studied experimentally by Gray and Rhudy (Ref. 3) and theoretically by Bloy and Georgeff (Ref. 4) and Carter (Ref. 5) ..."; the remaining residue is from the reference list ("Laminar Boundary Layer Separation at Supersonic and Hypersonic Speeds," AEDC-TR-70-235, March 1971; Bloy, A. W. and Georgeff, M. P., "The Hypersonic Laminar ...").]

  12. Theory and Computation of Optimal Low- and Medium- Thrust Orbit Transfers

    NASA Technical Reports Server (NTRS)

    Goodson, Troy D.; Chuang, Jason C. H.; Ledsinger, Laura A.

    1996-01-01

    This report presents new theoretical results which lead to new algorithms for the computation of fuel-optimal multiple-burn orbit transfers of low and medium thrust. Theoretical results introduced herein show how to add burns to an optimal trajectory and show that the traditional set of necessary conditions may be replaced with a much simpler set of equations. Numerical results are presented to demonstrate the utility of the theoretical results and the new algorithms. Two indirect methods from the literature are shown to be effective for the optimal orbit transfer problem with relatively small numbers of burns. These methods are the Minimizing Boundary Condition Method (MBCM) and BOUNDSCO. Both of these methods make use of the first-order necessary conditions exactly as derived by optimal control theory. Perturbations due to Earth's oblateness and atmospheric drag are considered. These perturbations are of greatest interest for transfers that take place between low Earth orbit altitudes and geosynchronous orbit altitudes. Example extremal solutions including these effects and computed by the aforementioned methods are presented. An investigation is also made into a suboptimal multiple-burn guidance scheme. The FORTRAN code developed for this study has been collected together in a package named ORBPACK. ORBPACK's user manual is provided as an appendix to this report.

  13. Assessment of polarization effect on aerosol retrievals from MODIS

    NASA Astrophysics Data System (ADS)

    Korkin, S.; Lyapustin, A.

    2010-12-01

    Light polarization affects the total intensity of scattered radiation. In this work, we compare aerosol retrievals performed by the code MAIAC [1] with and without taking polarization into account. The MAIAC retrievals are based on look-up tables (LUT). For this work, MAIAC was run using two different LUTs, the first generated using the scalar code SHARM [2], and the second generated with the vector code Modified Vector Discrete Ordinates Method (MVDOM). MVDOM is a new code suitable for computations with highly anisotropic phase functions, including cirrus clouds and snow [3]. To this end, the solution of the vector radiative transfer equation (VRTE) is represented as a sum of anisotropic and regular components. The anisotropic component is evaluated in the Small Angle Modification of the Spherical Harmonics Method (MSH) [4]. The MSH is formulated in the frame of reference of the solar beam, where the z-axis lies along the solar beam direction. In this case, the MSH solution for the anisotropic part is nearly symmetric in azimuth and is computed analytically. In the scalar case, this solution coincides with the Goudsmit-Saunderson small-angle approximation [5]. To correct for the analytical separation of the anisotropic part of the signal, the transfer equation for the regular part contains a correction source function term [6]. Several examples of polarization impact on aerosol retrievals over different surface types will be presented. 1. Lyapustin A., Wang Y., Laszlo I., Kahn R., Korkin S., Remer L., Levy R., and Reid J. S. Multi-Angle Implementation of Atmospheric Correction (MAIAC): Part 2. Aerosol Algorithm. J. Geophys. Res., submitted (2010). 2. Lyapustin A., Muldashev T., Wang Y. Code SHARM: fast and accurate radiative transfer over spatially variable anisotropic surfaces. In: Light Scattering Reviews 5. Chichester: Springer, 205-247 (2010). 3. Budak V.P., Korkin S.V. On the solution of a vectorial radiative transfer equation in an arbitrary three-dimensional turbid medium with anisotropic scattering. JQSRT, 109, 220-234 (2008). 4. Budak V.P., Sarmin S.E. Solution of radiative transfer equation by the method of spherical harmonics in the small angle modification. Atmospheric and Oceanic Optics, 3, 898-903 (1990). 5. Goudsmit S., Saunderson J.L. Multiple scattering of electrons. Phys. Rev., 57, 24-29 (1940). 6. Budak V.P., Klyuykov D.A., Korkin S.V. Convergence acceleration of radiative transfer equation solution at strongly anisotropic scattering. In: Light Scattering Reviews 5. Chichester: Springer, 147-204 (2010).

  14. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precise particle therapy, especially for medium containing inhomogeneities. However, the inherent choice of computational parameters in MC simulation codes of GATE, PHITS and FLUKA that is observed for uniform scanning proton beam needs to be evaluated. This means that the relationship between the effect of input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both GATE and PHITS codes by using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm3, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm3 voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that the gold standard for setting computational parameters for any proton therapy application cannot be determined consistently since the impact of setting parameters depends on the proton irradiation technique. 
We therefore conclude that parameters must be customized with reference to the optimized parameters of the corresponding irradiation technique in order to achieve artifact-free MC simulation for use in computational experiments and clinical treatments.
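
    As an illustration of the range-accuracy criterion above, a common figure of merit is the distal 80% range (R80): the depth beyond the Bragg peak where the dose first falls to 80% of its maximum. The sketch below extracts R80 by linear interpolation and compares two depth-dose curves; the arrays are toy data, not output from FLUKA, GATE, or PHITS.

```python
def r80(depths, doses):
    """Distal 80% range: depth beyond the Bragg peak where the dose
    first falls to 80% of the maximum, by linear interpolation."""
    target = 0.8 * max(doses)
    i_peak = doses.index(max(doses))
    for i in range(i_peak, len(doses) - 1):
        if doses[i] >= target >= doses[i + 1]:
            frac = (doses[i] - target) / (doses[i] - doses[i + 1])
            return depths[i] + frac * (depths[i + 1] - depths[i])
    raise ValueError("dose never falls below 80% of the peak")

# Toy depth-dose samples (depth in mm, arbitrary dose units) standing in
# for a reference code (e.g. FLUKA) and one candidate parameter set.
depths = [150, 152, 154, 156, 158, 160]
ref = [60, 75, 100, 70, 30, 5]
cand = [58, 74, 100, 72, 31, 6]
range_shift = abs(r80(depths, ref) - r80(depths, cand))
```

    A range shift between the candidate and reference curves below the clinical tolerance (typically a fraction of a millimetre) would indicate acceptable parameter settings.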

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fung, Jeffrey; Masset, Frédéric; Velasco, David

    Planetary migration is inherently a three-dimensional (3D) problem, because Earth-size planetary cores are deeply embedded in protoplanetary disks. Simulations of these 3D disks remain challenging due to the steep resolution requirements. Using two different hydrodynamics codes, FARGO3D and PEnGUIn, we simulate disk–planet interaction for a one to five Earth-mass planet embedded in an isentropic disk. We measure the torque on the planet and ensure that the measurements are converged both in resolution and between the two codes. We find that the torque is independent of the smoothing length of the planet’s potential (r_s), and that it has a weak dependence on the adiabatic index of the gaseous disk (γ). The torque values correspond to an inward migration rate qualitatively similar to previous linear calculations. We perform additional simulations with explicit radiative transfer using FARGOCA, and again find agreement between 3D simulations and existing torque formulae. We also present the flow pattern around the planets, showing that active flow is present within the planet’s Hill sphere and that meridional vortices are shed downstream. The vertical flow speed near the planet is faster for a smaller r_s or γ, up to supersonic speeds for the smallest r_s and γ in our study.

  16. Genetic Algorithm Optimization of a Film Cooling Array on a Modern Turbine Inlet Vane

    DTIC Science & Technology

    2012-09-01

    heater is typically higher than the test section temperature since there is a lag due to heat transfer to the piping between the heater and test... flexible substrate 301 used 50 microns thick and the gauges themselves are a platinum metal layer 500-Å thick. When subjected to a change in heat ...more advanced gas turbine cooling design methods that factor in the 3-D flowfield and heat transfer characteristics, this study involves the

  17. The complete mitochondrial genome of the Feral Rock Pigeon (Columba livia breed feral).

    PubMed

    Li, Chun-Hong; Liu, Fang; Wang, Li

    2014-10-01

    Abstract In the present work, we report the complete mitochondrial genome sequence of the feral rock pigeon for the first time. The total length of the mitogenome was 17,239 bp with a base composition of 30.3% for A, 24.0% for T, 31.9% for C, and 13.8% for G, and an A-T (54.3%)-rich feature was detected. It harbored 13 protein-coding genes, 2 ribosomal RNA genes, 22 transfer RNA genes and 1 non-coding control region (D-loop region). The arrangement of all genes was identical to the typical mitochondrial genomes of pigeons. The complete mitochondrial genome sequence of the feral rock pigeon will serve as an important dataset of germplasm resources for further study.
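
    The reported composition is straightforward to reproduce for any sequence; the helper and toy sequence below are illustrative, not the pigeon mitogenome.

```python
from collections import Counter

def base_composition(seq):
    """Percentage of each base over A/C/G/T plus the combined A+T share."""
    counts = Counter(seq.upper())
    n = sum(counts[b] for b in "ACGT")
    pct = {b: 100.0 * counts[b] / n for b in "ACGT"}
    pct["A+T"] = pct["A"] + pct["T"]
    return pct

# Toy sequence, not the 17,239 bp mitogenome.
comp = base_composition("ATATGCAT")
```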

  18. Photonic band structures solved by a plane-wave-based transfer-matrix method.

    PubMed

    Li, Zhi-Yuan; Lin, Lan-Lan

    2003-04-01

    Transfer-matrix methods adopting a plane-wave basis have been routinely used to calculate the scattering of electromagnetic waves by general multilayer gratings and photonic crystal slabs. In this paper we show that this technique, when combined with Bloch's theorem, can be extended to solve the photonic band structure for 2D and 3D photonic crystal structures. Three different eigensolution schemes for solving the traditional band diagrams along high-symmetry lines in the first Brillouin zone of the crystal are discussed. Optimal rules for the Fourier expansion of the dielectric function and electromagnetic fields, whose discontinuities occur at the boundaries between different material domains, have been employed to accelerate the convergence of the numerical computation. Application of this method to an important class of 3D layer-by-layer photonic crystals reveals the superior convergence of this approach over the conventional plane-wave expansion method.
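
    The Bloch-theorem step is easiest to see in one dimension at normal incidence, where a propagating band exists whenever the unit-cell characteristic matrix M satisfies |Tr M|/2 ≤ 1. The sketch below is this minimal 1D analogue, not the paper's 3D plane-wave formulation; the layer indices and thicknesses are arbitrary assumptions.

```python
import cmath

def layer_matrix(n, d, k0):
    """Characteristic matrix of a dielectric slab at normal incidence."""
    delta = n * k0 * d
    return [[cmath.cos(delta), -1j * cmath.sin(delta) / n],
            [-1j * n * cmath.sin(delta), cmath.cos(delta)]]

def matmul(a, b):
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

def in_band(k0, layers):
    """Bloch's theorem: cos(K*Lambda) = Tr(M)/2, so a band exists
    when |Tr M| / 2 <= 1 for the unit-cell matrix M."""
    m = [[1, 0], [0, 1]]
    for n, d in layers:
        m = matmul(m, layer_matrix(n, d, k0))
    return abs((m[0][0] + m[1][1]).real) / 2.0 <= 1.0
```

    For a quarter-wave stack the trace magnitude exceeds 2 and the frequency falls inside a band gap, while sufficiently long wavelengths always propagate.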

  19. Computer code for predicting coolant flow and heat transfer in turbomachinery

    NASA Technical Reports Server (NTRS)

    Meitner, Peter L.

    1990-01-01

    A computer code was developed to analyze any turbomachinery coolant flow path geometry that consists of a single flow passage with a unique inlet and exit. Flow can be bled off for tip-cap impingement cooling, and a flow bypass can be specified in which coolant flow is taken off at one point in the flow channel and reintroduced at a point farther downstream in the same channel. The user may either choose the coolant flow rate or let the program determine the flow rate from specified inlet and exit conditions. The computer code integrates the 1-D momentum and energy equations along a defined flow path and calculates the coolant's flow rate, temperature, pressure, and velocity and the heat transfer coefficients along the passage. The equations account for area change, mass addition or subtraction, pumping, friction, and heat transfer.
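
    A minimal sketch of the 1-D marching integration described above, keeping only the convective heat-transfer term of the energy equation and ignoring the area change, mass addition, pumping, and friction terms the actual code also integrates; the function name and all numbers are illustrative.

```python
def coolant_exit_temperature(t_in, t_wall, h, perim, mdot, cp, length, steps=1000):
    """Explicit Euler march of dT/dx = h*P*(T_wall - T) / (mdot*cp),
    the convective part of the 1-D energy equation along the passage."""
    dx = length / steps
    t = t_in
    for _ in range(steps):
        t += dx * h * perim * (t_wall - t) / (mdot * cp)
    return t

# Illustrative values: 300 K coolant entering a 1 m passage with 400 K
# walls, chosen so that h*P*L / (mdot*cp) = 1.
t_out = coolant_exit_temperature(300.0, 400.0, 100.0, 0.1, 0.01, 1000.0, 1.0)
```

    For this reduced equation the closed form is T(L) = T_wall − (T_wall − T_in)·exp(−hPL/ṁcp) ≈ 363.2 K here, which the march reproduces to within its truncation error.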

  20. The complete mitochondrial genome of the Border Collie dog.

    PubMed

    Wu, An-Quan; Zhang, Yong-Liang; Li, Li-Li; Chen, Long; Yang, Tong-Wen

    2016-01-01

    The Border Collie is one of the most famous dog breeds. In the present work, we report the complete mitochondrial genome sequence of the Border Collie dog for the first time. The total length of the mitogenome was 16,730 bp with a base composition of 31.6% for A, 28.7% for T, 25.5% for C, and 14.2% for G, and an A-T (60.3%)-rich feature was detected. It harbored 13 protein-coding genes, two ribosomal RNA genes, 22 transfer RNA genes and one non-coding control region (D-loop region). The arrangement of all genes was identical to the typical mitochondrial genomes of dogs.

  1. General Mission Analysis Tool (GMAT) Acceptance Test Plan [Draft

    NASA Technical Reports Server (NTRS)

    Dove, Edwin; Hughes, Steve

    2007-01-01

    The information presented in this Acceptance Test Plan document shows the current status of the General Mission Analysis Tool (GMAT). GMAT is a software system developed by NASA Goddard Space Flight Center (GSFC) in collaboration with the private sector. The GMAT development team continuously performs acceptance tests in order to verify that the software continues to operate properly after updates are made. The GMAT development team consists of NASA/GSFC Code 583 software developers, NASA/GSFC Code 595 analysts, and contractors of varying professions. GMAT was developed to provide a development approach that maintains involvement from the private sector and academia, encourages collaborative funding from multiple government agencies and the private sector, and promotes the transfer of technology from government funded research to the private sector. GMAT contains many capabilities, such as integrated formation flying modeling and MATLAB compatibility. The propagation capabilities in GMAT allow for fully coupled dynamics modeling of multiple spacecraft, in any flight regime. Other capabilities in GMAT include: user-definable coordinate systems, 3-D graphics in any coordinate system GMAT can calculate, 2-D plots, branch commands, solvers, optimizers, GMAT functions, planetary ephemeris sources including DE405, DE200, SLP and analytic models, script events, impulsive and finite maneuver models, and many more. GMAT runs on Windows, Mac, and Linux platforms. Both the Graphical User Interface (GUI) and the GMAT engine were built and tested on all of the mentioned platforms. GMAT was designed for intuitive use from both the GUI and with an importable script language similar to that of MATLAB.

  2. Electron Transfer Activity of a de Novo Designed Copper Center in a Three-Helix Bundle Fold

    PubMed Central

    Plegaria, Jefferson S.; Herrero, Christian; Quaranta, Annamaria; Pecoraro, Vincent L.

    2017-01-01

    In this work, we characterized the intermolecular ET properties of a de novo designed metallopeptide using laser flash photolysis. α3D-CH3 is a three-helix bundle peptide that was designed to contain the copper ET site found in the β-barrel fold of native cupredoxins. The ET activity of Cuα3D-CH3 was determined using five different photosensitizers. By exhibiting a complete depletion of the photo-oxidant and the successive formation of a Cu(II) species at 400 nm, the transient spectra demonstrated an ET reaction between the photo-oxidant and Cu(I)α3D-CH3. This observation illustrates our success in integrating an ET center within a de novo designed scaffold. From the kinetic traces at 400 nm, first-order and bimolecular rate constants of 10^5 s^-1 and 10^8 M^-1 s^-1 were derived. Moreover, a Marcus analysis of the rate-versus-driving-force study produced a reorganization energy of 1.1 eV, demonstrating that the helical fold of α3D requires further structural optimization to perform ET efficiently. PMID:26427552
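
    The Marcus analysis mentioned above can be sketched with the semi-classical rate expression k ∝ exp(−(λ + ΔG)²/4λk_BT); the grid search, synthetic rates, and absorbed prefactor below are illustrative assumptions, not the paper's fitting procedure.

```python
import math

KB_T = 0.0257  # thermal energy in eV at ~298 K

def marcus_rate(dg, lam, k0=1.0):
    """Semi-classical Marcus ET rate, with the prefactor absorbed in k0."""
    return k0 * math.exp(-(lam + dg) ** 2 / (4.0 * lam * KB_T))

def fit_lambda(points, grid):
    """Pick the reorganization energy on a grid that best reproduces the
    measured rate-vs-driving-force data (least squares in log space)."""
    def sse(lam):
        return sum((math.log(k) - math.log(marcus_rate(dg, lam))) ** 2
                   for dg, k in points)
    return min(grid, key=sse)

# Synthetic rate-vs-driving-force data generated with lambda = 1.1 eV.
data = [(dg, marcus_rate(dg, 1.1)) for dg in (-0.3, -0.5, -0.7, -0.9)]
grid = [round(0.5 + 0.05 * i, 2) for i in range(21)]  # 0.5 ... 1.5 eV
```

    The rate is maximal when −ΔG equals λ, which is what makes the rate-versus-driving-force curve sensitive to the reorganization energy.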

  3. Optimization of wavefront coding imaging system using heuristic algorithms

    NASA Astrophysics Data System (ADS)

    González-Amador, E.; Padilla-Vivanco, A.; Toxqui-Quitl, C.; Zermeño-Loreto, O.

    2017-08-01

    Wavefront Coding (WFC) systems use an aspheric Phase-Mask (PM) and digital image processing to extend the Depth of Field (EDoF) of computational imaging systems. For years, several kinds of PM have been designed to produce a point spread function (PSF) that is nearly defocus-invariant. In this paper, the optimization of the phase deviation parameter is done by means of genetic algorithms (GAs). Here, the merit function minimizes the mean square error (MSE) between the diffraction-limited Modulation Transfer Function (MTF) and the MTF of the wavefront-coded system at different misfocus values. WFC systems were simulated using the cubic, trefoil, and 4 Zernike-polynomial phase-masks. Numerical results show near defocus-invariance in all cases. Nevertheless, the best results are obtained by using the trefoil phase-mask, because the decoded image is almost free of artifacts.
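
    The merit function can be sketched in one dimension: a pupil carrying a cubic phase α·x³ and a defocus term ψ·x², an MTF computed as the normalized pupil autocorrelation, and a fitness equal to the MSE against the diffraction-limited MTF over several defocus values. The sampling density, phase strengths, and defocus values are illustrative assumptions, not the paper's settings.

```python
import cmath

N = 64  # pupil samples across the 1-D aperture

def mtf(alpha, psi):
    """1-D MTF of a pupil with cubic phase alpha*x^3 and defocus psi*x^2,
    computed as the normalized autocorrelation of the pupil function."""
    xs = [-1.0 + 2.0 * i / (N - 1) for i in range(N)]
    p = [cmath.exp(1j * (alpha * x ** 3 + psi * x ** 2)) for x in xs]
    out = []
    for shift in range(N):
        acc = sum(p[i] * p[i + shift].conjugate() for i in range(N - shift))
        out.append(abs(acc))
    return [v / out[0] for v in out]

def merit(alpha, defocus_set=(0.0, 2.0, 4.0)):
    """GA fitness from the abstract: MSE between the diffraction-limited
    MTF and the wavefront-coded MTFs across several defocus values."""
    ideal = mtf(0.0, 0.0)
    err = 0.0
    for psi in defocus_set:
        m = mtf(alpha, psi)
        err += sum((a - b) ** 2 for a, b in zip(ideal, m)) / N
    return err / len(defocus_set)
```

    A GA (or any optimizer) would then search the phase-deviation parameter α for the value minimizing `merit`.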

  4. Optimization of atmospheric transport models on HPC platforms

    NASA Astrophysics Data System (ADS)

    de la Cruz, Raúl; Folch, Arnau; Farré, Pau; Cabezas, Javier; Navarro, Nacho; Cela, José María

    2016-12-01

    The performance and scalability of atmospheric transport models on high performance computing environments is often far from optimal for multiple reasons including, for example, sequential input and output, synchronous communications, work unbalance, memory access latency or lack of task overlapping. We investigate how different software optimizations and porting to non general-purpose hardware architectures improve code scalability and execution times considering, as an example, the FALL3D volcanic ash transport model. To this purpose, we implement the FALL3D model equations in the WARIS framework, a software designed from scratch to solve in a parallel and efficient way different geoscience problems on a wide variety of architectures. In addition, we consider further improvements in WARIS such as hybrid MPI-OMP parallelization, spatial blocking, auto-tuning and thread affinity. Considering all these aspects together, the FALL3D execution times for a realistic test case running on general-purpose cluster architectures (Intel Sandy Bridge) decrease by a factor between 7 and 40 depending on the grid resolution. Finally, we port the application to Intel Xeon Phi (MIC) and NVIDIA GPUs (CUDA) accelerator-based architectures and compare performance, cost and power consumption on all the architectures. Implications on time-constrained operational model configurations are discussed.

  5. Uncertainty Analysis in 3D Equilibrium Reconstruction

    DOE PAGES

    Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.

    2018-02-21

    Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. In this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole-shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole-shot reconstruction results over a time interval are used to validate the propagated uncertainty from a single time slice.

  6. Uncertainty Analysis in 3D Equilibrium Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.

    Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. In this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole-shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole-shot reconstruction results over a time interval are used to validate the propagated uncertainty from a single time slice.

  7. Seven-Tesla Magnetization Transfer Imaging to Detect Multiple Sclerosis White Matter Lesions.

    PubMed

    Chou, I-Jun; Lim, Su-Yin; Tanasescu, Radu; Al-Radaideh, Ali; Mougin, Olivier E; Tench, Christopher R; Whitehouse, William P; Gowland, Penny A; Constantinescu, Cris S

    2018-03-01

    Fluid-attenuated inversion recovery (FLAIR) imaging at 3 Tesla (T) field strength is the most sensitive modality for detecting white matter lesions in multiple sclerosis. While 7T FLAIR is effective in detecting cortical lesions, it has not been fully optimized for visualization of white matter lesions and thus has not been used for delineating lesions in quantitative magnetic resonance imaging (MRI) studies of the normal-appearing white matter in multiple sclerosis. Therefore, we aimed to evaluate the sensitivity of 7T magnetization-transfer-weighted (MTw) images in the detection of white matter lesions compared with 3T-FLAIR. Fifteen patients with clinically isolated syndrome, 6 with multiple sclerosis, and 10 healthy participants were scanned with 7T three-dimensional (3D) MTw and 3T-2D-FLAIR sequences on the same day. White matter lesions visible on either sequence were delineated. Of 662 lesions identified on 3T-2D-FLAIR images, 652 were detected on 7T-3D-MTw images (sensitivity, 98%; 95% confidence interval, 97% to 99%). The Spearman correlation coefficient between lesion loads estimated by the two sequences was .910. The intrarater and interrater reliability for 7T-3D-MTw images was good, with intraclass correlation coefficients (ICC) of 98.4% and 81.8%, similar to those for 3T-2D-FLAIR images (ICC 96.1% and 96.7%). Seven-Tesla MTw sequences detected most of the white matter lesions identified by FLAIR at 3T. This suggests that 7T-MTw imaging is a robust alternative for detecting demyelinating lesions in addition to 3T-FLAIR. Future studies need to compare the roles of optimized 7T-FLAIR and of 7T-MTw imaging. © 2017 The Authors. Journal of Neuroimaging published by Wiley Periodicals, Inc. on behalf of American Society of Neuroimaging.

  8. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    PubMed Central

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    Abstract We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model’s predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388

  9. Comparison of Predicted and Measured Turbine Vane Rough Surface Heat Transfer

    NASA Technical Reports Server (NTRS)

    Boyle, R. J.; Spuckler, C. M.; Lucci, B. L.

    2000-01-01

    This paper compares predicted turbine vane heat transfer for a rough surface with experimental data over a wide range of test conditions. Predictions were made for the entire vane surface; however, measurements were made only over the suction surface of the vane and the leading edge region of the pressure surface. Comparisons are shown for a wide range of test conditions. Inlet pressures varied between 3 and 15 psia, and exit Mach numbers ranged between 0.3 and 0.9. Thus, while a single roughened vane was used for the tests, the effective roughness (k^+) varied by more than a factor of ten. Results were obtained for freestream turbulence levels of 1 and 10%. Heat transfer predictions were obtained using the Navier-Stokes computer code RVCQ3D. Two turbulence models suitable for rough-surface analysis are incorporated in this code. The Cebeci-Chang roughness model is part of the algebraic turbulence model. The k-omega turbulence model accounts for the effect of roughness in the application of the boundary condition. Roughness causes turbulent flow over the vane surface. Even after accounting for transition, surface roughness significantly increased heat transfer compared to a smooth surface. The k-omega results agreed better with the data than the Cebeci-Chang model. However, the low Reynolds number k-omega model did not accurately account for roughness when the freestream turbulence level was low. The high Reynolds number version of this model was more suitable when the freestream turbulence was low.

  10. Exploring the parameter space of the coarse-grained UNRES force field by random search: selecting a transferable medium-resolution force field.

    PubMed

    He, Yi; Xiao, Yi; Liwo, Adam; Scheraga, Harold A

    2009-10-01

    We explored the energy-parameter space of our coarse-grained UNRES force field for large-scale ab initio simulations of protein folding, to obtain good initial approximations for hierarchical optimization of the force field with new virtual-bond-angle bending and side-chain-rotamer potentials which we recently introduced to replace the statistical potentials. 100 sets of energy-term weights were generated randomly, and good sets were selected by carrying out replica-exchange molecular dynamics simulations of two peptides with a minimal alpha-helical and a minimal beta-hairpin fold, respectively: the tryptophan cage (PDB code: 1L2Y) and tryptophan zipper (PDB code: 1LE1). Eight sets of parameters produced native-like structures of these two peptides. These eight sets were tested on two larger proteins: the engrailed homeodomain (PDB code: 1ENH) and FBP WW domain (PDB code: 1E0L); two sets were found to produce native-like conformations of these proteins. These two sets were tested further on a larger set of nine proteins with alpha or alpha + beta structure and found to locate native-like structures of most of them. These results demonstrate that, in addition to finding reasonable initial starting points for optimization, an extensive search of parameter space is a powerful method to produce a transferable force field. Copyright 2009 Wiley Periodicals, Inc.

  11. Assessment of inlet efficiency through a 3D simulation: numerical and experimental comparison.

    PubMed

    Gómez, Manuel; Recasens, Joan; Russo, Beniamino; Martínez-Gomariz, Eduardo

    2016-10-01

    Inlet efficiency is required to characterize the flow transfer between surface and sewer flows during rain events. The dual drainage approach is based on the joint analysis of both the upper and lower drainage levels, and the flow transfer is one of the relevant elements needed to properly define this joint behaviour. This paper presents the results of an experimental and numerical investigation of the inlet efficiency definition. A full-scale (1:1) test platform located at the Technical University of Catalonia (UPC) reproduces both the runoff process in streets and the water entering the inlet. Data from tests performed on this platform allow the inlet efficiency to be estimated as a function of significant hydraulic and geometrical parameters. These tests were reproduced with a three-dimensional numerical code (Flow-3D), simulating this type of flow by solving the RANS equations. The aim of the work was to reproduce the hydraulic performance of a previously tested grated inlet under several flow and geometric conditions using Flow-3D as a virtual laboratory. This will allow inlet efficiencies to be obtained without prior experimental tests. Moreover, the 3D model allows a better understanding of the hydraulics of the flow interception and the flow patterns approaching the inlet.

  12. Advanced Applications of Adifor 3.0 for Efficient Calculation of First-and Second-Order CFD Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III

    2004-01-01

    This final report documents the accomplishments of this project. 1) The incremental-iterative (II) form of the reverse-mode (adjoint) method for computing first-order (FO) aerodynamic sensitivity derivatives (SDs) was successfully implemented and tested in a 2D CFD code (called ANSERS) using the reverse-mode capability of ADIFOR 3.0. These results compared very well with similar SDs computed via a black-box (BB) application of the reverse-mode capability of ADIFOR 3.0, and also with similar SDs calculated via the method of finite differences. 2) Second-order (SO) SDs were implemented in the 2D ANSERS code using the very efficient strategy originally proposed (but not previously tested) in Reference 3, Appendix A. Furthermore, these SO SDs were validated for accuracy and computational efficiency. 3) Studies were conducted in quasi-1D and 2D concerning the smoothness (or lack of smoothness) of the FO and SO SDs for flows with shock waves. The phenomenon is documented in the publications of this study (listed subsequently); however, the specific numerical mechanism responsible for this unsmoothness was not identified. 4) The FO and SO derivatives for quasi-1D and 2D flows were applied to predict aerodynamic design uncertainties, and were also applied in robust design optimization studies.

  13. Operation of the Defense Acquisition System

    DTIC Science & Technology

    2008-12-08

    United States Code (l) DoD Directive 8320.02, “Data Sharing in a Net-Centric Department of Defense,” December 2, 2004 (m) DoD Instruction 5200.39...2004 (t) ISO 15418-1999- “EAN/ UCC Application Identifiers and Fact Data Identifiers and Maintenance” (u) ISO 15434-1999 – “Transfer Syntax for High...Acquisition Knowledge Sharing System7 (y) Section 644 of title 15, United States Code, “Procurement strategies; contract bundling” (z) Public Law 101-576

  14. Electromagnetic Simulations for Aerospace Application Final Report CRADA No. TC-0376-92

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madsen, N.; Meredith, S.

    Electromagnetic (EM) simulation tools play an important role in the design cycle, allowing optimization of a design before it is fabricated for testing. The purpose of this cooperative project was to provide Lockheed with state-of-the-art electromagnetic (EM) simulation software that will enable the optimal design of the next generation of low-observable (LO) military aircraft through the VHF regime. More particularly, the project was principally code development and validation, its goal to produce a 3-D, conforming-grid, time-domain (TD) EM simulation tool, consisting of a mesh generator, a DS13D-based simulation kernel, and an RCS postprocessor, which was useful in the optimization of LO aircraft, both for full-aircraft simulations run on a massively parallel computer and for small-scale problems run on a UNIX workstation.

  15. Linear energy transfer in water phantom within SHIELD-HIT transport code

    NASA Astrophysics Data System (ADS)

    Ergun, A.; Sobolevsky, N.; Botvina, A. S.; Buyukcizmeci, N.; Latysheva, L.; Ogul, R.

    2017-02-01

    The effect of irradiation in tissue is important in hadron therapy for dose measurement and treatment planning. This biological effect is described by an equivalent dose H, which depends on the Linear Energy Transfer (LET). Usually, H can be expressed in terms of the absorbed dose D and the quality factor K of the radiation under consideration. In the literature, various types of transport codes have been used for modeling and simulation of the interaction of beams of protons and heavier ions with tissue-equivalent materials. Here we used the SHIELD-HIT code to simulate the decomposition of the absorbed dose by LET in water for 16O beams. A more detailed description of the capabilities of the SHIELD-HIT code can be found in the literature.
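
    The relation H = D·K can be sketched once a quality-factor convention is fixed; the sketch below assumes the ICRP 60 Q(L) relationship, which may differ from the K used in the presentation.

```python
import math

def quality_factor(let_kev_um):
    """Quality factor Q(L) vs unrestricted LET in water (keV/um),
    following the ICRP 60 convention (an assumption here)."""
    if let_kev_um < 10.0:
        return 1.0
    if let_kev_um <= 100.0:
        return 0.32 * let_kev_um - 2.2
    return 300.0 / math.sqrt(let_kev_um)

def equivalent_dose(absorbed_dose_gy, let_kev_um):
    """H = D * Q: equivalent dose in Sv for an absorbed dose in Gy."""
    return absorbed_dose_gy * quality_factor(let_kev_um)
```

    Decomposing the absorbed dose by LET, as in the abstract, allows each LET bin to be weighted by its own quality factor before summation.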

  16. Efficient 3D movement-based kernel density estimator and application to wildlife ecology

    USGS Publications Warehouse

    Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.

    2014-01-01

    We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.
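
    The baseline (non-movement-based) estimator is a fixed-bandwidth 3-D Gaussian KDE; the paper's movement-based kernel refines this along the GPS track, which the sketch below does not attempt. The function name and data are illustrative.

```python
import math

def kde3d(points, query, bandwidth):
    """Fixed-bandwidth 3-D Gaussian kernel density estimate at `query`,
    the simple baseline that a movement-based estimator refines."""
    h = bandwidth
    norm = (2.0 * math.pi) ** 1.5 * h ** 3 * len(points)
    total = 0.0
    for x, y, z in points:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2 + (z - query[2]) ** 2
        total += math.exp(-d2 / (2.0 * h * h))
    return total / norm
```

    Evaluating this kernel sum over a dense 3-D grid is what makes the estimator expensive and why the authors' parallelization pays off.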

  17. GFSSP Training Course Lectures

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok K.

    2008-01-01

    GFSSP has been extended to model conjugate heat transfer. Fluid-Solid Network Elements include: a) Fluid Nodes and Flow Branches; b) Solid Nodes and Ambient Nodes; c) Conductors connecting Fluid-Solid, Solid-Solid and Solid-Ambient Nodes. Heat Conduction Equations are solved simultaneously with Fluid Conservation Equations for Mass, Momentum, Energy and the Equation of State. The extended code was verified by comparison with the analytical solution of a simple conduction-convection problem. The code was applied to model: a) Pressurization of a Cryogenic Tank; b) Freezing and Thawing of Metal; c) Chilldown of a Cryogenic Transfer Line; d) Boil-off from a Cryogenic Tank.

  18. HAL/S-FC compiler system functional specification

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Compiler organization is discussed, including the overall compiler structure, internal data transfer, compiler development, and code optimization. The user, system, and SDL interfaces are described, along with compiler system requirements. The run-time software support package, as well as the restrictions and dependencies of the HAL/S-FC system, are also considered.

  19. A statistical approach for inferring the 3D structure of the genome.

    PubMed

    Varoquaux, Nelle; Ay, Ferhat; Noble, William Stafford; Vert, Jean-Philippe

    2014-06-15

    Recent technological advances allow the measurement, in a single Hi-C experiment, of the frequencies of physical contacts among pairs of genomic loci at a genome-wide scale. The next challenge is to infer, from the resulting DNA-DNA contact maps, accurate 3D models of how chromosomes fold and fit into the nucleus. Many existing inference methods rely on multidimensional scaling (MDS), in which the pairwise distances of the inferred model are optimized to resemble pairwise distances derived directly from the contact counts. These approaches, however, often optimize a heuristic objective function and require strong assumptions about the biophysics of DNA to transform interaction frequencies into spatial distances, and thereby may lead to incorrect structure reconstruction. We propose a novel approach to infer a consensus 3D structure of a genome from Hi-C data. The method incorporates a statistical model of the contact counts, assuming that the counts between two loci follow a Poisson distribution whose intensity decreases with the physical distance between the loci. The method can automatically adjust the transfer function relating the spatial distance to the Poisson intensity and infer a genome structure that best explains the observed data. We compare two variants of our Poisson method, with or without optimization of the transfer function, to four different MDS-based algorithms (two metric MDS methods using different stress functions, a non-metric version of MDS, and ChromSDE, a recently described, advanced MDS method) on a wide range of simulated datasets. We demonstrate that the Poisson models reconstruct better structures than all MDS-based methods, particularly at low coverage and high resolution, and we highlight the importance of optimizing the transfer function.
On publicly available Hi-C data from mouse embryonic stem cells, we show that the Poisson methods lead to more reproducible structures than MDS-based methods when we use data generated using different restriction enzymes, and when we reconstruct structures at different resolutions. A Python implementation of the proposed method is available at http://cbio.ensmp.fr/pastis. © The Author 2014. Published by Oxford University Press.
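
    The core of the statistical model can be sketched as a power-law transfer function feeding a Poisson log-likelihood; here α and β are fixed, assumed values, whereas the method itself optimizes the transfer function.

```python
import math

def expected_count(dist, beta=1.0, alpha=-3.0):
    """Transfer function: Poisson intensity as a power law of the 3-D
    distance between two loci (alpha and beta assumed fixed here)."""
    return beta * dist ** alpha

def poisson_loglik(coords, counts, beta=1.0, alpha=-3.0):
    """Log-likelihood of observed Hi-C contact counts given candidate
    3-D coordinates for each locus."""
    ll = 0.0
    for (i, j), c in counts.items():
        lam = expected_count(math.dist(coords[i], coords[j]), beta, alpha)
        ll += c * math.log(lam) - lam - math.lgamma(c + 1)
    return ll
```

    Structure inference then amounts to maximizing this likelihood over the coordinates (and, in the full method, over the transfer-function parameters as well).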

  20. Solid object visualization of 3D ultrasound data

    NASA Astrophysics Data System (ADS)

    Nelson, Thomas R.; Bailey, Michael J.

    2000-04-01

    Visualization of volumetric medical data is challenging. Rapid-prototyping (RP) equipment producing solid object prototype models of computer generated structures is directly applicable to visualization of medical anatomic data. The purpose of this study was to develop methods for transferring 3D Ultrasound (3DUS) data to RP equipment for visualization of patient anatomy. 3DUS data were acquired using research and clinical scanning systems. Scaling information was preserved and the data were segmented using threshold and local operators to extract features of interest, converted from voxel raster coordinate format to a set of polygons representing an iso-surface, and transferred to the RP machine to create a solid 3D object. Fabrication required 30 to 60 minutes depending on object size and complexity. After creation the model could be touched and viewed. A '3D visualization hardcopy device' has advantages for conveying spatial relations compared to visualization using computer display systems. The hardcopy model may be used for teaching or therapy planning. Objects may be produced at the exact dimensions of the original object or scaled up (or down) to better match the viewer's reference frame. RP models represent a useful means of communicating important information in a tangible fashion to patients and physicians.

  1. NRA8-21 Cycle 2 RBCC Turbopump Risk Reduction

    NASA Technical Reports Server (NTRS)

    Ferguson, Thomas V.; Williams, Morgan; Marcu, Bogdan

    2004-01-01

    This project was composed of three sub-tasks. The objective of the first task was to use the CFD code INS3D to generate both on- and off-design predictions for the consortium optimized impeller flowfield. The results of the flow simulations are given in the first section. The objective of the second task was to construct a turbomachinery testing database comprised of measurements made on several different impellers, an inducer, and a diffuser. The data were in the form of static pressure measurements as well as laser velocimeter measurements of velocities and flow angles within the stated components. Several databases with this information were created for these components. The third subtask objective was twofold: first, to validate the Enigma CFD code for pump diffuser analysis, and second, to perform steady and unsteady analyses on some wide flow range diffuser concepts using Enigma. The code was validated using the consortium optimized impeller database and then applied to two different concepts for wide flow diffusers.

  2. Vector processing efficiency of plasma MHD codes by use of the FACOM 230-75 APU

    NASA Astrophysics Data System (ADS)

    Matsuura, T.; Tanaka, Y.; Naraoka, K.; Takizuka, T.; Tsunematsu, T.; Tokuda, S.; Azumi, M.; Kurita, G.; Takeda, T.

    1982-06-01

    In the framework of pipelined vector architecture, the efficiency of vector processing is assessed with respect to plasma MHD codes in nuclear fusion research. By using a vector processor, the FACOM 230-75 APU, the limit of the enhancement factor due to parallelism of current vector machines is examined for three numerical codes based on a fluid model. Reasonable speed-up factors of approximately 6, 6, and 4 times faster than the highly optimized scalar version are obtained for ERATO (linear stability code), AEOLUS-R1 (nonlinear stability code) and APOLLO (1-1/2D transport code), respectively. Problems of the pipelined vector processors are discussed from the viewpoint of restructuring, optimization and choice of algorithms. In conclusion, the important concept of "concurrency within pipelined parallelism" is emphasized.
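    The bounded speed-up factors reported above are consistent with an Amdahl-style limit on pipelined vector machines; a minimal sketch (the vectorizable fraction f and per-operation vector speedup v below are illustrative, not measured FACOM figures):

```python
def vector_speedup(f, v):
    """Amdahl-style speedup: a fraction f of the work vectorizes with
    per-operation speedup v; the remaining (1 - f) stays scalar."""
    return 1.0 / ((1.0 - f) + f / v)

# Even with a fast vector unit (v = 20), an 85%-vectorizable code
# is capped well below v, consistent with modest overall factors.
s = vector_speedup(0.85, 20.0)
```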

  3. Noninvasive, three-dimensional full-field body sensor for surface deformation monitoring of human body in vivo

    NASA Astrophysics Data System (ADS)

    Chen, Zhenning; Shao, Xinxing; He, Xiaoyuan; Wu, Jialin; Xu, Xiangyang; Zhang, Jinlin

    2017-09-01

    Noninvasive, three-dimensional (3-D), full-field surface deformation measurements of the human body are important for biomedical investigations. We proposed a 3-D noninvasive, full-field body sensor based on stereo digital image correlation (stereo-DIC) for surface deformation monitoring of the human body in vivo. First, by applying an improved water-transfer printing (WTP) technique to transfer optimized speckle patterns onto the skin, the body sensor was conveniently and harmlessly fabricated directly onto the human body. Then, stereo-DIC was used to achieve 3-D noncontact and noninvasive surface deformation measurements. The accuracy and efficiency of the proposed body sensor were verified and discussed by considering different complexions. Moreover, the fabrication of speckle patterns on human skin, which has always been considered a challenging problem, was shown to be feasible, effective, and harmless as a result of the improved WTP technique. An application of the proposed stereo-DIC-based body sensor was demonstrated by measuring the pulse wave velocity of human carotid artery.

  4. Topology optimization of natural convection: Flow in a differentially heated cavity

    NASA Astrophysics Data System (ADS)

    Saglietti, Clio; Schlatter, Philipp; Berggren, Martin; Henningson, Dan

    2017-11-01

    The goal of the present work is to develop methods for optimization of the design of natural convection cooled heat sinks, using resolved simulation of both fluid flow and heat transfer. We rely on mathematical programming techniques combined with direct numerical simulations in order to iteratively update the topology of a solid structure towards optimality, i.e., until the design yielding the best performance is found, while satisfying a specific set of constraints. The investigated test case is a two-dimensional differentially heated cavity, in which the two vertical walls are held at different temperatures. The buoyancy force induces a swirling convective flow around a solid structure, whose topology is optimized to maximize the heat flux through the cavity. We rely on the spectral-element code Nek5000 to compute a high-order accurate solution of the natural convection flow arising from the conjugate heat transfer in the cavity. The laminar, steady-state solution of the problem is evaluated with a time-marching scheme that has an increased convergence rate; the actual iterative optimization is obtained using a steepest-descent algorithm, and the gradients are conveniently computed using the continuous adjoint equations for convective heat transfer.

  5. Tetrahedral Hohlraum Visualization and Pointings

    NASA Astrophysics Data System (ADS)

    Klare, K. A.; Wallace, J. M.; Drake, D.

    1997-11-01

    In designing experiments for Omega, the tetrahedral hohlraum (a sphere with four holes) can make full use of all 60 beams. There are some complications: the beams must clear the laser entrance hole (LEH), must miss a central capsule, absolutely must not go out the other LEHs, and should distribute in the interior of the hohlraum to maximize the uniformity of irradiation on the capsule while keeping reasonable laser spot sizes. We created a 15-offset coordinate system with which an IDL program computes clearances, writes a file for QuickDraw 3D (QD3D) visualization, and writes input for the viewfactor code RAYNA IV. Visualizing and adjusting the parameters by eye gave more reliable results than computer optimization. QD3D images permitted quick live rotations to determine offsets. The clearances obtained ensured safe operation and good physics. The viewfactor code computes the initial irradiation of the hohlraum and capsule or of a uniform hohlraum source with the loss through the four LEHs and shows a high degree of uniformity with both, better for lasers because this deposits more energy near the LEHs to compensate for the holes.

  6. Radiation and polarization signatures of the 3D multizone time-dependent hadronic blazar model

    DOE PAGES

    Zhang, Haocheng; Diltz, Chris; Bottcher, Markus

    2016-09-23

    We present a newly developed time-dependent three-dimensional multizone hadronic blazar emission model. By coupling a Fokker–Planck-based lepto-hadronic particle evolution code, 3DHad, with a polarization-dependent radiation transfer code, 3DPol, we are able to study the time-dependent radiation and polarization signatures of a hadronic blazar model for the first time. Our current code is limited to parameter regimes in which the hadronic γ-ray output is dominated by proton synchrotron emission, neglecting pion production. Our results demonstrate that the time-dependent flux and polarization signatures are generally dominated by the relation between the synchrotron cooling and the light-crossing timescale, which is largely independent of the exact model parameters. We find that unlike the low-energy polarization signatures, which can vary rapidly in time, the high-energy polarization signatures appear stable. Lastly, future high-energy polarimeters may be able to distinguish such signatures from the lower and more rapidly variable polarization signatures expected in leptonic models.

  7. 3D equilibrium reconstruction with islands

    DOE PAGES

    Cianciosa, M.; Hirshman, S. P.; Seal, S. K.; ...

    2018-02-15

    This study presents the development of a 3D equilibrium reconstruction tool and the results of the first-ever reconstruction of an island equilibrium. The SIESTA non-nested equilibrium solver has been coupled to the V3FIT 3D equilibrium reconstruction code. Computed from a coupled VMEC and SIESTA model, synthetic signals are matched to measured signals by finding an optimal set of equilibrium parameters. By using the normalized pressure in place of normalized flux, non-equilibrium quantities needed by diagnostic signals can be efficiently mapped to the equilibrium. The effectiveness of this tool is demonstrated by reconstructing an island equilibrium of a DIII-D inner wall limited L-mode case with an n = 1 error field applied. Finally, flat spots in Thomson and ECE temperature diagnostics show the reconstructed islands have the correct size and phase.

  8. 3D equilibrium reconstruction with islands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cianciosa, M.; Hirshman, S. P.; Seal, S. K.

    This study presents the development of a 3D equilibrium reconstruction tool and the results of the first-ever reconstruction of an island equilibrium. The SIESTA non-nested equilibrium solver has been coupled to the V3FIT 3D equilibrium reconstruction code. Computed from a coupled VMEC and SIESTA model, synthetic signals are matched to measured signals by finding an optimal set of equilibrium parameters. By using the normalized pressure in place of normalized flux, non-equilibrium quantities needed by diagnostic signals can be efficiently mapped to the equilibrium. The effectiveness of this tool is demonstrated by reconstructing an island equilibrium of a DIII-D inner wall limited L-mode case with an n = 1 error field applied. Finally, flat spots in Thomson and ECE temperature diagnostics show the reconstructed islands have the correct size and phase.

  9. Radiative transfer codes for atmospheric correction and aerosol retrieval: intercomparison study.

    PubMed

    Kotchenova, Svetlana Y; Vermote, Eric F; Levy, Robert; Lyapustin, Alexei

    2008-05-01

    Results are summarized for a scientific project devoted to the comparison of four atmospheric radiative transfer codes incorporated into different satellite data processing algorithms, namely, 6SV1.1 (second simulation of a satellite signal in the solar spectrum, vector, version 1.1), RT3 (radiative transfer), MODTRAN (moderate resolution atmospheric transmittance and radiance code), and SHARM (spherical harmonics). The performance of the codes is tested against well-known benchmarks, such as Coulson's tabulated values and a Monte Carlo code. The influence of revealed differences on aerosol optical thickness and surface reflectance retrieval is estimated theoretically by using a simple mathematical approach. All information about the project can be found at http://rtcodes.ltdri.org.

  10. Radiative transfer codes for atmospheric correction and aerosol retrieval: intercomparison study

    NASA Astrophysics Data System (ADS)

    Kotchenova, Svetlana Y.; Vermote, Eric F.; Levy, Robert; Lyapustin, Alexei

    2008-05-01

    Results are summarized for a scientific project devoted to the comparison of four atmospheric radiative transfer codes incorporated into different satellite data processing algorithms, namely, 6SV1.1 (second simulation of a satellite signal in the solar spectrum, vector, version 1.1), RT3 (radiative transfer), MODTRAN (moderate resolution atmospheric transmittance and radiance code), and SHARM (spherical harmonics). The performance of the codes is tested against well-known benchmarks, such as Coulson's tabulated values and a Monte Carlo code. The influence of revealed differences on aerosol optical thickness and surface reflectance retrieval is estimated theoretically by using a simple mathematical approach. All information about the project can be found at http://rtcodes.ltdri.org.

  11. A comprehensive study of MPI parallelism in three-dimensional discrete element method (DEM) simulation of complex-shaped granular particles

    NASA Astrophysics Data System (ADS)

    Yan, Beichuan; Regueiro, Richard A.

    2018-02-01

    A three-dimensional (3D) DEM code for simulating complex-shaped granular particles is parallelized using the message-passing interface (MPI). The concepts of link-block, ghost/border layer, and migration layer are put forward for the design of the parallel algorithm, and a theoretical function for 3D DEM scalability and memory usage is derived. Many performance-critical implementation details are managed optimally to achieve high performance and scalability, such as minimizing communication overhead, maintaining dynamic load balance, handling particle migrations across block borders, transmitting C++ dynamic objects of particles between MPI processes efficiently, and eliminating redundant contact information between adjacent MPI processes. The code executes on multiple US Department of Defense (DoD) supercomputers and is tested on up to 2048 compute nodes for simulating 10 million three-axis ellipsoidal particles. Performance analyses of the code, including speedup, efficiency, scalability, and granularity across five orders of magnitude of simulation scale (number of particles), are provided and demonstrate high speedup and excellent scalability. It is also found that communication time is a decreasing function of the number of compute nodes in strong-scaling measurements. The code's capability of simulating a large number of complex-shaped particles on modern supercomputers will be of value in both laboratory studies on micromechanical properties of granular materials and many realistic engineering applications involving granular materials.
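    The ghost/border-layer idea can be mocked up serially (this is an illustrative stand-in for the MPI halo exchange, not the paper's code): each block keeps one ghost slot per side that is refilled from the neighbouring block's border cell before each computation step.

```python
import numpy as np

def exchange_ghost_layers(blocks):
    """Serial mock-up of the border/ghost-layer exchange used in
    block-decomposed DEM: each block's ghost cells (first and last
    slots) are filled from the adjacent block's border cells."""
    for i, b in enumerate(blocks):
        b[0] = blocks[i - 1][-2] if i > 0 else 0.0               # left ghost
        b[-1] = blocks[i + 1][1] if i < len(blocks) - 1 else 0.0  # right ghost
    return blocks

# Three blocks, each with four interior cells plus two ghost slots.
blocks = [np.array([0.0, *range(k, k + 4), 0.0]) for k in (0, 10, 20)]
blocks = exchange_ghost_layers(blocks)
```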

  12. 77 FR 8209 - Quality Assurance Requirements for Continuous Opacity Monitoring Systems at Stationary Sources

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-14

    ... Transfer and Advancement Act Section 12(d) of the National Technology Transfer and Advancement Act of 1995... Division, Measurement Technology Group (Mail Code: E143-02), Research Triangle Park, NC 27711; telephone... significant economic impact on a substantial number of small entities. Small entities include small businesses...

  13. A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong

    2013-01-01

    Based on the optimization and improvement of the construction method for the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low-density parity-check SCG-LDPC(3969,3720) code suitable for optical transmission systems is constructed. The novel SCG-LDPC(6561,6240) code with a code rate of 95.1% is constructed by increasing the length of the SCG-LDPC(3969,3720) code, so that the code rate of the LDPC code can better meet the high requirements of optical transmission systems. A novel concatenated code is then constructed by concatenating the SCG-LDPC(6561,6240) code and the BCH(127,120) code with a code rate of 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH(127,120)+SCG-LDPC(6561,6240) concatenated code is 2.28 dB and 0.48 dB more than those of the classic RS(255,239) code and the SCG-LDPC(6561,6240) code, respectively, at a bit error rate (BER) of 10^-7.
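    The quoted code rates follow directly from the (n, k) block-code parameters:

```python
def code_rate(n, k):
    """Rate of an (n, k) block code: information bits per coded bit."""
    return k / n

ldpc = code_rate(6561, 6240)   # SCG-LDPC(6561,6240): about 95.1%
bch = code_rate(127, 120)      # BCH(127,120): about 94.5%
```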

  14. Deterministic and unambiguous dense coding

    NASA Astrophysics Data System (ADS)

    Wu, Shengjun; Cohen, Scott M.; Sun, Yuqing; Griffiths, Robert B.

    2006-04-01

    Optimal dense coding using a partially-entangled pure state of Schmidt rank D̄ and a noiseless quantum channel of dimension D is studied both in the deterministic case, where at most L_d messages can be transmitted with perfect fidelity, and in the unambiguous case, where when the protocol succeeds (probability τ_x) Bob knows for sure that Alice sent message x, and when it fails (probability 1-τ_x) he knows it has failed. Alice is allowed any single-shot (one use) encoding procedure, and Bob any single-shot measurement. For D̄ ⩽ D a bound is obtained for L_d in terms of the largest Schmidt coefficient of the entangled state, and is compared with published results by Mozes [Phys. Rev. A 71, 012311 (2005)]. For D̄ > D it is shown that L_d is strictly less than D² unless D̄ is an integer multiple of D, in which case uniform (maximal) entanglement is not needed to achieve the optimal protocol. The unambiguous case is studied for D̄ ⩽ D, assuming τ_x > 0 for a set of D̄D messages, and a bound is obtained for the average ⟨1/τ⟩. A bound on the average ⟨τ⟩ requires an additional assumption of encoding by isometries (unitaries when D̄ = D) that are orthogonal for different messages. Both bounds are saturated when τ_x is a constant independent of x, by a protocol based on one-shot entanglement concentration. For D̄ > D it is shown that (at least) D² messages can be sent unambiguously. Whether unitary (isometric) encoding suffices for optimal protocols remains a major unanswered question, both for our work and for previous studies of dense coding using partially-entangled states, including noisy (mixed) states.

  15. Comparison of a 3-D multi-group SN particle transport code with Monte Carlo for intracavitary brachytherapy of the cervix uteri.

    PubMed

    Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas

    2009-12-03

    A patient dose distribution was calculated by a 3D multi-group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi-group SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within +/- 3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by not more than +/- 1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.
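    The runtimes reported in the abstract imply roughly a 40-fold speedup of Attila over MCNPX for this case:

```python
# Runtimes from the abstract, converted to a common unit.
mcnpx_minutes = 14.8 * 60    # MCNPX: 14.8 hours
attila_minutes = 22.2        # Attila: 22.2 minutes
speedup = mcnpx_minutes / attila_minutes
```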

  16. Comparison of a 3D multi‐group SN particle transport code with Monte Carlo for intercavitary brachytherapy of the cervix uteri

    PubMed Central

    Wareing, Todd A.; Failla, Gregory; Horton, John L.; Eifel, Patricia J.; Mourtada, Firas

    2009-01-01

    A patient dose distribution was calculated by a 3D multi‐group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs‐137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi‐group SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within ±3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by not more than ±1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs‐137 CT‐based patient geometry. Our data showed that a three‐group cross‐section set is adequate for Cs‐137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations. PACS number: 87.53.Jw

  17. Designing patient-specific 3D printed craniofacial implants using a novel topology optimization method.

    PubMed

    Sutradhar, Alok; Park, Jaejong; Carrau, Diana; Nguyen, Tam H; Miller, Michael J; Paulino, Glaucio H

    2016-07-01

    Large craniofacial defects require efficient bone replacements which should not only provide good aesthetics but also possess stable structural function. The proposed work uses a novel multiresolution topology optimization method to achieve the task. Using a compliance minimization objective, patient-specific bone replacement shapes can be designed for different clinical cases that ensure revival of efficient load transfer mechanisms in the mid-face. In this work, four clinical cases are introduced and their respective patient-specific designs are obtained using the proposed method. The optimized designs are then virtually inserted into the defect to visually inspect the viability of the design. Further, once the design is verified by the reconstructive surgeon, prototypes are fabricated using a 3D printer for validation. The robustness of the designs is mechanically tested by subjecting them to a physiological loading condition which mimics the masticatory activity. The full-field strain results from 3D image correlation and the finite element analysis imply that the solution can survive a maximum mastication load of 120 lb. Also, the designs have the potential to restore the buttress system and provide structural integrity. Using the topology optimization framework in designing bone replacement shapes would give surgeons new alternatives for complicated mid-face reconstruction.

  18. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, V.; Farvardin, N.

    1986-01-01

    The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is transmitted over a noisy memoryless channel; the method jointly designs the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed assuming no channel errors.
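    Under convexity, optimal bit assignment can be realized by greedy marginal-returns allocation; the sketch below uses the standard high-rate distortion model sigma^2 * 2**(-2b) per coefficient, an assumption standing in for the paper's channel-optimized quantizer statistics:

```python
import heapq

def allocate_bits(variances, total_bits):
    """Greedy (marginal-returns) bit allocation across transform
    coefficients, assuming per-coefficient distortion sigma^2 * 2**(-2b).
    Each bit goes to the coefficient with the largest distortion drop."""
    bits = [0] * len(variances)
    # Max-heap keyed on the (negated) distortion drop of the next bit:
    # sigma^2 * (2**0 - 2**-2) = 0.75 * sigma^2 for the first bit.
    heap = [(-0.75 * v, i) for i, v in enumerate(variances)]
    heapq.heapify(heap)
    for _ in range(total_bits):
        gain, i = heapq.heappop(heap)
        bits[i] += 1
        heapq.heappush(heap, (gain / 4.0, i))  # next bit gains 1/4 as much
    return bits

b = allocate_bits([16.0, 4.0, 1.0, 1.0], 6)
```

    High-variance coefficients receive more bits, reproducing the familiar logarithmic allocation rule in integer form.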

  19. Photoinduced charge-transfer electronic excitation of tetracyanoethylene/tetramethylethylene complex in dichloromethane

    NASA Astrophysics Data System (ADS)

    Xu, Long-Kun; Bi, Ting-Jun; Ming, Mei-Jun; Wang, Jing-Bo; Li, Xiang-Yuan

    2017-07-01

    Based on the authors' previous work on a nonequilibrium solvation model, the intermolecular charge-transfer electronic excitation of the tetracyanoethylene (TCE)/tetramethylethylene (TME) π-stacked complex in dichloromethane (DCM) has been investigated. To correct for weak interactions, the dispersion-corrected functional DFT-D3 is adopted for geometry optimization. To characterize the excitation, dipole moment components along each Cartesian direction, atomic charges, charge separation, and the Δr index are analyzed for the TCE/TME complex. Calculations show that the excitation energy depends on the choice of functional; when combined with a suitable time-dependent density functional, the modified nonequilibrium expression gives satisfactory results for the intermolecular charge-transfer electronic excitation.

  20. Data Sciences Summer Institute Topology Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, Seth

    DSSI_TOPOPT is a 2D topology optimization code that designs stiff structures made of a single linear elastic material and void space. The code generates a finite element mesh of a rectangular design domain on which the user specifies displacement and load boundary conditions. The code iteratively designs a structure that minimizes the compliance (maximizes the stiffness) of the structure under the given loading, subject to an upper bound on the amount of material used. Depending on user options, the code can evaluate the performance of a user-designed structure, or create a design from scratch. Output includes the finite element mesh, design, and visualizations of the design.
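    A one-dimensional toy version of compliance minimization under a volume bound (a SIMP-style penalized chain of springs in series under a unit load; all parameters are illustrative, and this is not the DSSI_TOPOPT formulation):

```python
import numpy as np

def compliance(x, p=3.0):
    """Compliance of a unit-load chain of springs in series with
    SIMP-style penalized stiffness k_i = x_i**p (toy stand-in for FEM)."""
    return np.sum(x ** -p)

def optimize(x, volume, p=3.0, lr=1e-3, iters=500):
    """Projected gradient descent on compliance under a volume cap:
    take a descent step, clip densities, rescale onto the budget."""
    for _ in range(iters):
        grad = -p * x ** (-p - 1.0)
        x = np.clip(x - lr * grad, 0.05, 1.0)
        x *= volume / x.sum()          # project back onto the volume budget
    return x

x0 = np.array([0.2, 0.4, 0.6, 0.8])
x_opt = optimize(x0.copy(), volume=x0.sum())
```

    For springs in series the optimum spreads material uniformly, so the iterates converge toward equal densities; a 2D FEM version replaces the toy compliance with an assembled stiffness matrix.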

  1. Version 2.0 Visual Sample Plan (VSP): UXO Module Code Description and Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, Richard O.; Wilson, John E.; O'Brien, Robert F.

    2003-05-06

    The Pacific Northwest National Laboratory (PNNL) is developing statistical methods for determining the amount of geophysical surveys conducted along transects (swaths) that are needed to achieve specified levels of confidence of finding target areas (TAs) of anomalous readings and possibly unexploded ordnance (UXO) at closed, transferring, and transferred (CTT) Department of Defense (DoD) ranges and other sites. The statistical methods developed by PNNL have been coded into the UXO module of the Visual Sample Plan (VSP) software code that is being developed by PNNL with support from the DoD, the U.S. Department of Energy (DOE), and the U.S. Environmental Protection Agency (EPA). (The VSP software and VSP Users Guide (Hassig et al, 2002) may be downloaded from http://dqo.pnl.gov/vsp.) This report describes and documents the statistical methods developed and the calculations and verification testing that have been conducted to verify that VSP's implementation of these methods is correct and accurate.

  2. A Method to Predict the Reliability of Military Ground Vehicles Using High Performance Computing

    DTIC Science & Technology

    2006-11-01

    Krayterman U.S. Army RDECOM-TARDEC Warren, MI 48397 K.K. Choi, Ed Hardee University of Iowa Coralville, IA 52242 Byeng D. Youn Michigan...University of Iowa, performed an optimization of the design for an A-arm on a military ground vehicle (a Stryker), using no sources of uncertainty...LSF for the queueing system. 3.3 Reliability/Fatigue Analysis software We used several pieces of proprietary code from the University of Iowa

  3. Laser-plasma interactions and implosion symmetry in rugby hohlraums

    NASA Astrophysics Data System (ADS)

    Michel, Pierre; Berger, R. L.; Lasinski, B. F.; Ross, J. S.; Divol, L.; Williams, E. A.; Meeker, D.; Langdon, B. A.; Park, H.; Amendt, P.

    2011-10-01

    Cross-beam energy transfer is studied in the context of "rugby"-hohlraum experiments at the Omega laser facility in FY11, in preparation for future NIF experiments. The transfer acts in the opposite direction between rugby and cylinder hohlraums due to the different beam pointing geometries and flow patterns. Its interaction with backscatter is also different as both happen in similar regions inside rugby hohlraums. We will analyze the effects of non-linearities and temporal beam smoothing on energy transfer using the code pF3d. Calculations will be compared to experiments at Omega; analysis of future rugby hohlraum experiments on NIF will also be presented. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  4. Multi-dimensional Core-Collapse Supernova Simulations with Neutrino Transport

    NASA Astrophysics Data System (ADS)

    Pan, Kuo-Chuan; Liebendörfer, Matthias; Hempel, Matthias; Thielemann, Friedrich-Karl

    We present multi-dimensional core-collapse supernova simulations using the Isotropic Diffusion Source Approximation (IDSA) for the neutrino transport and a modified potential for general relativity in two different supernova codes: FLASH and ELEPHANT. Due to the complexity of the core-collapse supernova explosion mechanism, simulations require not only high-performance computers and the exploitation of GPUs, but also sophisticated approximations to capture the essential microphysics. We demonstrate that the IDSA is an elegant and efficient neutrino radiation transfer scheme, which is portable to multiple hydrodynamics codes and fast enough to investigate long-term evolutions in two and three dimensions. Simulations with a 40 solar mass progenitor are presented in both FLASH (1D and 2D) and ELEPHANT (3D) as an extreme test condition. It is found that the black hole formation time is delayed in multiple dimensions and we argue that the strong standing accretion shock instability before black hole formation will lead to strong gravitational waves.

  5. Biodegradation of paint stripper solvents in a modified gas lift loop bioreactor.

    PubMed

    Vanderberg-Twary, L; Steenhoudt, K; Travis, B J; Hanners, J L; Foreman, T M; Brainard, J R

    1997-07-05

    Paint stripping wastes generated during the decontamination and decommissioning of former nuclear facilities contain paint stripping organics (dichloromethane, 2-propanol, and methanol) and bulk materials containing paint pigments. It is desirable to degrade the organic residues as part of an integrated chemical-biological treatment system. We have developed a modified gas lift loop bioreactor employing a defined consortium of Rhodococcus rhodochrous strain OFS and Hyphomicrobium sp. DM-2 that degrades paint stripper organics. Mass transfer coefficients and kinetic constants for biodegradation in the system were determined. It was found that transfer of organic substrates from surrogate waste into the air and further into the liquid medium in the bioreactor was a rapid process, occurring within minutes. Monod kinetics was employed to model the biodegradation of paint stripping organics. Analysis of the bioreactor process was accomplished with BIOLAB, a mathematical code that simulates coupled mass transfer and biodegradation processes. This code was used to fit experimental data to Monod kinetics and to determine kinetic parameters. The BIOLAB code was also employed to compare activities in the bioreactor of individual microbial cultures to the activities of combined cultures in the bioreactor. This code is of benefit for further optimization and scale-up of the bioreactor for treatment of paint stripping and other volatile organic wastes in bulk materials.
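    The Monod kinetics used to model the biodegradation can be sketched with a simple forward-Euler integration (all parameter values below are illustrative, not the fitted constants from BIOLAB):

```python
def monod_biodegradation(s0, x0, mu_max=0.5, ks=2.0, y=0.4,
                         dt=0.01, t_end=24.0):
    """Forward-Euler integration of Monod growth kinetics:
        mu(S) = mu_max * S / (Ks + S)
        dX/dt = mu(S) * X,   dS/dt = -mu(S) * X / Y
    S is substrate, X biomass, Y the yield coefficient. Parameter
    values are illustrative, not fitted bioreactor constants."""
    s, x = s0, x0
    for _ in range(int(t_end / dt)):
        mu = mu_max * s / (ks + s)
        dx = mu * x * dt
        ds = -mu * x / y * dt
        x += dx
        s = max(s + ds, 0.0)
    return s, x

s_final, x_final = monod_biodegradation(s0=10.0, x0=0.1)
```

    Biomass grows at the expense of substrate, and the two are tied together by the yield coefficient.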

  6. Joint-layer encoder optimization for HEVC scalable extensions

    NASA Astrophysics Data System (ADS)

    Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong

    2014-09-01

    Scalable video coding provides an efficient solution to support video playback on heterogeneous devices under various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard, built on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction, including texture and motion information generated from the base layer, is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because the rate-distortion optimization (RDO) processes in the base and enhancement layers are considered independently. It is difficult to directly extend existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and the in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve these problems, a joint-layer optimization method is proposed that adjusts the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to allocate bits more appropriately, the proposed method also considers the viewing probability of the base and enhancement layers according to the packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) in lower layers that are referenced by higher layers are decreased accordingly, and the QP values of the remaining CTUs are increased to keep the total bits unchanged. Finally, the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method improves coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.
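
    The viewing-probability-weighted joint-layer cost described above can be sketched as follows. This is a hypothetical simplification, not the SHVC reference implementation: per-layer distortions are weighted by the probability that each layer is actually viewed, and a QP offset is chosen to minimize the resulting cost. The `evaluate` callable is an assumed stand-in for a real encoder pass returning (distortion, rate) pairs.

```python
# Hypothetical sketch of a viewing-probability-weighted joint-layer RD cost;
# the actual SHVC cost function and QP-offset search may differ.
def joint_rd_cost(d_base, r_base, d_enh, r_enh, lam, p_base):
    """Expected distortion across layers plus lambda times the total rate."""
    p_enh = 1.0 - p_base          # probability the enhancement layer is viewed
    expected_d = p_base * d_base + p_enh * d_enh
    return expected_d + lam * (r_base + r_enh)

def pick_qp_offset(candidates, evaluate, lam, p_base):
    """Choose the QP offset minimizing the joint-layer cost.
    evaluate(q) -> (d_base, r_base, d_enh, r_enh) for that offset."""
    return min(candidates,
               key=lambda q: joint_rd_cost(*evaluate(q), lam=lam, p_base=p_base))

# Toy (D, R) model in which distortion grows quadratically away from offset 0.
best = pick_qp_offset([-2, 0, 2],
                      lambda q: (100.0 + q * q, 50.0, 80.0 + q * q, 60.0),
                      lam=1.0, p_base=0.5)
```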

  7. The complete mitochondrial genome of Pholis nebulosus (Perciformes: Pholidae).

    PubMed

    Wang, Zhongquan; Qin, Kaili; Liu, Jingxi; Song, Na; Han, Zhiqiang; Gao, Tianxiang

    2016-11-01

    In this study, the complete mitochondrial genome (mitogenome) sequence of Pholis nebulosus was determined by long polymerase chain reaction and primer-walking methods. The mitogenome is a circular molecule of 16 524 bp in length, including the typical structure of 13 protein-coding genes, 2 ribosomal RNA genes, 22 transfer RNA genes and 2 non-coding regions (the L-strand replication origin and the control region); the gene contents are identical to those observed in most bony fishes. Within the control region, we identified the termination-associated sequence domain (TAS) and the conserved sequence block domains (CSB-F, CSB-E, CSB-D, CSB-C, CSB-B, CSB-A, CSB-1, CSB-2, CSB-3).

  8. Unstructured Polyhedral Mesh Thermal Radiation Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmer, T.S.; Zika, M.R.; Madsen, N.K.

    2000-07-27

    Unstructured mesh particle transport and diffusion methods are gaining wider acceptance as mesh generation, scientific visualization and linear solvers improve. This paper describes an algorithm that is currently being used in the KULL code at Lawrence Livermore National Laboratory to solve the radiative transfer equations. The algorithm employs a point-centered diffusion discretization on arbitrary polyhedral meshes in 3D. We present the results of a few test problems to illustrate the capabilities of the radiation diffusion module.

  9. Neighboring block based disparity vector derivation for multiview compatible 3D-AVC

    NASA Astrophysics Data System (ADS)

    Kang, Jewon; Chen, Ying; Zhang, Li; Zhao, Xin; Karczewicz, Marta

    2013-09-01

    3D-AVC, being developed under the Joint Collaborative Team on 3D Video Coding (JCT-3V), significantly outperforms Multiview Video Coding plus Depth (MVC+D), which encodes texture views and depth views simultaneously with the multiview extension of H.264/AVC (MVC). However, when 3D-AVC is configured to support multiview compatibility, in which texture views are decoded without depth information, the coding performance degrades significantly. The reason is that the advanced coding tools incorporated into 3D-AVC do not perform well without a disparity vector converted from the depth information. In this paper, we propose a disparity vector derivation method that utilizes only the information of texture views. Motion information of neighboring blocks is used to determine a disparity vector for a macroblock, so that the derived disparity vector can be used efficiently by the coding tools in 3D-AVC. The proposed method significantly improves the coding gain of 3D-AVC in the multiview compatible mode, yielding about 20% BD-rate savings in the coded views and 26% BD-rate savings in the synthesized views on average.
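
    The neighboring-block scan at the heart of such a derivation can be sketched schematically. This is a simplified illustration under assumed data structures, not the normative 3D-AVC procedure: neighbors are scanned in a fixed order and the first available inter-view motion vector is reused as the disparity vector.

```python
# Schematic neighboring-block disparity vector scan (simplified, not the
# normative 3D-AVC derivation): return the first inter-view motion vector
# found among the spatial neighbors, else fall back to a zero vector.
def derive_disparity_vector(neighbors):
    """neighbors: list of dicts with an 'is_interview' flag and 'mv' (dx, dy),
    or None for unavailable blocks."""
    for blk in neighbors:
        if blk is not None and blk.get('is_interview'):
            return blk['mv']          # reuse the neighbor's disparity vector
    return (0, 0)                     # no neighbor carries disparity information

left = {'is_interview': False, 'mv': (3, 0)}    # temporal motion, skipped
above = {'is_interview': True, 'mv': (7, 0)}    # inter-view motion, reused
dv = derive_disparity_vector([left, above, None])
```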

  10. Next-generation sequencing yields the complete mitochondrial genome of the flathead mullet, Mugil cephalus cryptic species in East Australia (Teleostei: Mugilidae).

    PubMed

    Shen, Kang-Ning; Chen, Ching-Hung; Hsiao, Chung-Der; Durand, Jean-Dominique

    2016-09-01

    In this study, the complete mitogenome sequence of a cryptic species from East Australia (Mugil sp. H) belonging to the worldwide Mugil cephalus species complex (Teleostei: Mugilidae) was sequenced by a next-generation sequencing method. The assembled mitogenome, consisting of 16,845 bp, has the typical vertebrate mitochondrial gene arrangement, including 13 protein-coding genes, 22 transfer RNA genes, 2 ribosomal RNA genes and a non-coding control region (D-loop). The D-loop is 1067 bp in length and is located between tRNA-Pro and tRNA-Phe. The overall base composition of East Australian M. cephalus is 28.4% A, 29.3% C, 15.4% G and 26.9% T. The complete mitogenome may provide essential DNA molecular data for further phylogenetic and evolutionary analyses of the flathead mullet species complex.
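
    The base-composition percentages quoted in these mitogenome reports can be reproduced from an assembled sequence with a few lines of code; the short sequence below is a toy stand-in for the 16,845 bp assembly.

```python
# Per-base composition of a nucleotide sequence, as percentages of A/C/G/T.
from collections import Counter

def base_composition(seq):
    """Return {base: percentage} over the A, C, G, T content of seq."""
    counts = Counter(seq.upper())
    total = sum(counts[b] for b in "ACGT")
    return {b: 100.0 * counts[b] / total for b in "ACGT"}

comp = base_composition("ACGTACGTAA")   # toy sequence: 4 A, 2 C, 2 G, 2 T
```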

  11. Next generation sequencing yields the complete mitochondrial genome of the flathead mullet, Mugil cephalus cryptic species NWP2 (Teleostei: Mugilidae).

    PubMed

    Shen, Kang-Ning; Yen, Ta-Chi; Chen, Ching-Hung; Li, Huei-Ying; Chen, Pei-Lung; Hsiao, Chung-Der

    2016-05-01

    In this study, the complete mitogenome sequence of the Northwestern Pacific 2 (NWP2) cryptic species of flathead mullet, Mugil cephalus (Teleostei: Mugilidae), was amplified by long-range PCR and sequenced by a next-generation sequencing method. The assembled mitogenome, consisting of 16,686 bp, has the typical vertebrate mitochondrial gene arrangement, including 13 protein-coding genes, 22 transfer RNA genes, 2 ribosomal RNA genes and a non-coding control region (D-loop). The D-loop is 909 bp in length and is located between tRNA-Pro and tRNA-Phe. The overall base composition of NWP2 M. cephalus is 28.4% A, 29.8% C, 26.5% T and 15.3% G. The complete mitogenome may provide essential DNA molecular data for further phylogenetic and evolutionary analyses of the flathead mullet species complex.

  12. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    NASA Technical Reports Server (NTRS)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of the master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by strategies that utilize Bézier curves/surfaces and splines along with a weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D and 3D grids are provided to illustrate the success of these methods.

  13. Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.

    PubMed

    Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik

    2014-06-16

    Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.

  14. Optimization of compressive 4D-spatio-spectral snapshot imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Xia; Feng, Weiyi; Lin, Lihua; Su, Wu; Xu, Guoqing

    2017-10-01

    In this paper, a modified 3D computational reconstruction method for the compressive 4D-spectro-volumetric snapshot imaging system is proposed for better sensing of the spectral information of 3D objects. In the imaging system, a microlens array (MLA) is used to obtain a set of multi-view elemental images (EIs) of the 3D scene. These elemental images, with one-dimensional spectral information and different perspectives, are then captured by the coded aperture snapshot spectral imager (CASSI), which compresses the spectral data cube onto a 2D measurement image. Finally, the depth images of 3D objects at arbitrary depths, like a focal stack, are computed by inversely mapping the elemental images according to geometrical optics. With the spectral estimation algorithm, the spectral information of the 3D objects is also reconstructed. Using a shifted translation matrix, the contrast of the reconstruction result is further enhanced. Numerical simulation results verify the performance of the proposed method. The system can obtain both 3D spatial information and spectral data on 3D objects in a single snapshot, which is valuable for agricultural harvesting robots and other 3D dynamic scenes.

  15. 23 CFR 710.601 - Federal land transfer.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 23 Highways 1 2012-04-01 2012-04-01 false Federal land transfer. 710.601 Section 710.601 Highways... highway projects that are eligible for Federal-aid under Chapters 1 and 2 of title 23, of the United... Stat. 1808, as amended). (b) Sections 107(d) and 317 of title 23, of the United States Code provide for...

  16. 3D Modeling of Ultrasonic Wave Interaction with Disbonds and Weak Bonds

    NASA Technical Reports Server (NTRS)

    Leckey, C.; Hinders, M.

    2011-01-01

    Ultrasonic techniques, such as the use of guided waves, can be ideal for finding damage in the plate- and pipe-like structures used in aerospace applications. However, the interaction of waves with real flaw types and geometries can lead to experimental signals that are difficult to interpret. Three-dimensional (3D) elastic wave simulations can be a powerful tool for understanding the complicated wave scattering involved in flaw detection and for optimizing experimental techniques. We have developed and implemented a parallel 3D elastodynamic finite integration technique (3D EFIT) code to investigate Lamb wave scattering from realistic flaws. This paper discusses simulation results for an aluminum-aluminum diffusion disbond and an aluminum-epoxy disbond, and compares results from the disbond case to the common artificial flaw type of a flat-bottom hole. The paper also discusses the potential for extending the 3D EFIT equations to incorporate physics-based weak bond models for simulating wave scattering from weak adhesive bonds.

  17. Layered rare-earth hydroxide and oxide nanoplates of the Y/Tb/Eu system: phase-controlled processing, structure characterization and color-tunable photoluminescence via selective excitation and efficient energy transfer.

    PubMed

    Wu, Xiaoli; Li, Ji-Guang; Li, Jinkai; Zhu, Qi; Li, Xiaodong; Sun, Xudong; Sakka, Yoshio

    2013-02-01

    Well-crystallized (Y0.97-xTb0.03Eux)2(OH)5NO3·nH2O (x = 0-0.03) layered rare-earth hydroxide (LRH) nanoflakes of a pure high-hydration phase have been produced by autoclaving from the nitrate/NH4OH reaction system under the optimized conditions of 100 °C and pH ∼7.0. The flakes were then converted into (Y0.97-xTb0.03Eux)2O3 phosphor nanoplates with color-tunable photoluminescence. Detailed structural characterizations confirmed that the LRH solid solutions contained NO3- anions intercalated between the layers. Characteristic Tb3+ and Eu3+ emissions were detected in the ternary LRHs by selectively exciting the two types of activators, and energy transfer from Tb3+ to Eu3+ was observed. Annealing the LRHs at 1100 °C produced cubic-lattice (Y0.97-xTb0.03Eux)2O3 solid-solution nanoplates with exposed 222 facets. Multicolor, intensity-adjustable luminescence was attained by varying the excitation wavelength from ∼249 nm (the charge transfer excitation band of Eu3+) to 278 nm (the 4f8-4f75d1 transition of Tb3+). Utilizing the efficient Tb3+ to Eu3+ energy transfer, the emission color of (Y0.97-xTb0.03Eux)2O3 was tuned from approximately green to yellowish-orange by varying the Eu3+/Tb3+ ratio. At the optimal Eu3+ content of x = 0.01, the efficiency of energy transfer was ∼91%, and the transfer mechanism was suggested to be electric multipole interactions. The phosphor nanoplates developed in this work may be incorporated in luminescent films and find various lighting and display applications.
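
    Transfer efficiencies like the ∼91% quoted above are commonly estimated from the quenching of the donor emission, eta = 1 - I_S/I_S0, where I_S and I_S0 are the donor (Tb3+) intensities with and without the acceptor (Eu3+). The intensities in the sketch below are illustrative values chosen to reproduce the 91% figure, not measured data.

```python
# Energy-transfer efficiency from donor-emission quenching: eta = 1 - I_S/I_S0.
# The intensity values are illustrative, chosen only to reproduce ~91%.
def transfer_efficiency(i_donor_with_acceptor, i_donor_alone):
    """Fraction of donor excitation energy passed to the acceptor."""
    return 1.0 - i_donor_with_acceptor / i_donor_alone

eta = transfer_efficiency(9.0, 100.0)   # donor emission quenched from 100 to 9
```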

  18. Implementation of Soft X-ray Tomography on NSTX

    NASA Astrophysics Data System (ADS)

    Tritz, K.; Stutman, D.; Finkenthal, M.; Granetz, R.; Menard, J.; Park, W.

    2003-10-01

    A set of poloidal ultrasoft X-ray arrays is operated by the Johns Hopkins group on NSTX. To enable MHD mode analysis independent of the magnetic reconstruction, the McCormick-Granetz tomography code developed at MIT is being adapted to the NSTX geometry. Tests of the code using synthetic data show that the present X-ray system is adequate for m=1 tomography. In addition, we have found that spline basis functions may be better suited than Bessel functions for the reconstruction of radially localized phenomena in NSTX. The tomography code was also used to determine the necessary array expansion and optimal array placement for the characterization of higher m modes (m=2,3) in the future. Initial reconstruction of experimental soft X-ray data has been performed for m=1 internal modes, which are often encountered in high beta NSTX discharges. The reconstruction of these modes will be compared to predictions from the M3D code and to magnetic measurements.

  19. The STAGGER-grid: A grid of 3D stellar atmosphere models. V. Synthetic stellar spectra and broad-band photometry

    NASA Astrophysics Data System (ADS)

    Chiavassa, A.; Casagrande, L.; Collet, R.; Magic, Z.; Bigot, L.; Thévenin, F.; Asplund, M.

    2018-03-01

    Context. The surface structures and dynamics of cool stars are characterised by the presence of convective motions and turbulent flows which shape the emergent spectrum. Aims: We used realistic three-dimensional (3D) radiative hydrodynamical simulations from the STAGGER-grid to calculate synthetic spectra with the radiative transfer code OPTIM3D for stars with different stellar parameters, to predict photometric colours and convective velocity shifts. Methods: We calculated spectra from 1000 to 200 000 Å with a constant resolving power of λ/Δλ = 20 000, and from 8470 to 8710 Å (the Gaia Radial Velocity Spectrometer - RVS - spectral range) with a constant resolving power of λ/Δλ = 300 000. Results: We used the synthetic spectra to compute theoretical colours in the Johnson-Cousins UBV(RI)C, SDSS, 2MASS, Gaia, SkyMapper, Strömgren, and HST-WFC3 systems. Our synthetic magnitudes are compared with those obtained using 1D hydrostatic models. We showed that 1D versus 3D differences are limited to a small percent, except for the narrow filters that span the optical and UV region of the spectrum. In addition, we derived the effect of the convective velocity fields on selected Fe I lines. We found the overall convective shift for 3D simulations with respect to the reference 1D hydrostatic models, revealing line shifts between -0.235 and +0.361 km s-1. We showed a net correlation of the convective shifts with the effective temperature: lower effective temperatures denote redshifts and higher effective temperatures denote blueshifts. We conclude that the extraction of accurate radial velocities from RVS spectra needs an appropriate wavelength correction for convective shifts. Conclusions: The use of realistic 3D hydrodynamical stellar atmosphere simulations has a small but significant impact on the predicted photometry compared with classical 1D hydrostatic models for late-type stars. We make all the spectra publicly available for the community through the POLLUX database.
Tables 5-8 are only available at the CDS and Table B.1 is also available at the CDS and via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A11

  20. RECOVERY FROM GIANT ERUPTIONS IN VERY MASSIVE STARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashi, Amit; Davidson, Kris; Humphreys, Roberta M., E-mail: kashi@astro.umn.edu

    2016-01-20

    We use a hydro-and-radiative-transfer code to explore the behavior of a very massive star (VMS) after a giant eruption, i.e., following a supernova impostor event. Beginning with reasonable models for evolved VMSs with masses of 80 M⊙ and 120 M⊙, we simulate the change of state caused by a giant eruption via two methods that explicitly conserve total energy: (1) synthetically removing outer layers of mass of a few M⊙ while reducing the energy of the inner layers; (2) synthetically transferring energy from the core to the outer layers, an operation that automatically causes mass ejection. Our focus is on the aftermath, not the poorly understood eruption itself. Then, using a radiation-hydrodynamic code in 1D with realistic opacities and convection, the interior disequilibrium state is followed for about 200 years. Typically the star develops a ∼400 km s−1 wind with a mass loss rate that begins around 0.1 M⊙ yr−1 and gradually decreases. This outflow is driven by κ-mechanism radial pulsations. The 1D models have regular pulsations but 3D models will probably be more chaotic. In some cases a plateau in the mass-loss rate may persist for about 200 years, while other cases are more like η Car, which lost >10 M⊙ and then had an abnormal mass loss rate for more than a century after its eruption. In our model, the post-eruption outflow carried more mass than the initial eruption. These simulations constitute a useful preliminary reconnaissance for 3D models, which will be far more difficult.

  1. Defining an optimal surface chemistry for pluripotent stem cell culture in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Zonca, Michael R., Jr.

    Surface chemistry is critical for growing pluripotent stem cells in an undifferentiated state. There is great potential to engineer surface chemistry at the nanoscale to regulate stem cell adhesion. The challenge, however, is to identify the optimal surface chemistry of the substrata for ES cell attachment and maintenance. Using a high-throughput polymerization and screening platform, a chemically defined, synthetic polymer-grafted coating that supports strong attachment and high expansion capacity of pluripotent stem cells has been discovered, using mouse embryonic stem (ES) cells as a model system. This optimal substrate, N-[3-(Dimethylamino)propyl] methacrylamide (DMAPMA) grafted on a 2D synthetic poly(ether sulfone) (PES) membrane, sustains the self-renewal of ES cells (up to 7 passages). DMAPMA supports attachment of ES cells through integrin beta1 in an RGD-independent manner and is similar to another recently reported polymer surface. Next, DMAPMA was transferred to 3D by grafting onto synthetic, polymeric, PES fibrous matrices through both photo-induced and plasma-induced polymerization. These 3D modified fibers exhibited higher cell proliferation and greater expression of pluripotency markers of mouse ES cells than 2D PES membranes. Our results indicated that desirable surfaces in 2D can be scaled to 3D, and that both surface chemistry and structural dimension strongly influence the growth and differentiation of pluripotent stem cells. Lastly, the feasibility of incorporating DMAPMA into a widely used natural polymer, alginate, has been tested. Novel adhesive alginate hydrogels have been successfully synthesized by either direct polymerization of DMAPMA and methacrylic acid blended with alginate, or photo-induced DMAPMA polymerization on alginate nanofibrous hydrogels. In particular, DMAPMA-coated alginate hydrogels support strong ES cell attachment, exhibiting a concentration dependency on DMAPMA. 
This research provides a new avenue for stem cell culture and maintenance using an optimal organic-based chemistry.

  2. Assessment of a 3-D boundary layer code to predict heat transfer and flow losses in a turbine

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.

    1984-01-01

    Zonal concepts are utilized to delineate regions of application of three-dimensional boundary layer (3-DBL) theory. The zonal approach requires three distinct analyses. A modified version of the 3-DBL code named TABLET is used to analyze the boundary layer flow. This modified code solves the finite difference form of the compressible 3-DBL equations in a nonorthogonal surface coordinate system which includes Coriolis forces produced by coordinate rotation. These equations are solved using an efficient, implicit, fully coupled finite difference procedure. The nonorthogonal surface coordinate system is calculated using a general analysis based on the transfinite mapping of Gordon, which is valid for any arbitrary surface. Experimental data are used to determine the boundary layer edge conditions: these are obtained by integrating the boundary layer edge equations, which are the Euler equations at the edge of the boundary layer, using the known experimental wall pressure distribution. Starting solutions along the inflow boundaries are estimated by solving the appropriate limiting form of the 3-DBL equations.

  3. Modeling and validation of heat and mass transfer in individual coffee beans during the coffee roasting process using computational fluid dynamics (CFD).

    PubMed

    Alonso-Torres, Beatriz; Hernández-Pérez, José Alfredo; Sierra-Espinoza, Fernando; Schenker, Stefan; Yeretzian, Chahan

    2013-01-01

    Heat and mass transfer in individual coffee beans during roasting were simulated using computational fluid dynamics (CFD). Numerical equations for heat and mass transfer inside the coffee bean were solved using the finite volume technique in the commercial CFD code Fluent; the software was complemented with specific user-defined functions (UDFs). To experimentally validate the numerical model, a single coffee bean was placed in a cylindrical glass tube and roasted by a hot air flow, using the identical geometrical 3D configuration and hot air flow conditions as the ones used for numerical simulations. Temperature and humidity calculations obtained with the model were compared with experimental data. The model predicts the actual process quite accurately and represents a useful approach to monitor the coffee roasting process in real time. It provides valuable information on time-resolved process variables that are otherwise difficult to obtain experimentally, but critical to a better understanding of the coffee roasting process at the individual bean level. This includes variables such as time-resolved 3D profiles of bean temperature and moisture content, and temperature profiles of the roasting air in the vicinity of the coffee bean.
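
    The finite-volume energy balance solved by such a model can be illustrated in one dimension. The sketch below is a minimal explicit scheme for a uniform slab heated by hot air held at both faces; the material properties and geometry are illustrative, not coffee-specific, and the paper's actual model is a full 3D Fluent simulation with user-defined functions.

```python
# Minimal explicit 1D finite-volume heat-conduction sketch (illustrative;
# alpha, dx, dt and the boundary treatment are placeholder assumptions).
def heat_fv_1d(n=20, alpha=1.4e-7, dx=2.5e-4, dt=0.05, steps=400,
               t_init=25.0, t_air=200.0):
    """Uniform slab of n cells, both faces held at the hot-air temperature."""
    T = [t_init] * n
    r = alpha * dt / dx**2            # explicit stability requires r <= 0.5
    for _ in range(steps):
        Tn = T[:]
        for i in range(n):
            left = T[i - 1] if i > 0 else t_air
            right = T[i + 1] if i < n - 1 else t_air
            Tn[i] = T[i] + r * (left - 2 * T[i] + right)
        T = Tn
    return T

profile = heat_fv_1d()   # temperatures rise from the faces toward the center
```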

  4. Thin-layer and full Navier-Stokes calculations for turbulent supersonic flow over a cone at an angle of attack

    NASA Technical Reports Server (NTRS)

    Smith, Crawford F.; Podleski, Steve D.

    1993-01-01

    The proper use of a computational fluid dynamics code requires a good understanding of the particular code being applied. In this report, results obtained with CFL3D, a thin-layer Navier-Stokes code, are compared with those obtained from PARC3D, a full Navier-Stokes code. To gain an understanding of the use of CFL3D, a simple problem was chosen in which several key features of the code could be exercised: a cone in supersonic flow at an angle of attack. The issues of grid resolution, grid blocking, and multigridding with CFL3D are explored. The use of multigridding resulted in a significant reduction in the computational time required to solve the problem. The solutions obtained with CFL3D compared well with the PARC3D solutions.

  5. Compressive Coded-Aperture Multimodal Imaging Systems

    NASA Astrophysics Data System (ADS)

    Rueda-Chacon, Hoover F.

    Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, and polarization. For instance, spectral images are modeled as 3D cubes with two spatial coordinates and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes. Imaging cubes varying in time, spectrum or depth are referred to as 4D images. Nature itself spans different physical domains, so imaging our real world demands capturing information in at least 6 different domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to 3 spectral channels in real time through the use of color sensors. Capturing more spectral channels requires scanning methodologies, which demand long acquisition times. In general, to date, multimodal imaging requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene; different fusion techniques are then employed to merge all the individual information into a single image. Therefore, new ways to efficiently capture more than 3 spectral channels of 3D time-varying spatial information, in a single sensor or a few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures that follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization algorithm to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS). 
    To date, the coding procedure in CSI has been realized through the use of "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures block, or permit to pass, the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In the first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn allow the input image to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures is presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in reconstruction quality compared with conventional block-unblock coded aperture-based CSI architectures. On another front, given the rich information contained in the infrared spectrum as well as the depth domain, this thesis explores multimodal imaging by extending the range sensitivity of current CSI systems to a dual-band visible+near-infrared spectral domain, and it also proposes, for the first time, a new imaging device that captures 4D data cubes (2D spatial+1D spectral+depth imaging) with as few as a single snapshot. Due to the snapshot advantage of this camera, video sequences are possible, enabling the joint capture of 5D imagery. The aim is to create super-human sensing that will enable the perception of our world in new and exciting ways. 
    With this, we intend to advance the state of the art in compressive sensing systems to extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer and robotic vision, because it would allow an artificial intelligence to make informed decisions about not only the location of objects within a scene but also their material properties.
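
    The recovery step described above (estimating a sparse scene from a few random projections) can be sketched with the classic iterative shrinkage-thresholding algorithm (ISTA). The tiny measurement matrix below is illustrative, not a real CASSI forward model; it only shows the gradient-step-plus-soft-threshold structure such solvers share.

```python
# Toy ISTA sketch for compressive recovery: min ||Ax - y||^2/2 + lam*||x||_1.
# The matrix A and the sizes are illustrative, not a real CASSI model.
def ista(A, y, lam=0.1, step=0.1, iters=500):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # gradient of the data term: A^T (A x - y)
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by soft-thresholding (the L1 proximal map)
        for j in range(n):
            v = x[j] - step * g[j]
            x[j] = max(abs(v) - step * lam, 0.0) * (1 if v > 0 else -1)
    return x

# Underdetermined toy system: 2 measurements, 3 unknowns, sparse truth [0,0,2].
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
y = [2.0, 2.0]
x_hat = ista(A, y)
```

The L1 penalty is what selects the sparse solution [0, 0, ~2] over the denser alternative [2, 2, 0], which fits the measurements equally well.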

  6. Red emission enhancement from CaMoO4:Eu3+ by co-doping of Bi3+ for near UV/blue LED pumped white pcLEDs: Energy transfer studies

    NASA Astrophysics Data System (ADS)

    Wangkhem, Ranjoy; Yaba, Takhe; Shanta Singh, N.; Ningthoujam, R. S.

    2018-03-01

    CaMoO4:Eu3+ (3 at. %)/Bi3+ (x at. %) nanophosphors were synthesized hydrothermally. All the samples can be excited at 280, 320, 393, and 464 nm (blue) for generation of red emission. Enhancement of the 5D0 → 7F2 (615 nm) emission (f-f transition) of Eu3+ is observed when Bi3+ is incorporated in CaMoO4:Eu3+. This is due to efficient energy transfer from Bi3+ to Eu3+ ions. Introduction of Bi3+ into the system does not change the emission wavelength of Eu3+. However, Bi3+ incorporation induces a shift in the Mo-O charge transfer band absorption from 295 to 270 nm. This may be due to an increase in the electronegativity difference between Mo and O in the presence of Bi3+, leading to a change in the crystal field environment of Mo6+ in MoO42-. At the optimal concentration of Bi3+, an enhancement in emission by a factor of ˜10 and 4.2 is observed under excitation at 393 nm (7F0 → 5L6) and 464 nm (7F0 → 5D2), respectively. The energy transfer efficiency from Bi3+ to Eu3+ increases from 75% to 96%. The energy transfer is observed to occur mainly via dipole-dipole interactions. A maximum quantum yield of 55% is observed from annealed CaMoO4:Eu3+ (3 at. %) sensitized with Bi3+ (15 at. %) under 464 nm excitation. From the Commission Internationale de l'Eclairage chromaticity coordinates, the color (red) saturation is observed to be nearly 100%.

  7. 3-D Inhomogeneous Radiative Transfer Model using a Planar-stratified Forward RT Model and Horizontal Perturbation Series

    NASA Astrophysics Data System (ADS)

    Zhang, K.; Gasiewski, A. J.

    2017-12-01

A horizontally inhomogeneous unified microwave radiative transfer (HI-UMRT) model based upon a nonspherical hydrometeor scattering model is being developed at the University of Colorado at Boulder to facilitate forward radiative simulations for 3-dimensionally inhomogeneous clouds in severe weather. The HI-UMRT 3-D analytical solution is based on incorporating a planar-stratified 1-D UMRT algorithm within a horizontally inhomogeneous iterative perturbation scheme. Single-scattering parameters are computed using the Discrete Dipole Scattering (DDSCAT v7.3) program for hundreds of carefully selected nonspherical complex frozen hydrometeors from the NASA/GSFC DDSCAT database. The required analytic factorization symmetry of the transition matrix in the normalized RT equation was proved analytically and validated numerically using the DDSCAT-based full Stokes matrix of randomly oriented hydrometeors. The HI-UMRT model thus inherits the properties of unconditional numerical stability, efficiency, and accuracy from the UMRT algorithm and provides a practical 3-D two-Stokes-parameter radiance solution with Jacobian for use within microwave retrievals and data assimilation schemes. In addition, a fast forward radar reflectivity operator with Jacobian, based on DDSCAT backscatter efficiencies computed for large hydrometeors, is incorporated into the HI-UMRT model to provide applicability to active radar sensors. HI-UMRT will be validated at two levels: 1) intercomparison of brightness temperature (Tb) results with those of several 1-D and 3-D RT models, including UMRT, CRTM, and Monte Carlo models; and 2) intercomparison of Tb with observed data from combined passive and active spaceborne sensors (e.g. GPM GMI and DPR). A precise expression for the number of 3-D iterations required to achieve a given error bound on the perturbation solution will be developed to facilitate numerical verification of the HI-UMRT code's complexity and computational performance.

  8. Three-dimensional data-tracking dynamic optimization simulations of human locomotion generated by direct collocation.

    PubMed

    Lin, Yi-Chung; Pandy, Marcus G

    2017-07-05

The aim of this study was to perform full-body three-dimensional (3D) dynamic optimization simulations of human locomotion by driving a neuromusculoskeletal model toward in vivo measurements of body-segmental kinematics and ground reaction forces. Gait data were recorded from 5 healthy participants who walked at their preferred speeds and ran at 2 m/s. Participant-specific data-tracking dynamic optimization solutions were generated for one stride cycle using direct collocation in tandem with an OpenSim-MATLAB interface. The body was represented as a 12-segment, 21-degree-of-freedom skeleton actuated by 66 muscle-tendon units. Foot-ground interaction was simulated using six contact spheres under each foot. The dynamic optimization problem was to find the set of muscle excitations needed to reproduce 3D measurements of body-segmental motions and ground reaction forces while minimizing the time integral of muscle activations squared. Direct collocation took on average 2.7 ± 1.0 h and 2.2 ± 1.6 h of CPU time, respectively, to solve the optimization problems for walking and running. Model-computed kinematics and foot-ground forces were in good agreement with corresponding experimental data while the calculated muscle excitation patterns were consistent with measured EMG activity. The results demonstrate the feasibility of implementing direct collocation on a detailed neuromusculoskeletal model with foot-ground contact to accurately and efficiently generate 3D data-tracking dynamic optimization simulations of human locomotion. The proposed method offers a viable tool for creating feasible initial guesses needed to perform predictive simulations of movement using dynamic optimization theory. The source code for implementing the model and computational algorithm may be downloaded at http://simtk.org/home/datatracking. Copyright © 2017 Elsevier Ltd. All rights reserved.
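The data-tracking direct collocation described above can be sketched in miniature. The following is a hypothetical one-degree-of-freedom toy (not the paper's 21-DOF musculoskeletal model): a single integrator dq/dt = u must track a "measured" trajectory while the cost also penalizes the integral of u², mirroring the activation-squared term; states and controls at all nodes are decision variables and the dynamics become trapezoidal defect constraints.

```python
import numpy as np
from scipy.optimize import minimize

N = 21                                   # collocation nodes
t = np.linspace(0.0, 1.0, N)
dt = t[1] - t[0]
q_ref = np.sin(np.pi * t)                # synthetic kinematics to track

def unpack(z):
    return z[:N], z[N:]                  # states q, controls u

def cost(z):
    q, u = unpack(z)
    tracking = np.sum((q - q_ref) ** 2) * dt     # data-tracking term
    effort = np.sum(u ** 2) * dt                 # control-effort term
    return tracking + 1e-3 * effort

def defects(z):
    # Trapezoidal collocation: q[k+1] - q[k] - dt/2 * (u[k] + u[k+1]) = 0
    q, u = unpack(z)
    return q[1:] - q[:-1] - 0.5 * dt * (u[1:] + u[:-1])

res = minimize(cost, np.zeros(2 * N), method="SLSQP",
               constraints={"type": "eq", "fun": defects})
q_opt, u_opt = unpack(res.x)
```

Scaling this pattern up (more states, muscle activation dynamics, contact forces) is what the OpenSim-MATLAB tooling used in the paper automates; the structure of the transcription is the same.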

  9. Numerical prediction of turbulent oscillating flow and associated heat transfer

    NASA Technical Reports Server (NTRS)

    Koehler, W. J.; Patankar, S. V.; Ibele, W. E.

    1991-01-01

A crucial point for the further development of Stirling engines is the optimization of their heat exchangers, which operate under oscillatory flow conditions. It has been found that the most important thermodynamic uncertainties in the Stirling engine designs for space power are in the heat transfer between gas and metal in all engine components and in the pressure drop across the heat exchanger components. So far, performance codes cannot predict the power output of a Stirling engine accurately enough when used for a wide variety of engines. Thus, there is a strong need for better performance codes. However, a performance code is not concerned with the details of the flow. This information must be provided externally. While analytical relationships exist for laminar oscillating flow, there has been hardly any information about transitional and turbulent oscillating flow that could be introduced into the performance codes. In 1986, a survey by Seume and Simon revealed that most Stirling engine heat exchangers operate in the transitional and turbulent regime. Consequently, research has since focused on the unresolved issue of transitional and turbulent oscillating flow and heat transfer. Since 1988, the University of Minnesota oscillating flow facility has obtained experimental data about transitional and turbulent oscillating flow. However, since the experiments in this field are extremely difficult, lengthy, and expensive, it is advantageous to numerically simulate the flow and heat transfer accurately from first principles. Work done at the University of Minnesota on the development of such a numerical simulation is summarized.

  10. Investigation of Advanced Counterrotation Blade Configuration Concepts for High Speed Turboprop Systems. Task 8: Cooling Flow/heat Transfer Analysis User's Manual

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Topp, David A.; Heidegger, Nathan J.; Delaney, Robert A.

    1994-01-01

    The focus of this task was to validate the ADPAC code for heat transfer calculations. To accomplish this goal, the ADPAC code was modified to allow for a Cartesian coordinate system capability and to add boundary conditions to handle spanwise periodicity and transpiration boundaries. This user's manual describes how to use the ADPAC code as developed in Task 5, NAS3-25270, including the modifications made to date in Tasks 7 and 8, NAS3-25270.

  11. Performance tuning Weather Research and Forecasting (WRF) Goddard longwave radiative transfer scheme on Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2015-10-01

The next-generation mesoscale numerical weather prediction system, the Weather Research and Forecasting (WRF) model, is designed for dual use in forecasting and research. WRF offers multiple physics options that can be combined in any way. One of these physics options is radiance computation. The major source of energy for the earth's climate is solar radiation. Thus, it is imperative to accurately model the horizontal and vertical distribution of the heating. The Goddard solar radiative transfer model includes the absorption due to water vapor, ozone, oxygen, carbon dioxide, clouds, and aerosols. The model computes the interactions among the absorption and scattering by clouds, aerosols, molecules, and the surface. Finally, fluxes are integrated over the entire longwave spectrum. In this paper, we present our results of optimizing the Goddard longwave radiative transfer scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of MICs requires some novel optimization techniques, which are discussed in this paper. The optimizations improved the performance of the original Goddard longwave radiative transfer scheme on the Xeon Phi 7120P by a factor of 2.2x. Furthermore, the same optimizations improved the performance of the scheme on a dual-socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 2.1x compared to the original code.

  12. CONTINUUM INTENSITY AND [O i] SPECTRAL LINE PROFILES IN SOLAR 3D PHOTOSPHERIC MODELS: THE EFFECT OF MAGNETIC FIELDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fabbian, D.; Moreno-Insertis, F., E-mail: damian@iac.es, E-mail: fmi@iac.es

    2015-04-01

The importance of magnetic fields in three-dimensional (3D) magnetoconvection models of the Sun’s photosphere is investigated in terms of their influence on the continuum intensity at different viewing inclination angles and on the intensity profile of two [O i] spectral lines. We use the RH numerical radiative transfer code to perform a posteriori spectral synthesis on the same time series of magnetoconvection models used in our publications on the effect of magnetic fields on abundance determination. We obtain a good match of the synthetic disk-center continuum intensity to the absolute continuum values from the Fourier Transform Spectrometer (FTS) observational spectrum; the match of the center-to-limb variation synthetic data to observations is also good, thanks, in part, to the 3D radiation transfer capabilities of the RH code. The different levels of magnetic flux in the numerical time series do not modify the quality of the match. Concerning the targeted [O i] spectral lines, we find, instead, that magnetic fields lead to nonnegligible changes in the synthetic spectrum, with larger average magnetic flux causing both of the lines to become noticeably weaker. The photospheric oxygen abundance that one would derive if instead using nonmagnetic numerical models would thus be lower by a few to several centidex. The inclusion of magnetic fields is confirmed to be important for improving the current modeling of the Sun, here in particular in terms of spectral line formation and of deriving consistent chemical abundances. These results may shed further light on the still controversial issue regarding the precise value of the solar oxygen abundance.

  13. Development of Fuel Shuffling Module for PHISICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allan Mabe; Andrea Alfonsi; Cristian Rabiti

    2013-06-01

The PHISICS (Parallel and Highly Innovative Simulation for the INL Code System) [4] code toolkit has been under development at the Idaho National Laboratory. This package is intended to provide a modern analysis tool for reactor physics investigation. It is designed with the mindset to maximize accuracy for a given availability of computational resources and to give state-of-the-art tools to the modern nuclear engineer. This is obtained by implementing several different algorithms and meshing approaches among which the user will be able to choose, in order to optimize his computational resources and accuracy needs. The software is completely modular in order to simplify the independent development of modules by different teams and future maintenance. The package is coupled with the thermal-hydraulic code RELAP5-3D [3]. In the following, the structure of the different PHISICS modules is briefly recalled, focusing on the new shuffling module (SHUFFLE), the object of this paper.

  14. H-NS Facilitates Sequence Diversification of Horizontally Transferred DNAs during Their Integration in Host Chromosomes

    PubMed Central

    Higashi, Koichi; Tobe, Toru; Kanai, Akinori; Uyar, Ebru; Ishikawa, Shu; Suzuki, Yutaka; Ogasawara, Naotake; Kurokawa, Ken; Oshima, Taku

    2016-01-01

    Bacteria can acquire new traits through horizontal gene transfer. Inappropriate expression of transferred genes, however, can disrupt the physiology of the host bacteria. To reduce this risk, Escherichia coli expresses the nucleoid-associated protein, H-NS, which preferentially binds to horizontally transferred genes to control their expression. Once expression is optimized, the horizontally transferred genes may actually contribute to E. coli survival in new habitats. Therefore, we investigated whether and how H-NS contributes to this optimization process. A comparison of H-NS binding profiles on common chromosomal segments of three E. coli strains belonging to different phylogenetic groups indicated that the positions of H-NS-bound regions have been conserved in E. coli strains. The sequences of the H-NS-bound regions appear to have diverged more so than H-NS-unbound regions only when H-NS-bound regions are located upstream or in coding regions of genes. Because these regions generally contain regulatory elements for gene expression, sequence divergence in these regions may be associated with alteration of gene expression. Indeed, nucleotide substitutions in H-NS-bound regions of the ybdO promoter and coding regions have diversified the potential for H-NS-independent negative regulation among E. coli strains. The ybdO expression in these strains was still negatively regulated by H-NS, which reduced the effect of H-NS-independent regulation under normal growth conditions. Hence, we propose that, during E. coli evolution, the conservation of H-NS binding sites resulted in the diversification of the regulation of horizontally transferred genes, which may have facilitated E. coli adaptation to new ecological niches. PMID:26789284

  15. H-NS Facilitates Sequence Diversification of Horizontally Transferred DNAs during Their Integration in Host Chromosomes.

    PubMed

    Higashi, Koichi; Tobe, Toru; Kanai, Akinori; Uyar, Ebru; Ishikawa, Shu; Suzuki, Yutaka; Ogasawara, Naotake; Kurokawa, Ken; Oshima, Taku

    2016-01-01

    Bacteria can acquire new traits through horizontal gene transfer. Inappropriate expression of transferred genes, however, can disrupt the physiology of the host bacteria. To reduce this risk, Escherichia coli expresses the nucleoid-associated protein, H-NS, which preferentially binds to horizontally transferred genes to control their expression. Once expression is optimized, the horizontally transferred genes may actually contribute to E. coli survival in new habitats. Therefore, we investigated whether and how H-NS contributes to this optimization process. A comparison of H-NS binding profiles on common chromosomal segments of three E. coli strains belonging to different phylogenetic groups indicated that the positions of H-NS-bound regions have been conserved in E. coli strains. The sequences of the H-NS-bound regions appear to have diverged more so than H-NS-unbound regions only when H-NS-bound regions are located upstream or in coding regions of genes. Because these regions generally contain regulatory elements for gene expression, sequence divergence in these regions may be associated with alteration of gene expression. Indeed, nucleotide substitutions in H-NS-bound regions of the ybdO promoter and coding regions have diversified the potential for H-NS-independent negative regulation among E. coli strains. The ybdO expression in these strains was still negatively regulated by H-NS, which reduced the effect of H-NS-independent regulation under normal growth conditions. Hence, we propose that, during E. coli evolution, the conservation of H-NS binding sites resulted in the diversification of the regulation of horizontally transferred genes, which may have facilitated E. coli adaptation to new ecological niches.

  16. Pressure-distribution measurements on a transonic low-aspect ratio wing

    NASA Technical Reports Server (NTRS)

    Keener, E. R.

    1985-01-01

Experimental surface pressure distributions and oil flow photographs are presented for a 0.90 m semispan model of NASA/Lockheed Wing C, a generic transonic, supercritical, low-aspect-ratio, highly three-dimensional configuration. This wing was tested at the design angle of attack of 5 deg over a Mach number range from 0.25 to 0.96 and a Reynolds number range from 3.4 × 10⁶ to 10 × 10⁶. Pressures were measured with both the tunnel floor and ceiling suction slots open for most of the tests, but taped closed for some tests to simulate solid walls. A comparison is made with the measured pressures from a small model in a high Reynolds number facility and with predicted pressures from two three-dimensional, transonic full-potential-flow wing codes: the design code FLO22 (nonconservative) and the TWING code (conservative). At the given design condition, a small region of flow separation occurred. At a Mach number of 0.82 the flow was unseparated and the surface flow angles were less than 10 deg, indicating that the boundary layer flow was not 3-D. Evidence indicates that wings optimized for mild shock waves and mild pressure-recovery gradients generally have small 3-D boundary layer flow at design conditions for unseparated flow.

  17. The NASA Neutron Star Grand Challenge: The coalescences of Neutron Star Binary System

    NASA Astrophysics Data System (ADS)

    Suen, Wai-Mo

    1998-04-01

NASA funded a Grand Challenge Project (9/1996-1999) for the development of a multi-purpose numerical treatment for relativistic astrophysics and gravitational wave astronomy. The coalescence of binary neutron stars was chosen as the model problem for the code development. The institutions involved are Argonne National Laboratory, Lawrence Livermore National Laboratory, the Max Planck Institute at Potsdam, Stony Brook, the University of Illinois, and Washington University. We have recently succeeded in constructing a highly optimized parallel code which is capable of solving the full Einstein equations coupled with relativistic hydrodynamics, running at over 50 GFLOPS on a T3E (the second milestone point of the project). We are presently working on the head-on collisions of two neutron stars, and the inclusion of realistic equations of state into the code. The code will be released to the relativity and astrophysics community in April of 1998. With the full dynamics of the spacetime, relativistic hydrodynamics, and microphysics all combined into a unified 3D code for the first time, many interesting large-scale calculations in general relativistic astrophysics can now be carried out on massively parallel computers.

  18. Ada Integrated Environment III Computer Program Development Specification. Volume III. Ada Optimizing Compiler.

    DTIC Science & Technology

    1981-12-01

[Abstract garbled in the source record; only fragments are recoverable.] The fragments describe compiler-generated map files named library-file.library-unit{.subunit} with extensions SYMAP (Symbol Map), SMAP (Statement Map), and TMAP (Type Map), each produced by the code generator; the P_UNIT command (Section A.3.5); and Example A-3, a compiler command stream for the code generator (Texas Instruments, Ada Optimizing Compiler).

  19. Comparing conformal, arc radiotherapy and helical tomotherapy in craniospinal irradiation planning.

    PubMed

    Myers, Pamela A; Mavroidis, Panayiotis; Papanikolaou, Nikos; Stathakis, Sotirios

    2014-09-08

Currently, radiotherapy treatment plan acceptance is based primarily on dosimetric performance measures. However, use of radiobiological analysis to assess benefit in terms of tumor control and harm in terms of injury to normal tissues can be advantageous. For pediatric craniospinal axis irradiation (CSI) patients, in particular, knowing the technique that will optimize the probabilities of benefit versus injury can lead to better long-term outcomes. Twenty-four CSI pediatric patients (median age 10) were retrospectively planned with three techniques: three-dimensional conformal radiation therapy (3D CRT), volumetric-modulated arc therapy (VMAT), and helical tomotherapy (HT). VMAT plans consisted of one superior and one inferior full arc, and tomotherapy plans were created using a 5.02 cm field width and a helical pitch of 0.287. Each plan was normalized so that 95% of the target volume (whole brain and spinal cord) received the prescription dose of 23.4 Gy in 13 fractions. Using an in-house MATLAB code and DVH data from each plan, the three techniques were evaluated based on the biologically effective uniform dose (D=), the complication-free tumor control probability (P+), and the width of the therapeutically beneficial range. Overall, 3D CRT and VMAT plans had similar values of D= (24.1 and 24.2 Gy), while HT had a slightly lower D= (23.6 Gy). The average values of the P+ index were 64.6, 67.4, and 56.6% for 3D CRT, VMAT, and HT plans, respectively, with the VMAT plans having a statistically significant increase in P+. Optimal values of D= were 28.4, 33.0, and 31.9 Gy for 3D CRT, VMAT, and HT plans, respectively. Although the P+ values that correspond to the initial dose prescription were lower for HT, after optimizing the D= prescription level the optimal P+ became 94.1, 99.5, and 99.6% for 3D CRT, VMAT, and HT, respectively, with the VMAT and HT plans having statistically significant increases in P+.
If the optimal dose level is prescribed using a radiobiological evaluation method, as opposed to a purely dosimetric one, the two IMRT techniques, VMAT and HT, will yield largest overall benefit to CSI patients by maximizing tumor control and limiting normal tissue injury. Using VMAT or HT may provide these pediatric patients with better long-term outcomes after radiotherapy.
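For reference, the P+ index used above is conventionally defined, in the standard radiobiological formulation (notation assumed here, not taken from the abstract), as the probability of achieving tumor control without severe normal-tissue injury; when benefit B and injury I are treated as statistically independent it reduces to a simple product form:

```latex
P_{+} = P_{B} - P_{B \cap I} \approx P_{B}\,\bigl(1 - P_{I}\bigr),
```

where $P_{B}$ is the tumor control probability and $P_{I}$ the overall normal-tissue complication probability, both evaluated from the DVH data of a given plan.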

  20. Analysis of Time-Dependent Tritium Breeding Capability of Water Cooled Ceramic Breeder Blanket for CFETR

    NASA Astrophysics Data System (ADS)

    Gao, Fangfang; Zhang, Xiaokang; Pu, Yong; Zhu, Qingjun; Liu, Songlin

    2016-08-01

Attaining tritium self-sufficiency is an important mission for the Chinese Fusion Engineering Testing Reactor (CFETR) operating on a Deuterium-Tritium (D-T) fuel cycle. It is necessary to study the variation of the tritium breeding ratio (TBR) and bred tritium inventory with operation time so as to provide accurate data for dynamic modeling and analysis of the tritium fuel cycle. A water cooled ceramic breeder (WCCB) blanket is one candidate blanket concept for the CFETR. Based on the detailed 3D neutronics model of CFETR with the WCCB blanket, the time-dependent TBR and tritium surplus were evaluated by a coupled calculation of the Monte Carlo N-Particle Transport Code (MCNP) and the fusion activation code FISPACT-2007. The results indicated that the TBR and tritium surplus of the WCCB blanket are functions of operation time and fusion power, due to Li consumption in the breeder and material activation. In addition, comparison of the results calculated with the 3D neutronics model against those obtained by applying a constant 1D-to-3D transfer factor shows that the 1D analysis leads to an over-estimation of the time-dependent tritium breeding capability when the fusion power is larger than 1000 MW. Supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2013GB108004, 2015GB108002, and 2014GB119000) and by the National Natural Science Foundation of China (No. 11175207).

  1. Optimal design of a touch trigger probe

    NASA Astrophysics Data System (ADS)

    Li, Rui-Jun; Xiang, Meng; Fan, Kuang-Chao; Zhou, Hao; Feng, Jian

    2015-02-01

A tungsten stylus with a ruby ball tip was screwed into a floating plate, which was supported by four leaf springs. The displacement of the tip caused by the contact force in 3D could be transferred into the tilt or vertical displacement of a plane mirror mounted on the floating plate. A quadrant photo detector (QPD) based two-dimensional angle sensor was used to detect the tilt or the vertical displacement of the plane mirror. The structural parameters of the probe are optimized for equal sensitivity and equal stiffness in a displacement range of ±5 μm, and a restricted horizontal size of less than 40 mm. Simulation results indicated that the stiffness was less than 0.6 mN/μm and equal in 3D. Experimental results indicated that the probe could be used to achieve a resolution of 1 nm.

  2. Study of the GPS inter-frequency calibration of timing receivers

    NASA Astrophysics Data System (ADS)

    Defraigne, P.; Huang, W.; Bertrand, B.; Rovera, D.

    2018-02-01

    When calibrating Global Positioning System (GPS) stations dedicated to timing, the hardware delays of P1 and P2, the P(Y)-codes on frequencies L1 and L2, are determined separately. In the international atomic time (TAI) network the GPS stations of the time laboratories are calibrated relatively against reference stations. This paper aims at determining the consistency between the P1 and P2 hardware delays (called dP1 and dP2) of these reference stations, and to look at the stability of the inter-signal hardware delays dP1-dP2 of all the stations in the network. The method consists of determining the dP1-dP2 directly from the GPS pseudorange measurements corrected for the frequency-dependent antenna phase center and the frequency-dependent ionosphere corrections, and then to compare these computed dP1-dP2 to the calibrated values. Our results show that the differences between the computed and calibrated dP1-dP2 are well inside the expected combined uncertainty of the two quantities. Furthermore, the consistency between the calibrated time transfer solution obtained from either single-frequency P1 or dual-frequency P3 for reference laboratories is shown to be about 1.0 ns, well inside the 2.1 ns uB uncertainty of a time transfer link based on GPS P3 or Precise Point Positioning. This demonstrates the good consistency between the P1 and P2 hardware delays of the reference stations used for calibration in the TAI network. The long-term stability of the inter-signal hardware delays is also analysed from the computed dP1-dP2. It is shown that only variations larger than 2 ns can be detected for a particular station, while variations of 200 ps can be detected when differentiating the results between two stations. 
Finally, we also show that in the differential calibration process as used in the TAI network, using the same antenna phase center or using different positions for L1 and L2 signals gives maximum differences of 200 ps on the hardware delays of the separate codes P1 and P2; however, the final impact on the P3 combination is less than 10 ps.
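The computed dP1-dP2 described above can be illustrated with a minimal sketch under a simplified observation model (the model and all numeric values here are assumptions for illustration, not taken from the paper): P1 = rho + I + dP1 and P2 = rho + GAMMA*I + dP2, where rho collects all non-dispersive terms, I is the slant ionospheric delay on L1 in metres, and GAMMA = (f1/f2)².

```python
# GPS L1 / L2 carrier frequencies, Hz
F1, F2 = 1575.42e6, 1227.60e6
GAMMA = (F1 / F2) ** 2               # ionospheric scaling factor, ~1.6469

def inter_signal_bias(p1, p2, iono_l1):
    """Return dP1 - dP2 (metres) from the two pseudoranges once geometry
    cancels in the difference and the frequency-dependent ionosphere is
    removed: (P1 - P2) = dP1 - dP2 - (GAMMA - 1) * I."""
    return (p1 - p2) + (GAMMA - 1.0) * iono_l1

# Synthetic check with a known 3.2 ns inter-signal bias:
C = 299792458.0                      # speed of light, m/s
rho, iono = 2.2e7, 4.0               # range and L1 iono delay, metres
true_bias = 3.2e-9 * C               # dP1 - dP2, metres
p1 = rho + iono + true_bias          # dP1 = true_bias, dP2 = 0
p2 = rho + GAMMA * iono
estimated = inter_signal_bias(p1, p2, iono)
```

In practice the ionospheric delay comes from an external model or dual-frequency phase data, and the frequency-dependent antenna phase center correction mentioned in the abstract must also be applied before differencing.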

  3. Optimal low thrust geocentric transfer. [mission analysis computer program

    NASA Technical Reports Server (NTRS)

    Edelbaum, T. N.; Sackett, L. L.; Malchow, H. L.

    1973-01-01

    A computer code which will rapidly calculate time-optimal low thrust transfers is being developed as a mission analysis tool. The final program will apply to NEP or SEP missions and will include a variety of environmental effects. The current program assumes constant acceleration. The oblateness effect and shadowing may be included. Detailed state and costate equations are given for the thrust effect, oblateness effect, and shadowing. A simple but adequate model yields analytical formulas for power degradation due to the Van Allen radiation belts for SEP missions. The program avoids the classical singularities by the use of equinoctial orbital elements. Kryloff-Bogoliuboff averaging is used to facilitate rapid calculation. Results for selected cases using the current program are given.
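The equinoctial substitution mentioned above can be sketched directly: these elements remain finite at zero eccentricity and zero inclination, which is how the classical singularities are avoided (the retrograde case i = π requires the alternate retrograde set, not shown here). The mapping below uses the standard definitions; the numeric demo values are illustrative only.

```python
import math

def keplerian_to_equinoctial(a, e, i, raan, argp, nu):
    """Map classical elements (angles in radians) to the equinoctial
    set (p, f, g, h, k, L) commonly used in low-thrust averaging."""
    p = a * (1.0 - e * e)                    # semi-latus rectum
    f = e * math.cos(argp + raan)
    g = e * math.sin(argp + raan)
    h = math.tan(i / 2.0) * math.cos(raan)
    k = math.tan(i / 2.0) * math.sin(raan)
    L = raan + argp + nu                     # true longitude
    return p, f, g, h, k, L

# A circular, equatorial orbit is perfectly well defined in this set,
# whereas argp and raan are individually undefined in classical elements:
circular_equatorial = keplerian_to_equinoctial(7000.0, 0.0, 0.0, 0.0, 0.0, 1.2)
```

The averaged equations of motion are then written in (p, f, g, h, k, L), with L the fast variable that the Kryloff-Bogoliuboff procedure averages over.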

  4. Real-time colouring and filtering with graphics shaders

    NASA Astrophysics Data System (ADS)

    Vohl, D.; Fluke, C. J.; Barnes, D. G.; Hassan, A. H.

    2017-11-01

    Despite the popularity of the Graphics Processing Unit (GPU) for general purpose computing, one should not forget about the practicality of the GPU for fast scientific visualization. As astronomers have increasing access to three-dimensional (3D) data from instruments and facilities like integral field units and radio interferometers, visualization techniques such as volume rendering offer means to quickly explore spectral cubes as a whole. As most 3D visualization techniques have been developed in fields of research like medical imaging and fluid dynamics, many transfer functions are not optimal for astronomical data. We demonstrate how transfer functions and graphics shaders can be exploited to provide new astronomy-specific explorative colouring methods. We present 12 shaders, including four novel transfer functions specifically designed to produce intuitive and informative 3D visualizations of spectral cube data. We compare their utility to classic colour mapping. The remaining shaders highlight how common computation like filtering, smoothing and line ratio algorithms can be integrated as part of the graphics pipeline. We discuss how this can be achieved by utilizing the parallelism of modern GPUs along with a shading language, letting astronomers apply these new techniques at interactive frame rates. All shaders investigated in this work are included in the open source software shwirl (Vohl 2017).
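As a toy illustration of what a transfer function does in this context (this is a hypothetical scalar-to-RGBA mapping, not one of the 12 shaders from the paper), the sketch below maps normalized voxel intensities to colour and opacity, with faint voxels left transparent:

```python
import numpy as np

def transfer_function(values, vmin=0.0, vmax=1.0):
    """Map scalar voxel values to RGBA: red ramps up with intensity,
    green peaks at mid-range, blue ramps down, and opacity grows
    linearly so faint voxels stay transparent."""
    x = np.clip((values - vmin) / (vmax - vmin), 0.0, 1.0)
    r = x
    g = 1.0 - np.abs(2.0 * x - 1.0)         # mid-range emphasis
    b = 1.0 - x
    a = x                                   # linear opacity ramp
    return np.stack([r, g, b, a], axis=-1)

# Map a small synthetic cube to RGBA, one 4-vector per voxel:
cube = np.random.default_rng(0).random((8, 8, 16))
rgba = transfer_function(cube)
```

On a GPU the same mapping would live in a fragment shader and be evaluated per sample along each viewing ray, which is what lets the paper's shaders run at interactive frame rates; the point here is only the shape of the mapping itself.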

  5. Active spectroscopic measurements of the bulk deuterium properties in the DIII-D tokamak (invited).

    PubMed

    Grierson, B A; Burrell, K H; Chrystal, C; Groebner, R J; Kaplan, D H; Heidbrink, W W; Muñoz Burgos, J M; Pablant, N A; Solomon, W M; Van Zeeland, M A

    2012-10-01

    The neutral-beam induced D(α) emission spectrum contains a wealth of information such as deuterium ion temperature, toroidal rotation, density, beam emission intensity, beam neutral density, and local magnetic field strength magnitude |B| from the Stark-split beam emission spectrum, and fast-ion D(α) emission (FIDA) proportional to the beam-injected fast ion density. A comprehensive spectral fitting routine which accounts for all photoemission processes is employed for the spectral analysis. Interpretation of the measurements to determine physically relevant plasma parameters is assisted by the use of an optimized viewing geometry and forward modeling of the emission spectra using a Monte-Carlo 3D simulation code.

  6. Effects of the plasma profiles on photon and pair production in ultrahigh intensity laser solid interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Y. X.; Jin, X. L., E-mail: jinxiaolin@uestc.edu.cn; Yan, W. Z.

The model of photon and pair production in strong-field quantum electrodynamics is implemented into our 1D3V particle-in-cell code with a Monte Carlo algorithm. Using this code, the evolution of the particles in ultrahigh intensity laser (∼10²³ W/cm²) interaction with an aluminum foil target is observed. Four different initial plasma profiles are considered in the simulations. The effects of the initial plasma profiles on photon and pair production, energy spectra, and energy evolution are analyzed. The results imply that one can set an optimal initial plasma profile to obtain the desired photon distributions.

  7. Efficiency transfer using the GEANT4 code of CERN for HPGe gamma spectrometry.

    PubMed

    Chagren, S; Tekaya, M Ben; Reguigui, N; Gharbi, F

    2016-01-01

In this work we apply the GEANT4 code of CERN to calculate the peak efficiency in High-Purity Germanium (HPGe) gamma spectrometry using three different procedures. The first is a direct calculation. The second corresponds to the usual case of efficiency transfer between two different configurations at constant emission energy, assuming a reference point detection configuration. The third, a new procedure, consists of transferring the peak efficiency between two detection configurations emitting the gamma ray at different energies, assuming a "virtual" reference point detection configuration. No pre-optimization of the detector's geometrical characteristics was performed before the transfer, in order to test the ability of the efficiency transfer to reduce the effect of ignorance of their real magnitude on the quality of the transferred efficiency. The obtained and measured efficiencies were found to be in good agreement for the two investigated methods of efficiency transfer. The agreement obtained proves that the Monte Carlo method, and especially the GEANT4 code, constitutes an efficient tool for obtaining accurate detection efficiency values. The second investigated efficiency transfer procedure is useful for calibrating an HPGe gamma detector at any emission energy for a voluminous source, using the detection efficiency of one point source emitting at a different energy as the reference. The calculations performed in this work were applied to the measurement exercise of the EUROMET428 project, in which the full energy peak efficiencies in the energy range 60-2000 keV were evaluated for a typical coaxial p-type HPGe detector and several types of source configuration: point sources located at various distances from the detector, and a cylindrical box containing three matrices. Copyright © 2015 Elsevier Ltd. All rights reserved.
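The transfer principle above can be sketched with a geometry-only Monte Carlo toy (attenuation and detector response are ignored, and eps_ref and all dimensions are assumptions for illustration, not values from the paper): a measured reference efficiency is rescaled by the ratio of simulated efficiencies for the target and reference setups, so systematic errors common to both simulations cancel.

```python
import math
import random

def hit_fraction(src_dist_cm, det_radius_cm, n=200_000, seed=1):
    """Monte Carlo fraction of isotropic emissions from an on-axis point
    source that strike the detector front face (a disk). For isotropic
    emission, cos(theta) is uniform on [-1, 1], so the expected value is
    the fractional solid angle (1 - cos(theta_max)) / 2."""
    rng = random.Random(seed)
    cos_max = math.cos(math.atan(det_radius_cm / src_dist_cm))
    hits = sum(1 for _ in range(n) if rng.uniform(-1.0, 1.0) > cos_max)
    return hits / n

# Transfer a hypothetical measured reference efficiency at 10 cm to a
# target geometry at 2 cm via the ratio of simulated efficiencies:
eps_ref = 0.012
eps_target = eps_ref * hit_fraction(2.0, 3.0) / hit_fraction(10.0, 3.0)
```

A full calculation such as the one in the paper tracks photon interactions through the source matrix, dead layers, and crystal in GEANT4; only the ratio structure is shown here.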

  8. CFD Analysis of Thermal Control System Using NX Thermal and Flow

    NASA Technical Reports Server (NTRS)

    Fortier, C. R.; Harris, M. F. (Editor); McConnell, S. (Editor)

    2014-01-01

    The Thermal Control Subsystem (TCS) is a key part of the Advanced Plant Habitat (APH) for the International Space Station (ISS). The purpose of this subsystem is to provide thermal control, mainly cooling, to the other APH subsystems. One of these subsystems, the Environmental Control Subsystem (ECS), controls the temperature and humidity of the growth chamber (GC) air to optimize the growth of plants in the habitat. The TCS provides thermal control to the ECS with three cold plates, which use Thermoelectric Coolers (TECs) to heat or cool water as needed to control the air temperature in the ECS system. In order to optimize the TCS design, pressure drop and heat transfer analyses were needed. The analysis for this system was performed in Siemens NX Thermal/Flow software (Version 8.5). NX Thermal/Flow has the ability to perform 1D or 3D flow solutions. The 1D flow solver can be used to represent simple geometries, such as pipes and tubes. The 1D flow method also has the ability to simulate either fluid only or fluid and wall regions. The 3D flow solver is similar to other Computational Fluid Dynamic (CFD) software. TCS performance was analyzed using both the 1D and 3D solvers. Each method produced different results, which will be evaluated and discussed.
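    As a toy illustration of what a 1D flow solver computes per pipe element, here is a laminar Darcy-Weisbach pressure-drop estimate for a single round tube. All numbers are illustrative assumptions, not APH/TCS design values:

```python
# Laminar-flow pressure drop for one round tube, the kind of relation a 1D
# flow network solver evaluates element by element.

def reynolds(rho, v, d, mu):
    """Reynolds number for pipe flow."""
    return rho * v * d / mu

def pressure_drop(rho, v, d, mu, length):
    """Darcy-Weisbach pressure drop [Pa] with laminar friction factor f = 64/Re."""
    f = 64.0 / reynolds(rho, v, d, mu)
    return f * (length / d) * 0.5 * rho * v * v

# Water at ~20 C in a 5 mm tube, 2 m long, flowing at 0.1 m/s.
dp = pressure_drop(rho=998.0, v=0.1, d=0.005, mu=1.0e-3, length=2.0)
print(round(dp, 2))  # 256.0 Pa, identical to Hagen-Poiseuille 32*mu*L*v/d^2
```

    A 3D CFD solver resolves the full velocity and temperature fields instead, which is why the abstract reports different results from the two methods.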

  9. A new way to generate cytolytic tumor-specific T cells: electroporation of RNA coding for a T cell receptor into T lymphocytes.

    PubMed

    Schaft, Niels; Dörrie, Jan; Müller, Ina; Beck, Verena; Baumann, Stefanie; Schunder, Tanja; Kämpgen, Eckhart; Schuler, Gerold

    2006-09-01

    Effective T cell receptor (TCR) transfer until now required stable retroviral transduction. However, retroviral transduction poses the threat of irreversible genetic manipulation of autologous cells. We, therefore, used optimized RNA transfection for transient manipulation. The transfection efficiency, using EGFP RNA, was >90%. The electroporation of primary T cells, isolated from blood, with TCR-coding RNA resulted in functional cytotoxic T lymphocytes (CTLs) (>60% killing at an effector to target ratio of 20:1) with the same HLA-A2/gp100-specificity as the parental CTL clone. The TCR-transfected T cells specifically recognized peptide-pulsed T2 cells, or dendritic cells electroporated with gp100-coding RNA, in an IFNgamma-secretion assay and retained this ability, even after cryopreservation, over 3 days. Most importantly, we show here for the first time that the electroporated T cells also displayed cytotoxicity, and specifically lysed peptide-loaded T2 cells and HLA-A2+/gp100+ melanoma cells over a period of at least 72 h. Peptide-titration studies showed that the lytic efficiency of the RNA-transfected T cells was similar to that of retrovirally transduced T cells, and approximated that of the parental CTL clone. Functional TCR transfer by RNA electroporation is now possible without the disadvantages of retroviral transduction, and forms a new strategy for the immunotherapy of cancer.

  10. Three-dimensional simulation of beam propagation and heat transfer in static gas Cs DPALs using wave optics and fluid dynamics models

    NASA Astrophysics Data System (ADS)

    Waichman, Karol; Barmashenko, Boris D.; Rosenwaks, Salman

    2017-10-01

    Analysis of beam propagation and of the kinetic and fluid dynamic processes in Cs diode pumped alkali lasers (DPALs), using a wave optics model and a gasdynamic code, is reported. The analysis is based on a three-dimensional, time-dependent computational fluid dynamics (3D CFD) model. The Navier-Stokes equations for momentum, heat and mass transfer are solved by a commercial Ansys FLUENT solver based on the finite volume discretization technique. The CFD code, which solves the gas conservation equations, includes the effects of natural convection and temperature diffusion of the species in the DPAL mixture. The DPAL kinetic processes in the Cs/He/C2H6 gas mixture dealt with in this paper involve the three lowest energy levels of Cs: (1) 6²S1/2, (2) 6²P1/2 and (3) 6²P3/2. The kinetic processes include absorption on the 1->3 D2 transition, followed by relaxation between the 3 and 2 fine-structure levels, and stimulated emission on the 2->1 D1 transition. Collisional quenching of levels 2 and 3 and spontaneous emission from these levels are also considered. The gas flow conservation equations are coupled to a fast-Fourier-transform algorithm for transverse mode propagation to obtain a solution of the scalar paraxial propagation equation for the laser beam. The wave propagation equation is solved by the split-step beam propagation method, in which the gain and refractive index of the DPAL medium affect the wave amplitude and phase. Using the CFD and beam propagation models, the gas flow pattern and the spatial distributions of the pump and laser intensities in the resonator were calculated for an end-pumped Cs DPAL. The laser power, DPAL medium temperature and laser beam quality were calculated as functions of pump power. The results of the theoretical model for laser power were compared to experimental results for the Cs DPAL.
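    The split-step method mentioned above alternates diffraction steps in the spatial-frequency domain with gain and phase steps in the medium. A minimal free-space sketch of the diffraction step (NumPy, with illustrative grid and beam parameters; the DPAL gain and refractive-index physics is omitted):

```python
import numpy as np

def propagate(field, dx, wavelength, dz):
    """One diffraction step: apply the Fresnel (paraxial) transfer function."""
    n = field.shape[0]                      # square n x n grid assumed
    fx = np.fft.fftfreq(n, d=dx)            # spatial frequencies [1/m]
    fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
    h = np.exp(-1j * np.pi * wavelength * dz * fx2)  # |h| = 1: lossless step
    return np.fft.ifft2(np.fft.fft2(field) * h)

n, dx = 256, 20e-6                          # 5.12 mm x 5.12 mm grid
x = (np.arange(n) - n // 2) * dx
r2 = x[:, None] ** 2 + x[None, :] ** 2
beam = np.exp(-r2 / (0.4e-3) ** 2)          # Gaussian beam, 0.4 mm waist

out = propagate(beam, dx, wavelength=894.6e-9, dz=0.1)  # Cs D1 line, 10 cm

power_in = float(np.sum(np.abs(beam) ** 2))
power_out = float(np.sum(np.abs(out) ** 2))
print(abs(power_out - power_in) / power_in < 1e-10)  # True: step is unitary
```

    In a full DPAL model, a gain/refraction half-step computed from the local CFD state would be interleaved between successive diffraction steps.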

  11. OSIRIS - an object-oriented parallel 3D PIC code for modeling laser and particle beam-plasma interaction

    NASA Astrophysics Data System (ADS)

    Hemker, Roy

    1999-11-01

    The advances in computational speed make it now possible to do full 3D PIC simulations of laser plasma and beam plasma interactions, but at the same time the increased complexity of these problems makes it necessary to apply modern approaches like object oriented programming to the development of simulation codes. We report here on our progress in developing an object oriented parallel 3D PIC code using Fortran 90. In its current state the code contains algorithms for 1D, 2D, and 3D simulations in cartesian coordinates and for 2D cylindrically-symmetric geometry. For all of these algorithms the code allows for a moving simulation window and arbitrary domain decomposition for any number of dimensions. Recent 3D simulation results on the propagation of intense laser and electron beams through plasmas will be presented.

  12. Aerodynamic shape optimization of Airfoils in 2-D incompressible flow

    NASA Astrophysics Data System (ADS)

    Rangasamy, Srinivethan; Upadhyay, Harshal; Somasekaran, Sandeep; Raghunath, Sreekanth

    2010-11-01

    An optimization framework was developed for maximizing the region of a 2-D airfoil immersed in laminar flow while enhancing aerodynamic performance. It uses a genetic algorithm over a population of 125, across 1000 generations, to optimize the airfoil. On a stand-alone computer, a run takes about an hour to obtain a converged solution. The airfoil geometry was generated using two Bezier curves: one to represent the thickness and the other the camber of the airfoil. The airfoil profile was generated by adding and subtracting the thickness curve from the camber curve. The coefficients of lift and drag were computed using the potential velocity distribution obtained from a panel code, and a boundary-layer transition prediction code was used to predict the location of the onset of transition. The objective function of a particular design is evaluated as the weighted average of aerodynamic characteristics at various angles of attack. Optimization was carried out for several objective functions and the airfoil designs obtained were analyzed.
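    The camber-plus-thickness Bezier parameterization described above can be sketched as follows. The control points are hypothetical, not the paper's optimized values, and for simplicity the two curves are combined at equal Bezier parameter rather than at equal chordwise position:

```python
import numpy as np

def bezier(ctrl, t):
    """De Casteljau evaluation of a Bezier curve at parameter values t."""
    pts = [np.asarray(p, float) for p in ctrl]
    t = np.asarray(t, float)[:, None]
    while len(pts) > 1:
        pts = [(1.0 - t) * a + t * b for a, b in zip(pts[:-1], pts[1:])]
    return pts[0]                            # shape (len(t), 2)

t = np.linspace(0.0, 1.0, 101)
# Hypothetical (x, y) control points, chord normalized to [0, 1]:
camber = bezier([(0, 0), (0.35, 0.05), (0.7, 0.04), (1, 0)], t)
thickness = bezier([(0, 0), (0.2, 0.12), (0.6, 0.08), (1, 0)], t)

# Upper/lower surfaces: camber line plus/minus the thickness distribution.
upper = np.column_stack([camber[:, 0], camber[:, 1] + thickness[:, 1]])
lower = np.column_stack([camber[:, 0], camber[:, 1] - thickness[:, 1]])
print(upper[0], upper[-1])  # closed at the leading and trailing edges
```

    In a GA framework of this kind, the control-point coordinates would form the chromosome that the genetic operators act on.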

  13. Fast 2D FWI on a multi and many-cores workstation.

    NASA Astrophysics Data System (ADS)

    Thierry, Philippe; Donno, Daniela; Noble, Mark

    2014-05-01

    Following the introduction of x86 co-processors (Xeon Phi) and the performance increase of standard 2-socket workstations using the latest 12-core E5-v2 x86-64 CPUs, we present here an MPI + OpenMP implementation of an acoustic 2D FWI (full waveform inversion) code which simultaneously runs on the CPUs and on the co-processors installed in a workstation. The main advantage of running a 2D FWI on a workstation is the ability to quickly evaluate new features such as more complicated wave equations, new cost functions, finite-difference stencils or boundary conditions. Since the co-processor is made of 61 in-order x86 cores, each supporting up to 4 threads, this many-core device can be seen as a shared-memory SMP (symmetric multiprocessing) machine with its own IP address. Depending on the vendor, a single workstation can host several co-processors, turning the workstation into a personal cluster under the desk. The original Fortran 90 CPU version of the 2D FWI code is simply recompiled to obtain a Xeon Phi x86 binary. This multi- and many-core configuration uses standard compilers and the associated MPI and math libraries under Linux; therefore, the cost of code development remains constant while computation time improves. We choose to implement the code in the so-called symmetric mode to fully use the capacity of the workstation, but we also evaluate the scalability of the code in native mode (i.e. running only on the co-processor) thanks to the Linux ssh and NFS capabilities. The usual care in optimization and SIMD vectorization is taken to ensure optimal performance and to analyze the application's performance and bottlenecks on both platforms. The 2D FWI implementation uses finite-difference time-domain forward modeling and a quasi-Newton (L-BFGS) optimization scheme for the model parameter updates. Parallelization is achieved through standard MPI shot-gather distribution and OpenMP domain decomposition within the co-processor. Taking advantage of the 16 GB of memory available on the co-processor, we are able to keep the wavefields in memory and compute the gradient by cross-correlation of the forward and back-propagated wavefields, as required by our time-domain FWI scheme, without heavy traffic on the I/O subsystem and PCIe bus. In this presentation we also review some simple methodologies for comparing expected with actual performance, in order to estimate the optimization effort before starting any major modification or rewrite of research codes. The key message is the ease of use and development of this hybrid configuration, which aims to reach not the absolute peak performance but the optimal one that ensures the best balance between geophysical and computer developments.
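    The gradient computation mentioned above (zero-lag cross-correlation in time of the forward and back-propagated wavefields) can be sketched as follows. The wavefields here are random stand-ins for what the finite-difference kernels would actually produce:

```python
import numpy as np

def fwi_gradient(forward, adjoint, dt):
    """Zero-lag cross-correlation over time: g(z, x) = sum_t u_f * u_a * dt.

    One common form of the time-domain FWI gradient; variants correlate the
    second time derivative of the forward field instead.
    """
    return np.einsum('tij,tij->ij', forward, adjoint) * dt

# Random stand-ins for the wavefields a finite-difference kernel would store.
rng = np.random.default_rng(0)
nt, nz, nx = 50, 8, 8
u_fwd = rng.standard_normal((nt, nz, nx))
u_adj = rng.standard_normal((nt, nz, nx))

g = fwi_gradient(u_fwd, u_adj, dt=1e-3)
print(g.shape)  # one gradient value per model cell
```

    Keeping both wavefields resident in memory, as the abstract describes for the 16 GB co-processor, turns this step into a pure compute kernel with no disk traffic.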

  14. Auto-Regulatory RNA Editing Fine-Tunes mRNA Re-Coding and Complex Behaviour in Drosophila

    PubMed Central

    Savva, Yiannis A.; Jepson, James E.C; Sahin, Asli; Sugden, Arthur U.; Dorsky, Jacquelyn S.; Alpert, Lauren; Lawrence, Charles; Reenan, Robert A.

    2014-01-01

    Auto-regulatory feedback loops are a common molecular strategy used to optimize protein function. In Drosophila, many mRNAs involved in neurotransmission are re-coded at the RNA level by the RNA editing enzyme dADAR, leading to the incorporation of amino acids that are not directly encoded by the genome. dADAR also re-codes its own transcript, but the consequences of this auto-regulation in vivo are unclear. Here we show that hard-wiring or abolishing endogenous dADAR auto-regulation dramatically remodels the landscape of re-coding events in a site-specific manner. These molecular phenotypes correlate with altered localization of dADAR within the nuclear compartment. Furthermore, auto-editing exhibits sexually dimorphic patterns of spatial regulation and can be modified by abiotic environmental factors. Finally, we demonstrate that modifying dAdar auto-editing affects adaptive complex behaviors. Our results reveal the in vivo relevance of auto-regulatory control over post-transcriptional mRNA re-coding events in fine-tuning brain function and organismal behavior. PMID:22531175

  15. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1990-01-01

    An expurgated upper bound on the event error probability of trellis-coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.

  16. Extreme ultraviolet emission spectra of Gd and Tb ions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kilbane, D.; O'Sullivan, G.

    2010-11-15

    Theoretical extreme ultraviolet emission spectra of gadolinium and terbium ions, calculated with the Cowan suite of codes and the relativistic Flexible Atomic Code (FAC), are presented. 4d-4f and 4p-4d transitions give rise to unresolved transition arrays in a range of ions. The effects of configuration interaction are investigated for transitions between singly excited configurations. Optimization of emission at 6.775 nm and 6.515 nm is achieved for Gd and Tb ions, respectively, by consideration of plasma effects. The resulting synthetic spectra are compared with experimental spectra recorded using the laser-produced plasma technique.

  17. Feasibility and validation of virtual autopsy for dental identification using the Interpol dental codes.

    PubMed

    Franco, Ademir; Thevissen, Patrick; Coudyzer, Walter; Develter, Wim; Van de Voorde, Wim; Oyen, Raymond; Vandermeulen, Dirk; Jacobs, Reinhilde; Willems, Guy

    2013-05-01

    Virtual autopsy is a medical imaging technique, using full-body computed tomography (CT), that allows a noninvasive and permanent observation of all body parts. For dental identification, clinically and radiologically observed ante-mortem (AM) and post-mortem (PM) oral identifiers are compared. The study aimed to verify whether PM dental charting can be performed on virtual reconstructions of full-body CTs using the Interpol dental codes. A sample of 103 PM full-body CTs was collected from the forensic autopsy files of the Department of Forensic Medicine, University Hospitals, KU Leuven, Belgium. For validation purposes, 3 of these bodies underwent a complete dental autopsy, a dental radiological examination and a full-body CT examination. The bodies were scanned in a Siemens Definition Flash CT scanner (Siemens Medical Solutions, Germany). The images were examined at 8- and 12-bit screen resolution as three-dimensional (3D) reconstructions and as axial, coronal and sagittal slices. InSpace(®) (Siemens Medical Solutions, Germany) software was used for 3D reconstruction. The dental identifiers were charted on pink PM Interpol forms (F1, F2), using the related dental codes. Optimal dental charting was obtained by combining observations on 3D reconstructions and CT slices. It was not feasible to differentiate between different kinds of dental restoration materials. The 12-bit resolution made it possible to collect more detailed evidence, mainly related to positions within a tooth. Oral identifiers not implemented in the Interpol dental coding were observed. Amongst these, the observed 3D morphological features of dental and maxillofacial structures are important identifiers. The latter may become even more relevant in the future, not only because of their inherent spatial features, but also because of increasing preventive dental treatment and the decreasing application of dental restorations. In conclusion, PM full-body CT examinations need to be implemented in the PM dental charting protocols, and the Interpol dental codes should be adapted accordingly. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  18. Fiber Optic Microsensor for Receptor-Based Assays

    DTIC Science & Technology

    1988-09-01

    [Scanned DTIC report; only fragments of the abstract survive OCR, including parts of the report documentation form.] A table of fluorophore properties is partially recoverable (fluorophore, excitation/emission wavelengths, molar extinction coefficient, quantum yield): B-phycoerythrin (B-PE), 545/575 nm, 2,410,000, 0.98; R-phycoerythrin (R-PE), 565/578 nm, 1,960,000, 0.68; C-phycocyanin (CPC), 620/650 nm, 1,690,000, 0.51; allophycocyanin (A-PC), values illegible. The remaining legible text notes that the most efficient transfer occurred for unit magnification, that Figure 3 shows the optical design, and that evaluation of the instrument was done with A-phycocyanin (text truncated).

  19. Pre- and postprocessing techniques for determining goodness of computational meshes

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley; Westermann, T.; Bass, J. M.

    1993-01-01

    Research in error estimation, mesh conditioning, and solution enhancement for finite element, finite difference, and finite volume methods has been incorporated into AUDITOR, a modern, user-friendly code, which operates on 2D and 3D unstructured neutral files to improve the accuracy and reliability of computational results. Residual error estimation capabilities provide local and global estimates of solution error in the energy norm. Higher order results for derived quantities may be extracted from initial solutions. Within the X-MOTIF graphical user interface, extensive visualization capabilities support critical evaluation of results in linear elasticity, steady state heat transfer, and both compressible and incompressible fluid dynamics.

  20. Results of comparative RBMK neutron computation using VNIIEF codes (cell computation, 3D statics, 3D kinetics). Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.

    1995-12-31

    In conformity with the protocol of the Workshop under Contract "Assessment of RBMK reactor safety using modern Western codes", VNIIEF performed a series of neutronics computations to compare Western and VNIIEF codes and to assess whether VNIIEF codes are suitable for safety assessment computations for RBMK-type reactors. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS and EKRAN codes (an improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), including cell, polycell and burnup computations; (2) 3D computation of static states with the KORAT-3D and NEU codes and comparison with results of computation with the NESTLE code (USA), performed in the geometry and with the neutron constants provided by the American party; (3) 3D computation of neutron kinetics with the KORAT-3D and NEU codes. These computations were performed in two formulations, both developed in collaboration with NIKIET. The formulation of the first problem agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through a core. The second problem is a model of the RBMK as a whole, with imitation of control and protection system (CPS) control movement in the core.

  1. Fully-Implicit Navier-Stokes (FIN-S)

    NASA Technical Reports Server (NTRS)

    Kirk, Benjamin S.

    2010-01-01

    FIN-S is a SUPG finite element code for flow problems under active development at NASA Lyndon B. Johnson Space Center and within PECOS: a) The code is built on top of the libMesh parallel, adaptive finite element library. b) The initial implementation of the code targeted supersonic/hypersonic laminar calorically perfect gas flows & conjugate heat transfer. c) Initial extension to thermochemical nonequilibrium about 9 months ago. d) The technologies in FIN-S have been enhanced through a strongly collaborative research effort with Sandia National Labs.

  2. Noninvasive, three-dimensional full-field body sensor for surface deformation monitoring of human body in vivo.

    PubMed

    Chen, Zhenning; Shao, Xinxing; He, Xiaoyuan; Wu, Jialin; Xu, Xiangyang; Zhang, Jinlin

    2017-09-01

    Noninvasive, three-dimensional (3-D), full-field surface deformation measurements of the human body are important for biomedical investigations. We proposed a 3-D noninvasive, full-field body sensor based on stereo digital image correlation (stereo-DIC) for surface deformation monitoring of the human body in vivo. First, by applying an improved water-transfer printing (WTP) technique to transfer optimized speckle patterns onto the skin, the body sensor was conveniently and harmlessly fabricated directly onto the human body. Then, stereo-DIC was used to achieve 3-D noncontact and noninvasive surface deformation measurements. The accuracy and efficiency of the proposed body sensor were verified and discussed by considering different complexions. Moreover, the fabrication of speckle patterns on human skin, which has always been considered a challenging problem, was shown to be feasible, effective, and harmless as a result of the improved WTP technique. An application of the proposed stereo-DIC-based body sensor was demonstrated by measuring the pulse wave velocity of human carotid artery. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
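    Stereo-DIC ultimately rests on matching speckle subsets between images; a common matching criterion is zero-normalized cross-correlation (ZNCC). A minimal integer-pixel sketch with a synthetic speckle pattern and a known rigid shift (an illustration of the principle, not the authors' implementation):

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def match_subset(ref, deformed, top, left, size, search):
    """Integer-pixel ZNCC search for the subset ref[top:top+size, left:left+size]."""
    subset = ref[top:top + size, left:left + size]
    best, best_pos = -2.0, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            score = zncc(subset, deformed[y:y + size, x:x + size])
            if score > best:
                best, best_pos = score, (dy, dx)
    return best_pos, best

rng = np.random.default_rng(1)
ref = rng.random((64, 64))                                # synthetic speckle
deformed = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)   # rigid shift (3, -2)

(dy, dx), score = match_subset(ref, deformed, top=20, left=20, size=15, search=5)
print(dy, dx)  # 3 -2
```

    Real DIC codes refine such integer matches to sub-pixel accuracy and, for stereo-DIC, triangulate matched points from two calibrated cameras to obtain 3-D surface displacements.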

  3. Control of the interaction strength of photonic molecules by nanometer precise 3D fabrication.

    PubMed

    Rawlings, Colin D; Zientek, Michal; Spieser, Martin; Urbonas, Darius; Stöferle, Thilo; Mahrt, Rainer F; Lisunova, Yuliya; Brugger, Juergen; Duerig, Urs; Knoll, Armin W

    2017-11-28

    Applications for high resolution 3D profiles, so-called grayscale lithography, exist in diverse fields such as optics, nanofluidics and tribology. All of them require the fabrication of patterns with reliable absolute patterning depth, independent of the substrate location and target materials. Here we present a complete patterning and pattern-transfer solution based on thermal scanning probe lithography (t-SPL) and dry etching. We demonstrate the fabrication of 3D profiles in silicon and silicon oxide with nanometer-scale accuracy of absolute depth levels. An accuracy of less than 1 nm standard deviation in t-SPL is achieved by providing an accurate physical model of the writing process to a model-based implementation of a closed-loop lithography process. For transferring the pattern to a target substrate, we optimized the etch process and demonstrate linear amplification of grayscale patterns into silicon and silicon oxide with amplification ratios of ∼6 and ∼1, respectively. The performance of the entire process is demonstrated by manufacturing photonic molecules of desired interaction strength. Excellent agreement between fabricated and simulated structures has been achieved.

  4. 3D scene reconstruction based on multi-view distributed video coding in the Zernike domain for mobile applications

    NASA Astrophysics Data System (ADS)

    Palma, V.; Carli, M.; Neri, A.

    2011-02-01

    In this paper a multi-view distributed video coding scheme for mobile applications is presented. Specifically, a new fusion technique between temporal and spatial side information in the Zernike moment domain is proposed. Distributed video coding introduces a flexible architecture that enables the design of very low-complexity video encoders compared to their traditional counterparts. The main goal of our work is to generate at the decoder the side information that optimally blends temporal and inter-view data. Multi-view distributed coding performance strongly depends on the quality of the side information built at the decoder. To improve this quality, a spatial view compensation/prediction in the Zernike moment domain is applied. Spatial and temporal motion activity are fused together to obtain the overall side information. The proposed method has been evaluated through rate-distortion performance for different inter-view and temporal estimation quality conditions.

  5. egs_brachy: a versatile and fast Monte Carlo code for brachytherapy

    NASA Astrophysics Data System (ADS)

    Chamberland, Marc J. P.; Taylor, Randle E. P.; Rogers, D. W. O.; Thomson, Rowan M.

    2016-12-01

    egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.
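    The efficiency characterized above is the usual Monte Carlo figure of merit, eff = 1/(s²·T), with s² the variance of the scored quantity and T the CPU time; cutting the runtime at fixed variance, or the variance at fixed runtime, both raise it. A small sketch with purely illustrative numbers:

```python
def mc_efficiency(variance, cpu_time_s):
    """Monte Carlo efficiency (figure of merit): 1 / (s^2 * T)."""
    return 1.0 / (variance * cpu_time_s)

# Illustrative comparison: a variance-reduction technique that cuts the
# calculation time from 120 s to 30 s at equal variance gains a factor of 4.
base = mc_efficiency(variance=4.0e-4, cpu_time_s=120.0)
improved = mc_efficiency(variance=4.0e-4, cpu_time_s=30.0)
print(round(improved / base, 6))  # 4.0
```

    Because s scales as 1/sqrt(N), this metric is independent of how long either simulation is run, which is what makes it a fair basis for comparing techniques.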

  6. egs_brachy: a versatile and fast Monte Carlo code for brachytherapy.

    PubMed

    Chamberland, Marc J P; Taylor, Randle E P; Rogers, D W O; Thomson, Rowan M

    2016-12-07

    egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.

  7. Bio-optimized energy transfer in densely packed fluorescent protein enables near-maximal luminescence and solid-state lasers.

    PubMed

    Gather, Malte C; Yun, Seok Hyun

    2014-12-08

    Bioluminescent organisms are likely to have an evolutionary drive towards high radiance. As such, bio-optimized materials derived from them hold great promise for photonic applications. Here, we show that biologically produced fluorescent proteins retain their high brightness even at the maximum density in solid state through a special molecular structure that provides optimal balance between high protein concentration and low resonance energy transfer self-quenching. Dried films of green fluorescent protein show low fluorescence quenching (−7 dB) and support strong optical amplification (gnet = 22 cm−1; 96 dB cm−1). Using these properties, we demonstrate vertical cavity surface emitting micro-lasers with low threshold (<100 pJ, outperforming organic semiconductor lasers) and self-assembled all-protein ring lasers. Moreover, solid-state blends of different proteins support efficient Förster resonance energy transfer, with sensitivity to intermolecular distance thus allowing all-optical sensing. The design of fluorescent proteins may be exploited for bio-inspired solid-state luminescent molecules or nanoparticles.
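    The distance sensitivity exploited for all-optical sensing follows the textbook Förster relation E = 1/(1 + (r/R0)⁶): with a Förster radius of a few nanometres, efficiency falls from 50% at r = R0 to below 2% at 2·R0. A sketch using a generic illustrative R0, not a value from the paper:

```python
def fret_efficiency(r_nm, r0_nm):
    """FRET efficiency for donor-acceptor distance r and Forster radius R0."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Generic illustrative Forster radius of 4.7 nm:
print(fret_efficiency(4.7, 4.7))            # 0.5 at r = R0
print(round(fret_efficiency(9.4, 4.7), 4))  # 0.0154 at r = 2*R0
```

    The sixth-power dependence is what makes solid-state protein blends such sensitive reporters of intermolecular distance.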

  8. Bio-optimized energy transfer in densely packed fluorescent protein enables near-maximal luminescence and solid-state lasers

    PubMed Central

    Gather, Malte C.; Yun, Seok Hyun

    2015-01-01

    Bioluminescent organisms are likely to have an evolutionary drive towards high radiance. As such, bio-optimized materials derived from them hold great promise for photonic applications. Here we show that biologically produced fluorescent proteins retain their high brightness even at the maximum density in solid state through a special molecular structure that provides optimal balance between high protein concentration and low resonance energy transfer self-quenching. Dried films of green fluorescent protein show low fluorescence quenching (−7 dB) and support strong optical amplification (gnet = 22 cm−1; 96 dB cm−1). Using these properties, we demonstrate vertical cavity surface emitting micro-lasers with low threshold (<100 pJ, outperforming organic semiconductor lasers) and self-assembled all-protein ring lasers. Moreover, solid-state blends of different proteins support efficient Förster resonance energy transfer, with sensitivity to intermolecular distance thus allowing all-optical sensing. The design of fluorescent proteins may be exploited for bio-inspired solid-state luminescent molecules or nanoparticles. PMID:25483850

  9. Data assimilation for real-time prediction and reanalysis

    NASA Astrophysics Data System (ADS)

    Shprits, Y.; Kellerman, A. C.; Podladchikova, T.; Kondrashov, D. A.; Ghil, M.

    2015-12-01

    We discuss how data assimilation can be used for the analysis of individual satellite anomalies, for the reconstruction of long-term evolution for use in specification models, and for improving nowcasting and forecasting of the radiation belts. We also discuss advanced data assimilation methods such as parameter estimation and smoothing. The 3D data-assimilative VERB code allows us to blend together data from GOES, RBSP A and RBSP B. A real-time prediction framework operating on our web site, based on GOES, RBSP A and B, and ACE data and the 3D VERB code, is presented and discussed. In this paper we present a number of applications of data assimilation with the VERB 3D code. 1) The model with data assimilation allows us to propagate data to different pitch angles, energies and L-shells and blends them together with the physics-based VERB code in an optimal way. We illustrate how we use this capability for the analysis of previous events and for obtaining a global and statistical view of the system. 2) The model predictions strongly depend on the initial conditions that are set up for the model; the model is therefore only as good as the initial conditions it uses. To produce the best possible initial condition, data from different sources (GOES, RBSP A and B, and our empirical model predictions based on ACE) are blended together in an optimal way by means of data assimilation as described above. The resulting initial condition has no gaps, which allows us to make more accurate predictions.
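    The optimal blending described above rests on the same principle as the scalar Kalman analysis step: forecast and observation are weighted by their error variances. A minimal one-dimensional sketch with illustrative numbers (a 3D assimilative framework applies this idea across the full phase space):

```python
def analysis(forecast, obs, var_f, var_o):
    """Minimum-variance blend of a model forecast and an observation."""
    gain = var_f / (var_f + var_o)            # Kalman gain, scalar case
    x_a = forecast + gain * (obs - forecast)  # analysis state
    var_a = (1.0 - gain) * var_f              # analysis error variance
    return x_a, var_a

# Uncertain forecast (variance 4) meets an accurate observation (variance 1):
x_a, var_a = analysis(forecast=10.0, obs=12.0, var_f=4.0, var_o=1.0)
print(round(x_a, 3), round(var_a, 3))  # 11.6 0.8 -- pulled toward the observation
```

    Note that the analysis variance (0.8) is smaller than either input variance, which is why assimilation fills data gaps with better-than-model accuracy.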

  10. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2011-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
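
    The core CADIS idea can be sketched in a few lines: given an approximate adjoint (importance) flux for the detector tally, the source is biased proportionally to source strength times importance, and birth weights are set so the biased game remains unbiased. This toy 1-D version is an illustration of the principle only; a real implementation obtains the adjoint flux from a deterministic code such as Denovo, and the cell values below are invented.

```python
# CADIS-style source biasing: bias the source pdf by q * phi_adj and
# assign birth weights w = q / q_hat so the estimator stays fair.

def cadis_biasing(q, phi_adj):
    """Return the biased source pdf and per-cell birth weights."""
    r = sum(qi * pi for qi, pi in zip(q, phi_adj))   # estimated detector response
    q_hat = [qi * pi / r for qi, pi in zip(q, phi_adj)]
    weights = [qi / qh if qh > 0 else 0.0 for qi, qh in zip(q, q_hat)]
    return q_hat, weights

# Three cells: most source sits far from the detector; importance rises toward it
q = [0.7, 0.2, 0.1]                  # true source pdf (illustrative)
phi_adj = [0.01, 0.1, 1.0]           # adjoint (importance) estimate (illustrative)
q_hat, w = cadis_biasing(q, phi_adj)
```

    More particles are born in important cells, but with proportionally smaller weights, which is what flattens the variance of the detector tally.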

  11. Nuclear fuel management optimization using genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1995-07-01

    The code-independent genetic algorithm reactor optimization (CIGARO) system has been developed to optimize nuclear reactor loading patterns. It uses genetic algorithms (GAs) and a code-independent interface, so any reactor physics code (e.g., CASMO-3/SIMULATE-3) can be used to evaluate the loading patterns. The system is compared to other GA-based loading pattern optimizers. Tests were carried out to maximize the beginning of cycle k{sub eff} for a pressurized water reactor core loading with a penalty function to limit power peaking. The CIGARO system performed well, increasing the k{sub eff} after lowering the peak power. Tests of a prototype parallel evaluation method showed the potential for a significant speedup.
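
    The GA-with-penalty structure described above can be sketched generically: maximize a fitness (standing in for beginning-of-cycle k-eff) minus a penalty that activates when a constraint (standing in for peak power) is violated. The fitness function here is a toy surrogate, not a reactor physics evaluation, and all parameters are illustrative.

```python
import random

random.seed(1)

def fitness(pattern):
    k_eff = sum(pattern) / len(pattern)        # toy surrogate for k_eff
    peak = max(pattern)                        # toy surrogate for peak power
    penalty = 10.0 * max(0.0, peak - 0.9)      # penalize peaking above the limit
    return k_eff - penalty

def evolve(pop_size=20, genes=8, generations=40):
    """Simple elitist GA: keep the top half, refill with crossover + mutation."""
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genes)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(genes)        # point mutation, clamped at the limit
            child[i] = min(0.9, child[i] + random.uniform(-0.1, 0.1))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

    The penalty steers the population toward patterns that raise the mean without exceeding the peaking limit, mirroring the k-eff-with-power-peaking-penalty objective in the abstract.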

  12. Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion

    NASA Astrophysics Data System (ADS)

    Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.

    2017-01-01

    We present the features and results of a newly developed code, based on the Gauss-Newton optimization technique, for solving the three-dimensional controlled-source electromagnetic inverse problem. In this code, special emphasis has been placed on representing the operations by block matrices for conjugate gradient iteration. We show how, in the computation of the Jacobian, the matrix formed by differentiation of the system matrix can be made independent of frequency to optimize the operations at the conjugate gradient step. Coarse-level parallel computing, using the OpenMP framework, is used primarily due to its simplicity of implementation and the accessibility of shared-memory multi-core computing machines to almost anyone. We demonstrate how the coarseness of the modeling grid in comparison to source (computational receiver) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design in which the deviation of the source from a straight tow line is considered. Finally, a real field data inversion experiment is presented to demonstrate the robustness of the code.

  13. Three 3D graphical representations of DNA primary sequences based on the classifications of DNA bases and their applications.

    PubMed

    Xie, Guosen; Mo, Zhongxi

    2011-01-21

    In this article, we introduce three 3D graphical representations of DNA primary sequences, which we call RY-curve, MK-curve and SW-curve, based on three classifications of the DNA bases. The advantages of our representations are that (i) these 3D curves are strictly non-degenerate and there is no loss of information when transferring a DNA sequence to its mathematical representation and (ii) the coordinates of every node on these 3D curves have a clear biological implication. Two applications of these 3D curves are presented: (a) a simple formula is derived to calculate the content of the four bases (A, G, C and T) from the coordinates of nodes on the curves; and (b) a 12-component characteristic vector is constructed to compare similarity among DNA sequences from different species based on the geometrical centers of the 3D curves. As examples, we examine similarity among the coding sequences of the first exon of the beta-globin gene from eleven species and validate similarity of cDNA sequences of the beta-globin gene from eight species. Copyright © 2010 Elsevier Ltd. All rights reserved.
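
    The idea behind such curves can be sketched with a hypothetical coordinate assignment built from the three standard base classifications (purine/pyrimidine, amino/keto, strong/weak hydrogen bonding). This particular mapping is illustrative, not the authors' exact RY/MK/SW construction, but it shows how base content is recovered exactly from the curve's end point, as in application (a) above.

```python
# Each base maps to a 3D unit step whose signs encode the three
# classifications: x = R/Y, y = M/K, z = W/S (a hypothetical assignment).
STEP = {
    "A": (1, 1, 1),     # purine, amino, weak
    "G": (1, -1, -1),   # purine, keto, strong
    "C": (-1, 1, -1),   # pyrimidine, amino, strong
    "T": (-1, -1, 1),   # pyrimidine, keto, weak
}

def curve(seq):
    """Cumulative 3D curve: one node per base, starting at the origin."""
    x = y = z = 0
    nodes = [(0, 0, 0)]
    for base in seq:
        dx, dy, dz = STEP[base]
        x, y, z = x + dx, y + dy, z + dz
        nodes.append((x, y, z))
    return nodes

def base_content(seq):
    """Recover the A, G, C, T counts from the end point alone."""
    n = len(seq)
    x, y, z = curve(seq)[-1]
    return {
        "A": (n + x + y + z) // 4,
        "G": (n + x - y - z) // 4,
        "C": (n - x + y - z) // 4,
        "T": (n - x - y + z) // 4,
    }
```

    Inverting the four linear relations (sequence length plus the three end coordinates) gives each base count, which is the spirit of the simple formula the abstract mentions.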

  14. Multidisciplinary Optimization for Aerospace Using Genetic Optimization

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Hahn, Edward E.; Herrera, Claudia Y.

    2007-01-01

    In support of the ARMD guidelines, NASA's Dryden Flight Research Center is developing a multidisciplinary design and optimization tool. This tool will leverage existing tools and practices and allow the easy integration and adoption of new state-of-the-art software. Optimization has made its way into many mainstream applications: for example, NASTRAN(TradeMark) has its solution sequence 200 for design optimization, and MATLAB(TradeMark) has an Optimization Toolbox. Other packages, such as the ZAERO(TradeMark) aeroelastic panel code and the CFL3D(TradeMark) Navier-Stokes solver, have no built-in optimizer. The goal of the tool development is to generate a central executive capable of using disparate software packages in a cross-platform network environment so as to quickly perform optimization and design tasks in a cohesive, streamlined manner. A provided figure (Figure 1) shows a typical set of tools and their relation to the central executive. Optimization can take place within each individual tool, in a loop between the executive and the tool, or both.

  15. Modeling and inversion Matlab algorithms for resistivity, induced polarization and seismic data

    NASA Astrophysics Data System (ADS)

    Karaoulis, M.; Revil, A.; Minsley, B. J.; Werkema, D. D.

    2011-12-01

    M. Karaoulis (1), D.D. Werkema (3), A. Revil (1,2), B. Minsley (4); (1) Colorado School of Mines, Dept. of Geophysics, Golden, CO, USA; (2) ISTerre, CNRS, UMR 5559, Université de Savoie, Equipe Volcan, Le Bourget du Lac, France; (3) U.S. EPA, ORD, NERL, ESD, CMB, Las Vegas, Nevada, USA; (4) USGS, Federal Center, Lakewood, CO 80225-0046. Abstract: We propose a 2D and 3D forward modeling and inversion package for DC resistivity, time-domain induced polarization (IP), frequency-domain IP, and seismic refraction data. For the resistivity and IP case, discretization is based on rectangular cells, where each cell has an unknown resistivity in DC modelling, a resistivity and chargeability in time-domain IP modelling, and a complex resistivity in spectral IP modelling. The governing partial-differential equations are solved with the finite element method, which can be applied to both the real and complex variables being solved for. For the seismic case, forward modeling is based on solving the eikonal equation using a second-order fast marching method. The wavepaths are materialized by Fresnel volumes rather than by conventional rays. This approach accounts for complicated velocity models and is advantageous because it considers frequency effects on the velocity resolution. The inversion can accommodate data at a single time step, or as a time-lapse dataset if the geophysical data are gathered for monitoring purposes. The aim of time-lapse inversion is to find the change in the velocities or resistivities of each model cell as a function of time. Different time-lapse algorithms can be applied, such as independent inversion, difference inversion, 4D inversion, and 4D active time constraint inversion. The forward algorithms are benchmarked against analytical solutions and inversion results are compared with existing ones. The algorithms are packaged as Matlab codes with a simple Graphical User Interface.
    Although the code is parallelized for multi-core CPUs, it is not as fast as machine code; for large datasets, one should consider transferring parts of the code to C or Fortran through MEX files. The code is available through EPA's website at http://www.epa.gov/esd/cmb/GeophysicsWebsite/index.html. Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy.
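
    The first-arrival traveltime field that an eikonal solver produces can be illustrated with a crude stand-in: Dijkstra's algorithm on a grid of slowness values, with traveltime accumulated along grid edges. A real second-order fast marching method is more accurate (it honors the eikonal equation rather than restricting rays to grid edges); this sketch only conveys the traveltime-field idea, and the velocity model is synthetic.

```python
import heapq

def first_arrivals(slowness, src):
    """Traveltime from src to every cell, moving along 4-neighbor edges."""
    ny, nx = len(slowness), len(slowness[0])
    times = [[float("inf")] * nx for _ in range(ny)]
    times[src[0]][src[1]] = 0.0
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > times[i][j]:
            continue                      # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                # edge cost: average slowness of the two cells (unit spacing)
                nt = t + 0.5 * (slowness[i][j] + slowness[ni][nj])
                if nt < times[ni][nj]:
                    times[ni][nj] = nt
                    heapq.heappush(heap, (nt, (ni, nj)))
    return times

# Uniform unit-slowness medium: traveltime equals Manhattan distance
t = first_arrivals([[1.0] * 5 for _ in range(5)], (0, 0))
```

    In a uniform medium the grid-edge restriction is obvious (traveltimes follow Manhattan rather than Euclidean distance), which is precisely the error class that fast marching reduces.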

  16. Neutron transport analysis for nuclear reactor design

    DOEpatents

    Vujic, Jasmina L.

    1993-01-01

    Replacing regular mesh-dependent ray tracing modules in a collision/transfer probability (CTP) code with a ray tracing module based upon combinatorial geometry of a modified geometrical module (GMC) provides a general geometry transfer theory code in two dimensions (2D) for analyzing nuclear reactor design and control. The primary modification of the GMC module involves generation of a fixed inner frame and a rotating outer frame, where the inner frame contains all reactor regions of interest, e.g., part of a reactor assembly, an assembly, or several assemblies, and the outer frame, with a set of parallel equidistant rays (lines) attached to it, rotates around the inner frame. The modified GMC module allows for determining for each parallel ray (line), the intersections with zone boundaries, the path length between the intersections, the total number of zones on a track, the zone and medium numbers, and the intersections with the outer surface, which parameters may be used in the CTP code to calculate collision/transfer probability and cross-section values.

  17. Neutron transport analysis for nuclear reactor design

    DOEpatents

    Vujic, J.L.

    1993-11-30

    Replacing regular mesh-dependent ray tracing modules in a collision/transfer probability (CTP) code with a ray tracing module based upon combinatorial geometry of a modified geometrical module (GMC) provides a general geometry transfer theory code in two dimensions (2D) for analyzing nuclear reactor design and control. The primary modification of the GMC module involves generation of a fixed inner frame and a rotating outer frame, where the inner frame contains all reactor regions of interest, e.g., part of a reactor assembly, an assembly, or several assemblies, and the outer frame, with a set of parallel equidistant rays (lines) attached to it, rotates around the inner frame. The modified GMC module allows for determining for each parallel ray (line), the intersections with zone boundaries, the path length between the intersections, the total number of zones on a track, the zone and medium numbers, and the intersections with the outer surface, which parameters may be used in the CTP code to calculate collision/transfer probability and cross-section values. 28 figures.

  18. Quantum optimal control of isomerization dynamics of a one-dimensional reaction-path model dominated by a competing dissociation channel

    NASA Astrophysics Data System (ADS)

    Kurosaki, Yuzuru; Artamonov, Maxim; Ho, Tak-San; Rabitz, Herschel

    2009-07-01

    Quantum wave packet optimal control simulations with intense laser pulses have been carried out for studying molecular isomerization dynamics of a one-dimensional (1D) reaction-path model involving a dominant competing dissociation channel. The 1D intrinsic reaction coordinate model mimics the ozone open→cyclic ring isomerization along the minimum energy path that successively connects the ozone cyclic ring minimum, the transition state (TS), the open (global) minimum, and the dissociative O2+O asymptote on the O3 ground-state A1' potential energy surface. Energetically, the cyclic ring isomer, the TS barrier, and the O2+O dissociation channel lie at ˜0.05, ˜0.086, and ˜0.037 hartree above the open isomer, respectively. The molecular orientation of the modeled ozone is held constant with respect to the laser-field polarization and several optimal fields are found that all produce nearly perfect isomerization. The optimal control fields are characterized by distinctive high temporal peaks as well as low frequency components, thereby enabling abrupt transfer of the time-dependent wave packet over the TS from the open minimum to the targeted ring minimum. The quick transition of the ozone wave packet avoids detrimental leakage into the competing O2+O channel. It is possible to obtain weaker optimal laser fields, resulting in slower transfer of the wave packets over the TS, when a reduced level of isomerization is satisfactory.

  19. FILM-30: A Heat Transfer Properties Code for Water Coolant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MARSHALL, THERON D.

    2001-02-01

    A FORTRAN computer code has been written to calculate the heat transfer properties at the wetted perimeter of a coolant channel when provided the bulk water conditions. This computer code is titled FILM-30 and the code calculates its heat transfer properties by using the following correlations: (1) Sieder-Tate: forced convection, (2) Bergles-Rohsenow: onset to nucleate boiling, (3) Bergles-Rohsenow: partially developed nucleate boiling, (4) Araki: fully developed nucleate boiling, (5) Tong-75: critical heat flux (CHF), and (6) Marshall-98: transition boiling. FILM-30 produces output files that provide the heat flux and heat transfer coefficient at the wetted perimeter as a function of temperature. To validate FILM-30, the calculated heat transfer properties were used in finite element analyses to predict internal temperatures for a water-cooled copper mockup under one-sided heating from a rastered electron beam. These predicted temperatures were compared with the measured temperatures from the author's 1994 and 1998 heat transfer experiments. There was excellent agreement between the predicted and experimentally measured temperatures, which confirmed the accuracy of FILM-30 within the experimental range of the tests. FILM-30 can accurately predict the CHF and transition boiling regimes, which is an important advantage over current heat transfer codes. Consequently, FILM-30 is ideal for predicting heat transfer properties for applications that feature high heat fluxes produced by one-sided heating.
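
    The first correlation in the list, Sieder-Tate for fully developed turbulent forced convection, is a standard closed form: Nu = 0.027 Re^0.8 Pr^(1/3) (mu_bulk/mu_wall)^0.14, with h = Nu k / D. The sketch below is a generic implementation of that textbook correlation, not FILM-30 code, and the water property values in the example are illustrative round numbers.

```python
def sieder_tate_h(re, pr, k, d, mu_bulk, mu_wall):
    """Heat transfer coefficient (W/m^2-K) from the Sieder-Tate correlation."""
    if re < 1e4:
        raise ValueError("Sieder-Tate assumes fully turbulent flow (Re > ~10^4)")
    nu = 0.027 * re**0.8 * pr**(1.0 / 3.0) * (mu_bulk / mu_wall)**0.14
    return nu * k / d   # h = Nu * k / D

# Water at roughly 40 C in a 10 mm channel (illustrative property values)
h = sieder_tate_h(re=5e4, pr=4.3, k=0.63, d=0.01, mu_bulk=6.5e-4, mu_wall=4.0e-4)
```

    The viscosity-ratio factor is what distinguishes Sieder-Tate from Dittus-Boelter: it corrects for the wall being hotter (less viscous water) than the bulk, which is exactly the one-sided heating situation FILM-30 targets.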

  20. Full-wave and ray-based modeling of cross-beam energy transfer between laser beams with distributed phase plates and polarization smoothing

    DOE PAGES

    Follett, R. K.; Edgell, D. H.; Froula, D. H.; ...

    2017-10-20

    Radiation-hydrodynamic simulations of inertial confinement fusion (ICF) experiments rely on ray-based cross-beam energy transfer (CBET) models to calculate laser energy deposition. The ray-based models assume locally plane-wave laser beams and polarization-averaged incoherence between laser speckles for beams with polarization smoothing. The impact of beam speckle and polarization smoothing on CBET is studied using the 3-D wave-based laser-plasma-interaction code LPSE. The results indicate that ray-based models underpredict CBET when the assumption of spatially averaged longitudinal incoherence across the CBET interaction region is violated. A model for CBET between linearly polarized speckled beams is presented that uses ray tracing to solve for the real speckle pattern of the unperturbed laser beams within the eikonal approximation and gives excellent agreement with the wave-based calculations. Lastly, OMEGA-scale 2-D LPSE calculations using ICF-relevant plasma conditions suggest that the impact of beam speckle on laser absorption calculations in ICF implosions is small (< 1%).

  1. Full-wave and ray-based modeling of cross-beam energy transfer between laser beams with distributed phase plates and polarization smoothing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Follett, R. K.; Edgell, D. H.; Froula, D. H.

    Radiation-hydrodynamic simulations of inertial confinement fusion (ICF) experiments rely on ray-based cross-beam energy transfer (CBET) models to calculate laser energy deposition. The ray-based models assume locally plane-wave laser beams and polarization-averaged incoherence between laser speckles for beams with polarization smoothing. The impact of beam speckle and polarization smoothing on CBET is studied using the 3-D wave-based laser-plasma-interaction code LPSE. The results indicate that ray-based models underpredict CBET when the assumption of spatially averaged longitudinal incoherence across the CBET interaction region is violated. A model for CBET between linearly polarized speckled beams is presented that uses ray tracing to solve for the real speckle pattern of the unperturbed laser beams within the eikonal approximation and gives excellent agreement with the wave-based calculations. Lastly, OMEGA-scale 2-D LPSE calculations using ICF-relevant plasma conditions suggest that the impact of beam speckle on laser absorption calculations in ICF implosions is small (< 1%).

  2. Detailed analysis of an optimized FPP-based 3D imaging system

    NASA Astrophysics Data System (ADS)

    Tran, Dat; Thai, Anh; Duong, Kiet; Nguyen, Thanh; Nehmetallah, Georges

    2016-05-01

    In this paper, we present a detailed analysis and a step-by-step implementation of an optimized fringe projection profilometry (FPP) based 3D shape measurement system. First, we propose a multi-frequency and multi-phase-shifting sinusoidal fringe pattern reconstruction approach to increase the accuracy and sensitivity of the system. Second, phase error compensation caused by the nonlinear transfer function of the projector and camera is performed through polynomial approximation. Third, phase unwrapping is performed using spatial and temporal techniques, and the tradeoff between processing speed and high accuracy is discussed in detail. Fourth, generalized camera and system calibration are developed for phase to real-world coordinate transformation. The calibration coefficients are estimated accurately using a reference plane and several gauge blocks with precisely known heights and by employing a nonlinear least squares fitting method. Fifth, a texture is attached to the height profile by registering a 2D photograph to the 3D height map. The last step is to perform 3D image fusion and registration using an iterative closest point (ICP) algorithm for a full field of view reconstruction. The system is experimentally constructed using compact, portable, and low-cost off-the-shelf components. A MATLAB® based GUI is developed to control and synchronize the whole system.
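
    The core N-step phase-shifting computation underlying FPP is standard: given N fringe images shifted by 2*pi/N, the wrapped phase at each pixel follows from a least-squares fit of the sinusoid. The single-pixel sketch below illustrates that textbook step only (not this paper's full multi-frequency pipeline), and the synthesized intensities are invented numbers.

```python
import math

def wrapped_phase(intensities):
    """Recover wrapped phase from N phase-shifted fringe intensities at one pixel."""
    n = len(intensities)
    num = sum(i * math.sin(2 * math.pi * k / n) for k, i in enumerate(intensities))
    den = sum(i * math.cos(2 * math.pi * k / n) for k, i in enumerate(intensities))
    return math.atan2(-num, den)      # wrapped to (-pi, pi]

# Synthesize a pixel: background 0.5, modulation 0.3, true phase 1.2 rad, N = 4
true_phase = 1.2
frames = [0.5 + 0.3 * math.cos(true_phase + 2 * math.pi * k / 4) for k in range(4)]
recovered = wrapped_phase(frames)
```

    The background and modulation terms cancel in the sums, so the wrapped phase is recovered exactly; the spatial/temporal unwrapping discussed in the abstract then removes the 2*pi ambiguities.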

  3. Convection and chemistry effects in CVD: A 3-D analysis for silicon deposition

    NASA Technical Reports Server (NTRS)

    Gokoglu, S. A.; Kuczmarski, M. A.; Tsui, P.; Chait, A.

    1989-01-01

    The computational fluid dynamics code FLUENT has been adopted to simulate the entire rectangular-channel-like (3-D) geometry of an experimental CVD reactor designed for Si deposition. The code incorporated the effects of both homogeneous (gas phase) and heterogeneous (surface) chemistry with finite reaction rates of important species existing in silane dissociation. The experiments were designed to elucidate the effects of gravitationally-induced buoyancy-driven convection flows on the quality of the grown Si films. This goal is accomplished by contrasting the results obtained from a carrier gas mixture of H2/Ar with the ones obtained from the same molar mixture ratio of H2/He, without any accompanying change in the chemistry. Computationally, these cases are simulated in the terrestrial gravitational field and in the absence of gravity. The numerical results compare favorably with experiments. Powerful computational tools provide invaluable insights into the complex physicochemical phenomena taking place in CVD reactors. Such information is essential for the improved design and optimization of future CVD reactors.

  4. Fluid management technology: Liquid slosh dynamics and control

    NASA Technical Reports Server (NTRS)

    Dodge, Franklin T.; Green, Steven T.; Kana, Daniel D.

    1991-01-01

    Flight experiments were defined for the Cryogenic On-Orbit Liquid Depot Storage, Acquisition and Transfer Satellite (COLD-SAT) test bed satellite and the Shuttle middeck to help establish the influence of the gravitational environment on liquid slosh dynamics and control. Several analytical and experimental studies were also conducted to support the experiments and to help understand the anticipated results. Both FLOW-3D and NASA-VOF3D computer codes were utilized to simulate low Bond number, small amplitude sloshing, for which the motions are dominated by surface forces; it was found that neither code provided a satisfactory simulation. Thus, a new analysis of low Bond number sloshing was formulated, using an integral minimization technique that will allow the assumptions made about surface physics phenomena to be modified easily when better knowledge becomes available from flight experiments. Several examples were computed by the innovative use of a finite-element structural code. An existing spherical-pendulum analogy of nonlinear, rotary sloshing was also modified for easier use and extended to low-gravity conditions. Laboratory experiments were conducted to determine the requirements for liquid-vapor interface sensors as a method of resolving liquid surface motions in flight experiments. The feasibility of measuring the small slosh forces anticipated in flight experiments was also investigated.

  5. Fluid management technology: Liquid slosh dynamics and control

    NASA Astrophysics Data System (ADS)

    Dodge, Franklin T.; Green, Steven T.; Kana, Daniel D.

    1991-11-01

    Flight experiments were defined for the Cryogenic On-Orbit Liquid Depot Storage, Acquisition and Transfer Satellite (COLD-SAT) test bed satellite and the Shuttle middeck to help establish the influence of the gravitational environment on liquid slosh dynamics and control. Several analytical and experimental studies were also conducted to support the experiments and to help understand the anticipated results. Both FLOW-3D and NASA-VOF3D computer codes were utilized to simulate low Bond number, small amplitude sloshing, for which the motions are dominated by surface forces; it was found that neither code provided a satisfactory simulation. Thus, a new analysis of low Bond number sloshing was formulated, using an integral minimization technique that will allow the assumptions made about surface physics phenomena to be modified easily when better knowledge becomes available from flight experiments. Several examples were computed by the innovative use of a finite-element structural code. An existing spherical-pendulum analogy of nonlinear, rotary sloshing was also modified for easier use and extended to low-gravity conditions. Laboratory experiments were conducted to determine the requirements for liquid-vapor interface sensors as a method of resolving liquid surface motions in flight experiments. The feasibility of measuring the small slosh forces anticipated in flight experiments was also investigated.

  6. Monte Carlo MCNP-4B-based absorbed dose distribution estimates for patient-specific dosimetry.

    PubMed

    Yoriyaz, H; Stabin, M G; dos Santos, A

    2001-04-01

    This study was intended to verify the capability of the Monte Carlo MCNP-4B code to evaluate spatial dose distributions based on information gathered from CT or SPECT. A new three-dimensional (3D) dose calculation approach for internal emitter use in radioimmunotherapy (RIT) was developed using the Monte Carlo MCNP-4B code as the photon and electron transport engine. It was shown that the MCNP-4B computer code can be used with voxel-based anatomic and physiologic data to provide 3D dose distributions. This study showed that the MCNP-4B code can be used to develop a treatment planning system that will provide such information in a timely manner, if dose reporting is suitably optimized. If each organ is divided into small regions where the average energy deposition is calculated, with a typical volume of 0.4 cm(3), regional dose distributions can be provided with reasonable central processing unit times (on the order of 12-24 h on a 200-MHz personal computer or modest workstation). Further efforts to provide semiautomated region identification (segmentation) and improvement of marrow dose calculations are needed to supply a complete system for RIT. It is envisioned that all such efforts will continue to develop and that internal dose calculations may soon be brought to a similar level of accuracy, detail, and robustness as is commonly expected in external dose treatment planning. For this study we developed a code with a user-friendly interface that works on several nuclear medicine imaging platforms and provides timely patient-specific dose information to the physician and medical physicist. Future therapy with internal emitters should use a 3D dose calculation approach, which represents a significant advance over dose information provided by the standard geometric phantoms used for more than 20 y (which permit reporting of only average organ doses for certain standardized individuals).

  7. Computerized Dental Comparison: A Critical Review of Dental Coding and Ranking Algorithms Used in Victim Identification.

    PubMed

    Adams, Bradley J; Aschheim, Kenneth W

    2016-01-01

    Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of best possible matches. This study provides a comparison of the most commonly used conventional coding and sorting algorithms in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and utilizes an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3, which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.
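
    A hypothetical sketch of the coding-and-ranking idea: each tooth carries one of a handful of basic codes, and candidate records are ranked largely by the percentage of mutually charted teeth whose codes match. The code letters and scoring below are illustrative inventions; they are not WinID3's codes or the study's exact algorithm, and the example records are shortened for readability.

```python
# Illustrative basic codes, e.g. V = virgin, M = missing, C = crown,
# R = restoration, X = extracted, U = unknown/uncharted, I = implant.
CODES = {"V", "M", "C", "R", "X", "U", "I"}

def match_percentage(antemortem, postmortem):
    """Percent of mutually charted teeth with identical codes."""
    comparable = [(a, p) for a, p in zip(antemortem, postmortem)
                  if a != "U" and p != "U"]   # skip unknown/uncharted teeth
    if not comparable:
        return 0.0
    hits = sum(1 for a, p in comparable if a == p)
    return 100.0 * hits / len(comparable)

def rank_candidates(postmortem, antemortem_db):
    """Return candidate IDs sorted best-first by match percentage."""
    return sorted(antemortem_db,
                  key=lambda cid: match_percentage(antemortem_db[cid], postmortem),
                  reverse=True)
```

    Skipping uncharted teeth means a fragmentary postmortem record is scored only on what was actually recovered, which is why percentage-of-matches scoring copes well with typical disaster scenarios.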

  8. Numerical study of the 3-D effect on FEL performance and its application to the APS LEUTL FEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chae, Y.C.

    A Low-Energy Undulator Test Line (LEUTL) is under construction at the Advanced Photon Source (APS). In LEUTL periodic focusing is provided by external quadrupoles. This results in an elliptical beam with its betatron oscillation envelope varying along the undulators. The free-electron laser (FEL) interaction with such a beam will exhibit truly 3-D effects. Thus the investigation of 3-D effects is important in optimizing the FEL performance. The programs GINGER and TDA3D, coupled with theoretically known facts, have been used for this purpose. Both programs are fully 3-D in moving the particle, but model the interaction between particles and axially symmetric electromagnetic waves. Even though TDA3D can include a few azimuthal modes in the interaction, it is still not a fully 3-D FEL code. However, they show that these 2-D programs can still be used for an elliptical beam whose aspect ratio is within certain limits. The author presents numerical results of FEL performance for the circular beam, the elliptical beam, and finally for the beam in the realistic LEUTL lattice.

  9. On the optimum signal constellation design for high-speed optical transport networks.

    PubMed

    Liu, Tao; Djordjevic, Ivan B

    2012-08-27

    In this paper, we first describe an optimum signal constellation design algorithm, optimum in the MMSE sense, called MMSE-OSCD, for a channel-capacity-achieving source distribution. Secondly, we introduce a feedback channel capacity inspired optimum signal constellation design (FCC-OSCD) to further improve the performance of MMSE-OSCD, inspired by the fact that feedback channel capacity is higher than that of systems without feedback. The constellations obtained by FCC-OSCD are, however, OSNR-dependent. The optimization is jointly performed together with regular quasi-cyclic low-density parity-check (LDPC) code design. The coded-modulation scheme obtained in this way, in combination with polarization-multiplexing, is suitable as an enabling technology for both 400 Gb/s and multi-Tb/s optical transport. Using a large-girth LDPC code, we demonstrate by Monte Carlo simulations that a 32-ary signal constellation obtained by FCC-OSCD outperforms the previously proposed optimized 32-ary CIPQ signal constellation by 0.8 dB at a BER of 10(-7). On the other hand, the LDPC-coded 16-ary FCC-OSCD outperforms 16-QAM by 1.15 dB at the same BER.

  10. Construction method of QC-LDPC codes based on multiplicative group of finite field in optical communication

    NASA Astrophysics Data System (ADS)

    Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui

    2016-09-01

    In order to meet the needs of high-speed development of optical communication systems, a construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity check matrix of the code constructed by this method has no cycle of length 4, ensuring that the obtained code has good distance properties. Simulation results show that when the bit error rate (BER) is 10(-6), in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3 780, 3 540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB respectively compared with those of the RS(255, 239) code in ITU-T G.975 and the LDPC(32 640, 30 592) code in ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC(3 780, 3 540) code is respectively 0.2 dB and 0.4 dB higher compared with those of the SG-QC-LDPC(3 780, 3 540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3 780, 3 540) code based on two arbitrary sets of a finite field. Thus, the proposed QC-LDPC(3 780, 3 540) code can be well applied in optical communication systems.
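
    The general QC-LDPC recipe can be sketched as follows: build an exponent matrix from elements of a multiplicative group of GF(p), verify girth > 4 with the standard 2x2-submatrix (alternating-sum) condition, and expand each exponent into a circulant permutation block. The specific exponent formula below is an illustrative assumption, not the paper's construction, and the parameters are tiny compared with the (3 780, 3 540) code.

```python
def exponent_matrix(rows, cols, p, a=2):
    """Hypothetical exponent rule e(i, j) = a^i * (j+1) mod p over GF(p)."""
    return [[(pow(a, i, p) * (j + 1)) % p for j in range(cols)] for i in range(rows)]

def has_no_4_cycles(e, lift):
    """Girth > 4 iff every 2x2 submatrix has a nonzero alternating sum mod lift."""
    rows, cols = len(e), len(e[0])
    for i1 in range(rows):
        for i2 in range(i1 + 1, rows):
            for j1 in range(cols):
                for j2 in range(j1 + 1, cols):
                    if (e[i1][j1] - e[i1][j2] - e[i2][j1] + e[i2][j2]) % lift == 0:
                        return False
    return True

def expand(e, lift):
    """Expand the exponent matrix into a binary matrix of circulant permutations."""
    rows, cols = len(e), len(e[0])
    h = [[0] * (cols * lift) for _ in range(rows * lift)]
    for i in range(rows):
        for j in range(cols):
            for r in range(lift):
                h[i * lift + r][j * lift + (r + e[i][j]) % lift] = 1
    return h
```

    The 2x2 condition is the standard necessary-and-sufficient test for the absence of length-4 Tanner-graph cycles in codes built from circulant permutation blocks, which is the property the abstract claims for its construction.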

  11. Numerical optimization of a picosecond pulse driven Ni-like Nb x-ray laser at 20.3 nm

    NASA Astrophysics Data System (ADS)

    Lu, X.; Zhong, J. Y.; Li, Y. J.; Zhang, J.

    2003-07-01

    Detailed simulations of a Ni-like Nb x-ray laser pumped by a nanosecond prepulse followed by a picosecond main pulse are presented. The atomic physics data are obtained using the Cowan code [R. D. Cowan, The Theory of Atomic Structure and Spectra (University of California Press, Berkeley, CA, 1981)]. The optimization calculations are performed in terms of the intensity of the prepulse and the time delay between the prepulse and the main pulse. A high gain of over 150 cm⁻¹ is obtained for the optimized drive pulse configuration. The ray-tracing calculations suggest that the total pump energy for a saturated x-ray laser can be reduced to less than 1 J.

  12. VERA and VERA-EDU 3.5 Release Notes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sieger, Matt; Salko, Robert K.; Kochunas, Brendan M.

    The Virtual Environment for Reactor Applications (VERA) components included in this distribution comprise selected computational tools and supporting infrastructure that solve neutronics, thermal-hydraulics, fuel performance, and coupled neutronics-thermal-hydraulics problems. The infrastructure components provide a simplified common user input capability and support the physics integration with data transfer and coupled-physics iterative solution algorithms. Neutronics analysis can be performed for 2D lattice, 2D core, and 3D core problems for pressurized water reactor geometries to calculate criticality and pin-by-pin fission rate distributions for input fuel compositions. MPACT uses the Method of Characteristics (MOC) transport approach for 2D problems. For 3D problems, MPACT uses the 2D/1D method, which applies 2D MOC in the radial plane and diffusion or SPn in the axial direction. MPACT includes integrated cross-section capabilities that provide problem-specific cross sections generated using the subgroup methodology. The code can execute both 2D and 3D problems in parallel to reduce overall run time. A thermal-hydraulics capability is provided with CTF (an updated version of COBRA-TF) that allows thermal-hydraulics analyses for single and multiple assemblies using the simplified VERA common input. This distribution also includes coupled neutronics/thermal-hydraulics capabilities to allow calculations using MPACT coupled with CTF. The VERA fuel rod performance component BISON calculates, on a 2D or 3D basis, fuel rod temperature, fuel rod internal pressure, free gas volume, clad integrity, and fuel rod waterside diameter. These capabilities allow simulation of power cycling, fuel conditioning and deconditioning, high-burnup performance, power uprate scoping studies, and accident performance.
Input/Output capabilities include the VERA Common Input (VERAIn) script, which converts the ASCII common input file to the intermediate XML used to drive all of the physics codes in the VERA Core Simulator (VERA-CS). VERA component codes either read the VERA XML format directly or provide a preprocessor that converts the XML into native input. VERAView is an interactive graphical interface for the visualization and engineering analysis of output data from VERA. The Python-based software is easy to install and intuitive to use, and provides instantaneous 2D and 3D images, 1D plots, and alphanumeric data from VERA multi-physics simulations. Testing within CASL has focused primarily on Westinghouse four-loop reactor geometries and conditions, with example problems included in the distribution.

  13. Turbulent AGN tori .

    NASA Astrophysics Data System (ADS)

    Schartmann, M.; Meisenheimer, K.; Klahr, H.; Camenzind, M.; Wolf, S.; Henning, Th.

    Recently, the MID-infrared Interferometric instrument (MIDI) at the VLTI has shown that the dust tori in two nearby Seyfert galaxies, NGC 1068 and the Circinus galaxy, are geometrically thick and can be well described by a thin, warm central disk surrounded by a colder, fluffy torus component. By carrying out hydrodynamical simulations with the TRAMP code \citep{schartmann_Klahr_99}, we follow the evolution of a young nuclear star cluster in terms of discrete mass loss and energy injection from stellar processes. This naturally leads to a filamentary large-scale torus component, in which cold gas is able to flow radially inwards. The filaments open out into a dense and very turbulent disk structure. In a post-processing step, we calculate observable quantities such as spectral energy distributions and images with the 3D radiative transfer code MC3D \citep{schartmann_Wolf_03}. Good agreement with the data is found, owing to the existence of almost dust-free lines of sight through the large-scale component and the large column densities produced by the dense disk.

  14. The astrophysical S-factor of the direct 18O(p, γ)19F capture by the ANC method

    NASA Astrophysics Data System (ADS)

    Burjan, V.; Hons, Z.; Kroha, V.; Mrázek, J.; Piskoř, Š.; Mukhamedzhanov, A. M.; Trache, L.; Tribble, R. E.; La Cognata, M.; Lamia, L.; Pizzone, G. R.; Romano, S.; Spitaleri, C.; Tumino, A.

    2018-01-01

    We attempted to determine the astrophysical S-factor of the direct part of the 18O(p, γ)19F capture by the indirect method of asymptotic normalization coefficients (ANC). We measured the differential cross section of the transfer reaction 18O(3He, d)19F at a 3He energy of 24.6 MeV. The measurement was carried out on the cyclotron of the NPI in Řež, Czech Republic, with a gas target of high-purity 18O (99.9%). The reaction products were detected by eight ΔE-E telescopes composed of thin and thick silicon surface-barrier detectors. The optical-model parameters for the entrance channel were deduced by means of the code ECIS, and the analysis of transfer reactions to 12 levels of the 19F nucleus up to 8.014 MeV was performed with the code FRESCO. The deduced ANCs were then used to determine the direct contribution to the 18O(p, γ)19F capture process and were compared with the mutually different results of two earlier works.

  15. DYNA3D: A computer code for crashworthiness engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hallquist, J.O.; Benson, D.J.

    1986-09-01

    A finite element program with crashworthiness applications has been developed at LLNL. DYNA3D, an explicit, fully vectorized, finite deformation structural dynamics program, has four capabilities that are critical for efficient and realistic modeling of crash phenomena: (1) fully optimized nonlinear solid, shell, and beam elements for representing a structure; (2) a broad range of constitutive models for simulating material behavior; (3) sophisticated contact algorithms for impact interactions; and (4) a rigid body capability to represent the bodies away from the impact region at greatly reduced cost without sacrificing accuracy in the momentum calculations. The basic methodologies of the program are briefly presented along with several crashworthiness calculations. The efficiencies of the Hughes-Liu and Belytschko-Tsay shell formulations are considered.

  16. The effects of differential flow between rational surfaces on toroidal resistive MHD modes

    NASA Astrophysics Data System (ADS)

    Brennan, Dylan; Halfmoon, Michael; Rhodes, Dov; Cole, Andrew; Okabayashi, Michio; Paz-Soldan, Carlos; Finn, John

    2016-10-01

    Differential flow between resonant surfaces can strongly affect the coupling and penetration of resonant components of resistive modes, yet the mechanism is not fully understood. This study focuses on the evolution of tearing instabilities and the penetration of imposed resonant magnetic perturbations (RMPs) in tokamak configurations relevant to DIII-D and ITER, including equilibrium flow shear. It has been observed on DIII-D that the onset of tearing instabilities leading to disruption is often coincident with a loss of differential rotation between a higher-m/n tearing surface (normally the 4/3 or 3/2) and a lower-m/n tearing surface (normally the 2/1). Imposing RMPs can strongly affect this coupling and the torques between the modes. We apply the nonlinear 3D resistive magnetohydrodynamic (MHD) code NIMROD to study the mechanisms by which these couplings occur. Reduced MHD analyses are applied to study the effects of differential flow between resonant surfaces in the simulations. Interaction between resonant modes can cause significant energy transfer between them, effectively stabilizing one mode while the other grows. The flow mitigates this transfer, but also affects the individual modes. The combination of these effects determines the nonlinear outcome. Supported by US DOE Grants DE-SC0014005 and DE-SC0014119.

  17. Selective bond breaking mediated by state specific vibrational excitation in model HOD molecule through optimized femtosecond IR pulse: a simulated annealing based approach.

    PubMed

    Shandilya, Bhavesh K; Sen, Shrabani; Sahoo, Tapas; Talukder, Srijeeta; Chaudhury, Pinaki; Adhikari, Satrajit

    2013-07-21

    The selective control of O-H/O-D bond dissociation in a reduced-dimensionality model of the HOD molecule has been explored through IR+UV femtosecond pulses. The IR pulse is optimized using a simulated annealing stochastic approach to maximize the population of a desired low-quanta vibrational state. Since those vibrational wavefunctions of the ground electronic state are preferentially localized along either the O-H or the O-D mode, the femtosecond UV pulse is used only to transfer the vibrationally excited molecule to the repulsive upper surface to cleave the specific bond, O-H or O-D. While transferring from the ground electronic state to the repulsive one, optimization of the UV pulse is not required except in specific cases. The results are analyzed in terms of the time-integrated flux along with contours of the time evolution of the probability density on the excited potential energy surface. After preferential excitation from the |0, 0⟩ vibrational level of the ground electronic state (|m, n⟩ stands for the state having m and n quanta of excitation in the O-H and O-D modes, respectively) to a specific low-quanta vibrational state (|1, 0⟩, |0, 1⟩, |2, 0⟩, or |0, 2⟩) using the optimized IR pulse, the dissociation probability of the O-D or O-H bond through the excited potential energy surface driven by the UV laser pulse is quite high, namely 88% (O-H; |1, 0⟩), 58% (O-D; |0, 1⟩), 85% (O-H; |2, 0⟩), or 59% (O-D; |0, 2⟩). Such selectivity of bond breaking by the UV pulse (optimized if required) together with the optimized IR pulse is encouraging compared with unoptimized pulses.
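
    The simulated annealing step of such a scheme is generic: perturb the pulse parameters, always accept uphill moves, and accept downhill moves with a Boltzmann probability that shrinks as the temperature cools. Below is a minimal sketch; the two-parameter `population` objective is a hypothetical smooth stand-in for the target-state population, which in the paper would come from a quantum dynamics propagation, and all parameter names and values are assumptions.

    ```python
    import math
    import random

    def anneal(objective, x0, step=0.1, t0=1.0, cooling=0.995, iters=4000, seed=0):
        """Generic simulated annealing: Gaussian perturbation of the
        parameter vector, Metropolis acceptance, geometric cooling."""
        rng = random.Random(seed)
        x, fx = list(x0), objective(x0)
        best, fbest = list(x), fx
        t = t0
        for _ in range(iters):
            y = [xi + rng.gauss(0.0, step) for xi in x]
            fy = objective(y)
            # accept uphill always; downhill with Boltzmann probability
            if fy > fx or rng.random() < math.exp((fy - fx) / t):
                x, fx = y, fy
                if fx > fbest:
                    best, fbest = list(x), fx
            t *= cooling
        return best, fbest

    # hypothetical stand-in for the target vibrational-state population as a
    # smooth function of pulse amplitude and carrier frequency
    def population(p):
        amp, freq = p
        return math.exp(-((amp - 0.8) ** 2 + (freq - 2.1) ** 2) / 4.0)

    best, fbest = anneal(population, [0.0, 0.0])
    ```

    The `best`/`fbest` bookkeeping keeps the best parameters ever visited, so the returned objective value can never be worse than the starting point even if the chain wanders late in the cooling schedule.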

  18. Representation of DNA sequences in genetic codon context with applications in exon and intron prediction.

    PubMed

    Yin, Changchuan

    2015-04-01

    To apply digital signal processing (DSP) methods to analyze DNA sequences, the sequences must first be mapped into numerical sequences. Effective numerical mappings of DNA sequences therefore play a key role in the effectiveness of DSP-based methods such as exon prediction. Despite numerous mappings of symbolic DNA sequences to numerical series, the existing mapping methods do not include the genetic coding features of DNA sequences. We present a novel numerical representation of DNA sequences using genetic codon context (GCC), in which the numerical values are optimized by simulated annealing to maximize the 3-periodicity signal-to-noise ratio (SNR). The optimized GCC representation is then applied to exon and intron prediction using the Short-Time Fourier Transform (STFT) approach. The results show that the GCC method enhances the SNR values of exon sequences and thus increases the accuracy of predicting protein-coding regions in genomes compared with the commonly used 4D binary representation. In addition, this study offers a novel way to reveal specific features of DNA sequences by optimizing numerical mappings of symbolic DNA sequences.
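
    The 3-periodicity SNR at the heart of this approach is the power-spectrum value at frequency N/3 relative to the average power of the mapped sequence. The sketch below computes it for a toy mapping; the mapping values are hypothetical placeholders (the paper's GCC method optimizes them by simulated annealing rather than fixing them a priori).

    ```python
    import numpy as np

    def snr_3periodicity(seq, mapping):
        """Map a DNA string to numbers and measure the power-spectrum peak
        at frequency N/3 relative to the average power — the 3-base coding
        signal exploited by DSP-based exon prediction."""
        x = np.array([mapping[c] for c in seq], dtype=float)
        X = np.fft.fft(x - x.mean())     # remove DC so the mean doesn't mask the peak
        power = np.abs(X) ** 2
        n = len(seq)
        return power[n // 3] / power[1:].mean()

    # hypothetical real-valued mapping; GCC would optimize these values
    mapping = {'A': 0.26, 'C': 0.21, 'G': 0.30, 'T': 0.23}

    coding_like = "ATG" * 60                       # strongly 3-periodic toy sequence
    rng = np.random.default_rng(0)
    shuffled = "".join(rng.permutation(list(coding_like)))

    snr_coding = snr_3periodicity(coding_like, mapping)
    snr_shuffled = snr_3periodicity(shuffled, mapping)
    ```

    For the perfectly 3-periodic toy sequence, essentially all non-DC power sits in the N/3 bin (and its mirror), so the SNR is large, while shuffling the same bases destroys the peak.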

  19. Insensitive Munitions Modeling Improvement Efforts

    DTIC Science & Technology

    2010-10-01

    ...codes most commonly used by munition designers are CTH and the SIERRA suite of codes produced by Sandia National Labs (SNL), and ALE3D produced by Lawrence Livermore National Laboratory (LLNL)... ALE3D, an LLNL-developed code, is also used by various DoD participants. It was, however, designed differently than either CTH or Sierra...

  20. Supercomputers for engineering analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goudreau, G.L.; Benson, D.J.; Hallquist, J.O.

    1986-07-01

    The Cray-1 and Cray X-MP/48 experience in engineering computations at the Lawrence Livermore National Laboratory is surveyed. The fully vectorized explicit DYNA and implicit NIKE finite element codes are discussed with respect to solid and structural mechanics. The main efficiencies for production analyses are currently obtained by simple CFT compiler exploitation of the pipeline architecture for inner do-loop optimization. Current development of outer-loop multitasking is also discussed. The applications emphasis is on 3D examples spanning earth penetrator loads analysis, target lethality assessment, and crashworthiness. The use of a vectorized large-deformation shell element in both DYNA and NIKE has substantially expanded 3D nonlinear capability. 25 refs., 7 figs.

  1. Hydrodynamic models of a cepheid atmosphere. Ph.D. Thesis - Maryland Univ., College Park

    NASA Technical Reports Server (NTRS)

    Karp, A. H.

    1974-01-01

    A method for including the solution of the transfer equation in a standard Henyey-type hydrodynamic code was developed. This modified Henyey method was used in an implicit hydrodynamic code to compute deep envelope models of a classical Cepheid with a period of 12 days, including radiative transfer effects in the optically thin zones. It was found that the velocity gradients in the atmosphere are not responsible for the large microturbulent velocities observed in Cepheids, but may be responsible for the occurrence of supersonic microturbulence. The splitting of the cores of the strong lines is due to shock-induced temperature inversions in the line-forming region. The adopted light, color, and velocity curves were used to study three methods frequently used to determine the mean radii of Cepheids. It is concluded that an accuracy of 10% is possible only if high-quality observations are used.

  2. NASA Tech Briefs, March 2009

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Topics covered include: Improved Instrument for Detecting Water and Ice in Soil; Real-Time Detection of Dust Devils from Pressure Readings; Determining Surface Roughness in Urban Areas Using Lidar Data; DSN Data Visualization Suite; Hamming and Accumulator Codes Concatenated with MPSK or QAM; Wide-Angle-Scanning Reflectarray Antennas Actuated by MEMS; Biasable Subharmonic Membrane Mixer for 520 to 600 GHz; Hardware Implementation of Serially Concatenated PPM Decoder; Symbolic Processing Combined with Model-Based Reasoning; Presentation Extensions of the SOAP; Spreadsheets for Analyzing and Optimizing Space Missions; Processing Ocean Images to Detect Large Drift Nets; Alternative Packaging for Back-Illuminated Imagers; Diamond Machining of an Off-Axis Biconic Aspherical Mirror; Laser Ablation Increases PEM/Catalyst Interfacial Area; Damage Detection and Self-Repair in Inflatable/Deployable Structures; Polyimide/Glass Composite High-Temperature Insulation; Nanocomposite Strain Gauges Having Small TCRs; Quick-Connect Windowed Non-Stick Penetrator Tips for Rapid Sampling; Modeling Unsteady Cavitation and Dynamic Loads in Turbopumps; Continuous-Flow System Produces Medical-Grade Water; Discrimination of Spore-Forming Bacilli Using spoIVA; nBn Infrared Detector Containing Graded Absorption Layer; Atomic References for Measuring Small Accelerations; Ultra-Broad-Band Optical Parametric Amplifier or Oscillator; Particle-Image Velocimeter Having Large Depth of Field; Enhancing SERS by Means of Supramolecular Charge Transfer; Improving 3D Wavelet-Based Compression of Hyperspectral Images; Improved Signal Chains for Readout of CMOS Imagers; SOI CMOS Imager with Suppression of Cross-Talk; Error-Rate Bounds for Coded PPM on a Poisson Channel; Biomorphic Multi-Agent Architecture for Persistent Computing; and Using Covariance Analysis to Assess Pointing Performance.

  3. Biomimetic Materials and Fabrication Approaches for Bone Tissue Engineering.

    PubMed

    Kim, Hwan D; Amirthalingam, Sivashanmugam; Kim, Seunghyun L; Lee, Seunghun S; Rangasamy, Jayakumar; Hwang, Nathaniel S

    2017-12-01

    Various strategies have been explored to overcome critically sized bone defects via bone tissue engineering approaches that incorporate biomimetic scaffolds. Biomimetic scaffolds may provide a novel platform for phenotypically stable tissue formation and stem cell differentiation. In recent years, osteoinductive and inorganic biomimetic scaffold materials have been optimized to offer an osteo-friendly microenvironment for the osteogenic commitment of stem cells. Furthermore, scaffold structures with a microarchitecture design similar to native bone tissue are necessary for successful bone tissue regeneration. For this reason, various methods for fabricating 3D porous structures have been developed. Innovative techniques, such as 3D printing methods, are currently being utilized for optimal host stem cell infiltration, vascularization, nutrient transfer, and stem cell differentiation. In this progress report, biomimetic materials and fabrication approaches that are currently being utilized for biomimetic scaffold design are reviewed. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Status of LANL Efforts to Effectively Use Sequoia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nystrom, William David

    2015-05-14

    Los Alamos National Laboratory (LANL) is currently working on three new production applications: VPIC, xRage, and Pagosa. VPIC was designed as a 3D relativistic, electromagnetic Particle-In-Cell code for plasma simulation. xRage is a 3D AMR-mesh, multi-physics hydro code. Pagosa is a 3D structured-mesh, multi-physics hydro code.

  5. Fast and Accurate Radiative Transfer Calculations Using Principal Component Analysis for (Exo-)Planetary Retrieval Models

    NASA Astrophysics Data System (ADS)

    Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.

    2015-12-01

    Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting the use of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are done only for those few optical states corresponding to the most important principal components, and correction factors are applied to approximate the radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for the major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models, and line-by-line RT models are performed for spectral radiances, spectral fluxes, and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work on which is under way.
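
    The core of the PCA speedup — extracting empirical orthogonal functions from a redundant bin of optical-property profiles and keeping only the leading components — can be sketched with an SVD. The synthetic "profiles" below are an assumed low-rank stand-in for binned layer optical properties, not data from UPCART.

    ```python
    import numpy as np

    # toy stand-in: a bin of 500 optical-property profiles over 40 layers,
    # generated from 3 underlying modes plus small noise (high redundancy)
    rng = np.random.default_rng(0)
    basis = rng.normal(size=(3, 40))
    weights = rng.normal(size=(500, 3))
    profiles = weights @ basis + 0.01 * rng.normal(size=(500, 40))

    # empirical orthogonal functions via SVD of the mean-removed ensemble
    mean = profiles.mean(axis=0)
    U, s, Vt = np.linalg.svd(profiles - mean, full_matrices=False)
    k = 3                                   # keep the leading principal components
    scores = U[:, :k] * s[:k]
    approx = mean + scores @ Vt[:k]

    # the expensive multiple-scattering solver would now be run only for the
    # few principal optical states, not for all 500 profiles
    rel_err = np.linalg.norm(profiles - approx) / np.linalg.norm(profiles)
    ```

    When the bin really is redundant, a handful of components reproduces every profile to well under a percent, which is what makes running the exact solver on only the principal states affordable.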

  6. Institute for High Heat Flux Removal (IHHFR). Phases I, II, and III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyd, Ronald D.

    2014-08-31

    The IHHFR focused on interdisciplinary applications related to high heat flux engineering issues and problems that arise as engineering systems are miniaturized, optimized, or required to deliver increased high heat flux performance. The work in the IHHFR focused on water as a coolant and includes: (1) the development, design, and construction of the high heat flux flow loop and facility; (2) test section development, design, and fabrication; and (3) single-side heat flux experiments to produce 2D boiling curves and 3D conjugate heat transfer measurements for single-side heated test sections. This work provides data for comparisons with previously developed and new single-side heated correlations and approaches that address the single-side heating effect on heat transfer. In addition, this work includes the addition of a single-side heated circular test section and a monoblock test section with a helical wire insert. Finally, the present work includes: (1) database expansion for the monoblock with a helical wire insert (only for the latter geometry), (2) prediction and verification using finite elements, (3) monoblock model and methodology development analyses, and (4) an alternate model development for a hypervapotron and related conjugate heat transfer controlling parameters.

  7. Final Technical Report for SBIR entitled Four-Dimensional Finite-Orbit-Width Fokker-Planck Code with Sources, for Neoclassical/Anomalous Transport Simulation of Ion and Electron Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey, R. W.; Petrov, Yu. V.

    2013-12-03

    Within the US Department of Energy/Office of Fusion Energy magnetic fusion research program, there is an important whole-plasma-modeling need for a radio-frequency/neutral-beam-injection (RF/NBI) transport-oriented finite-difference Fokker-Planck (FP) code with combined capabilities for 4D (2R2V) geometry near the fusion plasma periphery and computationally less demanding 3D (1R2V) bounce-averaged capabilities for plasma in the core of fusion devices. A proof-of-principle demonstration of this goal was carried out in research under Phase I of the SBIR award. Two DOE-sponsored codes, the CQL3D bounce-averaged Fokker-Planck code in which CompX has specialized, and the COGENT 4D, plasma-edge-oriented Fokker-Planck code constructed by Lawrence Livermore National Laboratory and Lawrence Berkeley Laboratory scientists, were coupled. Coupling was achieved by using CQL3D-calculated velocity distributions, including the energetic tail resulting from NBI, as boundary conditions for the COGENT code over the two-dimensional velocity space on a spatial interface (flux) surface at a given radius near the plasma periphery. The finite-orbit-width fast ions from the CQL3D distributions penetrated into the peripheral plasma modeled by the COGENT code. This combined code demonstrates the feasibility of the proposed 3D/4D code. By combining these codes, the greatest computational efficiency is achieved subject to present modeling needs in toroidally symmetric magnetic fusion devices. The more efficient 3D code can be used in its regions of applicability, coupled to the more computationally demanding 4D code in higher-collisionality edge plasma regions where that extended capability is necessary for an accurate representation of the plasma. A more efficient code leads to greater use and utility of the model. An ancillary aim of the project is to make the combined 3D/4D code user friendly.
Achievement of full coupling of these two Fokker-Planck codes will advance computational modeling of plasma devices important to the USDOE magnetic fusion energy program, in particular the DIII-D tokamak at General Atomics, San Diego, the NSTX spherical tokamak at Princeton, New Jersey, and the MST reversed-field pinch in Madison, Wisconsin. Validation studies of the code against experiments will improve understanding of physics important for magnetic fusion and will increase our design capabilities for achieving the goals of the International Tokamak Experimental Reactor (ITER) project, in which the US is a participant and which seeks to demonstrate at least a factor of five in fusion power production divided by input power.

  8. Space-time adaptive solution of inverse problems with the discrete adjoint method

    NASA Astrophysics Data System (ADS)

    Alexe, Mihai; Sandu, Adrian

    2014-08-01

    This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time dependent, adaptive grid, adaptive step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The space-time adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the inter-grid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided through a multi-level optimization strategy using discontinuous Galerkin (DG) hp-adaptive discretizations paired with Runge-Kutta (RK) time integration. We extend the concept of dual (adjoint) consistency to space-time RK-DG discretizations, which are then shown to be well suited for the adaptive solution of time-dependent inverse problems. Furthermore, we prove that DG mesh transfer operators on general meshes are also dual consistent. This allows the simultaneous derivation of the discrete adjoint for both the numerical solver and the mesh transfer logic with an automatic code generation mechanism such as algorithmic differentiation (AD), potentially speeding up development of large-scale simulation codes. The theoretical analysis is supported by numerical results reported for a two-dimensional non-stationary inverse problem.
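
    The defining property of a discrete adjoint — differentiating the numerical scheme itself, so the gradient is consistent with the discrete solution to machine precision — can be shown on a one-step scheme. The sketch below hand-derives the adjoint of forward Euler for dx/dt = −a·x with cost J = x_N², and verifies it against finite differences; it is a minimal illustration of the idea, not the paper's DG/Runge-Kutta framework.

    ```python
    def forward(a, x0=1.0, dt=0.01, n=100):
        """Forward Euler for dx/dt = -a*x; return the full trajectory,
        which the adjoint sweep needs."""
        xs = [x0]
        for _ in range(n):
            xs.append(xs[-1] * (1.0 - a * dt))
        return xs

    def adjoint_gradient(a, x0=1.0, dt=0.01, n=100):
        """Discrete adjoint of the scheme above for J = x_N^2.
        Each adjoint step is the transpose of the linearized forward step,
        so dJ/da matches the discrete scheme exactly."""
        xs = forward(a, x0, dt, n)
        lam = 2.0 * xs[-1]                    # dJ/dx_N seeds the adjoint
        grad = 0.0
        for k in range(n - 1, -1, -1):
            grad += lam * (-dt * xs[k])       # d x_{k+1} / d a contribution
            lam *= (1.0 - a * dt)             # transpose of d x_{k+1} / d x_k
        return grad

    a = 0.7
    g_adj = adjoint_gradient(a)
    eps = 1e-6
    g_fd = (forward(a + eps)[-1] ** 2 - forward(a - eps)[-1] ** 2) / (2 * eps)
    ```

    In practice, as the abstract notes, this reverse sweep is generated mechanically by algorithmic differentiation rather than derived by hand; the hand derivation just makes the "transpose of the forward step" structure visible.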

  9. 3D Space Radiation Transport in a Shielded ICRU Tissue Sphere

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2014-01-01

    A computationally efficient 3DHZETRN code capable of simulating High Charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation was recently developed for a simple homogeneous shield object. Monte Carlo benchmarks were used to verify the methodology in slab and spherical geometry, and the 3D corrections were shown to provide significant improvement over the straight-ahead approximation in some cases. In the present report, the new algorithms with well-defined convergence criteria are extended to inhomogeneous media within a shielded tissue slab and a shielded tissue sphere and tested against Monte Carlo simulation to verify the solution methods. The 3D corrections are again found to more accurately describe the neutron and light ion fluence spectra as compared to the straight-ahead approximation. These computationally efficient methods provide a basis for software capable of space shield analysis and optimization.

  10. Direct G-code manipulation for 3D material weaving

    NASA Astrophysics Data System (ADS)

    Koda, S.; Tanaka, H.

    2017-04-01

    The process of conventional 3D printing begins by first building a 3D model, then converting the model to G-code via slicer software, feeding the G-code to the printer, and finally starting the print. The simplest and most popular 3D printing technique is Fused Deposition Modeling. In this method, however, the printing path the printer head can take is restricted by the G-code, so printed 3D models with complex patterns have structural errors such as holes or gaps between the printed material lines. In addition, the structural density and the material's position in the printed model are difficult to control. We implemented a G-code editor, Fabrix, for making more precise and functional printed models with both single and multiple materials. Models with different stiffness are fabricated by controlling the printing density of the filament materials with our method. In addition, multi-material 3D printing offers the possibility of expanding the achievable physical properties through material combination and G-code editing. These results demonstrate a new printing method that enables more creative and functional 3D printing techniques.
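
    Emitting G-code directly, rather than accepting a slicer's fixed paths, is what gives this kind of density control. The sketch below generates a back-and-forth rectilinear infill whose density is set by a `spacing` parameter; it is a hypothetical minimal illustration, not the paper's Fabrix editor, and the G1 move format follows common FDM firmware conventions.

    ```python
    def infill_gcode(width, height, spacing, feed=1200, z=0.2):
        """Emit a back-and-forth rectilinear infill as raw G-code lines.
        Varying `spacing` changes the printed density directly — the kind
        of control a slicer's fixed path planning does not expose."""
        lines = [f"G1 Z{z:.2f} F{feed}"]          # move to the layer height
        x, direction = 0.0, 1
        while x <= width:
            y0, y1 = (0.0, height) if direction > 0 else (height, 0.0)
            lines.append(f"G1 X{x:.2f} Y{y0:.2f} F{feed}")        # travel to pass start
            lines.append(f"G1 X{x:.2f} Y{y1:.2f} F{feed} E1")     # extrude along the pass
            x += spacing
            direction = -direction
        return lines

    dense = infill_gcode(10, 10, spacing=1.0)
    sparse = infill_gcode(10, 10, spacing=2.5)
    ```

    Halving `spacing` roughly doubles the number of extrusion passes over the same region, which is exactly the density knob the abstract describes turning per region of the part.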

  11. FBCOT: a fast block coding option for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Taubman, David; Naman, Aous; Mathew, Reji

    2017-09-01

    Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high-performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5 dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).

  12. Laser generated Ge ions accelerated by additional electrostatic field for implantation technology

    NASA Astrophysics Data System (ADS)

    Rosinski, M.; Gasior, P.; Fazio, E.; Ando, L.; Giuffrida, L.; Torrisi, L.; Parys, P.; Mezzasalma, A. M.; Wolowski, J.

    2013-05-01

    The paper presents research on the optimization of the laser ion implantation method with electrostatic acceleration/deflection, including numerical simulations by means of the Opera 3D code and experimental tests at the IPPLM, Warsaw. To drive the ablation process, an Nd:YAG laser system with a repetition rate of 10 Hz, a pulse duration of 3.5 ns, and a pulse energy of 0.5 J was applied. Ion time-of-flight diagnostics were used in situ to characterize the concentration and energy distribution of the obtained ion streams, while postmortem analysis of the implanted samples was conducted by means of XRD, FTIR, and Raman spectroscopy. In the paper, the predictions of the Opera 3D code are compared with the results of the ion diagnostics in the real experiment. To give a complete picture of the method, the postmortem results of the XRD, FTIR, and Raman characterization techniques are discussed. Experimental results show that the development of a micrometer-sized crystalline Ge phase and/or an amorphous one is achieved only after a thermal annealing treatment.

  13. Spectral and Structure Modeling of Low and High Mass Young Stars Using a Radiative Transfer Code

    NASA Astrophysics Data System (ADS)

    Robson Rocha, Will; Pilling, Sergio

    Spectroscopic data from space telescopes (ISO, Spitzer, Herschel) show that, in addition to dust grains (e.g. silicates), frozen molecular species (astrophysical ices, such as H_{2}O, CO, CO_{2}, CH_{3}OH) are also present in circumstellar environments. In this work we present a modeling study of low and high mass young stellar objects (YSOs), in which we highlight the importance of using astrophysical ices processed by radiation (UV, cosmic rays) coming from the stars in formation. This is important to characterize the physicochemical evolution of the ices distributed through the protostellar disk and its envelope in some situations. To perform this analysis, we gathered (i) observational data from the Infrared Space Observatory (ISO) related to the low mass protostar Elias29 and the high mass protostar W33A, (ii) experimental absorbance data in the infrared spectral range used to determine the optical constants of the materials observed around these objects and (iii) a powerful radiative transfer code to simulate the astrophysical environment (RADMC-3D, Dullemond et al. 2012). Briefly, the radiative transfer calculation of the YSOs was done employing the RADMC-3D code. The model outputs were the spectral energy distribution and theoretical images at different wavelengths of the studied objects. The code is based on the Monte Carlo methodology together with Mie theory for the interaction between radiation and matter. The observational data from different space telescopes were used as a reference for comparison with the modeled data. The optical constants in the infrared, used as input to the models, were calculated directly from absorbance data obtained in the laboratory for both unprocessed and processed simulated interstellar samples by using the NKABS code (Rocha & Pilling 2014). 
We show from this study that some absorption bands in the infrared observed in the spectra of Elias29 and W33A can arise after the ices around the protostars are processed by radiation coming from the central object. In addition, we were also able to compare the observational data for these two objects with those obtained in the modeling. The authors thank the agencies FAPESP (JP#2009/18304-0 and PHD#2013/07657-5).

  14. Optimization of the level and range of working temperature of the PCM in the gypsum-microencapsulated PCM thermal energy storage unit for summer conditions in Central Poland

    NASA Astrophysics Data System (ADS)

    Łapka, P.; Jaworski, M.

    2017-10-01

    In this paper a thermal energy storage (TES) unit in the form of a ceiling panel made of a gypsum-microencapsulated PCM composite with internal U-shaped channels was considered, and the optimal characteristics of the microencapsulated PCM were determined. This panel may be easily incorporated into, e.g., an office or residential ventilation system in order to reduce daily variations of air temperature during the summer without additional costs related to the consumption of energy for conditioning the air to the desired parameters. For the purpose of analysing heat transfer in the panel, a novel numerical simulator was developed. The numerical model consists of two coupled parts: a 1D part which deals with the air flowing through the U-shaped channels and a 3D part which deals with heat transfer in the body of the panel. The computational tool was validated against an experimental study performed on a dedicated set-up. Using this tool, the parameters of the gypsum-microencapsulated PCM composite were optimized in order to determine the most appropriate properties for the application under study. The analyses were performed for averaged local summer conditions in Warsaw, Poland.

  15. 3D-PDR: Three-dimensional photodissociation region code

    NASA Astrophysics Data System (ADS)

    Bisbas, T. G.; Bell, T. A.; Viti, S.; Yates, J.; Barlow, M. J.

    2018-03-01

    3D-PDR is a three-dimensional photodissociation region code written in Fortran. It uses the SUNDIALS package (written in C) to solve the set of ordinary differential equations, and it is the successor of the one-dimensional PDR code UCL_PDR (ascl:1303.004). Using the HEALPix ray-tracing scheme (ascl:1107.018), 3D-PDR applies a three-dimensional escape-probability method and evaluates the attenuation of the far-ultraviolet radiation in the PDR and the propagation of FIR/submm emission lines out of the PDR. The code is parallelized (OpenMP) and can be applied to 1D and 3D problems.
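    As a rough illustration of the escape-probability formalism such codes employ, the standard one-dimensional Sobolev-type expression can be written in a few lines. This is a sketch only: 3D-PDR evaluates escape probabilities along HEALPix rays and sums over directions, which is not reproduced here.

```python
import math

def escape_probability(tau):
    """Sobolev-type escape probability beta = (1 - exp(-tau)) / tau.

    Illustrative 1D form only, not 3D-PDR's actual 3D routine. beta -> 1
    for optically thin gas (tau -> 0) and falls off as 1/tau when thick.
    """
    if tau < 1e-8:                     # series limit near tau = 0
        return 1.0 - 0.5 * tau
    return (1.0 - math.exp(-tau)) / tau
```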

  16. Design of two-dimensional zero reference codes with cross-entropy method.

    PubMed

    Chen, Jung-Chieh; Wen, Chao-Kai

    2010-06-20

    We present a cross-entropy (CE)-based method for the design of optimum two-dimensional (2D) zero reference codes (ZRCs) in order to generate a zero reference signal for a grating measurement system and establish an absolute position, a coordinate origin, or a machine home position. In the absence of diffraction effects, the 2D ZRC design problem is known as the autocorrelation approximation. Based on the properties of the autocorrelation function, the design of the 2D ZRC is first formulated as a particular combinatorial optimization problem. The CE method is then applied to search for an optimal 2D ZRC and thus obtain the desired zero reference signal. Computer simulation results indicate that there are 15.38% and 14.29% reductions in the second maximum value for the 16x16 grating system with n(1)=64 and the 100x100 grating system with n(1)=300, respectively, where n(1) is the number of transparent pixels, compared with those of the conventional genetic algorithm.
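    The objective described above, minimizing the second maximum of the ZRC autocorrelation, is easy to evaluate. The helper below is a hypothetical sketch, not the authors' implementation: it counts coincidences at every 2D shift of a binary grid. The CE method would repeatedly sample candidate grids, keep the elite fraction with the smallest second maximum, and update the sampling probabilities toward that elite.

```python
def second_maximum(grid):
    """Central peak and peak sidelobe of the 2D aperiodic autocorrelation
    of a binary zero-reference-code grid (1 = transparent pixel)."""
    ones = [(y, x) for y, row in enumerate(grid)
            for x, v in enumerate(row) if v]
    counts = {}
    for y1, x1 in ones:                    # count coincidences per shift
        for y2, x2 in ones:
            d = (y2 - y1, x2 - x1)
            counts[d] = counts.get(d, 0) + 1
    peak = counts.pop((0, 0))              # equals n1, the number of ones
    second = max(counts.values()) if counts else 0
    return second, peak
```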

  17. On the reduced-complexity of LDPC decoders for ultra-high-speed optical transmission.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2010-10-25

    We propose two reduced-complexity (RC) LDPC decoders, which can be used in combination with large-girth LDPC codes to enable ultra-high-speed serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.46 dB (at a BER of 10^(-9)) worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further study the use of RC LDPC decoding algorithms in multilevel coded modulation with coherent detection and show that with RC decoding algorithms we can achieve a net coding gain larger than 11 dB at BERs below 10^(-9).
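    The check-node update that distinguishes the attenuated min-sum decoder from sum-product can be sketched as follows. This is an illustration only; the attenuation factor `alpha` below is an example value, not the optimal attenuation found in the paper.

```python
def min_sum_check_update(llrs, alpha=0.8):
    """Attenuated (normalized) min-sum check-node update.

    llrs: incoming variable-to-check LLR messages on a check node's edges.
    Each outgoing extrinsic message is the product of the signs and the
    minimum magnitude over all *other* edges, scaled by alpha.
    """
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        mag = min(abs(v) for v in others)
        out.append(alpha * sign * mag)
    return out
```

Replacing the sum-product's hyperbolic-tangent computation with this sign/minimum rule is what removes most of the complexity, at the small SNR penalty quoted in the abstract.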

  18. ALE3D: An Arbitrary Lagrangian-Eulerian Multi-Physics Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noble, Charles R.; Anderson, Andrew T.; Barton, Nathan R.

    ALE3D is a multi-physics numerical simulation software tool utilizing arbitrary-Lagrangian- Eulerian (ALE) techniques. The code is written to address both two-dimensional (2D plane and axisymmetric) and three-dimensional (3D) physics and engineering problems using a hybrid finite element and finite volume formulation to model fluid and elastic-plastic response of materials on an unstructured grid. As shown in Figure 1, ALE3D is a single code that integrates many physical phenomena.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, M.K.; Kershaw, D.S.; Shaw, M.J.

    The authors present detailed features of the ICF3D hydrodynamics code used for inertial fusion simulations. This code is intended to be a state-of-the-art upgrade of the well-known fluid code, LASNEX. ICF3D employs discontinuous finite elements on a discrete unstructured mesh consisting of a variety of 3D polyhedra including tetrahedra, prisms, and hexahedra. The authors discuss details of how the Roe-averaged second-order convection is applied on the discrete elements, and how the C++ coding interface has helped to simplify implementing the many physics and numerics modules within the code package. The authors emphasize the virtues of object-oriented design in large-scale projects such as ICF3D.

  20. CFL3D User's Manual (Version 5.0)

    NASA Technical Reports Server (NTRS)

    Krist, Sherrie L.; Biedron, Robert T.; Rumsey, Christopher L.

    1998-01-01

    This document is the User's Manual for the CFL3D computer code, a thin-layer Reynolds-averaged Navier-Stokes flow solver for structured multiple-zone grids. Descriptions of the code's input parameters, non-dimensionalizations, file formats, boundary conditions, and equations are included. Sample 2-D and 3-D test cases are also described, and many helpful hints for using the code are provided.

  1. Numerical Parameter Optimization of the Ignition and Growth Model for HMX Based Plastic Bonded Explosives

    NASA Astrophysics Data System (ADS)

    Gambino, James; Tarver, Craig; Springer, H. Keo; White, Bradley; Fried, Laurence

    2017-06-01

    We present a novel method for optimizing parameters of the Ignition and Growth (I&G) reactive flow model for high explosives. The I&G model can yield accurate predictions of experimental observations; however, calibrating the model is a time-consuming task, especially with multiple experiments. In this study, we couple the differential evolution global optimization algorithm to simulations of shock initiation experiments in the multi-physics code ALE3D. We develop parameter sets for the HMX-based explosives LX-07 and LX-10. The optimization finds the I&G model parameters that globally minimize the difference between the calculated and experimental shock times of arrival at embedded pressure gauges. This work was performed under the auspices of the U.S. DOE by LLNL under contract DE-AC52-07NA27344. LLNS, LLC LLNL-ABS-724898.
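    The differential evolution loop used for such a calibration can be sketched generically. This is a minimal DE/rand/1/bin optimizer under stated assumptions: the objective below is a stand-in for the ALE3D shock-arrival misfit, and population size, mutation factor and crossover rate are conventional example values.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=100, seed=1):
    """Minimal DE/rand/1/bin minimizer of f over box constraints."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)       # force at least one mutated gene
            trial = []
            for j in range(dim):
                if j == jrand or rng.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip mutants to the box
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = f(trial)
            if tc <= cost[i]:                # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]
```

In the paper's setting, evaluating `f` means running an ALE3D shock-initiation simulation and comparing gauge arrival times, which makes each generation expensive but embarrassingly parallel.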

  2. A new DWT/MC/DPCM video compression framework based on EBCOT

    NASA Astrophysics Data System (ADS)

    Mei, L. M.; Wu, H. R.; Tan, D. M.

    2005-07-01

    A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although the Discrete Cosine Transform (DCT)/MC/DPCM is the mainstream framework for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed in the literature for more than a decade and its investigation is still ongoing. The contribution of this work is twofold. Firstly, the Embedded Block Coding with Optimal Truncation (EBCOT) is used here as the compression engine for both intra- and inter-frame coding, which provides a good compression ratio and an embedded rate-distortion (R-D) optimization mechanism. This is an extension of the EBCOT application from still images to videos. Secondly, this framework offers a good interface for the Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), where the Mean Squared Error (MSE) can be easily replaced with the PDM in the R-D optimization. Some of the preliminary results are reported here. They are also compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that under the specified conditions the proposed coder outperforms the benchmarks in terms of rate vs. distortion.

  3. SPEXTRA: Optimal extraction code for long-slit spectra in crowded fields

    NASA Astrophysics Data System (ADS)

    Sarkisyan, A. N.; Vinokurov, A. S.; Solovieva, Yu. N.; Sholukhova, O. N.; Kostenkov, A. E.; Fabrika, S. N.

    2017-10-01

    We present a code for the optimal extraction of long-slit 2D spectra in crowded stellar fields. Its main advantage over existing spectrum extraction codes is the presence of a graphical user interface (GUI) and a convenient system for visualizing the data and extraction parameters. On the whole, the package is designed to study stars in crowded fields of nearby galaxies and star clusters in galaxies. Apart from extracting the spectra of several stars which are closely located or superimposed, it allows the spectra of objects to be extracted with subtraction of superimposed nebulae of different shapes and different degrees of ionization. The package can also be used to study single stars in the case of a strong background. The current version offers optimal extraction of 2D spectra with an aperture and with a Gaussian function as the PSF (point spread function). In the future, the package will be supplemented with the possibility of building a PSF based on a Moffat function. We present the details of the GUI, illustrate the main features of the package, and show extraction results for several interesting spectra of objects from different telescopes.

  4. Numerical optimization of the ramp-down phase with the RAPTOR code

    NASA Astrophysics Data System (ADS)

    Teplukhina, Anna; Sauter, Olivier; Felici, Federico; The Tcv Team; The ASDEX-Upgrade Team; The Eurofusion Mst1 Team

    2017-10-01

    The ramp-down optimization goal in this work is defined as the fastest possible decrease of the plasma current while avoiding any disruptions caused by reaching physical or technical limits. Numerical simulations and preliminary experiments on TCV and AUG have shown that a fast decrease of plasma elongation and an adequate timing of the H-L transition during the current ramp-down can help to avoid reaching high values of the plasma internal inductance. The RAPTOR code (F. Felici et al., 2012 PPCF 54; F. Felici, 2011 EPFL PhD thesis), developed for real-time plasma control, has been used to solve this optimization problem. Recently the transport model has been extended to include the ion temperature and electron density transport equations in addition to the electron temperature and current density transport equations, broadening the physical applications of the code. The gradient-based models for the transport coefficients (O. Sauter et al., 2014 PPCF 21; D. Kim et al., 2016 PPCF 58) have been implemented in RAPTOR and tested during this work. Simulations of entire AUG and TCV plasma discharges will be presented. See the author list of S. Coda et al., Nucl. Fusion 57 2017 102011.

  5. Two-dimensional vocal tracts with three-dimensional behavior in the numerical generation of vowels.

    PubMed

    Arnela, Marc; Guasch, Oriol

    2014-01-01

    Two-dimensional (2D) numerical simulations of vocal tract acoustics may provide a good balance between the high quality of three-dimensional (3D) finite element approaches and the low computational cost of one-dimensional (1D) techniques. However, 2D models are usually generated by considering the 2D vocal tract as a midsagittal cut of a 3D version, i.e., using the same radius function, wall impedance, glottal flow, and radiation losses as in 3D, which leads to strong discrepancies in the resulting vocal tract transfer functions. In this work, a four-step methodology is proposed to match the behavior of 2D simulations with that of 3D vocal tracts with circular cross-sections. First, the 2D vocal tract profile is modified to tune the formant locations. Second, the 2D wall impedance is adjusted to fit the formant bandwidths. Third, the 2D glottal flow is scaled to recover 3D pressure levels. Fourth and last, the 2D radiation model is tuned to match the 3D model following an optimization process. The procedure is tested for the vowels /a/, /i/, and /u/, and the obtained results are compared with those of a full 3D simulation, a conventional 2D approach, and a 1D chain matrix model.

  6. Two-Way Satellite Time and Frequency Transfer Using 1 MChips/s Codes

    DTIC Science & Technology

    2009-11-01

    The Ku-band transatlantic and Europe-to-Europe two-way satellite time and frequency transfer (TWSTFT) operations used 2.5 MChip/s...pseudo-random codes with 3.5 MHz bandwidth until the end of July 2009. The cost of TWSTFT operation is associated with the bandwidth used on a...geostationary satellite. The transatlantic and Europe-to-Europe TWSTFT operations faced a significant increase in cost for using 3.5 MHz bandwidth on a new

  7. Modeling the Atmosphere of Solar and Other Stars: Radiative Transfer with PHOENIX/3D

    NASA Astrophysics Data System (ADS)

    Baron, Edward

    The chemical composition of stars is an important ingredient in our understanding of the formation, structure, and evolution of both the Galaxy and the Solar System. The composition of the sun itself is an essential reference standard against which the elemental contents of other astronomical objects are compared. Recently, redetermination of the elemental abundances using three-dimensional, time-dependent hydrodynamical models of the solar atmosphere has led to a reduction in the inferred metal abundances, particularly C, N, O, and Ne. However, this reduction in metals reduces the opacity such that models of the Sun no longer agree with the observed results obtained using helioseismology. Three dimensional (3-D) radiative transfer is an important problem in physics, astrophysics, and meteorology. Radiative transfer is extremely computationally complex and it is a natural problem that requires computation on the exascale. We intend to calculate the detailed compositional structure of the Sun and other stars at high resolution with full NLTE, treating the turbulent velocity flows in full detail in order to compare results from hydrodynamics and helioseismology, and understand the nature of the discrepancies found between the two approaches. We propose to perform 3-D high-resolution radiative transfer calculations of the Sun and other stars with the PHOENIX/3D suite, using 3-D hydrodynamic models from different groups. While NLTE radiative transfer has been treated by the groups doing hydrodynamics, they are necessarily limited in their resolution to the consideration of only a few (4-20) frequency bins, whereas we can calculate full NLTE including thousands of wavelength points, resolving the line profiles, and solving the scattering problem with extremely high angular resolution. The code has been used for the analysis of supernova spectra, stellar and planetary spectra, and for time-dependent modeling of transient objects. 
PHOENIX/3D runs and scales very well on Cray XC-30 and XC-40 machines (tested up to 100,800 CPU cores) and should scale up to several million cores for large simulations. Non-local problems, particularly radiation hydrodynamics problems, are at the forefront of computational astrophysics and we will share our work with the community. Our research program brings a unified modeling strategy to the results of several disparate groups and thus will provide a unifying framework with which to assess the metal abundance of the stars and the chemical evolution of the galaxy. We will bring together 3-D hydrodynamical models, detailed radiative transfer, and astronomical abundance studies. We will also provide results of interest to the atomic physics and plasma physics communities. Our work will use data from NASA telescopes including the Hubble Space Telescope and the James Webb Space Telescope. The ability to work with data from the UV to the far IR is crucial for validating our results. Our work will also extend the exascale computational capabilities, which is a national goal.

  8. Serum oestradiol and beta-HCG measurements after day 3 or 5 embryo transfers in interpreting pregnancy outcome.

    PubMed

    Kumbak, Banu; Oral, Engin; Karlikaya, Guvenc; Lacin, Selman; Kahraman, Semra

    2006-10-01

    The aim of this study was to assess the clinical value of serum oestradiol concentration 8 days after embryo transfer (D8E2) and beta-human chorionic gonadotrophin (HCG-beta) concentration 12 days after embryo transfer (D12HCG-beta) in the prediction of pregnancy and the outcome of pregnancy following assisted reproduction, taking into account the day of transfer, which was either day 3 (D3) or day 5 (D5). The objective was to improve patient counselling by giving quantitative and reliable predictive information instead of non-specific uncertainties. A total of 2035 embryo transfer cycles performed between January 2003 and June 2005 were analysed retrospectively. Biochemical pregnancy, ectopic pregnancy and first-trimester abortions were classified as non-viable pregnancies; pregnancies beyond 12 weeks gestation were classified as ongoing pregnancies (OP). Significantly higher D8E2 and D12HCG-beta were obtained in D5 transfers compared with D3 transfers with regard to pregnancy and OP (P

  9. Parallel tiled Nussinov RNA folding loop nest generated using both dependence graph transitive closure and loop skewing.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2017-06-02

    RNA secondary structure prediction is a compute-intensive task that lies at the core of several search algorithms in bioinformatics. Fortunately, RNA folding approaches, such as Nussinov base pair maximization, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. Polyhedral compilation techniques have proven to be a powerful tool for the optimization of dense array codes. However, the classical affine loop nest transformations used with these techniques do not effectively optimize the dynamic programming codes of RNA structure prediction. The purpose of this paper is to present a novel approach for generating a parallel tiled Nussinov RNA loop nest with significantly higher performance than that of known related codes. This effect is achieved by improving code locality and parallelizing the calculation. To improve code locality, we apply our previously published technique of automatic loop nest tiling to all three loops of the Nussinov loop nest. This approach first forms original rectangular 3D tiles and then corrects them to establish their validity by applying the transitive closure of a dependence graph. To produce parallel code, we apply the loop skewing technique to the tiled Nussinov loop nest. The technique is implemented as part of the publicly available polyhedral source-to-source TRACO compiler. The generated code was run on modern Intel multi-core processors and coprocessors. We report the speed-up factor of the generated parallel Nussinov RNA code and demonstrate that it is considerably faster than related codes in which only the two outer loops of the Nussinov loop nest are tiled.
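    The Nussinov recurrence behind the tiled loop nest is compact enough to state directly. Below is a minimal reference implementation of the plain O(n^3) triple loop that the paper's tiling targets; the `min_loop` hairpin constraint is an illustrative addition, not part of the paper.

```python
def nussinov(seq, min_loop=0):
    """Maximum number of nested base pairs in seq (Nussinov DP).

    N[i][j] holds the best count for seq[i..j]; the three nested loops
    below form the loop nest that the polyhedral tiling transforms.
    """
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
             ("G", "U"), ("U", "G")}
    n = len(seq)
    if n == 0:
        return 0
    N = [[0] * n for _ in range(n)]
    for span in range(1, n):                  # loop 1: subsequence length
        for i in range(n - span):             # loop 2: left endpoint
            j = i + span
            best = N[i][j - 1]                # option: leave j unpaired
            for k in range(i, j - min_loop):  # loop 3: pair j with k
                if (seq[k], seq[j]) in pairs:
                    left = N[i][k - 1] if k > i else 0
                    inner = N[k + 1][j - 1]   # zero when k + 1 > j - 1
                    best = max(best, left + 1 + inner)
            N[i][j] = best
    return N[0][n - 1]
```

The dependence of `N[i][j]` on `N[i][k-1]` and `N[k+1][j-1]` for all intermediate `k` is exactly what makes naive rectangular tiles invalid and motivates the transitive-closure correction described in the abstract.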

  10. Synergia: an accelerator modeling tool with 3-D space charge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amundson, James F.; Spentzouris, P.; /Fermilab

    2004-07-01

    High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. We describe Synergia, a high-fidelity parallel beam dynamics simulation package with fully three dimensional space-charge capabilities and a higher order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles. We also perform code benchmarks comparing to semi-analytic results and other codes. Finally, we present initial results on particle tune spread, beam halo creation, and emittance growth in the Fermilab booster accelerator.

  11. Multiple Detector Optimization for Hidden Radiation Source Detection

    DTIC Science & Technology

    2015-03-26

    important in achieving operationally useful methods for optimizing detector emplacement, the 2-D attenuation model approach promises to speed up the...process of hidden source detection significantly. The model focused on detection of the full energy peak of a radiation source. Methods to optimize... radioisotope identification is possible without using a computationally intensive stochastic model such as the Monte Carlo n-Particle (MCNP) code

  12. Aerothermodynamic optimization of Earth entry blunt body heat shields for Lunar and Mars return

    NASA Astrophysics Data System (ADS)

    Johnson, Joshua E.

    A differential evolutionary algorithm has been executed to optimize the hypersonic aerodynamic and stagnation-point heat transfer performance of Earth entry heat shields for Lunar and Mars return manned missions with entry velocities of 11 and 12.5 km/s, respectively. The aerothermodynamic performance of heat shield geometries with lift-to-drag ratios up to 1.0 is studied. Each considered heat shield geometry is composed of an axial profile tailored to fit a base cross section. Axial profiles consist of spherical segments, spherically blunted cones, and power laws. Heat shield cross sections include oblate and prolate ellipses, rounded-edge parallelograms, and blendings of the two. Aerothermodynamic models are based on modified Newtonian impact theory with semi-empirical correlations for convection and radiation. Multi-objective function optimization is performed to determine optimal trade-offs between performance parameters. Objective functions consist of minimizing heat load and heat flux and maximizing down range and cross range. Results indicate that skipping trajectories allow for vehicles with L/D = 0.3, 0.5, and 1.0 at lunar return flight conditions to produce maximum cross ranges of 950, 1500, and 3000 km, respectively, before Q_s,tot increases dramatically. Maximum cross range increases by ~20% with an increase in entry velocity from 11 to 12.5 km/s. Optimal configurations for all three lift-to-drag ratios produce down ranges up to approximately 26,000 km for both lunar and Mars return. Assuming a 10,000 kg mass and L/D = 0.27, the current Orion configuration is projected to experience a heat load of approximately 68 kJ/cm2 for Mars return flight conditions. For both L/D = 0.3 and 0.5, a 30% increase in entry vehicle mass from 10,000 kg produces a 20-30% increase in Q_s,tot. For a given L/D, highly-eccentric heat shields do not produce greater cross range or down range. 
With a 5 g deceleration limit and L/D = 0.3, a highly oblate cross section with an eccentricity of 0.968 produces a 35% reduction in heat load over designs with zero eccentricity due to the eccentric heat shield's greater drag area that allows the vehicle to decelerate higher in the atmosphere. In this case, the heat shield's drag area is traded off with volumetric efficiency while fulfilling the given set of mission requirements. Additionally, the high radius-of-curvature of the spherical segment axial profile provides the best combination of heat transfer and aerodynamic performance for both entry velocities and a 5 g deceleration limit.
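    The surface pressure model named above reduces to a one-line formula. As a hedged illustration: classical and modified Newtonian impact theory both give Cp = Cp_max sin^2(theta); the stagnation value Cp_max behind the normal shock, and the semi-empirical convection/radiation correlations the study couples to it, are not reproduced here.

```python
import math

def newtonian_cp(theta_deg, cp_max=2.0):
    """Pressure coefficient from (modified) Newtonian impact theory.

    theta_deg is the local surface inclination to the freestream.
    cp_max = 2 recovers classical Newtonian theory; modified Newtonian
    substitutes the post-normal-shock stagnation value (not computed here).
    """
    theta = math.radians(theta_deg)
    return cp_max * math.sin(theta) ** 2
```

Integrating this Cp over a candidate shield geometry yields the lift and drag entries of the objective functions listed in the abstract.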

  13. Harnessing the power of emerging petascale platforms

    NASA Astrophysics Data System (ADS)

    Mellor-Crummey, John

    2007-07-01

    As part of the US Department of Energy's Scientific Discovery through Advanced Computing (SciDAC-2) program, science teams are tackling problems that require computational simulation and modeling at the petascale. A grand challenge for computer science is to develop software technology that makes it easier to harness the power of these systems to aid scientific discovery. As part of its activities, the SciDAC-2 Center for Scalable Application Development Software (CScADS) is building open source software tools to support efficient scientific computing on the emerging leadership-class platforms. In this paper, we describe two tools for performance analysis and tuning that are being developed as part of CScADS: a tool for analyzing scalability and performance, and a tool for optimizing loop nests for better node performance. We motivate these tools by showing how they apply to S3D, a turbulent combustion code under development at Sandia National Laboratories. For S3D, our node performance analysis tool helped uncover several performance bottlenecks. Using our loop nest optimization tool, we transformed S3D's most costly loop nest to reduce execution time by a factor of 2.94 for a processor working on a 50^3 domain.

  14. On entanglement-assisted quantum codes achieving the entanglement-assisted Griesmer bound

    NASA Astrophysics Data System (ADS)

    Li, Ruihu; Li, Xueliang; Guo, Luobin

    2015-12-01

    The theory of entanglement-assisted quantum error-correcting codes (EAQECCs) is a generalization of the standard stabilizer formalism. Any quaternary (or binary) linear code can be used to construct EAQECCs under the entanglement-assisted (EA) formalism. We derive an EA-Griesmer bound for linear EAQECCs, which is a quantum analog of the Griesmer bound for classical codes. This EA-Griesmer bound is tighter than known bounds for EAQECCs in the literature. For a given quaternary linear code {C}, we show that the parameters of the EAQECC that is EA-stabilized by the dual of {C} can be determined by a zero radical quaternary code induced from {C}, and a necessary condition under which a linear EAQECC may achieve the EA-Griesmer bound is also presented. We construct four families of optimal EAQECCs and then show that the necessary condition for the existence of EAQECCs is also sufficient for some low-dimensional linear EAQECCs. The four families of optimal EAQECCs are degenerate codes and go beyond earlier constructions. What is more, except for four codes, our [[n,k,d_{ea};c
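    For reference, the classical bound that the EA-Griesmer bound generalizes is easy to check numerically. The sketch below implements only the classical Griesmer bound for [n, k, d]_q linear codes; the entanglement-assisted form derived in the paper is not reproduced here.

```python
import math

def griesmer_lower_bound(k, d, q=2):
    """Classical Griesmer bound: any [n, k, d]_q linear code satisfies
    n >= sum_{i=0}^{k-1} ceil(d / q^i). Returns that lower bound on n."""
    return sum(math.ceil(d / q ** i) for i in range(k))
```

For example, the binary [7,4,3] Hamming code and the [7,3,4] simplex code both meet this bound with equality.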

  15. Dynamic ELM and divertor control using resonant toroidal multi-mode magnetic fields in DIII-D and EAST

    NASA Astrophysics Data System (ADS)

    Sun, Youwen

    2017-10-01

    A rotating n = 2 Resonant Magnetic Perturbation (RMP) field combined with a stationary n = 3 RMP field has validated predictions that access to ELM suppression can be improved, while divertor heat and particle flux can also be dynamically controlled in DIII-D. Recent observations in the EAST tokamak indicate that edge magnetic topology changes, due to nonlinear plasma response to magnetic perturbations, play a critical role in accessing ELM suppression. MARS-F code MHD simulations, which include the plasma response to the RMP, indicate the nonlinear transition to ELM suppression is optimized by configuring the RMP coils to drive maximal edge stochasticity. Consequently, mixed toroidal multi-mode RMP fields, which produce more densely packed islands over a range of additional rational surfaces, improve access to ELM suppression, and further spread heat loading on the divertor. Beneficial effects of this multi-harmonic spectrum on ELM suppression have been validated in DIII-D. Here, the threshold current required for ELM suppression with a mixed n spectrum, where part of the n = 3 RMP field is replaced by an n = 2 field, is smaller than the case with pure n = 3 field. An important further benefit of this multi-mode approach is that significant changes of 3D particle flux footprint profiles on the divertor are found in the experiment during the application of a rotating n = 2 RMP field superimposed on a static n = 3 RMP field. This result was predicted by modeling studies of the edge magnetic field structure using the TOP2D code which takes into account plasma response from MARS-F code. These results expand physics understanding and potential effectiveness of the technique for reliably controlling ELMs and divertor power/particle loading distributions in future burning plasma devices such as ITER. Work supported by USDOE under DE-FC02-04ER54698 and NNSF of China under 11475224.

  16. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines

    PubMed Central

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-01-01

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time and frequency resources of an underground tunnel are open, wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) are proposed to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions to the sink node to assist source sensors with poor channel conditions. Moreover, the total power of a source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem of multiple access interference (MAI), which arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA and by adopting time-frequency coded cooperative transmission and the D-PSO algorithm. PMID:26343660
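    The particle swarm update at the heart of D-PSO can be illustrated with a generic continuous minimizer. This is a sketch only: the paper's D-PSO applies the swarm update to discrete symbol decisions in multiuser detection, and the inertia and acceleration coefficients below are conventional example values.

```python
import random

def pso_minimize(f, bounds, swarm=15, iters=100, w=0.7, c1=1.5, c2=1.5,
                 seed=3):
    """Minimal global-best particle swarm optimization of f over a box."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]                      # personal best positions
    pcost = [f(x) for x in X]
    g = min(range(swarm), key=lambda i: pcost[i])
    G, gcost = P[g][:], pcost[g]               # global best
    for _ in range(iters):
        for i in range(swarm):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][j] = (w * V[i][j]
                           + c1 * r1 * (P[i][j] - X[i][j])   # cognitive pull
                           + c2 * r2 * (G[j] - X[i][j]))     # social pull
                X[i][j] += V[i][j]
            c = f(X[i])
            if c < pcost[i]:
                P[i], pcost[i] = X[i][:], c
                if c < gcost:
                    G, gcost = X[i][:], c
    return G, gcost
```

In the detection setting, `f` would be the decision metric over candidate transmitted symbol vectors rather than a continuous cost surface.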

  17. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines.

    PubMed

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-08-27

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time and frequency resources of an underground tunnel are all open for use, wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) are proposed to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, cooperative sensors with good channel conditions to the sink node are also employed to assist source sensors with poor channel conditions. Moreover, the total power of each source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To address the multiple access interference (MAI) that arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm based on particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using MC-CDMA-based wireless sensor nodes, time-frequency coded cooperative transmission, and the D-PSO detection algorithm.

  18. Electron Beam Melting and Refining of Metals: Computational Modeling and Optimization

    PubMed Central

    Vutova, Katia; Donchev, Veliko

    2013-01-01

    Computational modeling offers an opportunity for a better understanding and investigation of thermal transfer mechanisms. It can be used to optimize the electron beam melting process and to obtain new materials with improved characteristics that have many applications in the power industry, medicine, instrument engineering, electronics, etc. A time-dependent 3D axisymmetric heat model for simulation of thermal transfer in metal ingots solidified in a water-cooled crucible during electron beam melting and refining (EBMR) is developed. The model predicts the change in the temperature field in the cast ingot during the interaction of the beam with the material. A modified Peaceman-Rachford numerical scheme is developed to discretize the analytical model. The equation systems describing the thermal processes and the main characteristics of the developed numerical method are presented. In order to optimize the technological regimes, different criteria for better refinement and for obtaining dendritic crystal structures are proposed. Analytical problems of mathematical optimization are formulated, discretized and heuristically solved by cluster methods. Based on simulation results of practical relevance, suggestions can be made for optimizing EBMR technology. The proposed tool is useful for studying, controlling and optimizing EBMR process parameters and for improving the quality of the newly produced materials. PMID:28788351
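    The record describes a time-dependent heat model for ingot solidification. As a much-simplified illustration (not the paper's modified scheme, geometry, or boundary conditions), the sketch below advances the 1D heat equation with an explicit finite-difference step, the basic building block that such thermal solvers discretize more elaborately:

    ```python
    import numpy as np

    def heat_step_ftcs(T, alpha, dx, dt):
        """One explicit (FTCS) update of the 1D heat equation
        dT/dt = alpha * d2T/dx2. Stable when alpha*dt/dx**2 <= 0.5.
        The two end points are held fixed (Dirichlet boundaries)."""
        r = alpha * dt / dx**2
        assert r <= 0.5, "explicit step would be unstable"
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        return Tn

    # Hot spot in the middle of a cold rod with cooled ends
    # (a crude analogue of a beam-heated, water-cooled ingot).
    T = np.zeros(51)
    T[25] = 1000.0
    for _ in range(200):
        T = heat_step_ftcs(T, alpha=1e-4, dx=1e-2, dt=0.4)
    ```

    ADI schemes such as Peaceman-Rachford replace this explicit update with alternating implicit half-steps, removing the time-step restriction in multiple dimensions.
    
    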

  19. Elevation of the Yields of Very Long Chain Polyunsaturated Fatty Acids via Minimal Codon Optimization of Two Key Biosynthetic Enzymes

    PubMed Central

    Zheng, Desong; Sun, Quanxi; Liu, Jiang; Li, Yaxiao; Hua, Jinping

    2016-01-01

    Eicosapentaenoic acid (EPA, 20:5Δ5,8,11,14,17) and docosahexaenoic acid (DHA, 22:6Δ4,7,10,13,16,19) are nutritionally beneficial to human health. Transgenic production of EPA and DHA in oilseed crops by transferring genes originating from lower eukaryotes, such as microalgae and fungi, has been attempted in recent years. However, the low yield of EPA and DHA produced in these transgenic crops is a major hurdle for their commercialization. Many factors can negatively affect transgene expression, leading to a low level of converted fatty acid products. Among these, the codon bias between the transgene donor and the host crop is one of the major contributing factors. Therefore, we carried out codon optimization of a fatty acid delta-6 desaturase gene, PinD6, from the fungus Phytophthora infestans, and a delta-9 elongase gene, IgASE1, from the microalga Isochrysis galbana, for expression in Saccharomyces cerevisiae and Arabidopsis, respectively. These are the two key genes encoding the enzymes that drive the first catalytic steps in the Δ6 desaturation/Δ6 elongation and the Δ9 elongation/Δ8 desaturation pathways for EPA/DHA biosynthesis. Hence the expression levels of these two genes are important in determining the final yield of EPA/DHA. Via PCR-based mutagenesis, we optimized the least preferred codons within the first 16 codons at their N-termini, as well as the most biased CGC codons (coding for arginine) within the entire sequences of both genes. An expression study showed that transgenic Arabidopsis plants harbouring the codon-optimized IgASE1 contained 64% more elongated fatty acid products than plants expressing the native IgASE1 sequence, whilst Saccharomyces cerevisiae expressing the codon-optimized PinD6 yielded 20 times more desaturated products than yeast expressing wild-type (WT) PinD6. Thus the codon optimization strategy we developed here offers a simple, effective and low-cost alternative to whole gene synthesis for high expression of foreign genes in yeast and Arabidopsis. PMID:27433934

  20. Revisiting Molecular Dynamics on a CPU/GPU system: Water Kernel and SHAKE Parallelization.

    PubMed

    Ruymgaart, A Peter; Elber, Ron

    2012-11-13

    We report Graphics Processing Unit (GPU) and OpenMP parallel implementations of water-specific force calculations and of bond constraints for use in Molecular Dynamics simulations. We focus on a typical laboratory computing environment in which a CPU with a few cores is attached to a GPU. We discuss the design of the code in detail and illustrate performance comparable to highly optimized codes such as GROMACS. Besides speed, our code shows excellent energy conservation. Utilization of water-specific lists allows efficient calculation of non-bonded interactions that include water molecules, and results in a speed-up factor of more than 40 on the GPU compared to code optimized on a single CPU core for systems larger than 20,000 atoms. This is up four-fold from the factor of 10 reported in our initial GPU implementation, which did not include water-specific code. Another optimization is the implementation of constrained dynamics entirely on the GPU. The routine, which enforces constraints on all bonds, runs in parallel on multiple OpenMP cores or entirely on the GPU. It is based on a Conjugate Gradient solution for the Lagrange multipliers (CG SHAKE). The GPU implementation is partially in double precision and requires no communication with the CPU during execution of the SHAKE algorithm. The (parallel) implementation of SHAKE allows an increase of the time step to 2.0 fs while maintaining excellent energy conservation. Interestingly, CG SHAKE is faster than the usual bond relaxation algorithm even on a single core when high accuracy is required. The significant speedup of the optimized components transfers the computational bottleneck of the MD calculation to the reciprocal part of Particle Mesh Ewald (PME).
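    The CG-SHAKE of this record solves for all bond-constraint Lagrange multipliers at once with conjugate gradients. For orientation, here is the textbook single-constraint SHAKE iteration that such methods improve upon, sketched for one bond between two particles; the masses, tolerance, and coordinates are illustrative:

    ```python
    import numpy as np

    def shake_bond(r1, r2, d0, invm1, invm2, tol=1e-10, max_iter=50):
        """Textbook iterative SHAKE for a single distance constraint
        |r1 - r2| = d0 (CG-SHAKE instead solves the coupled multiplier
        system for all bonds simultaneously)."""
        r1, r2 = r1.astype(float).copy(), r2.astype(float).copy()
        for _ in range(max_iter):
            d = r1 - r2
            sigma = d @ d - d0 * d0          # constraint violation
            if abs(sigma) < tol:
                break
            # Lagrange-multiplier correction along the current bond vector,
            # mass-weighted so momentum is conserved.
            g = sigma / (2.0 * (invm1 + invm2) * (d @ d))
            r1 -= invm1 * g * d
            r2 += invm2 * g * d
        return r1, r2

    a = np.array([0.0, 0.0, 0.0])
    b = np.array([1.3, 0.0, 0.0])            # bond stretched past its length
    a, b = shake_bond(a, b, d0=1.0, invm1=1.0, invm2=1.0)
    ```

    Each pass removes the violation to second order, so a few iterations suffice per constraint; the coupling between shared atoms is what the conjugate-gradient formulation handles globally.
    
    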

  1. Optimization of Kink Stability in High-Beta Quasi-axisymmetric Stellarators

    NASA Astrophysics Data System (ADS)

    Fu, G. Y.; Ku, L.-P.; Manickam, J.; Cooper, W. A.

    1998-11-01

    A key issue for the design of quasi-axisymmetric stellarators (QAS) (A. Reiman et al., this conference) is the stability of external kink modes driven by pressure-induced bootstrap current. In this work, the 3D MHD stability code TERPSICHORE (W. A. Cooper, Phys. Plasmas 3, 275 (1996)) is used to calculate the stability of low-n external kink modes in a high-beta QAS. The kink stability is optimized by adjusting the plasma boundary shape (i.e., the external coil configuration) as well as the plasma pressure and current profiles. For this purpose, the TERPSICHORE code has been implemented successfully in an optimizer which maximizes kink stability as well as quasi-symmetry. A key factor for kink stability is the rotational transform profile. It is found that the edge magnetic shear is strongly stabilizing. The amount of shear needed for complete stabilization increases with the edge transform. It is also found that, besides the transform profile, the plasma boundary shape plays an important role in the kink stability. The physics mechanisms of the kink stability are being studied by examining the contributions of individual terms in δW of the energy principle: the field line bending term, the current-driven term, the pressure-driven term, and the vacuum term. Detailed results will be reported.

  2. Optimized mid-infrared thermal emitters for applications in aircraft countermeasures

    NASA Astrophysics Data System (ADS)

    Lorenzo, Simón G.; You, Chenglong; Granier, Christopher H.; Veronis, Georgios; Dowling, Jonathan P.

    2017-12-01

    We introduce an optimized aperiodic multilayer structure capable of broad angle and high temperature thermal emission over the 3 μm to 5 μm atmospheric transmission band. This aperiodic multilayer structure composed of alternating layers of silicon carbide and graphite on top of a tungsten substrate exhibits near maximal emittance in a 2 μm wavelength range centered in the mid-wavelength infrared band traditionally utilized for atmospheric transmission. We optimize the layer thicknesses using a hybrid optimization algorithm coupled to a transfer matrix code to maximize the power emitted in this mid-infrared range normal to the structure's surface. We investigate possible applications for these structures in mimicking 800-1000 K aircraft engine thermal emission signatures and in improving countermeasure effectiveness against hyperspectral imagers. We find these structures capable of matching the Planck blackbody curve in the selected infrared range with relatively sharp cutoffs on either side, leading to increased overall efficiency of the structures. Appropriately optimized multilayer structures with this design could lead to matching a variety of mid-infrared thermal emissions. For aircraft countermeasure applications, this method could yield a flare design capable of mimicking engine spectra and breaking the lock of hyperspectral imaging systems.
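    The layer-thickness optimization in this record evaluates each candidate stack with a transfer matrix code. Below is a minimal normal-incidence characteristic (transfer) matrix sketch for lossless layers; by Kirchhoff's law, the emittance of an opaque stack is 1 - R. The refractive indices and wavelength are illustrative placeholders, not the paper's silicon carbide/graphite/tungsten data:

    ```python
    import numpy as np

    def stack_reflectance(n_layers, d_layers, n_sub, lam, n0=1.0):
        """Normal-incidence reflectance of a thin-film stack via the
        characteristic matrix method (Born & Wolf convention)."""
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2.0 * np.pi * n * d / lam      # phase thickness
            L = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
            M = M @ L
        B, C = M @ np.array([1.0, n_sub])
        r = (n0 * B - C) / (n0 * B + C)            # amplitude reflection
        return abs(r) ** 2

    # Sanity check: a quarter-wave layer with n = sqrt(n_sub) is a perfect
    # antireflection coating, so R collapses to ~0 at the design wavelength.
    lam = 4.0e-6                                   # 4 um, inside the 3-5 um band
    n_sub = 2.25
    R = stack_reflectance([1.5], [lam / (4 * 1.5)], n_sub, lam)
    ```

    An optimizer like the paper's hybrid algorithm would call such a routine across the 3-5 μm band and adjust the thickness list to maximize 1 - R.
    
    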

  3. Integrated Predictive Tools for Customizing Microstructure and Material Properties of Additively Manufactured Aerospace Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radhakrishnan, Balasubramaniam; Fattebert, Jean-Luc; Gorti, Sarma B.

    Additive Manufacturing (AM) refers to a process by which digital three-dimensional (3-D) design data is converted to build up a component by depositing material layer-by-layer. United Technologies Corporation (UTC) is currently involved in fabrication and certification of several AM aerospace structural components made from aerospace materials. This is accomplished by using optimized process parameters determined through numerous design-of-experiments (DOE)-based studies. Certification of these components is broadly recognized as a significant challenge, with long lead times, very expensive new product development cycles and very high energy consumption. Because of these challenges, United Technologies Research Center (UTRC), together with UTC business units, has been developing and validating an advanced physics-based process model. The specific goal is to develop a physics-based framework of an AM process and reliably predict fatigue properties of built-up structures based on detailed solidification microstructures. Microstructures are predicted using process control parameters including energy source power, scan velocity, deposition pattern, and powder properties. The multi-scale multi-physics model requires solution and coupling of the governing physics to allow prediction of the thermal field and enable solution at the microstructural scale. The state-of-the-art approach to these problems requires a huge computational framework, and this kind of resource is only available within academia and national laboratories. The project utilized the parallel phase-field codes at Oak Ridge National Laboratory (ORNL) and Lawrence Livermore National Laboratory (LLNL), along with the high-performance computing (HPC) capabilities existing at the two labs, to demonstrate the simulation of multiple dendrite growth in three dimensions (3-D). The LLNL code AMPE was used to implement the UTRC phase field model that was previously developed for a model binary alloy, and the simulation results were compared against the UTRC simulation results, followed by extension of the UTRC model to simulate multiple dendrite growth in 3-D. The ORNL MEUMAPPS code was used to simulate dendritic growth in a model ternary alloy with the same equilibrium solidification range as the Ni-base alloy 718 using realistic model parameters, including thermodynamic integration with a Calphad-based model for the ternary alloy. Implementation of the UTRC model in AMPE met with several numerical and parametric issues that were resolved, and good agreement between the simulation results obtained by the two codes was demonstrated for two-dimensional (2-D) dendrites. 3-D dendrite growth was then demonstrated with the AMPE code using nondimensional parameters obtained in 2-D simulations. Multiple dendrite growth in 2-D and 3-D was demonstrated using ORNL's MEUMAPPS code with simple thermal boundary conditions. MEUMAPPS was then modified to incorporate the complex, time-dependent thermal boundary conditions obtained by UTRC's thermal modeling of single-track AM experiments to drive the phase field simulations. The results were in good agreement with UTRC's experimental measurements.

  4. Computational mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raboin, P J

    1998-01-01

    The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable in driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

  5. Tuning the cache memory usage in tomographic reconstruction on standard computers with Advanced Vector eXtensions (AVX)

    PubMed Central

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-01-01

    Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 – Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning. PMID:26217710
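    The cache-blocking idea described above (finish all work on one cache-sized block before moving to the next) can be sketched with a tiled matrix transpose. The NumPy version below only illustrates the access pattern; the speedups reported for Tomo3D come from compiled, AVX-vectorized code, and the tile size here is an assumption:

    ```python
    import numpy as np

    def transpose_blocked(a, bs=64):
        """Cache-blocked transpose: walk the matrix in bs x bs tiles so that
        each tile of the source and destination stays resident in cache
        while it is being worked on."""
        n, m = a.shape
        out = np.empty((m, n), dtype=a.dtype)
        for i in range(0, n, bs):
            for j in range(0, m, bs):
                # Complete all work on this tile before moving to the next one;
                # NumPy slicing clips automatically at ragged edges.
                out[j:j + bs, i:i + bs] = a[i:i + bs, j:j + bs].T
        return out

    a = np.arange(300 * 200).reshape(300, 200)
    t = transpose_blocked(a)
    ```

    In the reconstruction setting, the tiles would be blocks of sinogram rows and slice columns sized to fit each level of the cache hierarchy, which is exactly the tuning parameter the data article studies.
    
    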

  6. Tuning the cache memory usage in tomographic reconstruction on standard computers with Advanced Vector eXtensions (AVX).

    PubMed

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-06-01

    Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 - Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning.

  7. Hybrid-coded 3D structured illumination imaging with Bayesian estimation (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Chen, Hsi-Hsun; Luo, Yuan; Singh, Vijay R.

    2016-03-01

    Light-induced fluorescence microscopy has long been used to observe and understand objects at the microscale, such as cellular samples. However, the transfer function of a lens-based imaging system limits the resolution, so the fine, detailed structure of a sample cannot be identified clearly. Resolution enhancement techniques aim to break the resolution limit of a given objective. Over the past decades, resolution-enhanced imaging has been investigated through a variety of strategies, including photoactivated localization microscopy (PALM), stochastic optical reconstruction microscopy (STORM), stimulated emission depletion (STED), and structured illumination microscopy (SIM). Among these methods, only SIM can intrinsically improve the resolution limit of a system without taking the structural properties of the object into account. In this paper, we develop a SIM associated with Bayesian estimation and, furthermore, with the optical sectioning capability rendered by HiLo processing, resulting in high resolution throughout a 3D volume. This 3D SIM provides optical sectioning and resolution enhancement, and is robust to noise owing to the proposed data-driven Bayesian estimation reconstruction. To validate the 3D SIM, we show simulation results for the algorithm and experimental results demonstrating the 3D resolution enhancement.

  8. Methodology, status and plans for development and assessment of Cathare code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bestion, D.; Barre, F.; Faydide, B.

    1997-07-01

    This paper presents the methodology, status and plans for the development, assessment and uncertainty evaluation of the Cathare code. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the status of the code development and assessment is presented, along with the general strategy used for the development and assessment of the code. Analytical experiments with separate effect tests and component tests are used for the development and validation of closure laws. Successive Revisions of constitutive laws are implemented in successive Versions of the code and assessed. System tests or integral tests are used to validate the general consistency of the Revision. Each delivery of a code Version + Revision is fully assessed and documented. A methodology is being developed to determine the uncertainty on all constitutive laws of the code using calculations of many analytical tests and applying the Discrete Adjoint Sensitivity Method (DASM). Finally, the plans for future developments of the code are presented. They concern the optimization of code performance through parallel computing (the code will be used for real-time full-scope plant simulators), the coupling with many other codes (neutronic codes, severe accident codes), and the application of the code to containment thermalhydraulics. Physical improvements are also required in the field of low-pressure transients and in the modeling for the 3-D model.

  9. Inverse-optimized 3D conformal planning: Minimizing complexity while achieving equivalence with beamlet IMRT in multiple clinical sites

    PubMed Central

    Fraass, Benedick A.; Steers, Jennifer M.; Matuszak, Martha M.; McShan, Daniel L.

    2012-01-01

    Purpose: Inverse planned intensity modulated radiation therapy (IMRT) has helped many centers implement highly conformal treatment planning with beamlet-based techniques. The many comparisons between IMRT and 3D conformal (3DCRT) plans, however, have been limited because most 3DCRT plans are forward-planned while IMRT plans utilize inverse planning, meaning both optimization and delivery techniques are different. This work avoids that problem by comparing 3D plans generated with a unique inverse planning method for 3DCRT called inverse-optimized 3D (IO-3D) conformal planning. Since IO-3D and the beamlet IMRT to which it is compared use the same optimization techniques, cost functions, and plan evaluation tools, direct comparisons between IMRT and simple, optimized IO-3D plans are possible. Though IO-3D has some similarity to direct aperture optimization (DAO), since it directly optimizes the apertures used, IO-3D is specifically designed for 3DCRT fields (i.e., 1–2 apertures per beam) rather than starting with IMRT-like modulation and then optimizing aperture shapes. The two algorithms are very different in design, implementation, and use. The goals of this work include using IO-3D to evaluate how close simple but optimized IO-3D plans come to nonconstrained beamlet IMRT, showing that optimization, rather than modulation, may be the most important aspect of IMRT (for some sites). Methods: The IO-3D dose calculation and optimization functionality is integrated in the in-house 3D planning/optimization system. New features include random point dose calculation distributions, costlet and cost function capabilities, fast dose volume histogram (DVH) and plan evaluation tools, optimization search strategies designed for IO-3D, and an improved, reimplemented edge/octree calculation algorithm. 
The IO-3D optimization, in distinction to DAO, is designed to optimize 3D conformal plans (one to two segments per beam) and optimizes MLC segment shapes and weights with various user-controllable search strategies which optimize plans without beamlet or pencil beam approximations. IO-3D allows comparisons of beamlet, multisegment, and conformal plans optimized using the same cost functions, dose points, and plan evaluation metrics, so quantitative comparisons are straightforward. Here, comparisons of IO-3D and beamlet IMRT techniques are presented for breast, brain, liver, and lung plans. Results: IO-3D achieves high quality results comparable to beamlet IMRT, for many situations. Though the IO-3D plans have many fewer degrees of freedom for the optimization, this work finds that IO-3D plans with only one to two segments per beam are dosimetrically equivalent (or nearly so) to the beamlet IMRT plans, for several sites. IO-3D also reduces plan complexity significantly. Here, monitor units per fraction (MU/Fx) for IO-3D plans were 22%–68% less than for the 1 cm × 1 cm beamlet IMRT plans and 72%–84% less than for the 0.5 cm × 0.5 cm beamlet IMRT plans. Conclusions: The unique IO-3D algorithm illustrates that inverse planning can achieve high quality 3D conformal plans equivalent (or nearly so) to unconstrained beamlet IMRT plans, for many sites. IO-3D thus provides the potential to optimize flat or few-segment 3DCRT plans, creating less complex optimized plans which are efficient and simple to deliver. The less complex IO-3D plans have operational advantages for scenarios including adaptive replanning, cases with interfraction and intrafraction motion, and pediatric patients. PMID:22755717

  10. Optimizing CyberShake Seismic Hazard Workflows for Large HPC Resources

    NASA Astrophysics Data System (ADS)

    Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.

    2014-12-01

    The CyberShake computational platform is a well-integrated collection of scientific software and middleware that calculates 3D simulation-based probabilistic seismic hazard curves and hazard maps for the Los Angeles region. Currently each CyberShake model comprises about 235 million synthetic seismograms from about 415,000 rupture variations computed at 286 sites. CyberShake integrates large-scale parallel and high-throughput serial seismological research codes into a processing framework in which early stages produce files used as inputs by later stages. Scientific workflow tools are used to manage the jobs, data, and metadata. The Southern California Earthquake Center (SCEC) developed the CyberShake platform using USC High Performance Computing and Communications systems and open-science NSF resources. CyberShake calculations were migrated to the NSF Track 1 system NCSA Blue Waters when it became operational in 2013, via an interdisciplinary team approach including domain scientists, computer scientists, and middleware developers. Due to the excellent performance of Blue Waters and CyberShake software optimizations, we reduced the makespan (a measure of wallclock time-to-solution) of a CyberShake study from 1467 to 342 hours. We will describe the technical enhancements behind this improvement, including judicious introduction of new GPU software, improved scientific software components, increased workflow-based automation, and Blue Waters-specific workflow optimizations. Our CyberShake performance improvements highlight the benefits of scientific workflow tools. The CyberShake workflow software stack includes the Pegasus Workflow Management System (Pegasus-WMS, which includes Condor DAGMan), HTCondor, and Globus GRAM, with Pegasus-mpi-cluster managing the high-throughput tasks on the HPC resources.
The workflow tools handle data management, automatically transferring about 13 TB back to SCEC storage. We will present performance metrics from the most recent CyberShake study, executed on Blue Waters. We will compare the performance of CPU and GPU versions of our large-scale parallel wave propagation code, AWP-ODC-SGT. Finally, we will discuss how these enhancements have enabled SCEC to move forward with plans to increase the CyberShake simulation frequency to 1.0 Hz.

  11. Impact of spatial resolution on cirrus infrared satellite retrievals in the presence of cloud heterogeneity

    NASA Astrophysics Data System (ADS)

    Fauchez, T.; Platnick, S. E.; Meyer, K.; Zhang, Z.; Cornet, C.; Szczap, F.; Dubuisson, P.

    2015-12-01

    Cirrus clouds are an important part of the Earth radiation budget but an accurate assessment of their role remains highly uncertain. Cirrus optical properties such as Cloud Optical Thickness (COT) and ice crystal effective particle size are often retrieved with a combination of Visible/Near InfraRed (VNIR) and ShortWave-InfraRed (SWIR) reflectance channels. Alternatively, Thermal InfraRed (TIR) techniques, such as the Split Window Technique (SWT), have demonstrated better accuracy for effective radius retrievals in thin cirrus with small effective radii. However, current global operational algorithms for both retrieval methods assume that cloudy pixels are horizontally homogeneous (Plane Parallel Approximation (PPA)) and independent (Independent Pixel Approximation (IPA)). The impact of these approximations on ice cloud retrievals needs to be understood and, as far as possible, corrected. Horizontal heterogeneity effects in the TIR spectrum are dominated mainly by the PPA bias, which depends primarily on the COT subpixel heterogeneity; for solar reflectance channels, in addition to the PPA bias, the IPA can lead to significant retrieval errors due to substantial photon horizontal transport between cloudy columns, as well as brightening and shadowing effects that are more difficult to quantify. Because of its better accuracy for thin cirrus with small effective radii, the TIR range is particularly relevant for characterizing thin cirrus clouds as accurately as possible. Heterogeneity effects in the TIR are therefore evaluated as a function of spatial resolution in order to estimate the optimal spatial resolution for TIR retrieval applications.
These investigations are performed using a cirrus 3D cloud generator (3DCloud), a 3D radiative transfer code (3DMCPOL), and two retrieval algorithms, namely the operational MODIS retrieval algorithm (MOD06) and a research-level SWT algorithm.

  12. A novel construction method of QC-LDPC codes based on CRT for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-05-01

    A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at a bit error rate (BER) of 10^-7, the net coding gain (NCG) of the regular QC-LDPC(4851, 4546) code is respectively 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32640, 30592) code in ITU-T G.975.1, the QC-LDPC(3664, 3436) code constructed by the improved combining construction method based on the CRT, and the irregular QC-LDPC(3843, 3603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all five of these codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4851, 4546) code constructed by the proposed method has excellent error-correction performance and is well suited for optical transmission systems.
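    CRT-based combining pairs exponent entries defined modulo coprime circulant sizes into a single entry modulo their product, which is how such constructions grow the code length without shrinking the girth. A minimal sketch of the underlying Chinese remainder reconstruction (not the paper's full parity-check construction, whose base matrices are not given in the record):

    ```python
    from math import gcd, prod

    def crt(residues, moduli):
        """Chinese remainder theorem: return the unique x modulo prod(moduli)
        with x = r_i (mod m_i) for pairwise-coprime moduli m_i."""
        assert all(gcd(a, b) == 1 for i, a in enumerate(moduli)
                   for b in moduli[i + 1:]), "moduli must be pairwise coprime"
        M = prod(moduli)
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            # pow(Mi, -1, m) is the modular inverse of Mi modulo m.
            x += r * Mi * pow(Mi, -1, m)
        return x % M

    # Combining shift exponents defined mod 5 and mod 7 yields one exponent
    # defined mod 35, i.e. a larger circulant block.
    x = crt([3, 4], [5, 7])   # x = 18: 18 % 5 == 3 and 18 % 7 == 4
    ```

    Applied entry-wise to the exponent matrices of two small QC-LDPC codes with coprime circulant sizes, this map produces the exponent matrix of a longer combined code.
    
    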

  13. Multi-Scale Modeling of an Integrated 3D Braided Composite with Applications to Helicopter Arm

    NASA Astrophysics Data System (ADS)

    Zhang, Diantang; Chen, Li; Sun, Ying; Zhang, Yifan; Qian, Kun

    2017-10-01

    A study is conducted with the aim of developing a multi-scale analytical method for designing a composite helicopter arm with a three-dimensional (3D) five-directional braided structure. Based on an analysis of the 3D braided microstructure, a multi-scale finite element model is developed. Finite element analysis of the load capacity of the 3D five-directional braided composite helicopter arm is carried out using the software ABAQUS/Standard. The influences of the braiding angle and loading condition on the stress and strain distribution of the helicopter arm are simulated. The results show that the proposed multi-scale method is capable of accurately predicting the mechanical properties of 3D braided composites, as validated by comparison of the stress-strain curves of meso-scale RVCs. Furthermore, it is found that the braiding angle is an important factor affecting the mechanical properties of the 3D five-directional braided composite helicopter arm. Based on the optimized structural parameters, a nearly net-shaped composite helicopter arm is fabricated using a novel resin transfer moulding (RTM) process.

  14. Improved Speech Coding Based on Open-Loop Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.

    2000-01-01

    A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open-loop predictor, but also performs the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open-loop speech analysis model. Here we demonstrate that minimizing the error of the closed-loop speech reconstruction, instead of the simpler open-loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model of the chosen order with the chosen number of bits for the codebook.
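    The open-loop idea, fitting predictor coefficients on the analysis model, quantizing the residual, and re-running the synthesis recursion at the decoder, can be sketched as below. This is a generic linear-prediction sketch under simplifying assumptions (least-squares coefficient fit, uniform scalar quantizer, first p samples sent verbatim), not the paper's nonlinear joint-optimization algorithm.

```python
import numpy as np

def lpc_fit(x, p):
    """Least-squares open-loop linear predictor of order p:
    x[n] ~ sum_k a[k] * x[n-1-k]."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def quantize(r, bits):
    """Uniform scalar quantizer for the prediction residual."""
    lo, hi = r.min(), r.max()
    step = (hi - lo) / (2 ** bits - 1)
    return np.round((r - lo) / step) * step + lo

def encode_decode(x, p=8, bits=6):
    a = lpc_fit(x, p)
    # Open-loop residual of the fitted predictor.
    pred = np.array([a @ x[n - p : n][::-1] for n in range(p, len(x))])
    rq = quantize(x[p:] - pred, bits)
    # Decoder: synthesis recursion driven by the quantized residual.
    y = list(x[:p])  # assume the first p samples are transmitted exactly
    for n in range(p, len(x)):
        y.append(a @ np.array(y[n - p : n][::-1]) + rq[n - p])
    return np.array(y)
```

    For a stable synthesis filter, the reconstruction error stays on the order of the quantizer step; the paper's point is that jointly optimizing coefficients and quantization levels does better than this simple open-loop split.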

  15. A computational study of low-head direct chill slab casting of aluminum alloy AA2024

    NASA Astrophysics Data System (ADS)

    Hasan, Mainul; Begum, Latifa

    2016-04-01

    The steady-state casting of an industrial-sized AA2024 slab has been modeled for a vertical low-head direct chill caster. A previously verified 3-D CFD code is used to investigate the solidification phenomena of this long-freezing-range alloy by varying the pouring temperature, the casting speed and the metal-mold contact heat transfer coefficient over 654-702 °C, 60-180 mm/min, and 1.0-4.0 kW/(m^2 K), respectively. The important predicted results are presented and thoroughly discussed.

  16. Development of the PARVMEC Code for Rapid Analysis of 3D MHD Equilibrium

    NASA Astrophysics Data System (ADS)

    Seal, Sudip; Hirshman, Steven; Cianciosa, Mark; Wingen, Andreas; Unterberg, Ezekiel; Wilcox, Robert; ORNL Collaboration

    2015-11-01

    The VMEC three-dimensional (3D) MHD equilibrium code has been used extensively for designing stellarator experiments and analyzing experimental data in such strongly 3D systems. Recent applications of VMEC include 2D systems such as tokamaks (in particular, the D3D experiment), where the application of very small (δB/B ~ 10^-3) 3D resonant magnetic field perturbations renders the underlying assumption of axisymmetry invalid. In order to facilitate the rapid analysis of such equilibria (for example, for reconstruction purposes), we have undertaken the task of parallelizing the VMEC code (PARVMEC) to produce a scalable and rapidly convergent equilibrium code for use on parallel distributed-memory platforms. The parallelization task naturally splits into three distinct parts: 1) the radial surfaces in the fixed-boundary part of the calculation; 2) the two 2D angular meshes needed to compute the Green's function integrals over the plasma boundary for the free-boundary part of the code; and 3) the block-tridiagonal matrix needed to compute the full (3D) pre-conditioner near the final equilibrium state. Preliminary results show that scalability is achieved for tasks 1 and 3, with task 2 nearing completion. The impact of this work on the rapid reconstruction of D3D plasmas using PARVMEC in the V3FIT code will be discussed. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.
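    Task 3 above centers on a block-tridiagonal solve. A serial sketch of the block Thomas algorithm, the standard direct method for such systems, is given below; PARVMEC's parallel implementation is not reproduced, and the block sizes here are toy values.

```python
import numpy as np

def block_tridiag_solve(A, B, C, d):
    """Block Thomas algorithm for the block-tridiagonal system
    B[i] x[i-1] + A[i] x[i] + C[i] x[i+1] = d[i]  (B[0], C[-1] unused).
    A, B, C have shape (n, m, m); d has shape (n, m)."""
    n, m, _ = A.shape
    Ap = A.astype(float).copy()
    dp = d.astype(float).copy()
    # Forward elimination of the sub-diagonal blocks.
    for i in range(1, n):
        f = B[i] @ np.linalg.inv(Ap[i - 1])
        Ap[i] = A[i] - f @ C[i - 1]
        dp[i] = d[i] - f @ dp[i - 1]
    # Back substitution.
    x = np.empty((n, m))
    x[-1] = np.linalg.solve(Ap[-1], dp[-1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(Ap[i], dp[i] - C[i] @ x[i + 1])
    return x
```

    The forward sweep's sequential data dependence is what makes the parallelization of this step (and the cyclic-reduction-style alternatives to it) a research problem in its own right.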

  17. Memory transfer optimization for a lattice Boltzmann solver on Kepler architecture nVidia GPUs

    NASA Astrophysics Data System (ADS)

    Mawson, Mark J.; Revell, Alistair J.

    2014-10-01

    The Lattice Boltzmann method (LBM) for solving fluid flow is naturally well suited to an efficient implementation for massively parallel computing, due to the prevalence of local operations in the algorithm. This paper presents and analyses the performance of a 3D lattice Boltzmann solver, optimized for third-generation nVidia GPU hardware, also known as 'Kepler'. We provide a review of previous optimization strategies and analyse data read/write times for different memory types. In LBM, the time propagation step (known as streaming) involves shifting data to adjacent locations and is central to parallel performance; here we examine three approaches which make use of different hardware options. Two of these make use of 'performance enhancing' features of the GPU: shared memory and the new shuffle instruction found in Kepler-based GPUs. These are compared to a standard transfer of data which relies instead on optimized storage to increase coalesced access. It is shown that the simplest approach is the most efficient: since LBM requires large numbers of registers per thread, the block size is limited and the efficiency of these special features is reduced. Detailed results are obtained for a D3Q19 LBM solver, which is benchmarked on nVidia K5000M and K20C GPUs. In the latter case the use of a read-only data cache is explored, and a peak performance of over 1036 Million Lattice Updates Per Second (MLUPS) is achieved. The appearance of a periodic bottleneck in the solver performance is also reported, believed to be hardware related; spikes in iteration time occur with a frequency of around 11 Hz for both GPUs, independent of the size of the problem.
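    The streaming step described above, shifting each distribution function to the neighbouring node along its lattice velocity, can be stated in a few lines; the NumPy sketch below uses a D2Q9 lattice with periodic boundaries for brevity and is illustrative only, not the CUDA kernels discussed in the paper.

```python
import numpy as np

# D2Q9 lattice velocities; streaming shifts each distribution f_i to its
# neighbour along velocity c_i (periodic boundaries via np.roll).
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def stream(f):
    """Periodic streaming of a (9, nx, ny) distribution array."""
    return np.stack([np.roll(fi, (cx, cy), axis=(0, 1))
                     for fi, (cx, cy) in zip(f, C)])
```

    On a GPU the same data movement must be mapped onto memory transactions, which is exactly where the shared-memory, shuffle, and coalesced-storage variants compared in the paper differ.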

  18. Comparison of Evolutionary (Genetic) Algorithm and Adjoint Methods for Multi-Objective Viscous Airfoil Optimizations

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code, ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which together with a common gridding approach restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy and design consistency.

  19. DYNA3D Code Practices and Developments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, L.; Zywicz, E.; Raboin, P.

    2000-04-21

    DYNA3D is an explicit, finite element code developed to solve high-rate dynamic simulations for problems of interest to the engineering mechanics community. The DYNA3D code has been under continuous development since 1976[1] by the Methods Development Group in the Mechanical Engineering Department of Lawrence Livermore National Laboratory. The pace of code development activities has substantially increased in the past five years, growing from one to between four and six code developers. This has necessitated the use of software tools such as CVS (Concurrent Versions System) to help manage multiple version updates. While on-line documentation with an Adobe PDF manual helps to communicate software developments, periodically a summary document describing recent changes and improvements in DYNA3D software is needed. The first part of this report describes issues surrounding software versions and source control. The remainder of this report details the major capability improvements since the last publicly released version of DYNA3D in 1996. Not included here are the many hundreds of bug corrections and minor enhancements, nor the development in DYNA3D between the manual release in 1993[2] and the public code release in 1996.

  20. Practical system for recording spatially lifelike 5.1 surround sound and 3D fully periphonic reproduction

    NASA Astrophysics Data System (ADS)

    Miller, Robert E. (Robin)

    2005-04-01

    In acoustic spaces that are played as extensions of musical instruments, tonality is a major contributor to the experience of reality. Tonality is described as a process of integration in our consciousness, over the reverberation time of the room, of many sonic arrivals in three dimensions, each directionally coded in a learned response by the listener's unique head-related transfer function (HRTF). Preserving this complex 3D directionality is key to lifelike reproduction of a recording. Conventional techniques such as stereo or 5.1-channel surround sound position the listener at the apex of a triangle or the center of a circle, not the center of the sphere of lifelike hearing. A periphonic reproduction system for music and movie entertainment, Virtual Reality, and Training Simulation, termed PerAmbio 3D/2D (Pat. pending), is described in theory and subjective tests; it captures the 3D sound field with a microphone array and transforms the periphonic signals into ordinary 6-channel media for either decoderless 2D replay on 5.1 systems, or lossless 3D replay with a decoder and five additional speakers. PerAmbio 3D/2D is described as a practical approach to preserving the spatial perception of reality, where the listening room and speakers disappear, leaving the acoustical impression of the original venue.

  1. Cosubstitution effect on the magnetic, transport, and thermoelectric properties of the electron-doped perovskite manganite CaMnO3

    NASA Astrophysics Data System (ADS)

    Okuda, T.; Fujii, Y.

    2010-11-01

    We have investigated the magnetic, transport, and thermoelectric properties of polycrystalline Ca1-xSrxMn1-yMoyO3, and have tried to optimize the n-type thermoelectric response below room temperature. The Sr substitution enlarges the Mn-O-Mn bond angle and increases the crystal symmetry, which enhances the one-electron transfer of the electrons doped by the Mo substitution. This effect promotes the competition between correlations of a G-type antiferromagnetic (AF) order and a C-type AF order accompanying a 3d(3z^2-r^2) orbital order, leading to a more complicated magnetic phase diagram for Ca0.75Sr0.25Mn1-yMoyO3 than for CaMn1-yMoyO3. A subtle balance between the effects of the enhanced one-electron transfer and of the disorder introduced into the A(Ca) site upon the transport properties enhances the dimensionless thermoelectric figure of merit ZT up to 0.03 at room temperature. However, a correlation of the 3d(3z^2-r^2) orbital order is also promoted by the Sr substitution, which bounds a further enhancement of ZT.
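    The dimensionless figure of merit quoted above is ZT = S^2 * sigma * T / kappa, with Seebeck coefficient S, electrical conductivity sigma, thermal conductivity kappa and absolute temperature T. A minimal sketch with illustrative values (not values from the paper):

```python
def figure_of_merit(seebeck_V_per_K, conductivity_S_per_m,
                    thermal_cond_W_per_mK, T_kelvin):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return (seebeck_V_per_K ** 2 * conductivity_S_per_m * T_kelvin
            / thermal_cond_W_per_mK)

# Illustrative n-type values (not from the paper): S = -150 uV/K,
# sigma = 2e4 S/m, kappa = 3 W/(m K) at T = 300 K.
zt = figure_of_merit(-150e-6, 2e4, 3.0, 300.0)  # = 0.045
```

    The sign of S drops out (it enters squared), which is why both n-type and p-type materials are characterized by the same ZT.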

  2. Spectroscopic and chemical reactivity analysis of D-Myo-Inositol using quantum chemical approach and its experimental verification

    NASA Astrophysics Data System (ADS)

    Mishra, Devendra P.; Srivastava, Anchal; Shukla, R. K.

    2017-07-01

    This paper describes the spectroscopic (1H and 13C NMR, FT-IR and UV-Visible), chemical, nonlinear optical and thermodynamic properties of D-Myo-Inositol using quantum chemical techniques and their experimental verification. The structural parameters of the compound are determined from the optimized geometry by the B3LYP method with the 6-311++G(d,p) basis set. It was found that the optimized parameters thus obtained are in close agreement with the experimental ones. A detailed interpretation of the infrared spectra of D-Myo-Inositol is also reported in the present work. After optimization, the proton and carbon NMR chemical shifts of the studied compound are calculated using GIAO and the 6-311++G(d,p) basis set. The search for organic materials with improved charge transfer properties requires precise quantum chemical calculations of the space-charge density distribution, state and transition dipole moments, and HOMO-LUMO states. The nature of the transitions in the observed UV-Visible spectrum of the compound has been studied by time-dependent density functional theory (TD-DFT). The global reactivity descriptors, such as chemical potential, electronegativity, hardness, softness and electrophilicity index, have been calculated using DFT. The thermodynamic calculation for the title compound was also performed at the B3LYP/6-311++G(d,p) level of theory. The standard statistical thermodynamic functions, namely heat capacity at constant pressure, entropy and enthalpy change, were obtained from the theoretical harmonic frequencies of the optimized molecule. It is observed that the values of heat capacity, entropy and enthalpy increase with temperature from 100 to 1000 K, which is attributed to the enhancement of molecular vibration with increasing temperature.
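    The global reactivity descriptors listed above follow from the frontier-orbital energies in the Koopmans approximation (IP ~ -E_HOMO, EA ~ -E_LUMO): chemical potential mu = -(IP+EA)/2, electronegativity chi = -mu, hardness eta = (IP-EA)/2, softness S = 1/(2*eta) in one common convention, and electrophilicity omega = mu^2/(2*eta). A sketch with hypothetical orbital energies (not values from the paper):

```python
def reactivity_descriptors(e_homo, e_lumo):
    """DFT global reactivity descriptors in the Koopmans approximation.
    Energies in eV; IP ~ -E_HOMO, EA ~ -E_LUMO."""
    ip, ea = -e_homo, -e_lumo
    mu = -(ip + ea) / 2.0           # chemical potential
    chi = -mu                        # electronegativity
    eta = (ip - ea) / 2.0            # chemical hardness
    softness = 1.0 / (2.0 * eta)     # one common convention for softness
    omega = mu ** 2 / (2.0 * eta)    # electrophilicity index
    return {"mu": mu, "chi": chi, "eta": eta, "S": softness, "omega": omega}

# Hypothetical frontier-orbital energies (eV), purely for illustration:
d = reactivity_descriptors(e_homo=-7.0, e_lumo=-1.0)
```

    Note that softness is sometimes defined as 1/eta; the convention in use should always be stated alongside the numbers.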

  3. Light Curve and Orbital Period Analysis of VX Lac

    NASA Astrophysics Data System (ADS)

    Yılmaz, M.; Nelson, R. H.; Şenavcı, H. V.; İzci, D.; Özavcı, İ.; Gümüş, D.

    2017-04-01

    In this study, we performed simultaneous light-curve and radial-velocity analyses, as well as a period analysis, of the eclipsing binary system VX Lac. Four-color (BVRI) light curves of the system were analysed using the W-D code. The results imply that VX Lac is a classic Algol-type binary with a mass ratio of q = 0.27, of which the less massive secondary component fills its Roche lobe. The orbital period behaviour of the system was analysed by assuming the light time effect (LITE) from a third body. The O-C analysis yielded a mass transfer rate of dM/dt = 1.86×10^-8 M⊙ yr^-1 and a minimal mass of the third body of M3 = 0.31 M⊙. The residuals from the mass transfer and the third body were also analysed, because another cyclic variation is seen in the O-C diagram. This periodic variation was examined under the hypotheses of stellar magnetic activity and a fourth body.

  4. Recent update of the RPLUS2D/3D codes

    NASA Technical Reports Server (NTRS)

    Tsai, Y.-L. Peter

    1991-01-01

    The development of the RPLUS2D/3D codes is summarized. These codes utilize LU algorithms to solve chemical non-equilibrium flows in a body-fitted coordinate system. The motivation behind the development of these codes is the need to numerically predict chemical non-equilibrium flows for the National AeroSpace Plane Program. Recent improvements include a vectorization method, blocking algorithms for geometric flexibility, out-of-core storage for large problems, and an LU-SW/UP combination for CPU-time efficiency and solution quality.

  5. Entropy Generation/Availability Energy Loss Analysis Inside MIT Gas Spring and "Two Space" Test Rigs

    NASA Technical Reports Server (NTRS)

    Ebiana, Asuquo B.; Savadekar, Rupesh T.; Patel, Kaushal V.

    2006-01-01

    The results of an entropy generation and availability energy loss analysis under conditions of oscillating pressure and oscillating helium gas flow in two Massachusetts Institute of Technology (MIT) test rigs, piston-cylinder and piston-cylinder-heat exchanger, are presented. Two solution domains, the gas spring (single-space) in the piston-cylinder test rig and the gas spring + heat exchanger (two-space) in the piston-cylinder-heat exchanger test rig, are of interest. The Sage and CFD-ACE+ commercial numerical codes are used to obtain 1-D and 2-D computer models, respectively, of each of the two solution domains and to simulate the oscillating gas flow and heat transfer effects in these domains. Second-law analysis is used to characterize the entropy generation and availability energy losses inside the two solution domains. Internal and external entropy generation and availability energy loss results predicted by Sage and CFD-ACE+ are compared. Thermodynamic loss analysis of simple systems such as the MIT test rigs is often useful for understanding some important features of the complex pattern-forming processes in more complex systems like the Stirling engine. This study is aimed at improving numerical codes for the prediction of thermodynamic losses via the development of a loss post-processor. The incorporation of loss post-processors in Stirling engine numerical codes will facilitate Stirling engine performance optimization. Loss analysis using entropy-generation rates due to heat and fluid flow is a relatively new technique for assessing component performance. It offers a deep insight into the flow phenomena, allows a more exact calculation of losses than is possible with traditional means involving the application of loss correlations, and provides an effective tool for improving component and overall system performance.
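    The local entropy-generation rate underlying such a second-law analysis has the standard textbook form (e.g. Bejan): a heat-conduction term k(dT/dx)^2/T^2 plus a viscous-dissipation term (mu/T) times the dissipation function. A 1D finite-difference sketch of that formula follows; it is illustrative only, not the Sage or CFD-ACE+ post-processor described above.

```python
import numpy as np

def entropy_generation_1d(T, dx, k, mu=0.0, dudx=None):
    """Local volumetric entropy generation rate (textbook form):
    conduction term k*(dT/dx)^2 / T^2 plus, optionally, a simple 1D
    viscous-dissipation term (mu/T)*(du/dx)^2."""
    dTdx = np.gradient(T, dx)          # finite-difference temperature gradient
    s = k * dTdx ** 2 / T ** 2
    if dudx is not None:
        s = s + mu / T * np.asarray(dudx) ** 2
    return s
```

    Integrating this field over the domain volume gives the total entropy-generation rate, and multiplying by the reference (dead-state) temperature gives the corresponding availability (exergy) destruction.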

  6. Sparse gammatone signal model optimized for English speech does not match the human auditory filters.

    PubMed

    Strahl, Stefan; Mertins, Alfred

    2008-07-18

    Evidence that neurosensory systems use sparse signal representations, as well as the improved performance of signal processing algorithms using sparse signal models, has raised interest in sparse signal coding in recent years. For natural audio signals like speech and environmental sounds, gammatone atoms have been derived as expansion functions that generate a nearly optimal sparse signal model (Smith, E., Lewicki, M., 2006. Efficient auditory coding. Nature 439, 978-982). Furthermore, gammatone functions are established models for the human auditory filters. Thus far, a practical application of a sparse gammatone signal model has been prevented by the fact that deriving the sparsest representation is, in general, computationally intractable. In this paper, we applied an accelerated version of the matching pursuit algorithm for gammatone dictionaries, allowing real-time and large data set applications. We show that a sparse signal model in general has advantages in audio coding and that a sparse gammatone signal model encodes speech more efficiently, in terms of sparseness, than a sparse modified discrete cosine transform (MDCT) signal model. We also show that the optimal gammatone parameters derived for English speech do not match the human auditory filters, suggesting that signal processing applications should derive the parameters individually for each applied signal class instead of using psychometrically derived parameters. For brain research, it means that care should be taken when directly transferring findings of optimality from technical to biological systems.
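    Matching pursuit itself is simple to state: repeatedly correlate the residual with all dictionary atoms, pick the best-matched atom, and subtract its contribution. The sketch below uses a generic unit-norm dictionary; the gammatone dictionary and the accelerated search of the paper are not reproduced here.

```python
import numpy as np

def matching_pursuit(x, D, n_atoms):
    """Greedy matching pursuit: at each step pick the dictionary atom
    (a unit-norm column of D) best correlated with the residual."""
    r = np.asarray(x, float).copy()
    coeffs = []
    for _ in range(n_atoms):
        c = D.T @ r                      # correlations with all atoms
        k = int(np.argmax(np.abs(c)))    # index of the best atom
        coeffs.append((k, c[k]))
        r = r - c[k] * D[:, k]           # subtract its contribution
    return coeffs, r
```

    The residual energy is non-increasing at every step, which is the property that makes the greedy scheme usable as a sparse coder even though it is not globally optimal.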

  7. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    NASA Astrophysics Data System (ADS)

    Xu, Chuanfu; Deng, Xiaogang; Zhang, Lilun; Fang, Jianbin; Wang, Guangxue; Jiang, Yi; Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach improves performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU-GPU collaborative simulations that solve realistic CFD problems with both complex configurations and high-order schemes.
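    The gather/scatter optimization described above amounts to packing scattered ghost-face data into one contiguous buffer so that a single PCI-e (or MPI) transfer replaces many small ones. A NumPy sketch of the idea follows; the function names and toy 3D block are illustrative, not HOSTA code.

```python
import numpy as np

def pack_ghost_faces(u):
    """Gather the six boundary faces of a 3D block into one contiguous
    buffer, so a single host-device (or MPI) transfer replaces six."""
    faces = [u[0], u[-1], u[:, 0], u[:, -1], u[:, :, 0], u[:, :, -1]]
    shapes = [f.shape for f in faces]
    return np.concatenate([f.ravel() for f in faces]), shapes

def unpack_ghost_faces(buf, shapes):
    """Scatter the contiguous buffer back into the six face arrays."""
    out, ofs = [], 0
    for s in shapes:
        n = int(np.prod(s))
        out.append(buf[ofs : ofs + n].reshape(s))
        ofs += n
    return out
```

    Fewer, larger transfers amortize the per-transfer latency, which is the dominant cost for the many small ghost and singularity messages of a multi-block grid.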

  8. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn; Deng, Xiaogang; Zhang, Lilun

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach improves performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU-GPU collaborative simulations that solve realistic CFD problems with both complex configurations and high-order schemes.

  9. Moving from Batch to Field Using the RT3D Reactive Transport Modeling System

    NASA Astrophysics Data System (ADS)

    Clement, T. P.; Gautam, T. R.

    2002-12-01

    The public domain reactive transport code RT3D (Clement, 1997) is a general-purpose numerical code for solving coupled, multi-species reactive transport in saturated groundwater systems. The code uses MODFLOW to simulate flow and several modules of MT3DMS to simulate the advection and dispersion processes. RT3D employs an operator-split strategy, which allows the code to solve the coupled reactive transport problem in a modular fashion. The coupling between reaction and transport is defined through a separate module where the reaction equations are specified. The code supports a versatile user-defined reaction option that allows users to define their own reaction system through a Fortran-90 subroutine, known as the RT3D reaction package. Furthermore, a utility code, known as BATCHRXN, allows users to independently test and debug their reaction package. To analyze a new reaction system at the batch scale, users should first run BATCHRXN to test the ability of their reaction package to model the batch data. After testing, the reaction package can simply be ported to the RT3D environment to study the model response under 1-, 2-, or 3-dimensional transport conditions. This paper presents example problems that demonstrate the methods for moving from batch- to field-scale simulations using the BATCHRXN and RT3D codes. The first example describes a simple first-order reaction system for simulating the sequential degradation of tetrachloroethene (PCE) and its daughter products. The second example uses a relatively complex reaction system for describing the multiple degradation pathways of tetrachloroethane (PCA) and its daughter products. References: 1) Clement, T.P., RT3D - A modular computer code for simulating reactive multi-species transport in 3-Dimensional groundwater aquifers, Battelle Pacific Northwest National Laboratory Research Report, PNNL-SA-28967, September, 1997. Available at: http://bioprocess.pnl.gov/rt3d.htm.
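    The operator-split strategy can be illustrated with the first example above, sequential first-order decay during 1D advective transport: each time step applies a transport operator, then a reaction operator. A minimal sketch follows (explicit upwind advection, explicit Euler reactions, two species standing in for PCE and its first daughter product); it is illustrative only, not RT3D code.

```python
import numpy as np

def advect_upwind(c, v, dx, dt):
    """Explicit first-order upwind advection step (v > 0, zero inflow)."""
    cn = c.copy()
    cn[1:] -= v * dt / dx * (c[1:] - c[:-1])
    cn[0] -= v * dt / dx * c[0]
    return cn

def react(c1, c2, k1, k2, dt):
    """Operator-split reaction step for the first-order chain
    c1 -> c2 (e.g. PCE -> TCE), integrated with explicit Euler."""
    d1 = -k1 * c1
    d2 = k1 * c1 - k2 * c2
    return c1 + dt * d1, c2 + dt * d2

def step(c1, c2, v, dx, dt, k1, k2):
    """One operator-split step: transport first, then reactions."""
    c1 = advect_upwind(c1, v, dx, dt)
    c2 = advect_upwind(c2, v, dx, dt)
    return react(c1, c2, k1, k2, dt)
```

    Because the reaction operator only sees local concentrations, swapping in a different reaction package (as RT3D's user-defined option allows) leaves the transport step untouched.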

  10. A novel construction method of QC-LDPC codes based on the subgroup of the finite field multiplicative group for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-01-01

    According to the requirements of the increasing development of optical transmission systems, a novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes based on a subgroup of the finite field multiplicative group is proposed. This construction method can effectively avoid the girth-4 phenomenon and has advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties and more flexible adjustment of the code length and code rate. The simulation results show that the error-correction performance of the QC-LDPC(3780, 3540) code with a code rate of 93.7% constructed by the proposed method is excellent: at a bit error rate (BER) of 10^-7, its net coding gain is 0.3 dB, 0.55 dB, 1.4 dB and 1.98 dB higher, respectively, than those of the QC-LDPC(5334, 4962) code constructed by the method based on the inverse element characteristics of the finite field multiplicative group, the SCG-LDPC(3969, 3720) code constructed by the systematically constructed Gallager (SCG) random construction method, the LDPC(32640, 30592) code in ITU-T G.975.1, and the classic RS(255, 239) code in ITU-T G.975 that is widely used in optical transmission systems. Therefore, the constructed QC-LDPC(3780, 3540) code is well suited to optical transmission systems.
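    Avoiding the girth-4 phenomenon in a QC-LDPC code built from a fully occupied circulant-shift matrix reduces to a well-known algebraic check (often attributed to Fossorier): a 4-cycle exists iff some pair of rows and pair of columns satisfies E[i1][j1] - E[i1][j2] + E[i2][j2] - E[i2][j1] ≡ 0 (mod m). The sketch below tests that condition only; it does not reproduce the subgroup-based construction of the paper.

```python
from itertools import combinations

def has_girth_four(E, m):
    """Return True iff the QC-LDPC code defined by the fully occupied
    shift (exponent) matrix E over m x m circulants contains a 4-cycle:
    E[i1][j1] - E[i1][j2] + E[i2][j2] - E[i2][j1] == 0 (mod m)
    for some row pair (i1, i2) and column pair (j1, j2)."""
    rows, cols = len(E), len(E[0])
    for i1, i2 in combinations(range(rows), 2):
        for j1, j2 in combinations(range(cols), 2):
            if (E[i1][j1] - E[i1][j2] + E[i2][j2] - E[i2][j1]) % m == 0:
                return True
    return False
```

    Algebraic constructions like the subgroup method aim to choose the shift values so that this condition never holds, guaranteeing girth at least 6.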

  11. Targeted and efficient transfer of multiple value-added genes into wheat varieties

    USDA-ARS?s Scientific Manuscript database

    With an objective to optimize an approach to transfer multiple value added genes to a wheat variety while maintaining and improving agronomic performance, two alleles with mutations in the acetolactate synthase (ALS) gene located on wheat chromosomes 6B and 6D providing tolerance to imidazolinone (I...

  12. Ice Accretion Calculations for a Commercial Transport Using the LEWICE3D, ICEGRID3D and CMARC Programs

    NASA Technical Reports Server (NTRS)

    Bidwell, Colin S.; Pinella, David; Garrison, Peter

    1999-01-01

    Collection efficiency and ice accretion calculations were made for a commercial transport using the NASA Lewis LEWICE3D ice accretion code, the ICEGRID3D grid code and the CMARC panel code. All of the calculations were made on a Windows 95 based personal computer. The ice accretion calculations were made for the nose, wing, horizontal tail and vertical tail surfaces. Ice shapes typifying those of a 30 minute hold were generated. Collection efficiencies were also generated for the entire aircraft using the newly developed unstructured collection efficiency method. The calculations highlight the flexibility and cost effectiveness of the LEWICE3D, ICEGRID3D, CMARC combination.

  13. Speech coding at low to medium bit rates

    NASA Astrophysics Data System (ADS)

    Leblanc, Wilfred Paul

    1992-09-01

    Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short-term filter are developed by applying a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be robust both to input characteristics and to channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost by using significant structure in the excitation codebooks, while the search complexity is greatly reduced. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction, attempting joint optimization of the short-term filter, the adaptive codebook, and the excitation. Improvements in signal-to-noise ratio of 1-2 dB are realized in practice.
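    Multistage vector quantization, as discussed above, quantizes at each stage the residual left by the previous stage, so the decoder simply sums the selected codewords. A minimal sketch with toy two-dimensional codebooks (not trained speech codebooks):

```python
import numpy as np

def nearest(codebook, x):
    """Index of the codeword (row of codebook) nearest to vector x."""
    return int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))

def msvq_encode(x, codebooks):
    """Multistage VQ: each stage quantizes the residual of the previous
    stage; the indices are what gets transmitted."""
    residual, idx = np.asarray(x, float), []
    for cb in codebooks:
        k = nearest(cb, residual)
        idx.append(k)
        residual = residual - cb[k]
    return idx, residual

def msvq_decode(idx, codebooks):
    """Decoder: sum the selected codewords from all stages."""
    return sum(cb[k] for k, cb in zip(idx, codebooks))
```

    This greedy stage-by-stage search is exactly what the tree search and joint codebook design in the thesis improve upon, since the per-stage nearest codeword is not always part of the jointly best multi-stage combination.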

  14. Exact first order scattering correction for vector radiative transfer in coupled atmosphere and ocean systems

    NASA Astrophysics Data System (ADS)

    Zhai, Peng-Wang; Hu, Yongxiang; Josset, Damien B.; Trepte, Charles R.; Lucker, Patricia L.; Lin, Bing

    2012-06-01

    We have developed a Vector Radiative Transfer (VRT) code for coupled atmosphere and ocean systems based on the successive order of scattering (SOS) method. In order to achieve efficiency and maintain accuracy, the scattering matrix is expanded in terms of the Wigner d functions, and the delta-fit or delta-M technique is used to truncate the commonly present large forward scattering peak. To further improve the accuracy of the SOS code, we have implemented an analytical first-order scattering treatment using the exact scattering matrix of the medium in the SOS code. The expansion and truncation techniques are kept for higher-order scattering. The exact first-order scattering correction was originally published by Nakajima and Tanaka [1]. A new contribution of this work is to account for the exact secondary light scattering caused by light reflected by and transmitted through the rough air-sea interface.
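    The delta-M truncation mentioned above removes the forward peak by representing a fraction f of the scattering as a delta function and rescaling the optical thickness, single-scattering albedo and remaining expansion coefficients. A sketch of the standard scaling (Wiscombe's delta-M), assuming the normalization chi[0] = 1 so that f = chi[M]:

```python
def delta_m_scale(tau, omega, chi, M):
    """delta-M truncation of a forward-peaked phase function.
    chi[l] are normalized Legendre expansion coefficients (chi[0] == 1);
    f = chi[M] is the truncated forward-peak fraction. Scaling:
        tau'   = (1 - omega*f) * tau
        omega' = omega * (1 - f) / (1 - omega*f)
        chi'_l = (chi[l] - f) / (1 - f),  for l < M
    """
    f = chi[M]
    tau_s = (1.0 - omega * f) * tau
    omega_s = omega * (1.0 - f) / (1.0 - omega * f)
    chi_s = [(c - f) / (1.0 - f) for c in chi[:M]]
    return tau_s, omega_s, chi_s
```

    The scaled problem keeps only M expansion terms yet preserves fluxes well; the exact first-order correction in the SOS code then restores the accuracy lost for singly scattered radiance.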

  15. Overall Traveling-Wave-Tube Efficiency Improved By Optimized Multistage Depressed Collector Design

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.

    2002-01-01

    The microwave traveling-wave tube (TWT) is used widely for space communications and high-power airborne transmitting sources. One of the most important features in designing a TWT is overall efficiency. Yet, overall TWT efficiency is strongly dependent on the efficiency of the electron beam collector, particularly for high values of collector efficiency. For these reasons, the NASA Glenn Research Center developed an optimization algorithm based on simulated annealing to quickly design highly efficient multistage depressed collectors (MDCs). Simulated annealing is a strategy for solving highly nonlinear combinatorial optimization problems. Its major advantage over other methods is its ability to avoid becoming trapped in local minima. Simulated annealing is based on an analogy to statistical thermodynamics, specifically the physical process of annealing: heating a material to a temperature that permits many atomic rearrangements and then cooling it carefully and slowly until it freezes into a strong, minimum-energy crystalline structure. This minimum-energy crystal corresponds to the optimal solution of a mathematical optimization problem. The TWT used as a baseline for optimization was the 32-GHz, 10-W helical TWT developed for the Cassini mission to Saturn. The method of collector analysis and design used was a 2-1/2-dimensional computational procedure that employs two types of codes: a large-signal analysis code and an electron trajectory code. The large-signal analysis code produces the spatial, energetic, and temporal distributions of the spent beam entering the MDC. An electron trajectory code uses the resultant data to perform the actual collector analysis. The MDC was optimized for maximum MDC efficiency and minimum final kinetic energy of all collected electrons (to reduce heat transfer). The preceding figure shows the geometric and electrical configuration of an optimized collector with an efficiency of 93.8 percent. 
The results show the improvement in collector efficiency from 89.7 to 93.8 percent, resulting in an increase of three overall efficiency points. In addition, the time to design a highly efficient MDC was reduced from a month to a few days. All work was done in-house at Glenn for the High Rate Data Delivery Program. Future plans include optimizing the MDC and TWT interaction circuit in tandem to further improve overall TWT efficiency.
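    The annealing loop described above is generic and can be sketched in a few lines; the cost function, temperatures, and step size below are illustrative stand-ins, not the collector-design objective.

    ```python
    import math
    import random

    def simulated_annealing(cost, x0, step=1.0, t0=20.0, cooling=0.999,
                            n_iter=8000, seed=1):
        """Minimize cost(x): downhill moves are always accepted, uphill moves
        are accepted with probability exp(-delta/T), letting the search climb
        out of local minima while T is high; T is then lowered slowly
        (geometric cooling), mirroring careful, slow physical annealing."""
        rng = random.Random(seed)
        x, fx = x0, cost(x0)
        best, fbest, t = x, fx, t0
        for _ in range(n_iter):
            cand = x + step * rng.uniform(-1.0, 1.0)
            fc = cost(cand)
            if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x, fx
            t *= cooling
        return best, fbest

    # Multimodal toy objective: global minimum at x = 0, local minima near 2*pi*k.
    f = lambda x: x * x + 10.0 * (1.0 - math.cos(x))
    xb, fb = simulated_annealing(f, x0=12.0)
    ```

    The occasional acceptance of uphill moves is exactly what lets the method leave a poor local minimum, the advantage the text highlights over purely greedy searches.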

  16. Mixing in 3D Sparse Multi-Scale Grid Generated Turbulence

    NASA Astrophysics Data System (ADS)

    Usama, Syed; Kopec, Jacek; Tellez, Jackson; Kwiatkowski, Kamil; Redondo, Jose; Malik, Nadeem

    2017-04-01

    Flat 2D fractal grids are known to alter turbulence characteristics downstream of the grid as compared to regular grids with the same blockage ratio and the same mass inflow rates [1]. This has excited interest in the turbulence community for possible exploitation for enhanced mixing and related applications. Recently, a new 3D multi-scale grid design has been proposed [2] such that each generation of length scale of turbulence grid elements is held in its own frame; the overall effect is a 3D co-planar arrangement of grid elements. This produces a 'sparse' grid system whereby each generation of grid elements produces a turbulent wake pattern that interacts with the other wake patterns downstream. A critical motivation here is that the effective blockage ratio in the 3D Sparse Grid Turbulence (3DSGT) design is significantly lower than in the flat 2D counterpart - typically the blockage ratio could be reduced from, say, 20% in 2D down to 4% in the 3DSGT. If this idea can be realized in practice, it could greatly enhance the efficiency of turbulent mixing and transfer processes, with many possible applications. Work has begun on the 3DSGT experimentally using Surface Flow Image Velocimetry (SFIV) [3] at the European facility in the Max Planck Institute for Dynamics and Self-Organization located in Göttingen, Germany, and also at the Technical University of Catalonia (UPC) in Spain, and numerically using Direct Numerical Simulation (DNS) at King Fahd University of Petroleum & Minerals (KFUPM) in Saudi Arabia and at the University of Warsaw in Poland. DNS is the most useful method against which to compare the experimental results, and we are studying different codes such as Incompact3d and OpenFOAM. Many variables will eventually be investigated for optimal mixing conditions: for example, the number of scale generations, the spacing between frames, the size ratio of grid elements, and inflow conditions. 
We will report upon the first set of findings from the 3DSGT by the time of the conference. {Acknowledgements}: This work has been supported partly by the EuHIT grant, 'Turbulence Generated by Sparse 3D Multi-Scale Grid (M3SG)', 2017. {References} [1] S. Laizet, J. C. Vassilicos. DNS of Fractal-Generated Turbulence. Flow Turbulence Combust 87:673-705, (2011). [2] N. A. Malik. Sparse 3D Multi-Scale Grid Turbulence Generator. USPTO Application no. 14/710,531, Patent Pending, (2015). [3] J. Tellez, M. Gomez, B. Russo, J. M. Redondo. Surface Flow Image Velocimetry (SFIV) for hydraulics applications. 18th Int. Symposium on the Application of Laser Imaging Techniques in Fluid Mechanics, Lisbon, Portugal (2016).

  17. Smart photodetector arrays for error control in page-oriented optical memory

    NASA Astrophysics Data System (ADS)

    Schaffer, Maureen Elizabeth

    1998-12-01

    Page-oriented optical memories (POMs) have been proposed to meet high speed, high capacity storage requirements for input/output intensive computer applications. This technology offers the capability for storage and retrieval of optical data in two-dimensional pages resulting in high throughput data rates. Since currently measured raw bit error rates for these systems fall several orders of magnitude short of industry requirements for binary data storage, powerful error control codes must be adopted. These codes must be designed to take advantage of the two-dimensional memory output. In addition, POMs require an optoelectronic interface to transfer the optical data pages to one or more electronic host systems. Conventional charge coupled device (CCD) arrays can receive optical data in parallel, but the relatively slow serial electronic output of these devices creates a system bottleneck thereby eliminating the POM advantage of high transfer rates. Also, CCD arrays are "unintelligent" interfaces in that they offer little data processing capabilities. The optical data page can be received by two-dimensional arrays of "smart" photo-detector elements that replace conventional CCD arrays. These smart photodetector arrays (SPAs) can perform fast parallel data decoding and error control, thereby providing an efficient optoelectronic interface between the memory and the electronic computer. This approach optimizes the computer memory system by combining the massive parallelism and high speed of optics with the diverse functionality, low cost, and local interconnection efficiency of electronics. In this dissertation we examine the design of smart photodetector arrays for use as the optoelectronic interface for page-oriented optical memory. We review options and technologies for SPA fabrication, develop SPA requirements, and determine SPA scalability constraints with respect to pixel complexity, electrical power dissipation, and optical power limits. 
Next, we examine data modulation and error correction coding for the purpose of error control in the POM system. These techniques are adapted, where possible, for 2D data and evaluated as to their suitability for a SPA implementation in terms of BER, code rate, decoder time and pixel complexity. Our analysis shows that differential data modulation combined with relatively simple block codes known as array codes provides a powerful means to achieve the desired data transfer rates while reducing error rates to industry requirements. Finally, we demonstrate the first smart photodetector array designed to perform parallel error correction on an entire page of data and satisfy the sustained data rates of page-oriented optical memories. Our implementation integrates a monolithic PN photodiode array and differential input receiver for optoelectronic signal conversion with a cluster error correction code using 0.35-μm CMOS. This approach provides high sensitivity, low electrical power dissipation, and fast parallel correction of 2 × 2-bit cluster errors in an 8 × 8 bit code block to achieve corrected output data rates scalable to 102 Gbps in the current technology, increasing to 1.88 Tbps in 0.1-μm CMOS.
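    The appeal of array codes for a SPA is that their parity checks are local and independent, so they map naturally onto parallel per-row/per-column hardware. The sketch below uses the simplest member of the family, a single-error-correcting row/column parity code on an 8 × 8 block; the 2 × 2 cluster code in the dissertation is more powerful, and this is only an illustration of the principle.

    ```python
    import numpy as np

    def encode_block(data):
        """Append one even-parity bit per row and per column of a bit block."""
        return data.sum(axis=1) % 2, data.sum(axis=0) % 2

    def correct_single_error(recv, row_par, col_par):
        """Recompute parities; a single flipped bit fails exactly one row check
        and one column check and lies at their intersection. Each check depends
        only on its own row or column, so all checks can run in parallel."""
        r = np.flatnonzero((recv.sum(axis=1) % 2) != row_par)
        c = np.flatnonzero((recv.sum(axis=0) % 2) != col_par)
        out = recv.copy()
        if len(r) == 1 and len(c) == 1:
            out[r[0], c[0]] ^= 1          # flip the located bit
        return out

    rng = np.random.default_rng(7)
    page = rng.integers(0, 2, size=(8, 8))
    row_par, col_par = encode_block(page)
    noisy = page.copy()
    noisy[3, 5] ^= 1                      # inject a single-bit error
    fixed = correct_single_error(noisy, row_par, col_par)
    ```

    In a SPA, each row/column checker would be a small logic block beside its photodetectors, which is what makes page-parallel decoding feasible.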

  18. Sugar Radical Formation by a Proton Coupled Hole Transfer in 2′-Deoxyguanosine Radical Cation (2′-dG•+): A Theoretical Treatment

    PubMed Central

    Kumar, Anil; Sevilla, Michael D.

    2009-01-01

    Previous experimental and theoretical work has established that electronic excitation of a guanine cation radical in nucleosides or in DNA itself leads to sugar radical formation by deprotonation from the deoxyribose sugar. In this work we investigate a ground electronic state pathway for such sugar radical formation in a hydrated one-electron-oxidized 2′-deoxyguanosine (dG•+ + 7H2O), using density functional theory (DFT) with the B3LYP functional and the 6-31G* basis set. We follow the stretching of the C5′-H bond in dG•+ to gain an understanding of the energy requirements to transfer the hole from the base to the sugar ring and then to deprotonate to proton acceptor sites in solution and on the guanine ring. The geometries of the reactant (dG•+ + 7H2O), the transition state (TS) for deprotonation of the C5′ site, and the product (dG(•C5′, N7-H+) + 7H2O) were fully optimized. The zero point energy (ZPE) corrected activation energy (TS) for the proton transfer (PT) from C5′ is calculated to be 9.0 kcal/mol and is achieved by stretching the C5′-H bond by 0.13 Å from its equilibrium bond distance (1.099 Å). Remarkably, this small bond stretch is sufficient to transfer the “hole” (positive charge and spin) from guanine to the C5′ site on the deoxyribose group. Beyond the TS, the proton (H+) spontaneously adds to water to form a hydronium ion (H3O+) as an intermediate. The proton subsequently transfers to the N7 site of the guanine (product). The 9 kcal/mol barrier suggests slow thermal conversion of the cation radical to the sugar radical, but also suggests that localized vibrational excitations would be sufficient to induce rapid sugar radical formation in DNA base cation radicals. PMID:19754084

  19. Development of the WRF-CO2 4D-Var assimilation system v1.0

    NASA Astrophysics Data System (ADS)

    Zheng, Tao; French, Nancy H. F.; Baxter, Martin

    2018-05-01

    Regional atmospheric CO2 inversions commonly use Lagrangian particle trajectory model simulations to calculate the required influence function, which quantifies the sensitivity of a receptor to flux sources. In this paper, an adjoint-based four-dimensional variational (4D-Var) assimilation system, WRF-CO2 4D-Var, is developed to provide an alternative approach. This system is developed based on the Weather Research and Forecasting (WRF) modeling system, including the system coupled to chemistry (WRF-Chem), with tangent linear and adjoint codes (WRFPLUS), and with data assimilation (WRFDA), all in version 3.6. In WRF-CO2 4D-Var, CO2 is modeled as a tracer and its feedback to meteorology is ignored. This configuration allows most WRF physical parameterizations to be used in the assimilation system without incurring a large amount of code development. WRF-CO2 4D-Var solves for the optimized CO2 flux scaling factors in a Bayesian framework. Two variational optimization schemes are implemented for the system: the first uses the limited memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) minimization algorithm (L-BFGS-B) and the second uses the Lanczos conjugate gradient (CG) in an incremental approach. WRFPLUS forward, tangent linear, and adjoint models are modified to include the physical and dynamical processes involved in the atmospheric transport of CO2. The system is tested by simulations over a domain covering the continental United States at 48 km × 48 km grid spacing. The accuracy of the tangent linear and adjoint models is assessed by comparing against finite difference sensitivity. The system's effectiveness for CO2 inverse modeling is tested using pseudo-observation data. The results of the sensitivity and inverse modeling tests demonstrate the potential usefulness of WRF-CO2 4D-Var for regional CO2 inversions.
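    The Bayesian optimization at the heart of such a system can be illustrated with a toy linear-transport analogue: minimize a prior-plus-observation cost over flux scaling factors, supplying the gradient (the role played by the adjoint model) to L-BFGS-B. The operator H, dimensions, and variances below are invented for illustration; they are not the WRF-CO2 configuration.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy Bayesian cost in the spirit of a 4D-Var flux inversion: the control
    # vector s holds flux scaling factors, H is a stand-in linear transport
    # operator mapping fluxes to observed concentrations y.
    rng = np.random.default_rng(42)
    n_flux, n_obs = 5, 20
    H = rng.normal(size=(n_obs, n_flux))
    s_true = np.array([1.2, 0.8, 1.0, 1.5, 0.9])
    y = H @ s_true + 0.01 * rng.normal(size=n_obs)   # synthetic observations
    s_prior = np.ones(n_flux)                        # prior scaling factors
    b_var, r_var = 0.5 ** 2, 0.01 ** 2               # prior / obs variances

    def cost_and_grad(s):
        """J(s) and its gradient; the gradient is what an adjoint supplies."""
        d_b, d_o = s - s_prior, H @ s - y
        J = 0.5 * d_b @ d_b / b_var + 0.5 * d_o @ d_o / r_var
        g = d_b / b_var + H.T @ d_o / r_var
        return J, g

    res = minimize(cost_and_grad, s_prior, jac=True, method="L-BFGS-B")
    ```

    The incremental conjugate-gradient scheme mentioned in the paper solves a linearized version of the same minimization; both need only cost and gradient evaluations, never the full Hessian.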

  20. Numerical implementation, verification and validation of two-phase flow four-equation drift flux model with Jacobian-free Newton–Krylov method

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-08-24

    This study presents a numerical investigation on using the Jacobian-free Newton–Krylov (JFNK) method to solve the two-phase flow four-equation drift flux model with realistic constitutive correlations (‘closure models’). The drift flux model is based on Ishii and his collaborators’ work. Additional constitutive correlations for vertical channel flow, such as two-phase flow pressure drop, flow regime map, wall boiling and interfacial heat transfer models, were taken from the RELAP5-3D Code Manual and included to complete the model. The staggered grid finite volume method and the fully implicit backward Euler method were used for the spatial discretization and time integration schemes, respectively. The Jacobian-free Newton–Krylov method shows no difficulty in solving the two-phase flow drift flux model with a discrete flow regime map. In addition to the Jacobian-free approach, the preconditioning matrix is obtained by using the default finite differencing method provided in the PETSc package, and consequently the labor-intensive implementation of a complex analytical Jacobian matrix is avoided. Extensive and successful numerical verification and validation have been performed to prove the correct implementation of the models and methods. Code-to-code comparison with RELAP5-3D has further demonstrated the successful implementation of the drift flux model.
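    The Jacobian-free idea is that Newton's method only ever needs Jacobian-vector products J·v, which can be approximated by finite differences of the residual, (F(u + εv) − F(u))/ε, so the Jacobian is never formed or stored. A minimal sketch with SciPy's newton_krylov on a stand-in two-equation residual (not the drift flux model):

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    def residual(u):
        """Stand-in nonlinear residual F(u) = 0; the solver only ever calls
        this function, never an analytical Jacobian."""
        x, y = u
        return np.array([x ** 2 + y ** 2 - 4.0, x - y])

    # newton_krylov approximates J(u)·v by finite-differencing the residual and
    # solves each Newton update with a Krylov method: JFNK in miniature.
    sol = newton_krylov(residual, np.array([1.0, 0.5]))
    ```

    The exact solution of this toy system is x = y = √2; in a production solver like the one described, a preconditioner (here, PETSc's finite-difference one) accelerates the inner Krylov iterations.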

  1. The coupling of MATISSE and the SE-WORKBENCH: a new solution for simulating efficiently the atmospheric radiative transfer and the sea surface radiation

    NASA Astrophysics Data System (ADS)

    Cathala, Thierry; Douchin, Nicolas; Latger, Jean; Caillault, Karine; Fauqueux, Sandrine; Huet, Thierry; Lubarre, Luc; Malherbe, Claire; Rosier, Bernard; Simoneau, Pierre

    2009-05-01

    The SE-WORKBENCH workshop, also called CHORALE (French acronym for "simulated Optronic Acoustic Radar battlefield"), is used by the French DGA (MoD) and several other Defense organizations and companies all around the world to perform multi-sensor simulations. CHORALE enables the user to create virtual and realistic multispectral 3D scenes that may contain several types of target, and then generate the physical signal received by a sensor, typically an IR sensor. The SE-WORKBENCH can be used either as a collection of software modules through dedicated GUIs or as an API made of a large number of specialized toolkits. The SE-WORKBENCH is made of several functional blocks: one for geometrically and physically modeling the terrain and the targets, one for building the simulation scenario, and one for rendering the synthetic environment, both in real and non-real time. Among the modules that the modeling block is composed of, SE-ATMOSPHERE is used to simulate the atmospheric conditions of a Synthetic Environment and then to integrate the impact of these conditions on a scene. This software product generates a physical atmosphere exploitable by the SE-WORKBENCH tools generating spectral images. It relies on several external radiative transfer models, such as MODTRAN V4.2 in the current version. MATISSE [4,5] is a background scene generator developed for the computation of natural background spectral radiance images and useful atmospheric radiative quantities (radiance and transmission along a line of sight, local illumination, solar irradiance ...). Backgrounds include atmosphere, low and high altitude clouds, sea and land. A particular characteristic of the code is its ability to take into account atmospheric spatial variability (temperatures, mixing ratio, etc.) along each line of sight. An Application Programming Interface (API) is included to facilitate its use in conjunction with external codes. 
MATISSE is currently considered as a new external radiative transfer model to be integrated in SE-ATMOSPHERE as a complement to MODTRAN. Whereas the latter is used as a whole, MATISSE can be used step by step and modularly as an API: this avoids precomputing large atmospheric parameter tables, as is done currently with MODTRAN. The use of MATISSE will also enable a real coupling between the ray tracing process of the SE-WORKBENCH and the radiative transfer model of MATISSE, improving the link between a general atmospheric model and a specific 3D terrain. The paper will demonstrate the advantages for the SE-WORKBENCH of using MATISSE as a new atmospheric code, and also for computing the radiative properties of the sea surface.

  2. Systematic Review and Meta-Analysis of Diagnostic Accuracy of Serum Refractometry and Brix Refractometry for the Diagnosis of Inadequate Transfer of Passive Immunity in Calves.

    PubMed

    Buczinski, S; Gicquel, E; Fecteau, G; Takwoingi, Y; Chigerwe, M; Vandeweerd, J M

    2018-01-01

    Transfer of passive immunity in calves can be assessed by direct measurement of immunoglobulin G (IgG) by methods such as radial immunodiffusion (RID) or turbidimetric immunoassay (TIA). IgG can also be measured indirectly by methods such as serum refractometry (REF) or Brix refractometry (BRIX). To determine the accuracy of REF and BRIX for assessment of inadequate transfer of passive immunity (ITPI) in calves. Systematic review and meta-analysis of diagnostic accuracy studies. Databases (PubMed and CAB Abstract, Searchable Proceedings of Animal Science) and Google Scholar were searched for relevant studies. Studies were eligible if the accuracy (sensitivity and specificity) of REF or BRIX was determined using direct measurement of IgG by RID or turbidimetry as the reference standard. The study population included calves <14 days old that were fed natural colostrum (colostrum replacement products were excluded). Quality assessment was performed with the QUADAS-2 tool. Hierarchical models were used for meta-analysis. From 1,291 references identified, 13 studies of 3,788 calves were included. Of these, 11 studies evaluated REF and 5 studies evaluated BRIX. The median (range) prevalence of ITPI (defined as calves with IgG <10 g/L by RID or TIA) was 21% (1.3-56%). Risk of bias and applicability concerns were generally low or unclear. For REF, summary estimates were obtained for 2 different cutoffs: 5.2 g/dL (6 studies) and 5.5 g/dL (5 studies). For the 5.2 g/dL cutoff, the summary sensitivity (95% CI) and specificity (95% CI) were 76.1% (63.8-85.2%) and 89.3% (82.3-93.7%); for the 5.5 g/dL cutoff, they were 88.2% (80.2-93.3%) and 77.9% (74.5-81.0%). Due to the low number of studies using the same cutoffs, summary estimates could not be obtained for BRIX. Despite their widespread use on dairy farms, evidence about the optimal strategy for using refractometry, including the optimal cutoff, is sparse (especially for BRIX). 
When using REF to rule out ITPI in herds, the 5.5 g/dL cutoff may be used whereas for ruling in ITPI, the 5.2 g/dL cutoff may be used. Copyright © 2017 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
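    The rule-out/rule-in recommendation can be checked with standard Bayes arithmetic, using the summary estimates above at the median ITPI prevalence of 21%: the 5.5 g/dL cutoff gives the higher negative predictive value (better for ruling out) and the 5.2 g/dL cutoff the higher positive predictive value (better for ruling in). A small sketch:

    ```python
    def predictive_values(sens, spec, prev):
        """Positive and negative predictive values from sensitivity,
        specificity, and disease prevalence (standard Bayes computation)."""
        tp = sens * prev                  # true positives per unit population
        fp = (1 - spec) * (1 - prev)      # false positives
        fn = (1 - sens) * prev            # false negatives
        tn = spec * (1 - prev)            # true negatives
        return tp / (tp + fp), tn / (tn + fn)

    # Summary estimates from the review, at the median ITPI prevalence of 21%.
    ppv_52, npv_52 = predictive_values(0.761, 0.893, 0.21)   # 5.2 g/dL cutoff
    ppv_55, npv_55 = predictive_values(0.882, 0.779, 0.21)   # 5.5 g/dL cutoff
    ```

    Predictive values shift with herd prevalence, which is one reason the review cautions that the optimal cutoff strategy remains unsettled.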

  3. Minimizing stellarator turbulent transport by geometric optimization

    NASA Astrophysics Data System (ADS)

    Mynick, H. E.

    2010-11-01

    Up to now, a transport optimized stellarator has meant one optimized to minimize neoclassical transport [H.E. Mynick, Phys. Plasmas 13, 058102 (2006)], while the task of also mitigating turbulent transport, usually the dominant transport channel in such designs, has not been addressed, due to the complexity of plasma turbulence in stellarators. However, with the advent of gyrokinetic codes valid for 3D geometries such as GENE [F. Jenko, W. Dorland, M. Kotschenreuther, B.N. Rogers, Phys. Plasmas 7, 1904 (2000)], and stellarator optimization codes such as STELLOPT [A. Reiman, G. Fu, S. Hirshman, L. Ku, et al., Plasma Phys. Control. Fusion 41, B273 (1999)], designing stellarators to also reduce turbulent transport has become a realistic possibility. We have been using GENE to characterize the dependence of turbulent transport on stellarator geometry [H.E. Mynick, P.A. Xanthopoulos, A.H. Boozer, Phys. Plasmas 16, 110702 (2009)], and to identify key geometric quantities which control the transport level. From the information obtained from these GENE studies, we are developing proxy functions which approximate the level of turbulent transport one may expect in a machine of a given geometry, and have extended STELLOPT to use these in its cost function, obtaining stellarator configurations with turbulent transport levels substantially lower than those in the original designs.

  4. Parallel CARLOS-3D code development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Putnam, J.M.; Kotulski, J.D.

    1996-02-01

    CARLOS-3D is a three-dimensional scattering code which was developed under the sponsorship of the Electromagnetic Code Consortium, and is currently used by over 80 aerospace companies and government agencies. The code has been extensively validated and runs on both serial workstations and parallel supercomputers such as the Intel Paragon. CARLOS-3D is a three-dimensional surface integral equation scattering code based on a Galerkin method of moments formulation employing Rao-Wilton-Glisson roof-top basis functions for triangular faceted surfaces. Fully arbitrary 3D geometries composed of multiple conducting and homogeneous bulk dielectric materials can be modeled. This presentation describes some of the extensions to the CARLOS-3D code, and how the operator structure of the code facilitated these improvements. Body of revolution (BOR) and two-dimensional geometries were incorporated by simply including new input routines, and the appropriate Galerkin matrix operator routines. Some additional modifications were required in the combined field integral equation matrix generation routine due to the symmetric nature of the BOR and 2D operators. Quadrilateral patched surfaces with linear roof-top basis functions were also implemented in the same manner. Quadrilateral facets and triangular facets can be used in combination to more efficiently model geometries with both large smooth surfaces and surfaces with fine detail such as gaps and cracks. Since the parallel implementation in CARLOS-3D is at a high level, these changes were independent of the computer platform being used. This approach minimizes code maintenance, while providing capabilities with little additional effort. Results are presented showing the performance and accuracy of the code for some large scattering problems. Comparisons between triangular faceted and quadrilateral faceted geometry representations will be shown for some complex scatterers.

  5. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi-3D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
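    The first-order moment procedure amounts to propagating input variances through sensitivity derivatives: the output mean is approximately f(μ) and the output variance approximately the sum of (∂f/∂xᵢ)²σᵢ², which can then be checked against Monte Carlo exactly as the paper does. The function below is an invented stand-in for the CFD output, not the Euler code.

    ```python
    import numpy as np

    def f(x):
        """Stand-in for a CFD output as a function of two input variables."""
        return x[0] ** 2 + np.sin(x[1])

    def grad_f(x):
        """First-order sensitivity derivatives of f."""
        return np.array([2.0 * x[0], np.cos(x[1])])

    mu = np.array([1.0, 0.5])        # input means
    sigma = np.array([0.05, 0.05])   # input standard deviations (independent)

    # First-order moment method: mean from f(mu), variance from sensitivities.
    mean_fo = f(mu)
    var_fo = np.sum((grad_f(mu) * sigma) ** 2)

    # Monte Carlo check of the approximation.
    rng = np.random.default_rng(0)
    samples = rng.normal(mu, sigma, size=(200_000, 2)).T
    mc = f(samples)
    ```

    For small input standard deviations the two variances agree closely; the second-order procedure in the paper adds curvature (second-derivative) terms to capture the remaining discrepancy.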

  6. Broadband and wide-angle RCS reduction using a 2-bit coding ultrathin metasurface at terahertz frequencies

    PubMed Central

    Liang, Lanju; Wei, Minggui; Yan, Xin; Wei, Dequan; Liang, Dachuan; Han, Jiaguang; Ding, Xin; Zhang, GaoYa; Yao, Jianquan

    2016-01-01

    A novel broadband and wide-angle 2-bit coding metasurface for radar cross section (RCS) reduction is proposed and characterized at terahertz (THz) frequencies. The ultrathin metasurface is composed of four digital elements based on a metallic double cross line structure. The reflection phase difference of neighboring elements is approximately 90° over a broad THz frequency band. RCS reduction is achieved by optimizing the coding element sequences, which redirects the electromagnetic energy in all directions over broad frequencies. An RCS reduction below −10 dB over a bandwidth from 0.7 THz to 1.3 THz is achieved in the experiments and numerical simulations. The simulation results also show that broadband RCS reduction can be achieved at incident angles below 60° for TE and TM polarizations for both flat and curved coding metasurfaces. These results open a new approach to flexibly control THz waves and may offer widespread applications for novel THz devices. PMID:27982089
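    The RCS-reduction mechanism, redirecting energy by phase coding, can be seen in a toy 1-D array-factor model: a uniform coding produces a single specular lobe, while a pseudo-random 2-bit coding (element phases 0°, 90°, 180°, 270°) spreads the reflected energy and lowers the peak. This sketch is far simpler than the full-wave simulations in the paper; the element count and spacing are illustrative assumptions.

    ```python
    import numpy as np

    def array_factor(codes, n_angles=721, spacing=0.5):
        """|AF| versus observation angle for a phase-coded linear array.
        codes: integers 0..3, element phase = code * 90 degrees;
        spacing: element pitch in wavelengths."""
        theta = np.linspace(-np.pi / 2, np.pi / 2, n_angles)
        n = np.arange(len(codes))
        phase = np.exp(1j * (np.pi / 2) * np.asarray(codes))   # 2-bit phases
        steer = np.exp(1j * 2 * np.pi * spacing * np.outer(np.sin(theta), n))
        return np.abs(steer @ phase)

    N = 64
    uniform = array_factor(np.zeros(N, dtype=int))     # all-same coding
    rng = np.random.default_rng(3)
    coded = array_factor(rng.integers(0, 4, N))        # pseudo-random coding
    ```

    The uniform array piles all the energy into one lobe of height N, while the coded array's peak is far lower; optimizing the coding sequence, as the paper does, pushes this peak down further still across the band.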

  7. Broadband and wide-angle RCS reduction using a 2-bit coding ultrathin metasurface at terahertz frequencies.

    PubMed

    Liang, Lanju; Wei, Minggui; Yan, Xin; Wei, Dequan; Liang, Dachuan; Han, Jiaguang; Ding, Xin; Zhang, GaoYa; Yao, Jianquan

    2016-12-16

    A novel broadband and wide-angle 2-bit coding metasurface for radar cross section (RCS) reduction is proposed and characterized at terahertz (THz) frequencies. The ultrathin metasurface is composed of four digital elements based on a metallic double cross line structure. The reflection phase difference of neighboring elements is approximately 90° over a broad THz frequency band. RCS reduction is achieved by optimizing the coding element sequences, which redirects the electromagnetic energy in all directions over broad frequencies. An RCS reduction below -10 dB over a bandwidth from 0.7 THz to 1.3 THz is achieved in the experiments and numerical simulations. The simulation results also show that broadband RCS reduction can be achieved at incident angles below 60° for TE and TM polarizations for both flat and curved coding metasurfaces. These results open a new approach to flexibly control THz waves and may offer widespread applications for novel THz devices.

  8. Observable Signatures of Wind-driven Chemistry with a Fully Consistent Three-dimensional Radiative Hydrodynamics Model of HD 209458b

    NASA Astrophysics Data System (ADS)

    Drummond, B.; Mayne, N. J.; Manners, J.; Carter, A. L.; Boutle, I. A.; Baraffe, I.; Hébrard, É.; Tremblin, P.; Sing, D. K.; Amundsen, D. S.; Acreman, D.

    2018-03-01

    We present a study of the effect of wind-driven advection on the chemical composition of hot-Jupiter atmospheres using a fully consistent 3D hydrodynamics, chemistry, and radiative transfer code, the Met Office Unified Model (UM). Chemical modeling of exoplanet atmospheres has primarily been restricted to 1D models that cannot account for 3D dynamical processes. In this work, we couple a chemical relaxation scheme to the UM to account for the chemical interconversion of methane and carbon monoxide. This is done consistently with the radiative transfer, meaning that departures from chemical equilibrium are included in the heating rates (and emission) and hence complete the feedback between the dynamics, thermal structure, and chemical composition. In this Letter, we simulate the well-studied atmosphere of HD 209458b. We find that the combined effect of horizontal and vertical advection leads to an increase in the methane abundance by several orders of magnitude, which is directly opposite to the trend found in previous works. Our results demonstrate the need to include 3D effects when considering the chemistry of hot-Jupiter atmospheres. We calculate transmission and emission spectra, as well as the emission phase curve, from our simulations. We conclude that gas-phase nonequilibrium chemistry is unlikely to explain the model–observation discrepancy in the 4.5 μm Spitzer/IRAC channel. However, we highlight other spectral regions, observable with the James Webb Space Telescope, where signatures of wind-driven chemistry are more prominent.
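    A chemical relaxation scheme replaces the full kinetics network with a linear relaxation of each species toward its local equilibrium abundance on a chemical timescale τ: dn/dt = −(n − n_eq)/τ. The sketch below integrates this with explicit Euler to show the quenching behavior that wind-driven advection exploits; the numbers are illustrative, not the UM's CH4/CO scheme.

    ```python
    def relax(n0, n_eq, tau, dt, nsteps):
        """Integrate dn/dt = -(n - n_eq)/tau with explicit Euler: the mixing
        ratio relaxes toward local chemical equilibrium on timescale tau."""
        n = n0
        for _ in range(nsteps):
            n += -dt * (n - n_eq) / tau
        return n

    # If the advection time is short compared with tau, the abundance is
    # "quenched": it barely moves toward the new equilibrium before the parcel
    # is carried onward. If tau is short, it tracks equilibrium closely.
    n_quenched = relax(n0=1e-4, n_eq=1e-8, tau=1e6, dt=10.0, nsteps=100)
    n_equilib = relax(n0=1e-4, n_eq=1e-8, tau=10.0, dt=1.0, nsteps=1000)
    ```

    In a 3D model, winds carrying quenched high-abundance gas into regions of low equilibrium abundance are exactly what raises the methane abundance above its 1D equilibrium prediction.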

  9. Nature and potency interactions of the hydrogen bond through the NBO analysis for charge transfer complex between 2-amino-4-hydroxy-6-methylpyrimidine and 2,3-pyrazinedicarboxylic acid

    NASA Astrophysics Data System (ADS)

    Faizan, Mohd; Afroz, Ziya; Alam, Mohammad Jane; Bhat, Sheeraz Ahmad; Ahmad, Shabbir; Ahmad, Afaq

    2018-05-01

    The intermolecular interactions in complex formation between 2-amino-4-hydroxy-6-methylpyrimidine (AHMP) and 2,3-pyrazinedicarboxylic acid (PDCA) have been explored using density functional theory calculations. The isolated 1:1 molecular geometry of the proton transfer (PT) complex between AHMP and PDCA has been optimized on a counterpoise-corrected potential energy surface (PES) at the DFT-B3LYP/6-31G(d,p) level of theory in the gas phase. Further, the formation of a hydrogen bonded charge transfer (HBCT) complex between PDCA and AHMP is also discussed. The PT energy barrier between the two extremes is calculated using a potential energy surface (PES) scan by varying the bond length. The intermolecular interactions have been analyzed from the theoretical perspective of natural bond orbital (NBO) analysis. In addition, the interaction energy between the molecular fragments involved in the complex formation has also been computed by the counterpoise procedure at the same level of theory.

  10. New PDC bit optimizes drilling performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Besson, A.; Gudulec, P. le; Delwiche, R.

    1996-05-01

    The lithology in northwest Argentina contains a major section where polycrystalline diamond compact (PDC) bits have not succeeded in the past. The section consists of dense shales and cemented sandstone stringers with limestone laminations. Conventional PDC bits experienced premature failures in the section. A new generation PDC bit tripled the rate of penetration (ROP) and increased the potential footage per bit by five times. Recent improvements in PDC bit technology that enabled the improved performance include: the ability to control PDC cutter quality; use of an advanced cutter layout defined by 3D software; use of a cutter face design code for optimized cleaning and cooling; and mastering vibration reduction features, including spiraled blades.

  11. Single-particle cryo-EM-Improved ab initio 3D reconstruction with SIMPLE/PRIME.

    PubMed

    Reboul, Cyril F; Eager, Michael; Elmlund, Dominika; Elmlund, Hans

    2018-01-01

    Cryogenic electron microscopy (cryo-EM) and single-particle analysis now enable the determination of high-resolution structures of macromolecular assemblies that have resisted X-ray crystallography and other approaches. We developed the SIMPLE open-source image-processing suite for analysing cryo-EM images of single particles. A core component of SIMPLE is the probabilistic PRIME algorithm for identifying clusters of images in 2D and determining the relative orientations of single-particle projections in 3D. Here, we extend our previous work on PRIME and introduce new stochastic optimization algorithms that improve the robustness of the approach. Our refined method for the identification of homogeneous subsets of images in accurate register substantially improves the resolution of the cluster centers and of the ab initio 3D reconstructions derived from them. We now obtain maps with a resolution better than 10 Å by exclusively processing cluster centers. Excellent parallel code performance on over-the-counter laptops and CPU workstations is demonstrated. © 2017 The Protein Society.
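
The stochastic optimization behind PRIME searches a rugged objective over orientation space. As a generic illustration of that class of methods, and emphatically not the actual SIMPLE/PRIME algorithm, the following sketch shows simulated annealing escaping the local minima of a made-up multimodal 1-D function:

```python
# Generic simulated annealing on a multimodal objective (illustration only;
# the function f and all parameters are invented for this sketch).
import math
import random

def f(x):
    return math.sin(5 * x) + 0.5 * (x - 1.0) ** 2  # several local minima

def anneal(x0, rng, steps=5000, t0=1.0, sigma=0.3):
    x = best = x0
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-3        # cooling schedule
        cand = x + rng.gauss(0.0, sigma)       # random perturbation
        # Accept downhill moves always, uphill moves with Boltzmann probability;
        # occasional uphill acceptance is what lets the search escape local minima.
        if f(cand) < f(x) or rng.random() < math.exp((f(x) - f(cand)) / t):
            x = cand
        if f(x) < f(best):
            best = x
    return best

best = anneal(3.0, random.Random(1))
print(best, f(best))
```

A greedy (downhill-only) search started at the same point would typically stall in the nearest local basin; tracking `best` separately keeps the lowest value ever visited.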

  12. Orion Parachute Riser Cutter Development

    NASA Technical Reports Server (NTRS)

    Oguz, Sirri; Salazar, Frank

    2011-01-01

    This paper presents the tests and analytical approach used in the development of a steel riser cutter for the CEV Parachute Assembly System (CPAS) used on the Orion crew module. Figure 1 shows the riser cutter and the steel riser bundle, which consists of six individual cables. Due to the highly compressed schedule, the initial unavailability of the riser material and the Orion Forward Bay mechanical constraints, JSC relied primarily on a combination of internal ballistics analysis and LS-DYNA simulation for this project. Various one-dimensional internal ballistics codes that use a standard equation of state and conservation of energy have commonly been used in the development of CAD devices for initial first-order estimates and as an enhancement to the test program. While these codes are very accurate for propellant performance prediction, they usually lack a fully defined kinematic model for dynamic predictions. A simple piston device can easily and accurately be modeled using an equation of motion. However, the accuracy of analytical models is greatly reduced for more complicated devices with complex external loads, nonlinear trajectories or unique unlocking features. A 3D finite element model of a CAD device with all critical features included can vastly improve the analytical ballistic predictions when it is used as a supplement to the ballistic code. During this project, an LS-DYNA 3D structural model was used to predict the riser resisting load that was needed for the ballistic code. A Lagrangian model with eroding elements, shown in Figure 2, was used for the blade, steel riser and the anvil. The riser material failure strain was fine-tuned by matching the dent depth on the anvil with the actual test data. The LS-DYNA model was also utilized to optimize the blade tip design for the most efficient cut. In parallel, the propellant type and amount were determined using the CADPROG internal ballistics code. Initial test results showed a good match with the LS-DYNA and CADPROG simulations. The final paper will present a detailed roadmap from initial ballistic modeling and LS-DYNA simulation to performance testing. A blade shape optimization study will also be presented.
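
The "simple piston device ... modeled using an equation of motion" mentioned above can be sketched as follows. This is a generic illustration, not the project's model: a constant chamber pressure is assumed, all parameter values are hypothetical, and a real CAD analysis would couple this to an internal ballistics pressure history.

```python
# Toy one-degree-of-freedom piston model: gas pressure acting on a piston
# area accelerates the cutter against a constant resisting load.
# All numbers are hypothetical placeholders (SI units).

def piston_stroke(pressure, area, mass, resist_force, stroke, dt=1e-6):
    """Integrate m * x'' = P*A - F_resist with explicit Euler steps until
    the piston has travelled `stroke` metres; return (time, velocity)."""
    x = v = t = 0.0
    while x < stroke:
        a = (pressure * area - resist_force) / mass
        v += a * dt
        x += v * dt
        t += dt
    return t, v

t_end, v_end = piston_stroke(pressure=50e6, area=5e-4, mass=0.2,
                             resist_force=10e3, stroke=0.05)
print(f"stroke time = {t_end * 1e3:.2f} ms, exit velocity = {v_end:.1f} m/s")
```

Replacing the constant `pressure` with a time-dependent pressure curve from a ballistics code, and `resist_force` with the LS-DYNA-predicted riser resisting load, is where the coupling described in the abstract would enter.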

  13. CFL3D Version 6.4-General Usage and Aeroelastic Analysis

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.; Rumsey, Christopher L.; Biedron, Robert T.

    2006-01-01

    This document contains the course notes for the computational fluid dynamics code CFL3D version 6.4. It is intended to provide users, from basic to advanced, the information necessary to successfully use the code for a broad range of cases. Much of the course covers capability that has been a part of previous versions of the code, with material compiled from the CFL3D v5.0 manual and from the CFL3D v6 web site prior to the current release. This part of the material is presented for users of the code not familiar with computational fluid dynamics. There is new capability in CFL3D version 6.4 presented here that has not previously been published. There are also outdated features no longer used or recommended in recent releases of the code. The information offered here supersedes earlier manuals and updates outdated usage; where current usage supersedes older versions, that is noted. These course notes also provide hints for usage, code installation and examples not found elsewhere.

  14. Interfacility Transfers to General Pediatric Floors: A Qualitative Study Exploring the Role of Communication.

    PubMed

    Rosenthal, Jennifer L; Okumura, Megumi J; Hernandez, Lenore; Li, Su-Ting T; Rehm, Roberta S

    2016-01-01

    Children with special health care needs often require health services that are only provided at subspecialty centers. Such children who present to nonspecialty hospitals might require a hospital-to-hospital transfer. When transitioning between medical settings, communication is an integral aspect that can affect the quality of patient care. The objectives of the study were to identify barriers and facilitators to effective interfacility pediatric transfer communication to general pediatric floors from the perspectives of referring and accepting physicians, and then to develop a conceptual model for effective interfacility transfer communication. This was a single-center qualitative study using grounded theory methodology. Referring and accepting physicians of children with special health care needs were interviewed. Four researchers coded the data in ATLAS.ti (version 7, Scientific Software Development GmbH, Berlin, Germany), following a 2-step process of open coding and then focused coding until no new codes emerged. The research team reached consensus on the final major categories and subsequently developed a conceptual model. Eight referring and 9 accepting physicians were interviewed. Theoretical coding resulted in 3 major categories: streamlined transfer process, quality handoff and 2-way communication, and positive relationships between physicians across facilities. The conceptual model unites these categories and shows how they contribute to effective interfacility transfer communication. Proposed interventions involved standardizing the communication process and incorporating technology such as telemedicine during transfers. Communication is perceived to be an integral component of interfacility transfers. We recommend that transfer systems be re-engineered to make the process more streamlined, to improve the quality of the handoff and 2-way communication, and to facilitate positive relationships between physicians across facilities.
Copyright © 2016 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  15. Lossy to lossless object-based coding of 3-D MRI data.

    PubMed

    Menegaz, Gloria; Thiran, Jean-Philippe

    2002-01-01

    We propose a fully three-dimensional (3-D) object-based coding system that exploits the diagnostic relevance of the different regions of the volumetric data for rate allocation. The data are first decorrelated via a 3-D discrete wavelet transform. Implementation via the lifting-steps scheme maps integers to integers, enabling lossless coding, and facilitates the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bitstream to the different objects, which can be independently accessed and reconstructed at any quality up to lossless. Two fully 3-D coding strategies are considered: embedded zerotree coding (EZW-3D) and multidimensional layered zero coding (MLZC), both generalized for region-of-interest (ROI)-based processing. In order to avoid artifacts along region boundaries, some extra coefficients must be encoded for each object. This gives rise to an overhead in the bitstream with respect to the case where the volume is encoded as a whole. The amount of such extra information depends on both the filter length and the decomposition depth. The system is characterized on a set of head magnetic resonance images. Results show that MLZC and EZW-3D have competitive performance. In particular, the best MLZC mode outperforms the other state-of-the-art techniques on one of the datasets for which results are available in the literature.
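
The lifting-steps idea that makes the integer-to-integer transform reversible can be sketched in one dimension. The example below uses the well-known reversible 5/3 (LeGall) filter (a standard choice for lossless wavelet coding, not necessarily the exact filter of this paper): rounding inside each lifting step keeps every value integer, yet the inverse undoes each step exactly.

```python
# Reversible 5/3 lifting on a 1-D integer signal of even length.
# Boundary samples are handled by symmetric clamping of the indices.

def lift_53_forward(x):
    even, odd = x[0::2], x[1::2]
    n = len(odd)
    # predict: detail = odd sample minus rounded average of its even neighbours
    d = [odd[i] - ((even[i] + even[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    # update: approximation = even sample plus rounded quarter-sum of details
    s = [even[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    return s, d

def lift_53_inverse(s, d):
    # Each lifting step is inverted exactly by subtracting what was added,
    # so the rounding never destroys information.
    n = len(d)
    even = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    odd = [d[i] + ((even[i] + even[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    x = [0] * (2 * n)
    x[0::2], x[1::2] = even, odd
    return x

signal = [12, 14, 15, 14, 12, 9, 7, 6]
s, d = lift_53_forward(signal)
print(lift_53_inverse(s, d) == signal)  # True: reconstruction is exact
```

The same structure extends separably to 3-D volumes, and restricting which coefficients are encoded is what makes the object-based (ROI) access described above possible.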

  16. Multi-optimization Criteria-based Robot Behavioral Adaptability and Motion Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, Francois G.

    2002-06-01

    Robotic tasks are typically defined in Task Space (e.g., the 3-D world), whereas robots are controlled in Joint Space (motors). The transformation from Task Space to Joint Space must consider the task objectives (e.g., high precision, strength optimization, torque optimization), the task constraints (e.g., obstacles, joint limits, non-holonomic constraints, contact or tool task constraints), and the robot kinematics configuration (e.g., tools, type of joints, mobile platform, manipulator, modular additions, locked joints). Commercially available robots are optimized for a specific set of tasks, objectives and constraints and, therefore, their control codes are extremely specific to a particular set of conditions. Thus, there exists a multiplicity of codes, each handling a particular set of conditions, but none suitable for use on robots with widely varying tasks, objectives, constraints, or environments. On the other hand, most DOE missions and tasks are typically ''batches of one''. Attempting to use commercial codes for such work requires significant personnel and schedule costs for re-programming or adding code to the robots whenever a change in task objective, robot configuration, number and type of constraints, etc. occurs. The objective of our project is to develop a ''generic code'' to implement this Task-Space to Joint-Space transformation that would allow robot behavior adaptation, in real time (at loop rate), to changes in task objectives, number and type of constraints, modes of control, and kinematics configuration (e.g., new tools, added modules). Our specific goal is to develop a single code for the general solution of under-specified systems of algebraic equations that is suitable for solving the inverse kinematics of robots; is usable for all types of robots (mobile robots, manipulators, mobile manipulators, etc.) with no limitation on the number of joints and the number of controlled Task-Space variables; can adapt to real-time changes in the number and type of constraints and in task objectives; and can adapt to changes in kinematics configurations (change of module, change of tool, joint-failure adaptation, etc.).
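
A common way to solve the under-specified Task-Space to Joint-Space problem described above is resolved-rate control with a damped least-squares pseudoinverse of the task Jacobian. The toy example below illustrates that standard technique, not the project's actual code: a redundant 3-link planar arm (3 joints, 2 task variables) is driven toward a target point, with made-up link lengths and gains.

```python
# Damped least-squares (pseudoinverse) inverse kinematics for a redundant
# 3-link planar arm. Hypothetical parameters throughout.
import math

L = [1.0, 1.0, 1.0]  # link lengths (illustrative)

def fk(q):
    """Forward kinematics: end-effector (x, y) of the 3-link planar arm."""
    angles = [sum(q[:k + 1]) for k in range(3)]
    return (sum(L[k] * math.cos(angles[k]) for k in range(3)),
            sum(L[k] * math.sin(angles[k]) for k in range(3)))

def jacobian(q):
    """2x3 task Jacobian d(x, y)/dq."""
    angles = [sum(q[:k + 1]) for k in range(3)]
    return [[-sum(L[k] * math.sin(angles[k]) for k in range(j, 3)) for j in range(3)],
            [sum(L[k] * math.cos(angles[k]) for k in range(j, 3)) for j in range(3)]]

def dls_step(q, target, lam=0.1, gain=0.5):
    """One damped least-squares update: dq = J^T (J J^T + lam^2 I)^-1 e."""
    x, y = fk(q)
    ex, ey = target[0] - x, target[1] - y
    J = jacobian(q)
    # Solve the 2x2 system (J J^T + lam^2 I) w = e by hand.
    a = sum(v * v for v in J[0]) + lam * lam
    b = sum(u * v for u, v in zip(J[0], J[1]))
    d = sum(v * v for v in J[1]) + lam * lam
    det = a * d - b * b
    wx = (d * ex - b * ey) / det
    wy = (a * ey - b * ex) / det
    return [q[j] + gain * (J[0][j] * wx + J[1][j] * wy) for j in range(3)]

q = [0.3, 0.2, 0.1]
for _ in range(200):
    q = dls_step(q, (1.2, 1.5))
print(fk(q))  # close to the target (1.2, 1.5)
```

Because the system is under-specified (3 joints, 2 task variables), infinitely many joint solutions exist; the damping term lam keeps the update well-behaved near singularities, and the leftover null space is exactly where additional objectives and constraints of the kind listed above can be injected.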

  17. Optimal boundary conditions for ORCA-2 model

    NASA Astrophysics Data System (ADS)

    Kazantsev, Eugene

    2013-08-01

    A 4D-Var data assimilation technique is applied to the ORCA-2 configuration of NEMO in order to identify the optimal parametrization of boundary conditions on the lateral boundaries as well as on the bottom and the surface of the ocean. The influence of boundary conditions on the solution is analyzed both within and beyond the assimilation window. It is shown that the optimal bottom and surface boundary conditions allow us to better represent jet streams such as the Gulf Stream and the Kuroshio. Analyzing the reasons for the jets' reinforcement, we note that data assimilation has a major impact on the parametrization of the bottom boundary conditions for u and v. Automatic generation of the tangent and adjoint codes is also discussed. The Tapenade software is shown to be able to produce adjoint code that can be used after a memory-usage optimization.
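
The adjoint principle exploited by 4D-Var, and by tools like Tapenade, can be illustrated on a toy model: the gradient of a misfit cost with respect to the control (here the initial condition) is obtained from a single backward (adjoint) sweep of the model. The model below is a trivial scalar recurrence, not NEMO/ORCA-2, and the "observations" are invented.

```python
# Adjoint gradient of a 4D-Var-style cost for the scalar model x_{k+1} = A*x_k,
# verified against a finite-difference estimate.

A = 0.9  # model coefficient (illustrative)

def forward(x0, n):
    """Run the model n steps from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(A * xs[-1])
    return xs

def cost_and_gradient(x0, obs):
    """Cost J = 0.5 * sum_k (x_k - y_k)^2 and dJ/dx0 via the adjoint sweep."""
    xs = forward(x0, len(obs) - 1)
    J = 0.5 * sum((x - y) ** 2 for x, y in zip(xs, obs))
    g = 0.0
    for x, y in reversed(list(zip(xs, obs))):  # backward (adjoint) sweep
        g = A * g + (x - y)                    # adjoint recurrence
    return J, g

obs = [1.0, 0.8, 0.7, 0.6, 0.5]  # made-up observations
J1, g = cost_and_gradient(0.5, obs)
J2, _ = cost_and_gradient(0.5 + 1e-6, obs)
print(abs((J2 - J1) / 1e-6 - g))  # near zero: adjoint matches finite differences
```

The point of the adjoint is that this single backward sweep yields the full gradient at roughly the cost of one extra model run, whereas finite differences would require one perturbed run per control variable; for a model like NEMO with millions of controls, that difference is what makes 4D-Var feasible.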

  18. Is 3D true non linear traveltime tomography reasonable ?

    NASA Astrophysics Data System (ADS)

    Herrero, A.; Virieux, J.

    2003-04-01

    The data sets requiring 3D analysis tools, whether from seismic exploration (both onshore and offshore experiments) or natural seismicity (microseismicity surveys or post-event measurements), are increasingly numerous. Classical linearized tomographies, as well as earthquake localization codes, need an accurate 3D background velocity model. However, if the medium is complex and a priori information is not available, a 1D analysis is not able to provide an adequate background velocity image. Moreover, the design of the acquisition layouts is often intrinsically 3D, which makes even 2D approaches difficult, especially in natural seismicity cases. The solution thus relies on a true non-linear 3D approach, which allows the model space to be explored and an optimal velocity image to be identified. The problem then becomes a practical one, and its feasibility depends on the available computing resources (memory and time). In this presentation, we show that tackling a 3D traveltime tomography problem with an extensive non-linear approach, combining fast traveltime estimators based on level-set methods with optimization techniques such as a multiscale strategy, is feasible. Moreover, because the management of inhomogeneous inversion parameters is easier in a non-linear approach, we describe how to perform a joint non-linear inversion for the seismic velocities and the source locations.
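
The "fast traveltime estimators" such a tomography needs can be illustrated with a Dijkstra-style shortest-path solve on a gridded slowness model: a graph-based cousin of the fast-marching/level-set eikonal solvers mentioned above. The 4-neighbour stencil and the uniform test model below are illustrative simplifications, not the authors' solver.

```python
# First-arrival traveltimes on a 2-D slowness grid via Dijkstra's algorithm.
import heapq

def traveltimes(slowness, src):
    """First-arrival times from src over a 2-D grid of slowness values."""
    ny, nx = len(slowness), len(slowness[0])
    t = [[float("inf")] * nx for _ in range(ny)]
    t[src[0]][src[1]] = 0.0
    heap = [(0.0, src)]
    while heap:
        ti, (i, j) = heapq.heappop(heap)
        if ti > t[i][j]:
            continue  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                # edge cost = mean slowness of the two cells times unit spacing
                tn = ti + 0.5 * (slowness[i][j] + slowness[ni][nj])
                if tn < t[ni][nj]:
                    t[ni][nj] = tn
                    heapq.heappush(heap, (tn, (ni, nj)))
    return t

model = [[1.0] * 5 for _ in range(5)]  # uniform slowness: 1 s per unit cell
t = traveltimes(model, (0, 0))
print(t[0][4], t[4][4])  # 4.0 and 8.0 with this 4-neighbour stencil
```

A non-linear tomography wraps many such forward solves inside an optimization loop over candidate velocity models, which is why the speed of the traveltime estimator dominates the feasibility question discussed above.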

  19. Monte Carlo calculation for the development of a BNCT neutron source (1eV-10KeV) using MCNP code.

    PubMed

    El Moussaoui, F; El Bardouni, T; Azahra, M; Kamili, A; Boukhal, H

    2008-09-01

    Different materials have been studied in order to produce an epithermal neutron beam between 1 eV and 10 keV, which is extensively used to irradiate patients with brain tumors such as GBM. For this purpose, we have studied three different neutron moderators (H(2)O, D(2)O and BeO) and their combinations, four reflectors (Al(2)O(3), C, Bi, and Pb) and two filters (Cd and Bi). The calculations showed that the best assembly configuration corresponds to the combination of the three moderators H(2)O, BeO and D(2)O together with an Al(2)O(3) reflector and the two filters Cd+Bi, which optimizes the epithermal fraction of the neutron spectrum to 72% and minimizes the thermal fraction to 4%; it can thus be used to treat deep brain tumors. The calculations have been performed by means of the Monte Carlo N-Particle code (MCNP 5C). Our results strongly encourage further study of irradiation of the head with epithermal neutron fields.
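
The moderation physics behind such an assembly can be sketched with a toy Monte Carlo of neutron slowing-down by elastic scattering, in the spirit of (but far simpler than) the MCNP calculation above. After an elastic collision with a nucleus of mass number A, the scattered neutron energy is uniformly distributed on [alpha*E, E] with alpha = ((A-1)/(A+1))^2. The 2 MeV source energy, 1 eV cutoff and hydrogen moderator below are illustrative choices, not the paper's configuration.

```python
# Toy Monte Carlo of neutron slowing-down by elastic scattering.
import random

def collisions_to_slow_down(e0_ev, e_cut_ev, A, rng):
    """Count elastic collisions needed to moderate a neutron from e0_ev
    down to e_cut_ev in a moderator of mass number A."""
    alpha = ((A - 1) / (A + 1)) ** 2
    e, n = e0_ev, 0
    while e > e_cut_ev:
        e *= rng.uniform(alpha, 1.0)  # post-collision energy fraction
        n += 1
    return n

rng = random.Random(0)
samples = [collisions_to_slow_down(2e6, 1.0, A=1, rng=rng) for _ in range(20000)]
print(sum(samples) / len(samples))  # about 15 for hydrogen (ln(2e6) ~ 14.5 lethargy units)
```

Heavier moderator nuclei (larger A, hence larger alpha) remove less energy per collision, which is the trade-off that motivates comparing H2O, D2O and BeO combinations in the first place.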

  20. Growth of zinc selenide single crystals by physical vapor transport in microgravity

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz

    1993-01-01

    The goals of this research were the optimization of growth parameters for large (20 mm diameter and length) zinc selenide single crystals with low structural defect density, and the development of a 3-D numerical model for the transport rates to be expected in physical vapor transport under a given set of thermal and geometrical boundary conditions, in order to guide the conduct of the growth experiments. In the crystal growth studies, it was decided to apply the Effusive Ampoule PVT technique (EAPVT) exclusively to the growth of ZnSe. In this technique, the accumulation of transport-limiting gaseous components at the growing crystal is suppressed by continuous effusion to vacuum of part of the vapor contents. This is achieved through calibrated leaks in one of the ground joints of the ampoule. Regarding the PVT transport rates, a 3-D spectral code was modified. After introduction of the proper boundary conditions and subroutines for the composition-dependent transport properties, the code reproduced the experimentally determined transport rates, for the two cases with the strongest convective flux contributions, to within the experimental and numerical error.

Top