Equilibrium Reconstruction on the Large Helical Device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samuel A. Lazerson, D. Gates, D. Monticello, H. Neilson, N. Pomphrey, A. Reiman, S. Sakakibara, and Y. Suzuki
Equilibrium reconstruction is commonly applied to axisymmetric toroidal devices. Recent advances in computational power and equilibrium codes have allowed for reconstructions of three-dimensional fields in stellarators and heliotrons. We present the first reconstructions of finite-beta discharges in the Large Helical Device (LHD). The plasma boundary and magnetic axis are constrained by the pressure profile from Thomson scattering. This results in a calculation of plasma beta without a priori assumptions about the equipartition of energy between species. Saddle loop arrays place additional constraints on the equilibrium. These reconstructions utilize STELLOPT, which calls VMEC. The VMEC equilibrium code assumes good nested flux surfaces. Reconstructed magnetic fields are fed into the PIES code, which relaxes this constraint, allowing for the examination of the effect of islands and stochastic regions on the magnetic measurements.
3D equilibrium reconstruction with islands
NASA Astrophysics Data System (ADS)
Cianciosa, M.; Hirshman, S. P.; Seal, S. K.; Shafer, M. W.
2018-04-01
This paper presents the development of a 3D equilibrium reconstruction tool and the results of the first-ever reconstruction of an island equilibrium. The SIESTA non-nested equilibrium solver has been coupled to the V3FIT 3D equilibrium reconstruction code. Computed from a coupled VMEC and SIESTA model, synthetic signals are matched to measured signals by finding an optimal set of equilibrium parameters. By using the normalized pressure in place of the normalized flux, non-equilibrium quantities needed by diagnostic signals can be efficiently mapped to the equilibrium. The effectiveness of this tool is demonstrated by reconstructing an island equilibrium of a DIII-D inner-wall-limited L-mode case with an n = 1 error field applied. Flat spots in the Thomson and ECE temperature diagnostics show that the reconstructed islands have the correct size and phase.
Equilibrium Spline Interface (ESI) for magnetic confinement codes
NASA Astrophysics Data System (ADS)
Li, Xujing; Zakharov, Leonid E.
2017-12-01
A compact and comprehensive interface between magneto-hydrodynamic (MHD) equilibrium codes and gyro-kinetic, particle orbit, MHD stability, and transport codes is presented. Its irreducible set of equilibrium data consists of three functions of coordinates (with occasionally one extra in the 3-D case) and four 1-D radial profiles, together with their first and mixed derivatives. The C reconstruction routines, also accessible from FORTRAN, allow the calculation of basis functions and their first derivatives at any position inside the plasma and in its vicinity. After this, all vector fields and geometric coefficients required for the above-mentioned types of codes can be calculated using only algebraic operations, with no further interpolation or differentiation.
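The ESI idea can be illustrated with a minimal Python sketch (this stands in for, and is not, the actual ESI C routines): represent a 2-D flux function as a spline, evaluate it and its first derivatives at an arbitrary point, and obtain field components by algebra alone. The grid and toy flux function below are made up for illustration.

```python
# Sketch of the spline-interface idea: store psi(R, Z) as a 2-D spline so
# downstream codes get psi and its first derivatives with no further
# interpolation or differentiation. Toy grid and flux function.
import numpy as np
from scipy.interpolate import RectBivariateSpline

R = np.linspace(1.0, 2.0, 65)           # major-radius grid [m]
Z = np.linspace(-0.5, 0.5, 65)          # vertical grid [m]
RR, ZZ = np.meshgrid(R, Z, indexing="ij")
psi = (RR - 1.5) ** 2 + ZZ ** 2         # toy "flux" function

spl = RectBivariateSpline(R, Z, psi)

# Basis function and first derivatives at an arbitrary interior point:
r0, z0 = 1.62, 0.11
psi_val = spl.ev(r0, z0)
dpsi_dR = spl.ev(r0, z0, dx=1)
dpsi_dZ = spl.ev(r0, z0, dy=1)

# From these, the poloidal field follows algebraically:
#   B_R = -(1/R) dpsi/dZ,   B_Z = (1/R) dpsi/dR
B_R = -dpsi_dZ / r0
B_Z = dpsi_dR / r0
```

For the quadratic toy function the spline is exact, so the derivatives match the analytic values 2(R - 1.5) and 2Z at the evaluation point.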
Equilibrium reconstruction with 3D eddy currents in the Lithium Tokamak eXperiment
Hansen, C.; Boyle, D. P.; Schmitt, J. C.; ...
2017-04-18
Axisymmetric free-boundary equilibrium reconstructions of tokamak plasmas in the Lithium Tokamak eXperiment (LTX) are performed using the PSI-Tri equilibrium code. Reconstructions in LTX are complicated by the presence of long-lived non-axisymmetric eddy currents generated in the vacuum vessel and first-wall structures. To account for this effect, reconstructions are performed with additional toroidal current sources in these conducting regions. The eddy current sources are fixed in their poloidal distributions, but their magnitudes are adjusted as part of the full reconstruction. Eddy distributions are computed by toroidally averaging currents, generated by coupling to vacuum field coils, from a simplified 3D filament model of the important conducting structures. The full 3D eddy current fields are also used to enable the inclusion of local magnetic field measurements, which have strong 3D eddy current pick-up, as reconstruction constraints. Using this method, equilibrium reconstruction yields good agreement with all available diagnostic signals. An accompanying field perturbation produced by 3D eddy currents on the plasma surface, with a primarily n = 2, m = 1 character, is also predicted for these equilibria.
Real Time Computation of Kinetic Constraints to Support Equilibrium Reconstruction
NASA Astrophysics Data System (ADS)
Eggert, W. J.; Kolemen, E.; Eldon, D.
2016-10-01
A new method for quickly and automatically applying kinetic constraints to EFIT equilibrium reconstructions using readily available data is presented. The ultimate goal is to produce kinetic equilibrium reconstructions in real time and use them to constrain the DCON stability code as part of a disruption avoidance scheme. A first effort presented here replaces CPU-time expensive modules, such as the fast ion pressure profile calculation, with a simplified model. We show with a DIII-D database analysis that we can achieve reasonable predictions for selected applications by modeling the fast ion pressure profile and determining the fit parameters as functions of easily measured quantities including neutron rate and electron temperature on axis. Secondly, we present a strategy for treating Thomson scattering and Charge Exchange Recombination data to automatically form constraints for a kinetic equilibrium reconstruction, a process that historically was performed by hand. Work supported by US DOE DE-AC02-09CH11466 and DE-FC02-04ER54698.
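The replacement of an expensive fast-ion module by a simplified parametric model can be sketched as follows. Everything here is hypothetical: the functional form, the coefficients, and the scaling with neutron rate and core electron temperature are illustrative stand-ins, not the DIII-D database fit described in the abstract.

```python
# Hypothetical stand-in for a CPU-expensive fast-ion pressure module:
# parameterize the profile and tie the amplitude to easily measured
# scalars (neutron rate, core Te). Form and coefficients are illustrative.
import numpy as np

def fast_ion_pressure(rho, neutron_rate, te0, c0=1e-15, c1=0.5, alpha=4.0):
    """Toy model p_fast(rho) = p0 * (1 - rho^2)^alpha.

    p0 scales with the neutron rate and core electron temperature;
    c0, c1, alpha play the role of database-derived fit parameters.
    """
    p0 = c0 * neutron_rate * te0 ** c1
    return p0 * (1.0 - rho ** 2) ** alpha

rho = np.linspace(0.0, 1.0, 101)
p = fast_ion_pressure(rho, neutron_rate=1e14, te0=3.0)  # 3 keV core Te
# The profile peaks on axis and vanishes at the edge.
```

The practical point is speed: evaluating a closed-form profile like this is trivial compared with a full fast-ion transport calculation, which is what makes a real-time kinetic constraint feasible.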
Development of the PARVMEC Code for Rapid Analysis of 3D MHD Equilibrium
NASA Astrophysics Data System (ADS)
Seal, Sudip; Hirshman, Steven; Cianciosa, Mark; Wingen, Andreas; Unterberg, Ezekiel; Wilcox, Robert; ORNL Collaboration
2015-11-01
The VMEC three-dimensional (3D) MHD equilibrium code has been used extensively for designing stellarator experiments and for analyzing experimental data in such strongly 3D systems. Recent applications of VMEC include 2D systems such as tokamaks (in particular, the D3D experiment), where application of very small (δB/B ~ 10^-3) 3D resonant magnetic field perturbations renders the underlying assumption of axisymmetry invalid. In order to facilitate the rapid analysis of such equilibria (for example, for reconstruction purposes), we have undertaken the task of parallelizing the VMEC code (PARVMEC) to produce a scalable and rapidly convergent equilibrium code for use on parallel distributed-memory platforms. The parallelization task naturally splits into three distinct parts: 1) the radial surfaces in the fixed-boundary part of the calculation; 2) the two 2D angular meshes needed to compute the Green's function integrals over the plasma boundary for the free-boundary part of the code; and 3) the block-tridiagonal matrix needed to compute the full (3D) preconditioner near the final equilibrium state. Preliminary results show that scalability is achieved for tasks 1 and 3, with task 2 still nearing completion. The impact of this work on the rapid reconstruction of D3D plasmas using PARVMEC in the V3FIT code will be discussed. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.
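The third parallelization task centers on a block-tridiagonal solve for the 3D preconditioner. A minimal serial block-Thomas algorithm, the kernel that a parallel implementation would distribute across radial surfaces, can be sketched as below; the block contents are random stand-ins, not VMEC data.

```python
# Serial block-Thomas solve for a block-tridiagonal system:
# A[i] sub-diagonal blocks, B[i] diagonal blocks, C[i] super-diagonal
# blocks, d[i] right-hand-side vectors; all blocks are m x m.
import numpy as np

def block_thomas(A, B, C, d):
    n = len(B)
    Bp = [B[0].copy()]
    dp = [d[0].copy()]
    for i in range(1, n):                  # forward elimination
        W = A[i] @ np.linalg.inv(Bp[i - 1])
        Bp.append(B[i] - W @ C[i - 1])
        dp.append(d[i] - W @ dp[i - 1])
    x = [None] * n
    x[-1] = np.linalg.solve(Bp[-1], dp[-1])
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = np.linalg.solve(Bp[i], dp[i] - C[i] @ x[i + 1])
    return np.array(x)

rng = np.random.default_rng(0)
n, m = 8, 3                                # 8 radial points, 3x3 blocks
B = [np.eye(m) * 4 + 0.1 * rng.standard_normal((m, m)) for _ in range(n)]
A = [None] + [0.1 * rng.standard_normal((m, m)) for _ in range(n - 1)]
C = [0.1 * rng.standard_normal((m, m)) for _ in range(n - 1)] + [None]
d = [rng.standard_normal(m) for _ in range(n)]
x = block_thomas(A, B, C, d)
```

The forward/backward sweep is inherently sequential in the radial index, which is why parallel variants (cyclic reduction, partitioned factorizations) are needed for distributed-memory scalability.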
Uncertainty Analysis in 3D Equilibrium Reconstruction
Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.
2018-02-21
Reconstruction is an inverse process in which a parameter space is searched to locate the set of parameters with the highest probability of describing the experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters carries some associated uncertainty, which in turn leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals to the reconstructed parameters and on to the final model. In this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole-shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole-shot reconstruction results over a time interval are used to validate the propagated uncertainty from a single time slice.
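The kind of propagation described can be sketched generically with a linearized least-squares model (this is an assumed textbook linearization, not V3FIT's actual implementation): signal variances map to a posterior parameter covariance through the Jacobian of the synthetic signals, and onward to any derived model quantity. All numbers below are random stand-ins.

```python
# Linearized uncertainty propagation for a least-squares reconstruction:
# cov_par = (J^T W J)^-1 with W the inverse signal covariance, then
# var(g) = grad_g^T cov_par grad_g for a derived quantity g(p).
import numpy as np

rng = np.random.default_rng(1)
n_sig, n_par = 12, 3
J = rng.standard_normal((n_sig, n_par))   # d(signal)/d(parameter)
sigma_sig = 0.05 * np.ones(n_sig)         # signal standard deviations

W = np.diag(1.0 / sigma_sig ** 2)
cov_par = np.linalg.inv(J.T @ W @ J)      # posterior parameter covariance

# Propagate to a derived model quantity g(p) with gradient grad_g:
grad_g = rng.standard_normal(n_par)
var_g = grad_g @ cov_par @ grad_g
sigma_g = np.sqrt(var_g)
```

Comparing such a propagated sigma against the scatter of parameters reconstructed at many time slices is exactly the kind of validation the abstract describes.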
NASA Astrophysics Data System (ADS)
Ma, Xinxing; Ennis, D. A.; Hanson, J. D.; Hartwell, G. J.; Knowlton, S. F.; Maurer, D. A.
2017-10-01
Non-axisymmetric equilibrium reconstructions have been routinely performed with the V3FIT code on the Compact Toroidal Hybrid (CTH), a stellarator/tokamak hybrid. In addition to 50 external magnetic measurements, 160 SXR emissivity measurements are incorporated into V3FIT to reconstruct the magnetic flux surface geometry and infer the current distribution within the plasma. Improved reconstructions of the current and q profiles provide insight into the physics of density limit disruptions observed in current-carrying discharges in CTH. It is confirmed that the final phase of the density limit in CTH plasmas is consistent with classic observations in tokamaks: current profile shrinkage leads to growing MHD instabilities (tearing modes) followed by a loss of MHD equilibrium. It is also observed that the density limit at a given current increases linearly with the amount of 3D shaping field. Consequently, plasmas with densities up to two times the Greenwald limit are attained. Equilibrium reconstructions show that the addition of 3D fields effectively moves resonant surfaces towards the edge of the plasma, where the current profile gradient is smaller, providing a stabilizing effect. This work is supported by US Department of Energy Grant No. DE-FG02-00ER54610.
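The Greenwald limit quoted above is the standard empirical scaling n_G [10^20 m^-3] = I_p [MA] / (pi a^2 [m^2]). A one-liner makes the arithmetic concrete; the CTH-like numbers are illustrative only, not values from the abstract.

```python
# Greenwald density limit: n_G [10^20 m^-3] = I_p [MA] / (pi * a^2 [m^2]).
import math

def greenwald_density(ip_mega_amps, minor_radius_m):
    """Return the Greenwald limit in units of 10^20 m^-3."""
    return ip_mega_amps / (math.pi * minor_radius_m ** 2)

n_g = greenwald_density(0.06, 0.2)   # hypothetical 60 kA, a = 0.2 m
# A discharge at twice this limit would sit at 2 * n_g.
```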
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, X., E-mail: xzm0005@auburn.edu; Maurer, D. A.; Knowlton, S. F.
2015-12-15
Non-axisymmetric free-boundary equilibrium reconstructions of stellarator plasmas are performed for discharges in which the magnetic configuration is strongly modified by ohmically driven plasma current. These studies were performed on the Compact Toroidal Hybrid device using the V3FIT reconstruction code with a set of 50 magnetic diagnostics external to the plasma. With the assumption of closed magnetic flux surfaces, the reconstructions using external magnetic measurements allow accurate estimates of the net toroidal flux within the last closed flux surface, the edge safety factor, and the plasma shape of these highly non-axisymmetric plasmas. The inversion radius of standard sawteeth is used to infer the current profile near the magnetic axis; with external magnetic diagnostics alone, the current density profile is imprecisely reconstructed.
NASA Astrophysics Data System (ADS)
Faugeras, Blaise; Blum, Jacques; Heumann, Holger; Boulbe, Cédric
2017-08-01
The model of polarimetry Faraday rotation measurements commonly used in tokamak plasma equilibrium reconstruction codes is an approximation to the Stokes model. This approximation is not valid for the foreseen ITER scenarios, where high-current and high-electron-density plasma regimes are expected. In this work, a method enabling the consistent solution of the inverse equilibrium reconstruction problem in the framework of non-linear free-boundary equilibrium coupled to the Stokes model equation for polarimetry is provided. Using optimal control theory, we derive the optimality system for this inverse problem. A sequential quadratic programming (SQP) method is proposed for its numerical solution. Numerical experiments with noisy synthetic measurements in the ITER tokamak configuration for two test cases, the second of which is an H-mode plasma, show that the method is efficient and that the accuracy of the identification of the unknown profile functions is improved compared to the use of classical Faraday measurements.
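The Stokes model referred to above evolves the polarization state s = (s1, s2, s3) along the beam path via ds/dz = Omega(z) x s, where Omega depends on the local magnetic field and electron density. A toy integration sketch is shown below; the Omega profile is made up for illustration and is not an ITER chord model.

```python
# RK4 integration of the Stokes polarization equation ds/dz = omega(z) x s.
# The cross-product form is a rotation, so |s| is conserved along the path.
import numpy as np

def integrate_stokes(omega_of_z, s0, z0, z1, n_steps=2000):
    s = np.array(s0, dtype=float)
    h = (z1 - z0) / n_steps
    z = z0
    for _ in range(n_steps):
        k1 = np.cross(omega_of_z(z), s)
        k2 = np.cross(omega_of_z(z + h / 2), s + h / 2 * k1)
        k3 = np.cross(omega_of_z(z + h / 2), s + h / 2 * k2)
        k4 = np.cross(omega_of_z(z + h), s + h * k3)
        s = s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        z += h
    return s

# Toy rotation-rate profile (illustrative, not a plasma model):
omega = lambda z: np.array([0.0, 0.0, 1.5 * np.exp(-z ** 2)])
s_out = integrate_stokes(omega, [1.0, 0.0, 0.0], -3.0, 3.0)
```

For this Omega aligned with the third axis, the result is a pure rotation of (s1, s2) by the integrated rate, which is what the classical Faraday-rotation approximation reduces to; with a fully 3-component Omega the components mix and the approximation breaks down.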
Plasma stability analysis using Consistent Automatic Kinetic Equilibrium reconstruction (CAKE)
NASA Astrophysics Data System (ADS)
Roelofs, Matthijs; Kolemen, Egemen; Eldon, David; Glasser, Alex; Meneghini, Orso; Smith, Sterling P.
2017-10-01
Presented here is the Consistent Automatic Kinetic Equilibrium (CAKE) code. CAKE is being developed to perform real-time kinetic equilibrium reconstruction, aiming to complete a reconstruction in less than 100 ms. This is achieved by taking into account, in addition to real-time Motional Stark Effect (MSE) and magnetics data, real-time Thomson Scattering (TS) and real-time Charge Exchange Recombination (CER, still in development) data. Electron density and temperature are determined by TS, while ion densities and pressures are determined using CER. Together with the temperature and density of neutrals, these form the additional pressure constraints. Extra current constraints are imposed in the core by the MSE diagnostics. The pedestal current density is estimated using Sauter's formula for the bootstrap current density. By comparing the behaviour of the ideal MHD perturbed potential energy (δW) and the linear stability index (Δ') from CAKE to a magnetics-only reconstruction, it can be seen that using these diagnostics to reconstruct the pedestal has a large effect on stability. Supported by U.S. DOE DE-SC0015878 and DE-FC02-04ER54698.
Kinetic equilibrium reconstruction for the NBI- and ICRH-heated H-mode plasma on EAST tokamak
NASA Astrophysics Data System (ADS)
Zhen, ZHENG; Nong, XIANG; Jiale, CHEN; Siye, DING; Hongfei, DU; Guoqiang, LI; Yifeng, WANG; Haiqing, LIU; Yingying, LI; Bo, LYU; Qing, ZANG
2018-04-01
Equilibrium reconstruction is important for studying tokamak plasma physical processes. To analyze the contribution of fast ions to the equilibrium, the kinetic equilibria at two time slices in a typical H-mode discharge with different auxiliary heating are reconstructed using magnetic diagnostics, kinetic diagnostics, and the TRANSP code. It is found that the fast-ion pressure can be up to one-third of the total plasma pressure, and that this contribution resides mainly in the core plasma because the neutral beam injection power is primarily deposited in the core region. The fast-ion current likewise contributes mainly in the core region and contributes little to the pedestal current. A steep pressure gradient in the pedestal is observed, which gives rise to a strong edge current. These results show that fast-ion effects cannot be ignored and should be considered in future studies of EAST.
Helical core reconstruction of a DIII-D hybrid scenario tokamak discharge
Cianciosa, Mark; Wingen, Andreas; Hirshman, Steven P.; ...
2017-05-18
Our paper presents the first fully three-dimensional (3D) equilibrium reconstruction of a helical core in a tokamak device. Using a new parallel implementation of the Variational Moments Equilibrium Code (PARVMEC) coupled to V3FIT, 3D reconstructions can be performed at the resolutions necessary to produce helical states in nominally axisymmetric tokamak equilibria. In a flux pumping experiment performed on DIII-D, an external n=1 field was applied while a 3/2 neoclassical tearing mode was suppressed using ECCD. The externally applied field was rotated past a set of fixed diagnostics at a 20 Hz frequency. The modulation, found to be strongest in the core SXR and MSE channels, indicates a localized rotating 3D structure locked in phase with the applied field. Signals from multiple time slices are converted to a virtual rotation of modeled diagnostics, adding 3D signal information. Starting from an axisymmetric equilibrium reconstruction solution, the reconstructed broader current profile flattens the q-profile, resulting in an m=1, n=1 perturbation of the magnetic axis that is ~50x larger than the applied n=1 deformation of the edge. Error propagation confirms that the displacement of the axis is much larger than the uncertainty in the axis position, validating the helical equilibrium.
Simulations in support of the T4B experiment
NASA Astrophysics Data System (ADS)
Qerushi, Artan; Ross, Patrick; Lohff, Chriss; Raymond, Anthony; Montecalvo, Niccolo
2017-10-01
Simulations in support of the T4B experiment are presented. These include a Grad-Shafranov equilibrium solver and equilibrium reconstruction from flux-loop measurements, collisional-radiative models for plasma spectroscopy (determination of electron density and temperature from line ratios), and fast-ion test particle codes for coupling of neutral beams to the plasma. ©2017 Lockheed Martin Corporation. All Rights Reserved.
Modeling MHD Equilibrium and Dynamics with Non-Axisymmetric Resistive Walls in LTX and HBT-EP
NASA Astrophysics Data System (ADS)
Hansen, C.; Levesque, J.; Boyle, D. P.; Hughes, P.
2017-10-01
In experimental magnetized plasmas, currents in the first wall, vacuum vessel, and other conducting structures can have a strong influence on plasma shape and dynamics. These effects are complicated by the 3D nature of these structures, which dictate available current paths. Results from simulations to study the effect of external currents on plasmas in two different experiments will be presented: 1) The arbitrary geometry, 3D extended MHD code PSI-Tet is applied to study linear and non-linear plasma dynamics in the High Beta Tokamak (HBT-EP) focusing on toroidal asymmetries in the adjustable conducting wall. 2) Equilibrium reconstructions of the Lithium Tokamak eXperiment (LTX) in the presence of non-axisymmetric eddy currents. An axisymmetric model is used to reconstruct the plasma equilibrium, using the PSI-Tri code, along with a set of fixed 3D eddy current distributions in the first wall and vacuum vessel [C. Hansen et al., PoP Apr. 2017]. Simulations of detailed experimental geometries are enabled by use of the PSI-Tet code, which employs a high order finite element method on unstructured tetrahedral grids that are generated directly from CAD models. Further development of PSI-Tet and PSI-Tri will also be presented. This work supported by US DOE contract DE-SC0016256.
NASA Astrophysics Data System (ADS)
Ma, X.; Cianciosa, M.; Hanson, J. D.; Hartwell, G. J.; Knowlton, S. F.; Maurer, D. A.; Ennis, D. A.; Herfindal, J. L.
2015-11-01
Non-axisymmetric free-boundary equilibrium reconstructions of stellarator plasmas are performed for discharges in which the magnetic configuration is strongly modified by the driven plasma current. Studies were performed on the Compact Toroidal Hybrid device using the V3FIT reconstruction code incorporating a set of 50 magnetic diagnostics external to the plasma, combined with information from soft X-ray (SXR) arrays. With the assumption of closed magnetic flux surfaces, the reconstructions using external magnetic measurements allow accurate estimates of the net toroidal flux within the last closed flux surface, the edge safety factor, and the outer boundary of these highly non-axisymmetric plasmas. The inversion radius for sawtoothing plasmas is used to identify the location of the q = 1 surface, and thus infer the current profile near the magnetic axis. With external magnetic diagnostics alone, we find the reconstruction to be insufficiently constrained. This work is supported by US Department of Energy Grant No. DE-FG02-00ER54610.
Sensitivity of equilibrium profile reconstruction to motional Stark effect measurements
NASA Astrophysics Data System (ADS)
Batha, S. H.; Levinton, F. M.; Hirshman, S. P.; Bell, M. G.; Wieland, R. M.
1996-09-01
The magnetic-field pitch-angle profile, γ_p(R) ≡ tan⁻¹(B_pol/B_tor), is measured on TFTR using a motional Stark effect (MSE) polarimeter. Measured pitch-angle profiles, along with kinetic profiles and external magnetic measurements, are used to compute a self-consistent equilibrium using the free-boundary variational moments equilibrium code VMEC. Uncertainties in the q profile due to uncertainties in γ_p(R), magnetic measurements, and kinetic measurements are found to be small. Subsequent uncertainties in the VMEC-calculated current density and shear profiles are also small.
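The relation γ_p = tan⁻¹(B_pol/B_tor) makes the sensitivity analysis concrete: inverting it gives B_pol from a pitch-angle measurement, and a first-order expansion propagates the measurement uncertainty. The numbers below are illustrative, not TFTR channel values.

```python
# Recover B_pol from a measured MSE pitch angle and propagate its
# uncertainty to first order. Illustrative numbers only.
import math

b_tor = 4.8                  # toroidal field at the channel [T]
gamma_deg = 3.2              # measured pitch angle [degrees]
sigma_gamma_deg = 0.2        # pitch-angle uncertainty [degrees]

gamma = math.radians(gamma_deg)
b_pol = b_tor * math.tan(gamma)

# Linear propagation: d(B_pol)/d(gamma) = B_tor / cos^2(gamma)
sigma_b_pol = b_tor / math.cos(gamma) ** 2 * math.radians(sigma_gamma_deg)
```

Because γ_p is small, the relative uncertainty in B_pol is close to the relative uncertainty in γ_p itself, which is why a small pitch-angle error translates into a small q-profile error.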
Evaluation of Magnetic Diagnostics for MHD Equilibrium Reconstruction of LHD Discharges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sontag, Aaron C; Hanson, James D.; Lazerson, Sam
2011-01-01
Equilibrium reconstruction is the process of determining the set of parameters of an MHD equilibrium that minimizes the difference between expected and experimentally observed signals. This is routinely performed in axisymmetric devices, such as tokamaks, and the reconstructed equilibrium solution is then the basis for analysis of stability and transport properties. The V3FIT code [1] has been developed to perform equilibrium reconstruction in cases where axisymmetry cannot be assumed, such as in stellarators. The present work is focused on using V3FIT to analyze plasmas in the Large Helical Device (LHD) [2], a superconducting, heliotron-type device with over 25 MW of heating power that is capable of achieving both high beta (~5%) and high density (>1 x 10^21 m^-3). This high performance, as well as the ability to drive tens of kiloamperes of toroidal plasma current, leads to deviations of the equilibrium state from the vacuum flux surfaces. This initial study examines the effectiveness of using magnetic diagnostics as the observed signals in reconstructing experimental plasma parameters for LHD discharges. V3FIT uses the VMEC [3] 3D equilibrium solver to calculate an initial equilibrium solution with closed, nested flux surfaces based on user-specified plasma parameters. This equilibrium solution is then used to calculate the expected signals for the specified diagnostics. The differences between these expected signal values and the observed values provide a starting χ² value. V3FIT then varies all of the fit parameters independently, calculating a new equilibrium and a corresponding χ² for each variation. A quasi-Newton algorithm [1] is used to find the path in parameter space that leads to a minimum in χ². Effective diagnostic signals must vary in a predictable manner with the variations of the plasma parameters, and this signal variation must be of sufficient amplitude to be resolved from the signal noise.
Signal effectiveness can be defined, for a specific signal and a specific reconstruction parameter, as the dimensionless fractional reduction in the posterior parameter variance with respect to the signal variance. Here, σ_i^sig is the variance of the i-th signal and σ_j^param is the posterior variance of the j-th fit parameter. The sum of all signal effectiveness values for a given reconstruction parameter is normalized to one. This quantity will be used to determine signal effectiveness for various reconstruction cases. The next section examines the variation of the expected signals with changes in plasma pressure, and the following section shows results for reconstructing model plasmas using these signals.
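One plausible realization of the normalization just described (an assumption consistent with the text, not necessarily V3FIT's exact formula) weights each signal's squared, noise-normalized sensitivity and normalizes per parameter so the effectiveness values sum to one:

```python
# Sketch of a per-parameter signal-effectiveness table: effectiveness of
# signal i for parameter j taken proportional to (dS_i/dp_j / sigma_i)^2,
# normalized so each parameter's column sums to one. Random stand-in data.
import numpy as np

rng = np.random.default_rng(2)
n_sig, n_par = 6, 2
J = rng.standard_normal((n_sig, n_par))   # sensitivities dS_i/dp_j
sigma = 0.1 * np.ones(n_sig)              # signal noise levels

raw = (J / sigma[:, None]) ** 2           # squared normalized sensitivity
effectiveness = raw / raw.sum(axis=0)     # column-normalized, sums to 1
```

Ranking the rows of such a table identifies which diagnostics actually constrain each reconstruction parameter above the noise, which is the purpose the abstract describes.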
Calculation of Eddy Currents In the CTH Vacuum Vessel and Coil Frame
DOE Office of Scientific and Technical Information (OSTI.GOV)
A. Zolfaghari, A. Brooks, A. Michaels, J. Hanson, and G. Hartwell
2012-09-25
Knowledge of eddy currents in the vacuum vessel walls and nearby conducting support structures can significantly contribute to the accuracy of magnetohydrodynamic (MHD) equilibrium reconstruction in toroidal plasmas. Moreover, the magnetic fields produced by the eddy currents could generate error fields that may give rise to islands at rational surfaces or cause field lines to become chaotic. In the Compact Toroidal Hybrid (CTH) device (R0 = 0.75 m, a = 0.29 m, B ≤ 0.7 T), the primary driver of the eddy currents during the plasma discharge is the changing flux of the ohmic heating transformer. Electromagnetic simulations are used to calculate eddy current paths and profiles in the vacuum vessel and in the coil frame pieces with known time-dependent currents in the ohmic heating coils. The MAXWELL and SPARK codes were used for the electromagnetic modeling and simulation: MAXWELL for detailed 3D finite-element analysis of the eddy currents in the structures, and SPARK to calculate the eddy currents with the structures modeled as shell/surface elements, each element representing a current loop. In both cases, current filaments representing the eddy currents were prepared as input to the VMEC code for MHD equilibrium reconstruction of the plasma discharge.
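A single-loop caricature of this calculation shows the basic physics: each shell or surface element behaves as an R-L circuit driven by the changing transformer flux, L dI/dt + R I = -dPhi_ext/dt. The circuit values and flux waveform below are illustrative stand-ins, not CTH parameters.

```python
# One-loop eddy-current model driven by a decaying flux swing, integrated
# with an implicit Euler step: I_k (L + R*dt) = L*I_{k-1} - dt*phi_dot_k.
import numpy as np

L_loop = 2e-6        # loop self-inductance [H]
R_loop = 1e-3        # loop resistance [ohm]
dt = 1e-5            # time step [s]
t = np.arange(0.0, 0.02, dt)
phi_dot = 0.5 * np.exp(-t / 5e-3)    # toy dPhi/dt waveform [V]

current = np.zeros_like(t)
for k in range(1, len(t)):
    current[k] = (L_loop * current[k - 1] - dt * phi_dot[k]) / (L_loop + R_loop * dt)

# The induced current opposes the flux swing (negative sign) and decays
# away with the drive on the L/R timescale.
```

A full calculation replaces this single loop with many inductively coupled loops (the shell elements), but the time history of each one is governed by the same circuit equation.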
3D Equilibrium Effects Due to RMP Application on DIII-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. Lazerson, E. Lazarus, S. Hudson, N. Pablant and D. Gates
2012-06-20
The mitigation and suppression of edge localized modes (ELMs) through application of resonant magnetic perturbations (RMPs) in tokamak plasmas is a well-documented phenomenon [1]. Vacuum calculations suggest the formation of edge islands and stochastic regions when RMPs are applied to the axisymmetric equilibria. Self-consistent calculations of the plasma equilibrium with the VMEC [2] and SPEC [3] codes have been performed for an up-down symmetric shot (142603) in DIII-D. In these codes, the plasma response due to the RMP coils is calculated self-consistently. The VMEC code globally enforces the constraints of ideal MHD; consequently, a continuously nested family of flux surfaces is enforced throughout the plasma domain. This approach necessarily precludes the observation of islands or field-line chaos. The SPEC code relaxes the constraints of ideal MHD locally, and allows for islands and field-line chaos at or near the rational surfaces. Equilibria with finite pressure gradients are approximated by a set of discrete "ideal interfaces" at the most irrational flux surfaces and where the strongest pressure gradients are observed. Both the VMEC and SPEC calculations are initialized from EFIT reconstructions of the plasma that are consistent with the experimental pressure and current profiles. A 3D reconstruction using the STELLOPT code, which fits VMEC equilibria to experimental measurements, has also been performed. Comparisons between the equilibria generated by the 3D codes and between STELLOPT and EFIT are presented.
NASA Astrophysics Data System (ADS)
Ma, X.; Cianciosa, M. R.; Ennis, D. A.; Hanson, J. D.; Hartwell, G. J.; Herfindal, J. L.; Howell, E. C.; Knowlton, S. F.; Maurer, D. A.; Traverso, P. J.
2018-01-01
Collimated soft X-ray (SXR) emissivity measurements from multi-channel cameras on the Compact Toroidal Hybrid (CTH) tokamak/torsatron device are incorporated in the 3D equilibrium reconstruction code V3FIT to reconstruct the shape of flux surfaces and infer the current distribution within the plasma. Equilibrium reconstructions of sawtoothing plasmas that use data from both SXR and external magnetic diagnostics show the central safety factor to be near unity under the assumption that SXR iso-emissivity contours lie on magnetic flux surfaces. The reconstruction results are consistent with those using the external magnetic data and a constraint on the location of q = 1 surfaces determined from the sawtooth inversion surface extracted from SXR brightness profiles. The agreement justifies the use of approximating SXR emission as a flux function in CTH, at least within the core of the plasma, subject to the spatial resolution of the SXR diagnostics. This improved reconstruction of the central current density indicates that the current profile peakedness decreases with increasing external transform and that the internal inductance is not a relevant measure of how peaked the current profile is in hybrid discharges.
Use of reconstructed 3D VMEC equilibria to match effects of toroidally rotating discharges in DIII-D
Wingen, Andreas; Wilcox, Robert S.; Cianciosa, Mark R.; ...
2016-10-13
Here, a technique for tokamak equilibrium reconstructions is used for multiple DIII-D discharges, including L-mode and H-mode cases when weakly 3D fields (δB/B ~ 10^-3) are applied. The technique couples diagnostics to the non-linear, ideal MHD equilibrium solver VMEC, using the V3FIT code, to find the most likely 3D equilibrium based on a suite of measurements. It is demonstrated that V3FIT can be used to find non-linear 3D equilibria that are consistent with experimental measurements of the plasma response to very weak 3D perturbations, as well as with 2D profile measurements. Observations at DIII-D show that plasma rotation larger than 20 krad/s changes the relative phase between the applied 3D fields and the measured plasma response. Discharges with low average rotation (10 krad/s) and peaked rotation profiles (40 krad/s) are reconstructed. Similarities and differences with forward-modeled VMEC equilibria, which do not include rotational effects, are shown. Toroidal phase shifts of up to 30° are found between the measured and forward-modeled plasma responses at the highest values of rotation. The plasma response phases of reconstructed equilibria, on the other hand, match the measured ones. This is the first time V3FIT has been used to reconstruct weakly 3D tokamak equilibria.
NASA Astrophysics Data System (ADS)
Lian, H.; Liu, H. Q.; Li, K.; Zou, Z. Y.; Qian, J. P.; Wu, M. Q.; Li, G. Q.; Zeng, L.; Zang, Q.; Lv, B.; Jie, Y. X.; EAST Team
2017-12-01
Plasma equilibrium reconstruction plays an important role in tokamak plasma research. With high temporal and spatial resolution, the POlarimeter-INTerferometer (POINT) system on EAST has provided effective measurements for 102 s H-mode operation. Based on internal Faraday rotation measurements provided by the POINT system, equilibrium reconstruction with a more accurate core current profile constraint has been demonstrated successfully on EAST. Combining other experimental diagnostics and external magnetic field measurements, the kinetic equilibrium has also been reconstructed on EAST. By taking the pressure and edge current information from the kinetic EFIT into the equilibrium reconstruction with the Faraday rotation constraint, the new equilibrium reconstruction not only provides a more accurate internal current profile but also contains edge current and pressure information. A single-time-slice result using the new kinetic equilibrium reconstruction with POINT data constraints is presented in this paper; it shows a reversed-shear q profile and also includes the pressure profile. The improved equilibrium reconstruction will be of great help to future theoretical analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koliner, J. J.; Boguski, J., E-mail: boguski@wisc.edu; Anderson, J. K.
2016-03-15
In order to characterize the Madison Symmetric Torus (MST) reversed-field pinch (RFP) plasmas that bifurcate to a helical equilibrium, the V3FIT equilibrium reconstruction code was modified to include a conducting boundary. RFP plasmas become helical at high plasma current, which induces large eddy currents in MST's thick aluminum shell. The V3FIT conducting boundary accounts for the contribution from these eddy currents to external magnetic diagnostic coil signals. This implementation of V3FIT was benchmarked against MSTFit, a 2D Grad-Shafranov solver, for axisymmetric plasmas. The two codes both fit B_θ measurement loops around the plasma minor diameter with qualitative agreement between each other and the measured field. Fits in the 3D case converge well, with q-profile and plasma shape agreement between two distinct toroidal locking phases. Greater than 60% of the measured n = 5 component of B_θ at r = a is due to eddy currents in the shell, as calculated by the conducting boundary model.
Coil Design for Low Aspect Ratio Stellarators
NASA Astrophysics Data System (ADS)
Miner, W. H., Jr.; Valanju, P. M.; Wiley, J. C.; Hirshman, S. P.; Whitson, J. C.
1998-11-01
Two compact stellarator designs have recently been under investigation because of their potential as a reactor featuring steady-state, disruption-free operation, low recirculating power, and good confinement and beta. Both quasi-axisymmetric (QA) equilibria and quasi-omnigenous (QO) equilibria have been obtained by using the 3-D MHD equilibrium code VMEC. In order to build an experiment, coil sets must be obtained that are compatible with these equilibria. We have been using both the NESCOIL (Merkel, P., Nucl. Fusion 27 (1987) 867) and COILOPT codes to find coil sets for both of these types of equilibria. We are considering three types of coil configurations. The first is a combination of modular coils and vertical field coils. The second is a combination of toroidal field coils, vertical field coils, and saddle coils. The third is a combination of modular coils and a single helical winding. The quality of each coil set will be evaluated by computing its magnetic field and using that as input to VMEC in free-boundary mode to see how accurately the original equilibrium can be reconstructed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noy, A
2004-05-04
Modern force microscopy techniques allow researchers to use mechanical forces to probe interactions between biomolecules. However, such measurements often take place in a non-equilibrium regime, which precludes straightforward extraction of the equilibrium energy information. Here we use the work-averaging method based on the Jarzynski equality to reconstruct the equilibrium interaction potential for the unbinding of a complementary 14-mer DNA duplex from the results of non-equilibrium single-molecule measurements. The reconstructed potential reproduces most of the features of the DNA stretching transition, previously observed only in equilibrium stretching of long DNA sequences. We also compare the reconstructed potential with the thermodynamic parameters of DNA duplex unbinding and show that the reconstruction accurately predicts the duplex melting enthalpy.
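The Jarzynski work average used above can be sketched in a few lines. This is a generic toy reconstruction on synthetic Gaussian work values (all numbers are illustrative, not the paper's data), using the standard identity exp(-F/kT) = ⟨exp(-W/kT)⟩:

```python
import numpy as np

kT = 4.11e-21  # thermal energy at ~298 K [J]

def jarzynski_free_energy(work, kT):
    """Equilibrium free energy from non-equilibrium work values via
    exp(-F/kT) = <exp(-W/kT)>; shift by the minimum for numerical stability."""
    work = np.asarray(work)
    w0 = work.min()
    return w0 - kT * np.log(np.mean(np.exp(-(work - w0) / kT)))

# synthetic pulls: Gaussian work with mean 12 kT, std 2 kT, whose exact
# Jarzynski average is mu - sigma^2/2 = 10 kT (illustrative, not measured data)
rng = np.random.default_rng(0)
work = kT * rng.normal(12.0, 2.0, size=5000)
F = jarzynski_free_energy(work, kT)
print(F / kT)  # close to 10, and below the mean work of ~12
```

The exponential average is dominated by rare low-work trajectories, which is why the estimate falls below the mean dissipated work; in practice finite sampling biases it slightly upward.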
NASA Astrophysics Data System (ADS)
Käppeli, R.; Mishra, S.
2016-03-01
Context. Many problems in astrophysics feature flows which are close to hydrostatic equilibrium. However, standard numerical schemes for compressible hydrodynamics may be deficient in approximating this stationary state, where the pressure gradient is nearly balanced by gravitational forces. Aims: We aim to develop a second-order well-balanced scheme for the Euler equations. The scheme is designed to mimic a discrete version of the hydrostatic balance. It can therefore resolve a discrete hydrostatic equilibrium exactly (up to machine precision) and propagate perturbations on top of this equilibrium very accurately. Methods: A local second-order hydrostatic-equilibrium-preserving pressure reconstruction is developed. Combined with a standard central discretization of the gravitational source term and numerical fluxes that resolve stationary contact discontinuities exactly, the well-balanced property is achieved. Results: The resulting well-balanced scheme is robust and simple enough to be easily implemented within any existing computer code that solves the compressible hydrodynamics equations time-explicitly or time-implicitly. We demonstrate the performance of the well-balanced scheme for several astrophysically relevant applications: wave propagation in stellar atmospheres, a toy model for core-collapse supernovae, convection in carbon shell burning, and a realistic proto-neutron star.
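A minimal 1D sketch of the well-balanced idea, assuming a piecewise-linear discretization (this is an illustration of the general principle, not the paper's scheme): reconstruct the deviation from a local hydrostatic profile rather than the pressure itself, so an exact equilibrium yields an identically zero perturbation.

```python
import numpy as np

g = 1.0  # gravitational acceleration (code units)

# exact hydrostatic state on a uniform grid: dp/dx = -rho * g
x = np.linspace(0.0, 1.0, 101)
rho = np.ones_like(x)
p = 2.0 - rho * g * x

def hydrostatic_perturbation(p, rho, x, i):
    """Pressure relative to the local hydrostatic extrapolation from cell i;
    a well-balanced scheme reconstructs this perturbation instead of p itself."""
    p0 = p[i] - rho[i] * g * (x - x[i])
    return p - p0

dp = hydrostatic_perturbation(p, rho, x, 50)
print(np.max(np.abs(dp)))  # ~0: the equilibrium is captured exactly
```

Because the perturbation vanishes at equilibrium, the reconstructed interface pressures balance the gravitational source term to machine precision, which is the well-balanced property described above.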
Development of a new virtual diagnostic for V3FIT
NASA Astrophysics Data System (ADS)
Trevisan, G. L.; Cianciosa, M. R.; Terranova, D.; Hanson, J. D.
2014-12-01
The determination of plasma equilibria from diagnostic information is a fundamental issue. V3FIT is a fully three-dimensional reconstruction code capable of solving the inverse problem using both magnetic and kinetic measurements. It uses VMEC as its core equilibrium solver and supports both free- and fixed-boundary reconstruction approaches. In fixed-boundary mode VMEC does not use explicit information about the currents in external coils, even though this information has important effects on the shape of the safety factor profile. Indeed, the edge safety factor influences the reversal position in RFP plasmas, which in turn determines the position of the m = 0 island chain and the edge transport properties. In order to exploit such information, a new virtual diagnostic has been developed that, via Ampère's law, relates the external current through the center of the torus to the circulation of the toroidal magnetic field on the outermost flux surface. Reconstructions that exploit the new diagnostic are indeed found to better match the experimental data with respect to edge physics.
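The Ampère's-law relation the new diagnostic exploits can be sketched as follows (assumed discretization and variable names, not V3FIT's implementation): the poloidal current linked through the center of the torus equals the circulation of the toroidal field around a closed toroidal loop, divided by μ0.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T m/A]

def linked_current(R, B_tor):
    """Current linked through the torus center, from Ampere's law:
    I = (1/mu0) * closed-loop integral of B_tor around the toroidal loop.
    R and B_tor are sampled at equally spaced toroidal angles."""
    dphi = 2.0 * np.pi / len(B_tor)
    return np.sum(np.asarray(B_tor) * np.asarray(R) * dphi) / MU0

# check against the 1/R vacuum field of a known central current (hypothetical values)
I_true = 2.0e6          # 2 MA of linked current
R = 1.5 * np.ones(64)   # circular loop at major radius 1.5 m
B = MU0 * I_true / (2.0 * np.pi * R)
print(linked_current(R, B))  # recovers ~2.0e6 A
```

On a real outermost flux surface the loop is not circular and B varies along it, but the same discrete circulation integral applies point by point.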
NASA Astrophysics Data System (ADS)
Peterson, Ethan; Anderson, Jay; Clark, Mike; Egedal, Jan; Endrizzi, Douglass; Flanagan, Ken; Harvey, Robert; Lynn, Jacob; Milhone, Jason; Wallace, John; Waleffe, Roger; Mirnov, Vladimir; Forest, Cary
2017-10-01
Equilibrium reconstructions of rotating magnetospheres in the lab are computed using a user-friendly extended Grad-Shafranov solver written in Python and various magnetic and kinetic measurements. The stability of these equilibria is investigated using the NIMROD code with two goals: understanding the onset of the classic "wobble" in the heliospheric current sheet, and demonstrating proof-of-principle for a laboratory source of high-β turbulence. Using the same extended Grad-Shafranov solver, equilibria for an axisymmetric, non-paraxial magnetic mirror are used as a design foundation for a high-field magnetic mirror neutron source. These equilibria are numerically shown to be stable to the m = 1 flute instability, with higher modes likely stabilized by FLR effects; this provides stability to gross MHD modes in an axisymmetric configuration. Numerical results of RF heating and neutral beam injection (NBI) from the GENRAY/CQL3D code suite show neutron fluxes promising for medical radioisotope production as well as materials testing. Synergistic effects between NBI and high-harmonic fast wave heating show large increases in neutron yield for a modest increase in RF power. Work funded by DOE, NSF, NASA.
Tearing Mode Stability of Evolving Toroidal Equilibria
NASA Astrophysics Data System (ADS)
Pletzer, A.; McCune, D.; Manickam, J.; Jardin, S. C.
2000-10-01
There are a number of toroidal equilibrium codes (such as JSOLVER, ESC, EFIT, and VMEC) and transport codes (such as TRANSP, BALDUR, and TSC) in our community that utilize differing equilibrium representations. There are also many heating and current drive codes (LSC and TORRAY) and stability codes (PEST1-3, GATO, NOVA, MARS, DCON, M3D) that require this equilibrium information. In an effort to provide seamless compatibility between the codes that produce and need these equilibria, we have developed two Fortran 90 modules, MEQ and XPLASMA, that serve as common interfaces between these two classes of codes. XPLASMA provides a common equilibrium representation for the heating and current drive applications, while MEQ provides the common equilibrium and associated metric information needed by MHD stability codes. We illustrate the utility of this approach by presenting results of PEST-3 tearing stability calculations of an NSTX discharge performed on profiles provided by the TRANSP code. Using the MEQ module, the TRANSP equilibrium data are stored in a Fortran 90 derived type and passed to PEST-3 as a subroutine argument. All calculations are performed on the fly, as the profiles evolve.
Transport and stability analyses supporting disruption prediction in high beta KSTAR plasmas
NASA Astrophysics Data System (ADS)
Ahn, J.-H.; Sabbagh, S. A.; Park, Y. S.; Berkery, J. W.; Jiang, Y.; Riquezes, J.; Lee, H. H.; Terzolo, L.; Scott, S. D.; Wang, Z.; Glasser, A. H.
2017-10-01
KSTAR plasmas have reached high stability parameters in dedicated experiments, with normalized beta β_N exceeding 4.3 at relatively low plasma internal inductance l_i (β_N/l_i > 6). Transport and stability analyses have begun on these plasmas to best understand a disruption-free path toward the design target of β_N = 5 while aiming to maximize the non-inductive fraction of these plasmas. Initial analysis using the TRANSP code indicates that the non-inductive current fraction in these plasmas has exceeded 50 percent. The advent of KSTAR kinetic equilibrium reconstructions now allows more accurate computation of the MHD stability of these plasmas. Attention is placed on code validation of mode stability using the PEST-3 and resistive DCON codes. Initial evaluation of these analyses for disruption prediction is made using the disruption event characterization and forecasting (DECAF) code. The present global mode kinetic stability model in DECAF, developed for low aspect ratio plasmas, is evaluated to determine modifications required for successful disruption prediction of KSTAR plasmas. Work supported by U.S. DoE under contract DE-SC0016614.
NASA Astrophysics Data System (ADS)
Reiman, A.; Ferraro, N. M.; Turnbull, A.; Park, J. K.; Cerfon, A.; Evans, T. E.; Lanctot, M. J.; Lazarus, E. A.; Liu, Y.; McFadden, G.; Monticello, D.; Suzuki, Y.
2015-06-01
In comparing equilibrium solutions for a DIII-D shot that is amenable to analysis by both stellarator and tokamak three-dimensional (3D) equilibrium codes, a significant disagreement has been seen between solutions of the VMEC stellarator equilibrium code and solutions of tokamak perturbative 3D equilibrium codes. The source of that disagreement has been investigated, and that investigation has led to new insights into the domain of validity of the different equilibrium calculations, and to a finding that the manner in which localized screening currents at low order rational surfaces are handled can affect global properties of the equilibrium solution. The perturbative treatment has been found to break down at surprisingly small perturbation amplitudes due to overlap of the calculated perturbed flux surfaces, and that treatment is not valid in the pedestal region of the DIII-D shot studied. The perturbative treatment is valid, however, further into the interior of the plasma, and flux surface overlap does not account for the disagreement investigated here. Calculated equilibrium solutions for simple model cases and comparison of the 3D equilibrium solutions with those of other codes indicate that the disagreement arises from a difference in handling of localized currents at low order rational surfaces, with such currents being absent in VMEC and present in the perturbative codes. The significant differences in the global equilibrium solutions associated with the presence or absence of very localized screening currents at rational surfaces suggest that it may be possible to extract information about localized currents from appropriate measurements of global equilibrium plasma properties. That would require improved diagnostic capability on the high field side of the tokamak plasma, a region difficult to access with diagnostics.
Reconstruction of equilibrium trajectories during whole-body movements.
Domen, K; Latash, M L; Zatsiorsky, V M
1999-03-01
The framework of the equilibrium-point hypothesis was used to reconstruct equilibrium trajectories (ETs) of the ankle, hip and body center of mass during quick voluntary hip flexions ('Japanese courtesy bow') by standing subjects. Different spring loads applied to the subject's back were used to introduce smooth perturbations that are necessary to reconstruct ETs based on a series of trials at the same task. Time patterns of muscle torques were calculated using inverse dynamics techniques. A second-order linear model was employed to calculate the instantaneous position of the spring-like joint or center of mass characteristic at different times during the movement. ETs of the joints and of the center of mass had significantly different shapes from the actual trajectories. Integral measures of electromyographic bursts of activity in postural muscles demonstrated a relation to muscle length corresponding to the equilibrium-point hypothesis.
NASA Astrophysics Data System (ADS)
Monticello, D. A.; Reiman, A. H.; Watanabe, K. Y.; Nakajima, N.; Okamoto, M.
1997-11-01
The existence of bootstrap currents in both tokamaks and stellarators was confirmed experimentally more than ten years ago. Such currents can have significant effects on the equilibrium and stability of these MHD devices. In addition, stellarators, with the notable exception of W7-X, are predicted to have such large bootstrap currents that reliable equilibrium calculations require the self-consistent evaluation of bootstrap currents. Modeling of discharges which contain islands requires an algorithm that does not assume good surfaces. Only one of the two 3-D equilibrium codes that exist, PIES (Reiman, A. H., Greenside, H. S., Comput. Phys. Commun. 43 (1986)), can easily be modified to handle bootstrap current. Here we report on the coupling of the PIES 3-D equilibrium code and the NIFS bootstrap code (Watanabe, K., et al., Nucl. Fusion 35 (1995) 335).
The rectangular array of magnetic probes on J-TEXT tokamak.
Chen, Zhipeng; Li, Fuming; Zhuang, Ge; Jian, Xiang; Zhu, Lizhi
2016-11-01
The rectangular array of magnetic probes system was newly designed and installed in the torus on J-TEXT tokamak to measure the local magnetic fields outside the last closed flux surface at a single toroidal angle. In the implementation, the experimental results agree well with the theoretical results based on the Spool model and three-dimensional numerical finite element model when the vertical field was applied. Furthermore, the measurements were successfully used as the input of EFIT code to conduct the plasma equilibrium reconstruction. The calculated Faraday rotation angle using the EFIT output is in agreement with the measured one from the three-wave polarimeter-interferometer system.
Bitter, M; Hill, K; Gates, D; Monticello, D; Neilson, H; Reiman, A; Roquemore, A L; Morita, S; Goto, M; Yamada, H; Rice, J E
2010-10-01
A high-resolution x-ray imaging crystal spectrometer, whose concept was tested on NSTX and Alcator C-Mod, is being designed for the large helical device (LHD). This instrument will record spatially resolved spectra of helium-like Ar(16+) and will provide ion temperature profiles with spatial and temporal resolutions of <2 cm and ≥10 ms, respectively. The spectrometer layout and instrumental features are largely determined by the magnetic field structure of LHD. The stellarator equilibrium reconstruction codes, STELLOPT and PIES, will be used for the tomographic inversion of the spectral data.
Tokamak Equilibrium Reconstruction with MSE-LS Data in DIII-D
NASA Astrophysics Data System (ADS)
Lao, L.; Grierson, B.; Burrell, K. H.
2016-10-01
Equilibrium analysis of plasmas in DIII-D using EFIT was upgraded to include the internal magnetic field determined from spectroscopic measurements of motional-Stark-effect line-splitting (MSE-LS). MSE-LS provides measurements of the magnitude of the internal magnetic field, rather than the pitch angle as provided by MSE line-polarization (MSE-LP) used in most tokamaks to date. EFIT MSE-LS reconstruction algorithms and verifications are described. The capability of MSE-LS to provide significant constraints on the equilibrium analysis is evaluated. Reconstruction results with both synthetic and experimental MSE-LS data from 10 DIII-D discharges run over a range of conditions show that MSE-LS measurements can contribute to the equilibrium reconstruction of pressure and safety factor profiles. Adequate MSE-LS measurement accuracy and number of spatial locations are necessary. The 7 available experimental measurements provide useful additional constraints when used with other internal measurements. Using MSE-LS as the only internal measurement yields less current profile information. Work supported by the PPPL Subcontract S013769-F and US DOE under DE-FC02-04ER54698.
Modeling of Plasma Pressure Effects on ELM Suppression With RMP in DIII-D
NASA Astrophysics Data System (ADS)
Orlov, D. M.; Moyer, R. A.; Mordijck, S.; Evans, T. E.; Osborne, T. H.; Snyder, P. B.; Unterberg, E. A.; Fenstermacher, M. E.
2009-11-01
Resonant magnetic perturbations (RMPs) are used to control the pedestal pressure gradient in both low and high collisionality (ν*) DIII-D plasmas. In this work we have analyzed several discharges with different levels of triangularity, different neutral beam injection power levels, and with β_N ranging from 1.5 to 2.3. The field line integration code TRIP3D was used to model the magnetic perturbation in ELMing and ELM-suppressed phases during the RMP pulse. The results of this modeling showed very little effect of β_N on the structure of the vacuum magnetic field during ELM suppression using n = 3 RMPs. Kinetic equilibrium reconstructions showed a decrease in bootstrap current during the RMP. Linear peeling-ballooning stability analysis performed with the ELITE code suggested that the ELMs which persist during the RMP, i.e., while ELMing is still observed, are not Type I ELMs. Identification of these Dα spikes is ongoing work.
NASA Astrophysics Data System (ADS)
Frew, Craig R.; Pellitero, Ramón; Rea, Brice R.; Spagnolo, Matteo; Bakke, Jostein; Hughes, Philip D.; Ivy-Ochs, Susan; Lukas, Sven; Renssen, Hans; Ribolini, Adriano
2014-05-01
Reconstruction of glacier equilibrium line altitudes (ELAs) associated with advance stages of former ice masses is widely used as a tool for palaeoclimatic reconstruction. This requires an accurate reconstruction of palaeo-glacier surface hypsometry, based on mapping of available ice-marginal landform evidence. Classically, the approach used to define ice-surface elevations, using such evidence, follows the 'cartographic method', whereby contours are estimated based on an 'understanding' of the typical surface form of contemporary ice masses. This method introduces inherent uncertainties in the palaeoclimatic interpretation of reconstructed ELAs, especially where the upper limits of glaciation are less well constrained and/or the age of such features in relation to terminal moraine sequences is unknown. An alternative approach is to use equilibrium profile models to define ice surface elevations. Such models are tuned, generally using basal shear stress, in order to generate an ice surface that reaches 'target elevations' defined by geomorphology. In areas where there are no geomorphological constraints for the former ice surface, the reconstruction is undertaken using glaciologically representative values for basal shear stress. Numerical reconstructions have been shown to produce glaciologically "realistic" ice surface geometries, allowing for more objective and robust comparative studies at local to regional scales. User-friendly tools for the calculation of equilibrium profiles are presently available in the literature. Despite this, their use is not yet widespread, perhaps owing to the difficult and time-consuming nature of acquiring the necessary inputs from contour maps or digital elevation models. Here we describe a tool for automatically reconstructing palaeo-glacier surface geometry using an equilibrium profile equation implemented in ArcGIS.
The only necessary inputs for this tool are 1) a suitable digital elevation model and 2) mapped outlines of the former glacier terminus position (usually a frontal moraine system) and any relevant geomorphological constraints on ice surface elevation (e.g. lateral moraines, trimlines etc.). This provides a standardised method for glacier reconstruction that can be applied rapidly and systematically to large geomorphological datasets.
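A minimal sketch of the kind of equilibrium profile calculation such a tool performs, using the standard perfect-plasticity relation on a flat bed (the tool's actual equation handles bed topography and may differ; the constants and function name here are illustrative):

```python
import numpy as np

# Perfect-plasticity ice-surface profile on a flat bed: thickness h at
# distance x up-glacier from the terminus satisfies h dh/dx = tau_b/(rho*g),
# which integrates to h(x) = sqrt(2 * tau_b * x / (rho * g)).

RHO_ICE = 917.0   # ice density, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2

def plastic_profile(x, tau_b=1.0e5):
    """Ice thickness (m) at distance x (m) from the terminus for basal
    shear stress tau_b (Pa); 100 kPa is a typical glaciological value."""
    return np.sqrt(2.0 * tau_b * np.asarray(x, dtype=float) / (RHO_ICE * G))

# Example: thickness at points up-glacier from a mapped terminal moraine
x = np.array([0.0, 1000.0, 5000.0])
print(plastic_profile(x))
```

In practice the tuning step the abstract describes amounts to adjusting `tau_b` until the modelled surface reaches the geomorphological target elevations (e.g. lateral moraine crests).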
Kusaba, Akira; Li, Guanchen; von Spakovsky, Michael R; Kangawa, Yoshihiro; Kakimoto, Koichi
2017-08-15
Clearly understanding elementary growth processes that depend on surface reconstruction is essential to controlling vapor-phase epitaxy more precisely. In this study, ammonia chemical adsorption on GaN(0001) reconstructed surfaces under metalorganic vapor phase epitaxy (MOVPE) conditions (3Ga-H and Nad-H + Ga-H on a 2 × 2 unit cell) is investigated using steepest-entropy-ascent quantum thermodynamics (SEAQT). SEAQT is a thermodynamic-ensemble based, first-principles framework that can predict the behavior of non-equilibrium processes, even those far from equilibrium where the state evolution is a combination of reversible and irreversible dynamics. SEAQT is an ideal choice to handle this problem on a first-principles basis since the chemical adsorption process starts from a highly non-equilibrium state. A result of the analysis shows that the probability of adsorption on 3Ga-H is significantly higher than that on Nad-H + Ga-H. Additionally, the growth temperature dependence of these adsorption probabilities and the temperature increase due to the heat of reaction is determined. The non-equilibrium thermodynamic modeling applied can lead to better control of the MOVPE process through the selection of preferable reconstructed surfaces. The modeling also demonstrates the efficacy of DFT-SEAQT coupling for determining detailed non-equilibrium process characteristics with a much smaller computational burden than would be entailed with mechanics-based, microscopic-mesoscopic approaches.
Kusaba, Akira; von Spakovsky, Michael R.; Kangawa, Yoshihiro; Kakimoto, Koichi
2017-01-01
Clearly understanding elementary growth processes that depend on surface reconstruction is essential to controlling vapor-phase epitaxy more precisely. In this study, ammonia chemical adsorption on GaN(0001) reconstructed surfaces under metalorganic vapor phase epitaxy (MOVPE) conditions (3Ga-H and Nad-H + Ga-H on a 2 × 2 unit cell) is investigated using steepest-entropy-ascent quantum thermodynamics (SEAQT). SEAQT is a thermodynamic-ensemble based, first-principles framework that can predict the behavior of non-equilibrium processes, even those far from equilibrium where the state evolution is a combination of reversible and irreversible dynamics. SEAQT is an ideal choice to handle this problem on a first-principles basis since the chemical adsorption process starts from a highly non-equilibrium state. A result of the analysis shows that the probability of adsorption on 3Ga-H is significantly higher than that on Nad-H + Ga-H. Additionally, the growth temperature dependence of these adsorption probabilities and the temperature increase due to the heat of reaction is determined. The non-equilibrium thermodynamic modeling applied can lead to better control of the MOVPE process through the selection of preferable reconstructed surfaces. The modeling also demonstrates the efficacy of DFT-SEAQT coupling for determining detailed non-equilibrium process characteristics with a much smaller computational burden than would be entailed with mechanics-based, microscopic-mesoscopic approaches. PMID:28809816
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmitt, J. C.; Bialek, J.; Lazerson, S.
2014-11-01
The Lithium Tokamak eXperiment is a spherical tokamak with a close-fitting low-recycling wall composed of thin lithium layers evaporated onto a stainless steel-lined copper shell. Long-lived non-axisymmetric eddy currents are induced in the shell and vacuum vessel by transient plasma and coil currents, and these eddy currents influence both the plasma and the magnetic diagnostic signals that are used as constraints for equilibrium reconstruction. A newly installed set of re-entrant magnetic diagnostics and internal saddle flux loops, compatible with high temperatures and lithium environments, is discussed. Details of the axisymmetric (2D) and non-axisymmetric (3D) treatments of the eddy currents and the equilibrium reconstruction are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmitt, J. C., E-mail: jschmitt@pppl.gov; Lazerson, S.; Majeski, R.
2014-11-15
The Lithium Tokamak eXperiment is a spherical tokamak with a close-fitting low-recycling wall composed of thin lithium layers evaporated onto a stainless steel-lined copper shell. Long-lived non-axisymmetric eddy currents are induced in the shell and vacuum vessel by transient plasma and coil currents, and these eddy currents influence both the plasma and the magnetic diagnostic signals that are used as constraints for equilibrium reconstruction. A newly installed set of re-entrant magnetic diagnostics and internal saddle flux loops, compatible with high temperatures and lithium environments, is discussed. Details of the axisymmetric (2D) and non-axisymmetric (3D) treatments of the eddy currents and the equilibrium reconstruction are presented.
Edge equilibrium code for tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xujing; Zakharov, Leonid E.; Drozdov, Vladimir V.
2014-01-15
The edge equilibrium code (EEC) described in this paper is developed for simulations of the near-edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.
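The Grad-Shafranov operator that EEC solves can be illustrated with a quick finite-difference check against a simple Solov'ev-type analytic solution (a sketch for orientation only; EEC itself uses Hermite finite elements on field-aligned adaptive grids, not the uniform grid assumed here):

```python
import numpy as np

# Grad-Shafranov operator: Delta* psi = psi_RR - psi_R/R + psi_ZZ.
# For psi = alpha*R**4/8 + beta*Z**2/2 one has Delta* psi = alpha*R**2 + beta,
# so a central-difference discretization should match this to O(h^2).

alpha, beta = 1.0, -0.5
R = np.linspace(1.0, 2.0, 101)        # stay away from the R = 0 axis
Z = np.linspace(-0.5, 0.5, 101)
hR, hZ = R[1] - R[0], Z[1] - Z[0]
RR, ZZ = np.meshgrid(R, Z, indexing="ij")
psi = alpha * RR**4 / 8.0 + beta * ZZ**2 / 2.0

# central differences on interior points
psi_RR = (psi[2:, 1:-1] - 2 * psi[1:-1, 1:-1] + psi[:-2, 1:-1]) / hR**2
psi_R = (psi[2:, 1:-1] - psi[:-2, 1:-1]) / (2 * hR)
psi_ZZ = (psi[1:-1, 2:] - 2 * psi[1:-1, 1:-1] + psi[1:-1, :-2]) / hZ**2
gs = psi_RR - psi_R / RR[1:-1, 1:-1] + psi_ZZ

rhs = alpha * RR[1:-1, 1:-1]**2 + beta
print(np.abs(gs - rhs).max())         # O(h^2) discretization error
```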
Magnetic Diagnostics Suite Upgrade on LTX-β
NASA Astrophysics Data System (ADS)
Hughes, P. E.; Majeski, R.; Kaita, R.; Kozub, T.; Hansen, C.; Smalley, G.; Boyle, D. P.
2017-10-01
LTX-β will be exploring a new regime of flat temperature-profile tokamak plasmas first demonstrated in LTX [D.P. Boyle et al. PRL July 2017]. The incorporation of neutral beam core-fueling and heating in LTX-β is expected to increase plasma beta and drive increased MHD activity. An upgrade of the magnetic diagnostics is underway, including an expansion of the reentrant 3-axis poloidal Mirnov array, as well as the addition of a toroidal array of poloidal Mirnov sensors and a set of 2-axis Mirnov sensors measuring fields from shell eddy currents. The poloidal and toroidal arrays will facilitate the study of MHD mode activity and other non-axisymmetric perturbations, while the new shell eddy sensors and improvements to existing axisymmetric measurements will support enhanced equilibrium reconstructions using the PSI-Tri equilibrium code [C. Hansen et al. PoP Apr. 2017] to better characterize these novel hot-edge discharges. This work is supported by US DOE contracts DE-AC02-09CH11466 and DE-AC05-00OR22725.
Edge Equilibrium Code (EEC) For Tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xujling
2014-02-24
The edge equilibrium code (EEC) described in this paper is developed for simulations of the near-edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.
The motional Stark effect diagnostic for ITER using a line-shift approach.
Foley, E L; Levinton, F M; Yuh, H Y; Zakharov, L E
2008-10-01
The United States has been tasked with the development and implementation of a motional Stark effect (MSE) system on ITER. In the harsh ITER environment, MSE is particularly susceptible to degradation, as it depends on polarimetry, and the polarization reflection properties of surfaces are highly sensitive to thin film effects due to plasma deposition and erosion of a first mirror. Here we present the results of a comprehensive study considering a new MSE-based approach to internal plasma magnetic field measurements for ITER. The proposed method uses the line shifts in the MSE spectrum (MSE-LS) to provide a radial profile of the magnetic field magnitude. To determine the utility of MSE-LS for equilibrium reconstruction, studies were performed using the ESC-ERV code system. A near-term opportunity to test the use of MSE-LS for equilibrium reconstruction is being pursued in the implementation of MSE with laser-induced fluorescence on NSTX. Though the field values and beam energies are very different from ITER, the use of a laser allows precision spectroscopy with a similar ratio of linewidth to line spacing on NSTX as would be achievable with a passive system on ITER. Simulation results for ITER and NSTX are presented, and the relative merits of the traditional line polarization approach and the new line-shift approach are discussed.
Inclusion of pressure and flow in a new 3D MHD equilibrium code
NASA Astrophysics Data System (ADS)
Raburn, Daniel; Fukuyama, Atsushi
2012-10-01
Flow and nonsymmetric effects can play a large role in plasma equilibria and energy confinement. A concept for such a 3D equilibrium code was developed and presented in 2011. The code is called the Kyoto ITerative Equilibrium Solver (KITES) [1], and the concept is based largely on the PIES code [2]. More recently, the work-in-progress KITES code was used to calculate force-free equilibria. Here, progress and results on the inclusion of pressure and flow in the code are presented. [4pt] [1] Daniel Raburn and Atsushi Fukuyama, Plasma and Fusion Research: Regular Articles, 7:240381 (2012).[0pt] [2] H. S. Greenside, A. H. Reiman, and A. Salas, J. Comput. Phys, 81(1):102-136 (1989).
TEA: A Code Calculating Thermochemical Equilibrium Abundances
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver
2016-07-01
We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
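The Gibbs free-energy minimization that TEA performs can be sketched for a toy two-species system (the standard chemical potentials below are made up for illustration; TEA's own Lagrangian iteration, species set, and thermodynamic data differ):

```python
import numpy as np
from scipy.optimize import minimize

# Toy Gibbs minimization in the spirit of the White et al. method:
# minimize G/RT = sum_i n_i * (mu0_i/RT + ln(n_i * P / n_tot))
# subject to elemental abundance (here H-atom) conservation.

species = ["H2", "H"]
mu0 = np.array([-20.0, -5.0])   # assumed mu_i^0 / RT (illustrative values)
elem = np.array([[2.0, 1.0]])   # H atoms per molecule of each species
b = np.array([2.0])             # total H-atom abundance
P = 1.0                         # pressure, bar

def gibbs(n):
    n = np.maximum(n, 1e-12)    # keep logarithms finite
    return np.sum(n * (mu0 + np.log(n * P / n.sum())))

cons = {"type": "eq", "fun": lambda n: elem @ n - b}
res = minimize(gibbs, x0=np.array([0.5, 1.0]), method="SLSQP",
               constraints=[cons], bounds=[(1e-12, None)] * 2)
n_eq = res.x
print(dict(zip(species, n_eq)))
```

With these assumed potentials, dissociation of H2 costs +10 RT, so the minimizer should return mostly H2 with a small atomic-H fraction, consistent with the analytic mass-action result x_H2/x_H² = e¹⁰.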
TEA: A CODE CALCULATING THERMOCHEMICAL EQUILIBRIUM ABUNDANCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver, E-mail: jasmina@physics.ucf.edu
2016-07-01
We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature–pressure pairs. We tested the code against the method of Burrows and Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows and Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
Norman, Janette A.; Blackmore, Caroline J.; Rourke, Meaghan; Christidis, Les
2014-01-01
Mitochondrial sequence data is often used to reconstruct the demographic history of Pleistocene populations in an effort to understand how species have responded to past climate change events. However, departures from neutral equilibrium conditions can confound evolutionary inference in species with structured populations or those that have experienced periods of population expansion or decline. Selection can affect patterns of mitochondrial DNA variation and variable mutation rates among mitochondrial genes can compromise inferences drawn from single markers. We investigated the contribution of these factors to patterns of mitochondrial variation and estimates of time to most recent common ancestor (TMRCA) for two clades in a co-operatively breeding avian species, the white-browed babbler Pomatostomus superciliosus. Both the protein-coding ND3 gene and hypervariable domain I control region sequences showed departures from neutral expectations within the superciliosus clade, and a two-fold difference in TMRCA estimates. Bayesian phylogenetic analysis provided evidence of departure from a strict clock model of molecular evolution in domain I, leading to an over-estimation of TMRCA for the superciliosus clade at this marker. Our results suggest mitochondrial studies that attempt to reconstruct Pleistocene demographic histories should rigorously evaluate data for departures from neutral equilibrium expectations, including variation in evolutionary rates across multiple markers. Failure to do so can lead to serious errors in the estimation of evolutionary parameters and subsequent demographic inferences concerning the role of climate as a driver of evolutionary change. These effects may be especially pronounced in species with complex social structures occupying heterogeneous environments. 
We propose that environmentally driven differences in social structure may explain observed differences in evolutionary rate of domain I sequences, resulting from longer than expected retention times for matriarchal lineages in the superciliosus clade. PMID:25181547
Modified NASA-Lewis chemical equilibrium code for MHD applications
NASA Technical Reports Server (NTRS)
Sacks, R. A.; Geyer, H. K.; Grammel, S. J.; Doss, E. D.
1979-01-01
A substantially modified version of the NASA-Lewis Chemical Equilibrium Code was recently developed. The modifications were designed to extend the power and convenience of the Code as a tool for performing combustor analysis for MHD systems studies. The effect of the programming details is described from a user point of view.
The historical biogeography of Mammalia
Springer, Mark S.; Meredith, Robert W.; Janecka, Jan E.; Murphy, William J.
2011-01-01
Palaeobiogeographic reconstructions are underpinned by phylogenies, divergence times and ancestral area reconstructions, which together yield ancestral area chronograms that provide a basis for proposing and testing hypotheses of dispersal and vicariance. Methods for area coding include multi-state coding with a single character, binary coding with multiple characters and string coding. Ancestral reconstruction methods are divided into parsimony versus Bayesian/likelihood approaches. We compared nine methods for reconstructing ancestral areas for placental mammals. Ambiguous reconstructions were a problem for all methods. Important differences resulted from coding areas based on the geographical ranges of extant species versus the geographical provenance of the oldest fossil for each lineage. Africa and South America were reconstructed as the ancestral areas for Afrotheria and Xenarthra, respectively. Most methods reconstructed Eurasia as the ancestral area for Boreoeutheria, Euarchontoglires and Laurasiatheria. The coincidence of molecular dates for the separation of Afrotheria and Xenarthra at approximately 100 Ma with the plate tectonic sundering of Africa and South America hints at the importance of vicariance in the early history of Placentalia. Dispersal has also been important including the origins of Madagascar's endemic mammal fauna. Further studies will benefit from increased taxon sampling and the application of new ancestral area reconstruction methods. PMID:21807730
Thermodynamic and transport properties of gaseous tetrafluoromethane in chemical equilibrium
NASA Technical Reports Server (NTRS)
Hunt, J. L.; Boney, L. R.
1973-01-01
Equations and a computer code are presented for the thermodynamic and transport properties of gaseous, undissociated tetrafluoromethane (CF4) in chemical equilibrium. The computer code calculates the thermodynamic and transport properties of CF4 when given any two of five thermodynamic variables (entropy, temperature, volume, pressure, and enthalpy). Equilibrium thermodynamic and transport property data are tabulated and pressure-enthalpy diagrams are presented.
Implementation of Premixed Equilibrium Chemistry Capability in OVERFLOW
NASA Technical Reports Server (NTRS)
Olsen, M. E.; Liu, Y.; Vinokur, M.; Olsen, T.
2003-01-01
An implementation of premixed equilibrium chemistry has been completed for the OVERFLOW code, a chimera capable, complex geometry flow code widely used to predict transonic flowfields. The implementation builds on the computational efficiency and geometric generality of the solver.
Implementation of Premixed Equilibrium Chemistry Capability in OVERFLOW
NASA Technical Reports Server (NTRS)
Olsen, Mike E.; Liu, Yen; Vinokur, M.; Olsen, Tom
2004-01-01
An implementation of premixed equilibrium chemistry has been completed for the OVERFLOW code, a chimera capable, complex geometry flow code widely used to predict transonic flowfields. The implementation builds on the computational efficiency and geometric generality of the solver.
NASA Technical Reports Server (NTRS)
Kumar, A.; Graves, R. A., Jr.; Weilmuenster, K. J.
1980-01-01
A vectorized code, EQUIL, was developed for calculating the equilibrium chemistry of a reacting gas mixture on the Control Data STAR-100 computer. The code provides species mole fractions, mass fractions, and thermodynamic and transport properties of the mixture for given temperature, pressure, and elemental mass fractions. The code is set up for a system of elements comprising electrons, H, He, C, O, and N. In all, 24 chemical species are included.
Recent update of the RPLUS2D/3D codes
NASA Technical Reports Server (NTRS)
Tsai, Y.-L. Peter
1991-01-01
The development of the RPLUS2D/3D codes is summarized. These codes utilize LU algorithms to solve chemical non-equilibrium flows in a body-fitted coordinate system. The motivation behind the development of these codes is the need to numerically predict chemical non-equilibrium flows for the National AeroSpace Plane Program. Recent improvements include a vectorization method, blocking algorithms for geometric flexibility, out-of-core storage for large-size problems, and an LU-SW/UP combination for CPU-time efficiency and solution quality.
Side information in coded aperture compressive spectral imaging
NASA Astrophysics Data System (ADS)
Galvis, Laura; Arguello, Henry; Lau, Daniel; Arce, Gonzalo R.
2017-02-01
Coded aperture compressive spectral imagers sense a three-dimensional cube by using two-dimensional projections of the coded and spectrally dispersed source. These imaging systems often rely on FPA detectors, SLMs, micromirror devices (DMDs), and dispersive elements. The use of DMDs to implement the coded apertures facilitates the capture of multiple projections, each admitting a different coded aperture pattern. The DMD allows not only the collection of a sufficient number of measurements for spectrally rich or spatially detailed scenes, but also the design of the spatial structure of the coded apertures to maximize the information content of the compressive measurements. Although sparsity is the only signal characteristic usually assumed for reconstruction in compressive sensing, other forms of prior information, such as side information, have been included as a way to improve the quality of the reconstructions. This paper presents the coded aperture design in a compressive spectral imager with side information in the form of RGB images of the scene. The use of RGB images as side information in the compressive sensing architecture has two main advantages: the RGB is used not only to improve the reconstruction quality but also to optimally design the coded apertures for the sensing process. The coded aperture design is based on the RGB scene, and thus the coded aperture structure exploits key features such as scene edges. Real reconstructions of noisy compressed measurements demonstrate the benefit of the designed coded apertures in addition to the improvement in reconstruction quality obtained by the use of side information.
Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa
2009-01-01
Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered subset expectation and maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769
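The ordered-subset expectation-maximization step applied to the assembled sinograms can be illustrated with a single-subset (MLEM) update on a synthetic system matrix (all values here are made up; the authors' system model for near-field coded aperture SPECT is far more involved):

```python
import numpy as np

# Minimal MLEM sketch (OSEM with one subset) for emission tomography.
# A is an assumed system matrix mapping image voxels to projection bins;
# the multiplicative update preserves non-negativity of the estimate.

rng = np.random.default_rng(0)
n_pix, n_bins = 16, 24
A = rng.random((n_bins, n_pix))           # assumed system matrix
x_true = rng.random(n_pix)                # synthetic activity image
y = A @ x_true                            # noiseless projection data

x = np.ones(n_pix)                        # uniform initial estimate
sens = A.sum(axis=0)                      # sensitivity image (column sums)
for _ in range(200):                      # MLEM iterations
    ratio = y / np.maximum(A @ x, 1e-12)  # measured / estimated projections
    x *= (A.T @ ratio) / sens             # multiplicative EM update

print(np.abs(A @ x - y).max())            # forward-projection mismatch
```

With consistent (noiseless) data the forward projection of the estimate converges toward the measured sinogram; with noisy data one would stop early or regularize.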
NASA Technical Reports Server (NTRS)
Rosen, Bruce S.
1991-01-01
An upwind three-dimensional volume Navier-Stokes code is modified to facilitate modeling of complex geometries and flow fields represented by proposed National Aerospace Plane concepts. Code enhancements include an equilibrium air model, a generalized equilibrium gas model and several schemes to simplify treatment of complex geometric configurations. The code is also restructured for inclusion of an arbitrary number of independent and dependent variables. This latter capability is intended for eventual use to incorporate nonequilibrium/chemistry gas models, more sophisticated turbulence and transition models, or other physical phenomena which will require inclusion of additional variables and/or governing equations. Comparisons of computed results with experimental data and results obtained using other methods are presented for code validation purposes. Good correlation is obtained for all of the test cases considered, indicating the success of the current effort.
NASA Technical Reports Server (NTRS)
Sozen, Mehmet
2003-01-01
In what follows, we describe the model used for combustion of liquid hydrogen (LH2) with liquid oxygen (LOX) under the chemical equilibrium assumption, and the novel computational method, based on the first and second laws of thermodynamics, developed for determining the equilibrium composition and temperature of the combustion products. The modular FORTRAN code, developed as a subroutine that can be incorporated into any flow network code with little effort, has been successfully implemented in GFSSP, as preliminary runs indicate. The code provides the capability of modeling the heat transfer rate to the coolants for parametric analysis in system design.
Transport and equilibrium in field-reversed mirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyd, J.K.
Two plasma models relevant to compact torus research have been developed to study transport and equilibrium in field-reversed mirrors. In the first model, for small Larmor radius and large collision frequency, the plasma is described as an adiabatic hydromagnetic fluid. In the second model, for large Larmor radius and small collision frequency, a kinetic theory description has been developed. Various aspects of the two models have been studied in five computer codes: ADB, AV, NEO, OHK, and RES. The ADB code computes two-dimensional equilibrium and one-dimensional transport in a flux coordinate. The AV code calculates orbit-average integrals in a harmonic oscillator potential. The NEO code follows particle trajectories in a Hill's vortex magnetic field to study stochasticity, invariants of the motion, and orbit-average formulas. The OHK code displays analytic psi(r), B_Z(r), phi(r), E_r(r) formulas developed for the kinetic theory description. The RES code calculates resonance curves to consider overlap regions relevant to stochastic orbit behavior.
Study of Globus-M Tokamak Poloidal System and Plasma Position Control
NASA Astrophysics Data System (ADS)
Dokuka, V. N.; Korenev, P. S.; Mitrishkin, Yu. V.; Pavlova, E. A.; Patrov, M. I.; Khayrutdinov, R. R.
2017-12-01
In order to provide efficient performance of tokamaks with vertically elongated plasmas, control systems for limited and diverted plasma configurations are required. The accuracy, stability, speed of response, and reliability of plasma position control, as well as of plasma shape and current control, depend on the performance of the control system. Therefore, the development of such systems is an important task in modern tokamaks. In this study, the measured signals from the magnetic loops and Rogowski coils are used to reconstruct the plasma equilibrium, for which linear models in small deviations are constructed. We apply methods of H∞-optimization theory to synthesize control systems for the vertical and horizontal plasma position that are capable of working with structural uncertainty in the plant models. These systems are applied to the plasma-physical DINA code, which is configured for the Globus-M tokamak plasma. Testing of the developed systems against the DINA code with Heaviside step inputs has revealed the complex dynamics of plasma magnetic configurations. Proximity to a bifurcation point in the parameter space of the unstable plasma made it possible to detect an abrupt change in the X-point position from the top to the bottom and vice versa. Development of methods for the reconstruction of plasma magnetic configurations, together with experience in designing feedback plasma control systems for tokamaks, provided an opportunity to synthesize new digital controllers for plasma vertical and horizontal position stabilization. It also allowed us to test the synthesized digital controllers in the closed loop of the control system with the DINA code as a nonlinear model of the plasma.
NASA Technical Reports Server (NTRS)
Hamilton, H. Harris, II; Millman, Daniel R.; Greendyke, Robert B.
1992-01-01
A computer code was developed that uses an implicit finite-difference technique to solve nonsimilar, axisymmetric boundary layer equations for both laminar and turbulent flow. The code can treat ideal gases, air in chemical equilibrium, and carbon tetrafluoride (CF4), which is a useful gas for hypersonic blunt-body simulations. This is the only known boundary layer code that can treat CF4. Comparisons with experimental data have demonstrated that accurate solutions are obtained. The method should prove useful as an analysis tool for comparing calculations with wind tunnel experiments and for making calculations about flight vehicles where equilibrium air chemistry assumptions are valid.
NASA Astrophysics Data System (ADS)
Hamilton, H. Harris, II; Millman, Daniel R.; Greendyke, Robert B.
1992-12-01
A computer code was developed that uses an implicit finite-difference technique to solve nonsimilar, axisymmetric boundary layer equations for both laminar and turbulent flow. The code can treat ideal gases, air in chemical equilibrium, and carbon tetrafluoride (CF4), which is a useful gas for hypersonic blunt-body simulations. This is the only known boundary layer code that can treat CF4. Comparisons with experimental data have demonstrated that accurate solutions are obtained. The method should prove useful as an analysis tool for comparing calculations with wind tunnel experiments and for making calculations about flight vehicles where equilibrium air chemistry assumptions are valid.
Accessible and informative sectioned images, color-coded images, and surface models of the ear.
Park, Hyo Seok; Chung, Min Suk; Shin, Dong Sun; Jung, Yong Wook; Park, Jin Seo
2013-08-01
In our previous research, we created state-of-the-art sectioned images, color-coded images, and surface models of the human ear. Our ear data would be more beneficial and informative if they were more easily accessible. Therefore, the purpose of this study was to distribute the browsing software and the PDF file in which the ear images can be readily obtained and freely explored. Another goal was to inform other researchers of our methods for establishing the browsing software and the PDF file. To achieve this, sectioned images and color-coded images of the ear were prepared (voxel size 0.1 mm). In the color-coded images, structures related to hearing and equilibrium, as well as structures originating from the first and second pharyngeal arches, were additionally segmented. The sectioned and color-coded images of the right ear were added to the browsing software, which displays the images serially along with structure names. The surface models were reconstructed and combined into the PDF file, where they can be freely manipulated. Using the browsing software and PDF file, the sectional and three-dimensional shapes of ear structures can be comprehended in detail. Furthermore, using the PDF file, clinical knowledge can be identified through virtual otoscopy. Therefore, the presented educational tools will be helpful to medical students and otologists by improving their knowledge of ear anatomy. The browsing software and PDF file can be downloaded without charge and registration at our homepage (http://anatomy.dongguk.ac.kr/ear/). Copyright © 2013 Wiley Periodicals, Inc.
NR-code: Nonlinear reconstruction code
NASA Astrophysics Data System (ADS)
Yu, Yu; Pen, Ue-Li; Zhu, Hong-Ming
2018-04-01
NR-code applies nonlinear reconstruction to the dark matter density field in redshift space and solves for the nonlinear mapping from the initial Lagrangian positions to the final redshift space positions; this reverses the large-scale bulk flows and improves the precision measurement of the baryon acoustic oscillations (BAO) scale.
LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
2000-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
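As an illustration of the kind of stiff integration LSODE performs inside LSENS, the sketch below solves the classic Robertson chemical-kinetics test problem with SciPy's BDF solver (the same implicit multistep family LSODE belongs to); the rate constants are the standard test values, not LSENS inputs:

```python
# Stiff chemical-kinetics integration sketch (Robertson problem, not LSENS itself):
# three species with rate constants spanning nine orders of magnitude.
import numpy as np
from scipy.integrate import solve_ivp

def robertson(t, y):
    y1, y2, y3 = y
    return [-0.04 * y1 + 1e4 * y2 * y3,
             0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2**2,
             3e7 * y2**2]

sol = solve_ivp(robertson, (0.0, 1e5), [1.0, 0.0, 0.0],
                method="BDF", rtol=1e-8, atol=1e-10)
print(sol.y[:, -1].sum())  # total mass fraction is conserved (~1.0)
```

An explicit solver would need prohibitively small steps here; the implicit method takes large steps once the fast transient has decayed, which is exactly why LSENS uses this class of integrator.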
Unitary reconstruction of secret for stabilizer-based quantum secret sharing
NASA Astrophysics Data System (ADS)
Matsumoto, Ryutaroh
2017-08-01
We propose a unitary procedure to reconstruct the quantum secret for a quantum secret sharing scheme constructed from stabilizer quantum error-correcting codes. Erasure-correcting procedures for stabilizer codes need to add missing shares to reconstruct the quantum secret, while unitary reconstruction procedures for a certain class of quantum secret sharing schemes are known to work without adding missing shares. The proposed procedure also works without adding missing shares.
PIES free boundary stellarator equilibria with improved initial conditions
NASA Astrophysics Data System (ADS)
Drevlak, M.; Monticello, D.; Reiman, A.
2005-07-01
The MFBE procedure developed by Strumberger (1997 Nucl. Fusion 37 19) is used to provide an improved starting point for free boundary equilibrium computations in the case of W7-X (Nührenberg and Zille 1986 Phys. Lett. A 114 129) using the Princeton iterative equilibrium solver (PIES) code (Reiman and Greenside 1986 Comput. Phys. Commun. 43 157). Transferring the consistent field found by the variational moments equilibrium code (VMEC) (Hirshman and Whitson 1983 Phys. Fluids 26 3553) to an extended coordinate system using the VMORPH code, a safe margin between plasma boundary and PIES domain is established. The new EXTENDER_P code implements a generalization of the virtual casing principle, which allows field extension both for VMEC and PIES equilibria. This facilitates analysis of the 5/5 islands of the W7-X standard case without including them in the original PIES computation.
Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.
Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P
2018-01-04
Full Monte Carlo (MC)-based SPECT reconstructions have a strong potential for correcting for image degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast, ordered subset expectation maximization (OSEM) reconstructions of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulations of parallel photon emissions from the voxel matrix (128³ or 256³). Each computed tomography (CT) number was converted to attenuation coefficients for photoabsorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used for modeling the photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom 177Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and AC OSEM reconstructions with resolution recovery correction (RRC). The performance of the MC code was 3200 million photons/s. The required number of photons emitted per voxel to obtain a sufficiently low noise level in the simulated image was 200 for a 128³ voxel matrix. With this number of emitted photons/voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations. Therefore, the reconstruction time was around 3 min.
The activity recovery for the spheres in the Jaszczak phantom was clearly improved with MC-based OSEM reconstruction, e.g., the activity recovery was 88% for the largest sphere, while it was 66% for AC-OSEM and 79% for RRC-OSEM. The GPU-based MC code generated an MC-based SPECT/CT reconstruction within a few minutes, and reconstructed patient images of 177Lu-DOTATATE treatments revealed clearly improved resolution and contrast.
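The update at the heart of such reconstructions is compact. Below is a minimal MLEM sketch (a single subset and a random toy system matrix, rather than the paper's Monte Carlo projector) showing the multiplicative update that OSEM accelerates by cycling over projection subsets:

```python
# Minimal MLEM sketch on a toy system (hypothetical matrix, not a SPECT projector):
# x <- x * A^T(y / Ax) / A^T(1), the multiplicative EM update OSEM builds on.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((40, 10))         # toy system matrix mapping activity to projections
x_true = rng.random(10) + 0.5    # toy activity distribution
y = A @ x_true                   # noiseless projection data, for simplicity

x = np.ones(10)                  # uniform initial estimate
sens = A.sum(axis=0)             # sensitivity term A^T applied to ones
for _ in range(500):
    x *= (A.T @ (y / (A @ x))) / sens

print(float(np.linalg.norm(A @ x - y)))  # residual shrinks as likelihood increases
```

The update preserves positivity automatically, which is one reason the EM family is standard in emission tomography; OSEM simply applies this update subset-by-subset to converge in far fewer passes over the data.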
Current/Pressure Profile Effects on Tearing Mode Stability in DIII-D Hybrid Discharges
NASA Astrophysics Data System (ADS)
Kim, K.; Park, J. M.; Murakami, M.; La Haye, R. J.; Na, Yong-Su
2015-11-01
It is important to understand the onset threshold and the evolution of tearing modes (TMs) for developing a high-performance steady-state fusion reactor. As an initial, basic comparison to determine TM onset, the measured plasma profiles (such as temperature, density, and rotation) were compared with the calculated current profiles for a pair of discharges with/without an n=1 mode, based on the database of DIII-D hybrid plasmas. The profiles were not very different, but the details were analyzed to determine their characteristics, especially near the rational surface. The tearing stability index Δ', calculated with PEST3, tends to increase rapidly just before the n=1 mode onset for these cases. Model equilibria with parametrically varied pressure or current profiles, based on the reference discharge, are reconstructed to check the dependence of the onset on Δ' or on neoclassical effects such as the bootstrap current. Simulations of TMs with the modeled equilibria using resistive MHD codes will also be presented and compared with experiments to determine the sensitivity for predicting TM onset. Work supported by US DOE under DE-FC02-04ER54698 and DE-AC52-07NA27344.
MHD stability analysis and global mode identification preparing for high beta operation in KSTAR
NASA Astrophysics Data System (ADS)
Park, Y. S.; Sabbagh, S. A.; Berkery, J. W.; Jiang, Y.; Ahn, J. H.; Han, H. S.; Bak, J. G.; Park, B. H.; Jeon, Y. M.; Kim, J.; Hahn, S. H.; Lee, J. H.; Ko, J. S.; In, Y. K.; Yoon, S. W.; Oh, Y. K.; Wang, Z.; Glasser, A. H.
2017-10-01
H-mode plasma operation in KSTAR has surpassed the computed n = 1 ideal no-wall stability limit in discharges exceeding several seconds in duration. The achieved high normalized beta plasmas are presently limited by resistive tearing instabilities rather than by global kink/ballooning modes or RWMs. The ideal and resistive stability of these plasmas is examined using different physics models. The stability of the observed m/n = 2/1 tearing mode is computed using the M3D-C1 code and the resistive DCON code. The global MHD stability modified by kinetic effects is examined using the MISK code. Results from the analysis explain the stabilization of the plasma above the ideal MHD no-wall limit. The equilibrium reconstructions used include the measured kinetic profiles and MSE data. In preparation for plasma operation at higher beta utilizing the planned second NBI system, three sets of 3D magnetic field sensors have been installed and will be used for RWM active feedback control. To accurately determine the dominant n-component produced by low-frequency unstable RWMs, an algorithm has been developed that includes magnetic sensor compensation of the prompt applied field and of the field from the current induced on the passive conductors. Supported by US DOE Contracts DE-FG02-99ER54524 and DE-SC0016614.
A chemical equilibrium code was improved and used to show that calcium and magnesium have a large yet different effect on the aerosol size distribution in different regions of Los Angeles. In the code, a new technique of solving individual equilibrium equation...
User's manual for the FLORA equilibrium and stability code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freis, R.P.; Cohen, B.I.
1985-04-01
This document provides a user's guide to the content and use of the two-dimensional axisymmetric equilibrium and stability code FLORA. FLORA addresses the low-frequency MHD stability of long-thin axisymmetric tandem mirror systems with finite pressure and finite-Larmor-radius effects. FLORA solves an initial-value problem for interchange, rotational, and ballooning stability.
Radiation in Space and Its Control of Equilibrium Temperatures in the Solar System
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
2004-01-01
The problem of determining equilibrium temperatures for reradiating surfaces in space vacuum was analyzed and the resulting mathematical relationships were incorporated in a code to determine space sink temperatures in the solar system. A brief treatment of planetary atmospheres is also included. Temperature values obtained with the code are in good agreement with available spacecraft telemetry and meteorological measurements for Venus and Earth. The code has been used in the design of space power system radiators for future interplanetary missions.
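The underlying balance is the Stefan-Boltzmann relation: absorbed solar power equals reradiated thermal power. The sketch below (a hedged illustration of that relation, not the NASA code itself; the flux and albedo values are standard published figures) computes the equilibrium temperature of a rapidly rotating sphere in vacuum:

```python
# Radiative-balance sketch (illustrative, not the NASA sink-temperature code):
# pi*r^2*S*(1-a) absorbed  =  4*pi*r^2*eps*sigma*T^4 emitted.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp_sphere(solar_flux, albedo, emissivity=1.0):
    """Equilibrium temperature of a rapidly rotating sphere in space vacuum."""
    return (solar_flux * (1.0 - albedo) / (4.0 * emissivity * SIGMA)) ** 0.25

# Earth: S ~ 1361 W/m^2, Bond albedo ~ 0.306 -> effective temperature ~ 254 K,
# consistent with the abstract's claim of agreement with telemetry for Earth.
print(round(equilibrium_temp_sphere(1361.0, 0.306)))
```

Dropping the factor of 4 (a flat plate facing the Sun and reradiating from one side) gives the hotter limit relevant to radiator sizing.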
Paul T. Rygiewicz; Vicente J. Monleon; Elaine R. Ingham; Kendall J. Martin; Mark G. Johnson
2010-01-01
Disrupting ecosystem components while transferring and reconstructing them for experiments can produce myriad responses. Establishing the extent of these biological responses as the system approaches a new equilibrium allows us to emulate comparable native systems more reliably. That is, the sensitivity of analyzing ecosystem processes in a reconstructed system is...
An Initial Non-Equilibrium Porous-Media Model for CFD Simulation of Stirling Regenerators
NASA Technical Reports Server (NTRS)
Tew, Roy; Simon, Terry; Gedeon, David; Ibrahim, Mounir; Rong, Wei
2006-01-01
The objective of this paper is to define empirical parameters (or closure models) for an initial thermal non-equilibrium porous-media model for use in Computational Fluid Dynamics (CFD) codes for simulation of Stirling regenerators. The two CFD codes currently being used at Glenn Research Center (GRC) for Stirling engine modeling are Fluent and CFD-ACE. The porous-media models available in each of these codes are equilibrium models, which assume that the solid matrix and the fluid are in thermal equilibrium at each spatial location within the porous medium. This is believed to be a poor assumption for the oscillating-flow environment within Stirling regenerators; Stirling 1-D regenerator models, used in Stirling design, use non-equilibrium regenerator models and suggest regenerator matrix and gas average temperatures can differ by several degrees at a given axial location and time during the cycle. A NASA regenerator research grant has been providing experimental and computational results to support definition of various empirical coefficients needed in defining a non-equilibrium, macroscopic, porous-media model (i.e., to define "closure" relations). The grant effort is being led by Cleveland State University, with subcontractor assistance from the University of Minnesota, Gedeon Associates, and Sunpower, Inc. Friction-factor and heat-transfer correlations based on data taken with the NASA/Sunpower oscillating-flow test rig also provide experimentally based correlations that are useful in defining parameters for the porous-media model; these correlations are documented in Gedeon Associates' Sage Stirling-Code Manuals.
These sources of experimentally based information were used to define the following terms and parameters needed in the non-equilibrium porous-media model: hydrodynamic dispersion, permeability, inertial coefficient, fluid effective thermal conductivity (including thermal dispersion and an estimate of tortuosity effects), and fluid-solid heat transfer coefficient. Solid effective thermal conductivity (including the effect of tortuosity) was also estimated. Determination of the porous-media model parameters was based on planned use in a CFD model of Infinia's Stirling Technology Demonstration Convertor (TDC), which uses a random-fiber regenerator matrix. The non-equilibrium porous-media model presented is considered to be an initial, or "draft," model for possible incorporation in commercial CFD codes, with the expectation that the empirical parameters will likely need to be updated once resulting Stirling CFD model regenerator and engine results have been analyzed. The emphasis of the paper is on use of available data to define empirical parameters (and closure models) needed in a thermal non-equilibrium porous-media model for Stirling regenerator simulation. Such a model has not yet been implemented by the authors or their associates. However, it is anticipated that a thermal non-equilibrium model such as that presented here, when incorporated in the CFD codes, will improve our ability to accurately model Stirling regenerators with CFD relative to current thermal-equilibrium porous-media models.
An Initial Non-Equilibrium Porous-Media Model for CFD Simulation of Stirling Regenerators
NASA Technical Reports Server (NTRS)
Tew, Roy C.; Simon, Terry; Gedeon, David; Ibrahim, Mounir; Rong, Wei
2006-01-01
The objective of this paper is to define empirical parameters for an initial thermal non-equilibrium porous-media model for use in Computational Fluid Dynamics (CFD) codes for simulation of Stirling regenerators. The two codes currently used at Glenn Research Center for Stirling modeling are Fluent and CFD-ACE. The codes' porous-media models are equilibrium models, which assume the solid matrix and fluid are in thermal equilibrium. This is believed to be a poor assumption for Stirling regenerators; Stirling 1-D regenerator models, used in Stirling design, use non-equilibrium regenerator models and suggest regenerator matrix and gas average temperatures can differ by several degrees at a given axial location and time during the cycle. Experimentally based information was used to define: hydrodynamic dispersion, permeability, inertial coefficient, fluid effective thermal conductivity, and fluid-solid heat transfer coefficient. Solid effective thermal conductivity was also estimated. Determination of model parameters was based on planned use in a CFD model of Infinia's Stirling Technology Demonstration Convertor (TDC), which uses a random-fiber regenerator matrix. Emphasis is on use of available data to define empirical parameters needed in a thermal non-equilibrium porous-media model for Stirling regenerator simulation. Such a model has not yet been implemented by the authors or their associates.
Guttman, Mitchell; Garber, Manuel; Levin, Joshua Z.; Donaghey, Julie; Robinson, James; Adiconis, Xian; Fan, Lin; Koziol, Magdalena J.; Gnirke, Andreas; Nusbaum, Chad; Rinn, John L.; Lander, Eric S.; Regev, Aviv
2010-01-01
RNA-Seq provides an unbiased way to study a transcriptome, including both coding and non-coding genes. To date, most RNA-Seq studies have critically depended on existing annotations, and thus focused on expression levels and variation in known transcripts. Here, we present Scripture, a method to reconstruct the transcriptome of a mammalian cell using only RNA-Seq reads and the genome sequence. We apply it to mouse embryonic stem cells, neuronal precursor cells, and lung fibroblasts to accurately reconstruct the full-length gene structures for the vast majority of known expressed genes. We identify substantial variation in protein-coding genes, including thousands of novel 5′-start sites, 3′-ends, and internal coding exons. We then determine the gene structures of over a thousand lincRNA and antisense loci. Our results open the way to direct experimental manipulation of thousands of non-coding RNAs, and demonstrate the power of ab initio reconstruction to render a comprehensive picture of mammalian transcriptomes. PMID:20436462
Feature reconstruction of LFP signals based on PLSR in the neural information decoding study.
Yonghui Dong; Zhigang Shang; Mengmeng Li; Xinyu Liu; Hong Wan
2017-07-01
To address the problems of low signal-to-noise ratio (SNR) and multicollinearity when local field potential (LFP) signals are used for decoding an animal's motion intention, a feature-reconstruction method for LFP signals based on partial least squares regression (PLSR) is proposed in this paper. Firstly, the feature information of the LFP coding band is extracted based on the wavelet transform. Then a PLSR model is constructed from the extracted LFP coding features. Given the multicollinearity among the coding features, several latent variables that contribute greatly to the steering behavior are obtained, and new LFP coding features are reconstructed from them. Finally, the k-nearest neighbor (KNN) method is used to classify the reconstructed coding features to verify the decoding performance. The results show that the proposed method achieves the highest accuracy of the four methods compared, and its decoding performance is robust.
Limitations of bootstrap current models
Belli, Emily A.; Candy, Jefferey M.; Meneghini, Orso; ...
2014-03-27
We assess the accuracy and limitations of two analytic models of the tokamak bootstrap current: (1) the well-known Sauter model and (2) a recent modification of the Sauter model by Koh et al. For this study, we use simulations from the first-principles kinetic code NEO as the baseline to which the models are compared. Tests are performed using both theoretical parameter scans as well as core-to-edge scans of real DIII-D and NSTX plasma profiles. The effects of extreme aspect ratio, large impurity fraction, energetic particles, and high collisionality are studied. In particular, the error in neglecting cross-species collisional coupling, an approximation inherent to both analytic models, is quantified. Moreover, the implications of the corrections from kinetic NEO simulations on MHD equilibrium reconstructions are studied via integrated modeling with kinetic EFIT.
Measurement of neoclassically predicted edge current density at ASDEX Upgrade
NASA Astrophysics Data System (ADS)
Dunne, M. G.; McCarthy, P. J.; Wolfrum, E.; Fischer, R.; Giannone, L.; Burckhart, A.; the ASDEX Upgrade Team
2012-12-01
Experimental confirmation of neoclassically predicted edge current density in an ELMy H-mode plasma is presented. Current density analysis using the CLISTE equilibrium code is outlined and the rationale for accuracy of the reconstructions is explained. Sample profiles and time traces from analysis of data at ASDEX Upgrade are presented. A high time resolution is possible due to the use of an ELM-synchronization technique. Additionally, the flux-surface-averaged current density is calculated using a neoclassical approach. Results from these two separate methods are then compared and are found to validate the theoretical formula. Finally, several discharges are compared as part of a fuelling study, showing that the size and width of the edge current density peak at the low-field side can be explained by the electron density and temperature drives and their respective collisionality modifications.
NASA Technical Reports Server (NTRS)
Talcott, N. A., Jr.
1977-01-01
Equations and computer code are given for the thermodynamic properties of gaseous fluorocarbons in chemical equilibrium. In addition, isentropic equilibrium expansions of two binary mixtures of fluorocarbons and argon are included. The computer code calculates the equilibrium thermodynamic properties and, in some cases, the transport properties for the following fluorocarbons: CCl3F, CCl2F2, CBrF3, CF4, CHCl2F, CHF3, CCl2F-CCl2F, CClF2-CClF2, CF3-CF3, and C4F8. Equilibrium thermodynamic properties are tabulated for six of the fluorocarbons (CCl3F, CCl2F2, CBrF3, CF4, CF3-CF3, and C4F8), and pressure-enthalpy diagrams are presented for CBrF3.
NASA Astrophysics Data System (ADS)
Miyata, Y.; Suzuki, T.; Takechi, M.; Urano, H.; Ide, S.
2015-07-01
For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method with an equilibrium calculation in JT-60SA, whose plasma shape has high elongation and triangularity. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction in JT-60SA is sensitive to the CCS free parameters, such as the number of unknown parameters and the shape of the surface. It is found that the optimum number of unknown parameters and the size of the CCS that minimize errors in the reconstructed plasma shape are proportional to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the achievable reconstruction errors in plasma shape and strike-point locations are within the target ranges in JT-60SA.
Incorporation of a Chemical Equilibrium Equation of State into LOCI-Chem
NASA Technical Reports Server (NTRS)
Cox, Carey F.
2005-01-01
Renewed interest in the development of advanced high-speed transports, reentry vehicles, and propulsion systems has led to a resurgence of research into high-speed aerodynamics. As this flow regime is typically dominated by hot reacting gaseous flow, efficient models for the characteristic chemical activity are necessary for accurate and cost-effective analysis and design of aerodynamic vehicles that transit this regime. The LOCI-Chem code, recently developed by Ed Luke at Mississippi State University for NASA/MSFC and used by NASA/MSFC and SSC, represents an important step in providing an accurate, efficient computational tool for the simulation of reacting flows through the use of finite-rate kinetics [3]. Finite-rate chemistry, however, requires the solution of N-1 additional species mass conservation equations with source terms involving reaction kinetics that are not fully understood. In the equilibrium limit, where the reaction rates approach infinity, these equations become very stiff. Through the assumption of local chemical equilibrium, the set of governing equations is reduced back to the usual gas dynamic equations, and thus requires less computation, while still allowing for the inclusion of reacting-flow phenomenology. The incorporation of a chemical equilibrium equation of state module into the LOCI-Chem code was the primary objective of the current research. The major goals of the project were: (1) the development of a chemical equilibrium composition solver, and (2) the incorporation of the chemical equilibrium solver into LOCI-Chem. Due to time and resource constraints, code optimization was not considered unless it was important to the proper functioning of the code.
Latash, M L
1994-01-01
A method for reconstructing joint compliant characteristics during voluntary movements was applied to the analysis of oscillatory and unidirectional elbow flexion movements. In different series, the subjects were given one of the following instructions: (1) do not intervene voluntarily; (2) keep the trajectory; (3) in cases of perturbations, return back to the starting position as quickly as possible (only during unidirectional movements). Under the instruction 'keep trajectory', the apparent joint stiffness increased by 50% to 250%. During oscillatory movements, this was accompanied by a decrease in the maximal difference between the actual and equilibrium joint trajectories and, in several cases, led to a change in the phase relation between the two trajectories. The coefficients of correlation between joint torque and angle were very high (commonly, over 0.9) under the 'do not intervene' instruction. They dropped to about 0.6 under the 'keep trajectory' and to about 0.3 under the 'return back' instructions. Under these two instructions, the low values of the coefficients of correlation did not allow reconstruction of segments of equilibrium trajectories and joint stiffness values in all the subjects. The results provide further support for the lambda-version of the equilibrium-point hypothesis and for using the instruction 'do not intervene voluntarily' to obtain reproducible time patterns of the central motor command.
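A minimal illustration of how such joint compliant characteristics can be estimated (on synthetic data with assumed stiffness and equilibrium-angle values, not the study's recordings): when the torque-angle correlation is high, as under the 'do not intervene' instruction, a linear fit T = K(θ - θ_eq) recovers both the apparent stiffness and the equilibrium angle.

```python
# Toy joint-compliance estimate (assumed K and theta_eq, not experimental data):
# fit torque vs angle, slope = stiffness K, zero-torque crossing = equilibrium angle.
import numpy as np

rng = np.random.default_rng(2)
theta = np.linspace(0.2, 1.0, 100)           # joint angle samples, rad (assumed)
K_true, theta_eq_true = 8.0, 0.5             # hypothetical stiffness and equilibrium
torque = K_true * (theta - theta_eq_true) + rng.normal(0.0, 0.05, 100)

K_est, intercept = np.polyfit(theta, torque, 1)
theta_eq_est = -intercept / K_est            # angle where net torque vanishes
print(K_est, theta_eq_est)
```

This also shows why the low torque-angle correlations reported under the 'keep trajectory' and 'return back' instructions blocked reconstruction: with weak correlation, the fitted slope and crossing become unreliable.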
Reconstruction of Pressure Profile Evolution during Levitated Dipole Experiments
NASA Astrophysics Data System (ADS)
Mauel, M.; Garnier, D.; Boxer, A.; Ellsworth, J.; Kesner, J.
2008-11-01
Magnetic levitation of the LDX superconducting dipole causes significant changes in the measured diamagnetic flux and what appears to be an isotropic plasma pressure profile (p⊥ ≈ p∥). This poster describes the reconstruction of plasma current and plasma pressure profiles from external measurements of the equilibrium magnetic field, which vary substantially as a function of time depending upon variations in neutral pressure and multifrequency ECRH power levels. Previous free-boundary reconstructions of plasma equilibrium showed the plasma to be anisotropic and highly peaked at the location of the cyclotron resonance of the microwave heating sources. Reconstructions of the peaked plasma pressures confined by a levitated dipole incorporate the small axial motion of the dipole (±5 mm), time varying levitation coil currents, eddy currents flowing in the vacuum vessel, constant magnetic flux linking the superconductor, and new flux loops located near the hot plasma in order to closely couple to plasma current and dipole current variations. I. Karim, et al., J. Fusion Energy, 26 (2007) 99.
Numerical optimization of perturbative coils for tokamaks
NASA Astrophysics Data System (ADS)
Lazerson, Samuel; Park, Jong-Kyu; Logan, Nikolas; Boozer, Allen; NSTX-U Research Team
2014-10-01
Numerical optimization of coils which apply three dimensional (3D) perturbative fields to tokamaks is presented. The application of perturbative 3D magnetic fields in tokamaks is now commonplace for control of error fields, resistive wall modes, resonant field drive, and neoclassical toroidal viscosity (NTV) torques. The design of such systems has focused on control of toroidal mode number, with coil shapes based on simple window-pane designs. In this work, a numerical optimization suite based on the STELLOPT 3D equilibrium optimization code is presented. The new code, IPECOPT, replaces the VMEC equilibrium code with the IPEC perturbed equilibrium code, and targets NTV torque by coupling to the PENT code. Fixed boundary optimizations of the 3D fields for the NSTX-U experiment are underway. Initial results suggest NTV torques can be driven by normal field spectrums which are not pitch-resonant with the magnetic field lines. Work has focused on driving core torque with n = 1 and edge torques with n = 3 fields. Optimizations of the coil currents for the planned NSTX-U NCC coils highlight the code's free boundary capability. This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy.
Bellomo, A; Inbar, G
1997-01-01
One of the theories of human motor control is the gamma Equilibrium Point Hypothesis. It is an attractive theory since it offers an easy control scheme where the planned trajectory shifts monotonically from an initial to a final equilibrium state. The feasibility of this model was tested by reconstructing the virtual trajectory and the stiffness profiles for movements performed with different inertial loads and examining them. Three types of movements were tested: passive movements, targeted movements, and repetitive movements. Each of the movements was performed with five different inertial loads. Plausible virtual trajectories and stiffness profiles were reconstructed based on the gamma Equilibrium Point Hypothesis for the three different types of movements performed with different inertial loads. However, the simple control strategy supported by the model, where the planned trajectory shifts monotonically from an initial to a final equilibrium state, could not be supported for targeted movements performed with added inertial load. To test the feasibility of the model further we must examine the probability that the human motor control system would choose a trajectory more complicated than the actual trajectory to control.
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Bowman, Matthew O.; Cubillos, Patricio E.; Stemm, Madison; Foster, Andrew
2014-11-01
We present a new, open-source, Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. TEA uses the Gibbs-free-energy minimization method with an iterative Lagrangian optimization scheme. It initializes the radiative-transfer calculation in our Bayesian Atmospheric Radiative Transfer (BART) code. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. The code is tested against the original method developed by White et al. (1958), the analytic method developed by Burrows and Sharp (1999), and the Newton-Raphson method implemented in the open-source Chemical Equilibrium with Applications (CEA) code. TEA is written in Python and is available to the community via the open-source development site GitHub.com. We also present BART applied to eclipse depths of the exoplanet WASP-43b, constraining atmospheric thermal and chemical parameters. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
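The structure of the Gibbs-minimization problem can be sketched with a generic constrained optimizer. The toy below uses made-up dimensionless free energies for three hypothetical species A, B, and AB (not real thermochemical data, and SciPy's SLSQP rather than TEA's iterative Lagrangian scheme): minimize G/RT subject to element conservation.

```python
# Toy Gibbs-free-energy minimization in the spirit of TEA/CEA (assumed g_i values):
# minimize G/RT = sum_i n_i * (g_i + ln(n_i / N)) subject to element balance.
import numpy as np
from scipy.optimize import minimize

g = np.array([-10.0, -5.0, -12.0])   # assumed g_i/RT for species A, B, AB
E = np.array([[1, 0, 1],             # element-balance matrix: rows = elements a, b
              [0, 1, 1]], dtype=float)  # columns = species A, B, AB
b = np.array([1.0, 1.0])             # one mole of each element available

def gibbs(n):
    n = np.clip(n, 1e-12, None)      # guard the logarithm
    return float(np.sum(n * (g + np.log(n / n.sum()))))

res = minimize(gibbs, x0=np.array([0.4, 0.4, 0.3]),
               constraints={"type": "eq", "fun": lambda n: E @ n - b},
               bounds=[(1e-10, None)] * 3, method="SLSQP")
print(res.x)  # equilibrium mole numbers of A, B, AB
```

Real codes solve the same constrained problem with temperature- and pressure-dependent free energies for hundreds of species; the mixing-entropy term ln(n_i/N) is what keeps every species at a finite, if tiny, abundance.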
NASA Astrophysics Data System (ADS)
Hirshman, S. P.; Shafer, M. W.; Seal, S. K.; Canik, J. M.
2016-04-01
The SIESTA magnetohydrodynamic (MHD) equilibrium code has been used to compute a sequence of ideally stable equilibria resulting from numerical variation of the helical resonant magnetic perturbation (RMP) applied to an axisymmetric DIII-D plasma equilibrium. Increasing the perturbation strength at the dominant resonant surface leads to lower MHD energies and increases in the equilibrium island widths at the resonant (and sideband) surfaces, in agreement with theoretical expectations. Island overlap at large perturbation strengths leads to stochastic magnetic fields which correlate well with the experimentally inferred field structure. The magnitude and spatial phase (around the dominant rational surfaces) of the resonant (shielding) component of the parallel current are shown to change qualitatively with the magnetic island topology.
Extension of CE/SE method to non-equilibrium dissociating flows
NASA Astrophysics Data System (ADS)
Wen, C. Y.; Saldivar Massimi, H.; Shen, H.
2018-03-01
In this study, hypersonic non-equilibrium flows over rounded nose geometries are numerically investigated by a robust conservation element and solution element (CE/SE) code, which is based on hybrid meshes consisting of triangular and quadrilateral elements. The dissociation and recombination chemical reactions as well as the vibrational energy relaxation are taken into account. The stiff source terms are solved by an implicit trapezoidal method of integration. Comparisons with laboratory and numerical cases are provided to demonstrate the accuracy and reliability of the present CE/SE code in simulating hypersonic non-equilibrium flows.
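The implicit trapezoidal treatment of stiff source terms can be sketched on a single model equation. The recombination-like rate dy/dt = -k*y² and all constants below are assumptions for illustration, not the paper's chemistry; each step solves the trapezoidal update with a few Newton iterations:

```python
# Implicit trapezoidal (Crank-Nicolson) step for a stiff model source term,
# dy/dt = f(y) = -k*y**2  (a recombination-like toy rate).
def f(y, k):
    return -k * y * y

def trapezoidal_step(y0, dt, k, newton_iters=30):
    """Solve y1 = y0 + dt/2 * (f(y0) + f(y1)) for y1 by Newton iteration."""
    f0 = f(y0, k)
    y = y0  # initial Newton guess
    for _ in range(newton_iters):
        g = y - y0 - 0.5 * dt * (f0 + f(y, k))   # residual of implicit update
        dg = 1.0 + dt * k * y                    # d(residual)/dy
        y_next = y - g / dg
        if abs(y_next - y) < 1e-14:
            return y_next
        y = y_next
    return y

# Integrate over t in [0, 0.1] with k = 100; the exact solution of this
# model equation is y(t) = y0 / (1 + k*y0*t).
k, dt, y = 100.0, 1e-3, 1.0
for _ in range(100):
    y = trapezoidal_step(y, dt, k)
```

The trapezoidal rule is A-stable, so the step size is limited by accuracy rather than by the stiffness of the source term, which is the property that matters in non-equilibrium flow solvers.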
NASA Astrophysics Data System (ADS)
Gilleron, Franck; Piron, Robin
2015-12-01
We present Dédale, a fast code implementing a simplified non-local-thermodynamic-equilibrium (NLTE) plasma model. In this approach, the stationary collisional-radiative rate equations are solved for a set of well-chosen Layzer complexes in order to determine the ion state populations. The electronic structure is approximated using the screened hydrogenic model (SHM) of More with relativistic corrections. The radiative and collisional cross-sections are based on the Kramers and Van Regemorter formulas, respectively, which are extrapolated to derive analytical expressions for all the rates. The latter are improved thereafter using Gaunt factors or more accurate tabulated data. Special care is taken with the dielectronic rates, which are compared with and rescaled to quantum calculations from the Averroès code. The emissivity and opacity spectra are calculated under the same assumptions as for the radiative rates, either in a detailed manner by summing the transitions between each pair of complexes, or in a coarser statistical way by summing the one-electron transitions averaged over the complexes. Optionally, nℓ-splitting can be accounted for using a WKB approach in an approximate potential reconstructed analytically from the screened charges. It is also possible to improve the spectra by replacing some transition arrays with more accurate data tabulated using the SCO-RCG or FAC codes. This latter option is particularly useful for K-shell emission spectroscopy. The Dédale code was used to submit neon and tungsten cases in the last NLTE-8 workshop (Santa Fe, November 4-8, 2013). Some of these results are presented, as well as comparisons with Averroès calculations.
Prediction of U-Mo dispersion nuclear fuels with Al-Si alloy using artificial neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Susmikanti, Mike, E-mail: mike@batan.go.id; Sulistyo, Jos, E-mail: soj@batan.go.id
2014-09-30
Dispersion nuclear fuels, consisting of U-Mo particles dispersed in an Al-Si matrix, are being developed as fuel for research reactors. The equilibrium relationship of a mixture can be expressed in a phase diagram, and it is important to determine whether a given mixture composition lies in the equilibrium phase or in another phase region. The purpose of this research is to build a model of the phase diagram indicating whether a mixture is in the stable or the melting condition. An artificial neural network (ANN) is a modeling tool for processes involving multivariable non-linear relationships. The objective of the present work is to develop a code, based on artificial neural network models, for the equilibrium relationship of U-Mo in the Al-Si matrix. This model can be used to predict the type of resulting mixture and whether a given point lies in the equilibrium phase or in another phase region. The equilibrium data used for modeling and prediction were generated from experiments. An artificial neural network with the resilient backpropagation method was chosen to predict the dispersion of the U-Mo nuclear fuel in the Al-Si matrix. The code was built with functions in MATLAB; for the simulations, the Levenberg-Marquardt method was also used for optimization. The resulting network is able to predict whether a composition is in the equilibrium phase or in another phase region.
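The idea of learning a phase boundary from labeled data can be shown with a minimal sketch. This is an assumption for illustration only — a one-neuron logistic classifier trained by stochastic gradient descent, not the paper's MATLAB multilayer network with resilient backpropagation — and the boundary T = 0.3 + 0.4*x and all constants are invented:

```python
import math
import random

random.seed(0)

# Synthetic "phase diagram" data in normalized units: label 1 above an
# invented boundary T = 0.3 + 0.4*x, label 0 below it.
data = []
for _ in range(200):
    x, temp = random.random(), random.random()
    data.append((x, temp, 1 if temp > 0.3 + 0.4 * x else 0))

# One-neuron logistic classifier trained by stochastic gradient descent.
w1 = w2 = b = 0.0
lr = 0.5
for _ in range(2000):
    for x, temp, label in data:
        z = w1 * x + w2 * temp + b
        z = max(-30.0, min(30.0, z))          # clamp to avoid exp overflow
        p = 1.0 / (1.0 + math.exp(-z))
        grad = p - label                       # gradient of log-loss w.r.t. z
        w1 -= lr * grad * x
        w2 -= lr * grad * temp
        b -= lr * grad

correct = sum(
    ((w1 * x + w2 * t + b > 0) == (label == 1)) for x, t, label in data
)
accuracy = correct / len(data)
```

A real phase-diagram model would train on experimental composition-temperature points and use a deeper network, but the training loop has the same shape.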
A novel data processing technique for image reconstruction of penumbral imaging
NASA Astrophysics Data System (ADS)
Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin
2011-06-01
CT image reconstruction techniques were applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson, and blind deconvolution, this approach is new: for the first time, the coded-aperture processing is independent of the point spread function of the imaging diagnostic system. In this way, the technical obstacle in traditional coded-pinhole image processing caused by the uncertainty of that point spread function is overcome. Based on this theoretical study, simulations of penumbral imaging and image reconstruction were carried out and gave fairly good results. In a visible-light experiment, a point source was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and penumbral images were recorded with an aperture size of ~20 mm. Finally, the CT image reconstruction technique applied to these data also gave a fairly good reconstruction result.
Simplified Thermo-Chemical Modelling For Hypersonic Flow
NASA Astrophysics Data System (ADS)
Sancho, Jorge; Alvarez, Paula; Gonzalez, Ezequiel; Rodriguez, Manuel
2011-05-01
Hypersonic flows involve high temperatures, generally associated with the strong shock waves that appear in such flows. At high temperatures, vibrational degrees of freedom of the molecules may become excited, the molecules may dissociate into atoms, the molecules or free atoms may ionize, and molecular or ionic species that are unimportant at lower temperatures may be formed. To take these effects into account, a chemical model is needed; the model should be simple enough to be handled by a CFD code, yet accurate enough to capture the most important physics. This work concerns the validation of a chemical non-equilibrium model implemented in a commercial CFD code, used to obtain the flow field around bodies in hypersonic flow. The selected non-equilibrium model comprises seven species and six direct reactions together with their inverses. The commercial CFD code in which the non-equilibrium model has been implemented is FLUENT. For the validation, the X38/Sphynx Mach 20 case is rebuilt on a reduced geometry including the 1/3 Lref forebody. This case was run in laminar regime, with a non-catalytic wall and radiative-equilibrium wall temperature. The validated non-equilibrium model is then applied to the EXPERT (European Experimental Re-entry Test-bed) vehicle at a specified trajectory point (Mach 14), again in laminar regime with a non-catalytic wall and radiative-equilibrium wall temperature.
Disrupting ecosystem components, while transferring and reconstructing them for experiments can produce myriad responses. Establishing the extent of these biological responses as the system approaches a new equilibrium allows us more reliably to emulate comparable native systems....
NASA Astrophysics Data System (ADS)
Reimer, R.; Marchuk, O.; Geiger, B.; Mc Carthy, P. J.; Dunne, M.; Hobirk, J.; Wolf, R.; ASDEX Upgrade Team
2017-08-01
The Motional Stark Effect (MSE) diagnostic is a well-established technique to infer the local internal magnetic field in fusion plasmas. In this paper, the existing forward model describing the MSE data is extended by the Zeeman effect, fine structure, and relativistic corrections in the interpretation of MSE spectra for different experimental conditions at the tokamak ASDEX Upgrade. The non-Local Thermodynamic Equilibrium (non-LTE) populations among the magnetic sub-levels and the Zeeman effect contribute differently to the derived plasma parameters: including them in the standard statistical MSE model changes the obtained pitch angle by 3°…4° and 0.5°…1°, respectively, for a total correction of about 4°. Moreover, the derived magnetic field strength changes significantly, by 2.2%, due to the Zeeman effect alone. While the derived pitch angle could not yet be tested against other diagnostics, results from an equilibrium reconstruction solver confirm the obtained values of the magnetic field strength.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyata, Y.; Suzuki, T.; Takechi, M.
2015-07-15
For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.
Nonambipolar Transport and Torque in Perturbed Equilibria
NASA Astrophysics Data System (ADS)
Logan, N. C.; Park, J.-K.; Wang, Z. R.; Berkery, J. W.; Kim, K.; Menard, J. E.
2013-10-01
A new Perturbed Equilibrium Nonambipolar Transport (PENT) code has been developed to calculate the neoclassical toroidal torque from radial current composed of both passing and trapped particles in perturbed equilibria. This presentation outlines the physics approach used in the development of the PENT code, with emphasis on the effects of retaining general aspect-ratio geometric effects. First, nonambipolar transport coefficients and corresponding neoclassical toroidal viscous (NTV) torque in perturbed equilibria are re-derived from the first order gyro-drift-kinetic equation in the "combined-NTV" PENT formalism. The equivalence of NTV torque and change in potential energy due to kinetic effects [J-K. Park, Phys. Plas., 2011] is then used to showcase computational challenges shared between PENT and the stability codes MISK and MARS-K. Extensive comparisons to a reduced model, which makes numerous large aspect ratio approximations, are used throughout to emphasize geometry-dependent physics such as pitch angle resonances. These applications make extensive use of the PENT code's native interfacing with the Ideal Perturbed Equilibrium Code (IPEC), and the combination of these codes is a key step towards an iterative solver for self-consistent perturbed equilibrium torque. Supported by US DOE contract #DE-AC02-09CH11466 and the DOE Office of Science Graduate Fellowship administered by the Oak Ridge Institute for Science & Education under contract #DE-AC05-06OR23100.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bitter, M; Gates, D; Neilson, H
A high-resolution X-ray imaging crystal spectrometer, whose instrumental concept was thoroughly tested on NSTX and Alcator C-Mod, is presently being designed for LHD. The instrument will record spatially resolved spectra of helium-like Ar16+ and provide ion temperature profiles with spatial and temporal resolutions of 1 cm and >10 ms, which are obtained by a tomographic inversion of the spectral data using the stellarator equilibrium reconstruction codes STELLOPT and PIES. Since the spectrometer will be equipped with radiation-hardened, high-count-rate PILATUS detectors, it is expected to be operational for all experimental conditions on LHD, which include plasmas of high density and plasmas with auxiliary RF and neutral beam heating. The special design features required by the magnetic field structure at LHD will be described.
Implementation of Soft X-ray Tomography on NSTX
NASA Astrophysics Data System (ADS)
Tritz, K.; Stutman, D.; Finkenthal, M.; Granetz, R.; Menard, J.; Park, W.
2003-10-01
A set of poloidal ultrasoft X-ray arrays is operated by the Johns Hopkins group on NSTX. To enable MHD mode analysis independent of the magnetic reconstruction, the McCormick-Granetz tomography code developed at MIT is being adapted to the NSTX geometry. Tests of the code using synthetic data show that the present X-ray system is adequate for m=1 tomography. In addition, we have found that spline basis functions may be better suited than Bessel functions for the reconstruction of radially localized phenomena in NSTX. The tomography code was also used to determine the necessary array expansion and optimal array placement for the characterization of higher m modes (m=2,3) in the future. Initial reconstruction of experimental soft X-ray data has been performed for m=1 internal modes, which are often encountered in high-beta NSTX discharges. The reconstruction of these modes will be compared to predictions from the M3D code and to magnetic measurements.
NASA Astrophysics Data System (ADS)
Wu, M. Q.; Pan, C. K.; Chan, V. S.; Li, G. Q.; Garofalo, A. M.; Jian, X.; Liu, L.; Ren, Q. L.; Chen, J. L.; Gao, X.; Gong, X. Z.; Ding, S. Y.; Qian, J. P.; Cfetr Physics Team
2018-04-01
Time-dependent integrated modeling of DIII-D ITER-like and high bootstrap current plasma ramp-up discharges has been performed with the equilibrium code EFIT and the transport codes TGYRO and ONETWO. Electron and ion temperature profiles are simulated by TGYRO with the TGLF (SAT0 or VX model) turbulent and NEO neoclassical transport models. The VX model is a new empirical extension of the TGLF turbulent model [Jian et al., Nucl. Fusion 58, 016011 (2018)], which captures the physics of multi-scale interaction between low-k and high-k turbulence from nonlinear gyro-kinetic simulation. This model is demonstrated to accurately model low Ip discharges from the EAST tokamak. Time evolution of the plasma current density profile is simulated by ONETWO with the experimental current ramp-up rate. The general trend of the predicted evolution of the current density profile is consistent with that obtained from the equilibrium reconstruction with Motional Stark effect constraints. The predicted evolution of βN, li, and βP also agrees well with the experiments. For the ITER-like cases, the predicted electron and ion temperature profiles using TGLF_Sat0 agree closely with the experimentally measured profiles, and are demonstrably better than those from other proposed transport models. For the high bootstrap current case, the electron and ion temperature profiles are better predicted by the VX model. It is found that the SAT0 model works well at high IP (>0.76 MA) while the VX model covers a wider range of plasma current (IP > 0.6 MA). The results reported in this paper suggest that the developed integrated modeling could be a candidate for ITER and CFETR ramp-up engineering design modeling.
Luce, T. C.; Petty, C. C.; Meyer, W. H.; ...
2016-11-02
An approximate method to correct the motional Stark effect (MSE) spectroscopy for the effects of intrinsic plasma electric fields has been developed. The motivation for using an approximate method is to incorporate electric field effects for between-pulse or real-time analysis of the current density or safety factor profile. The toroidal velocity term in the momentum balance equation is normally the dominant contribution to the electric field orthogonal to the flux surface over most of the plasma. When this approximation is valid, the correction to the MSE data can be included in a form like that used when electric field effects are neglected. This allows measurements of the toroidal velocity to be integrated into the interpretation of the MSE polarization angles without changing how the data is treated in existing codes. In some cases, such as the DIII-D system, the correction is especially simple, due to the details of the neutral beam and MSE viewing geometry. The correction method is compared using DIII-D data in a variety of plasma conditions to analysis that assumes no radial electric field is present and to analysis that uses the standard correction method, which involves significant human intervention for profile fitting. The comparison shows that the new correction method is close to the standard one, and in all cases appears to offer a better result than use of the uncorrected data. Lastly, the method has been integrated into the standard DIII-D equilibrium reconstruction code in use for analysis between plasma pulses and is sufficiently fast that it will be implemented in real-time equilibrium analysis for control applications.
Research on compressive sensing reconstruction algorithm based on total variation model
NASA Astrophysics Data System (ADS)
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing, by going beyond the Nyquist sampling theorem, provides a strong theoretical basis for carrying out compressive sampling of image signals. In imaging procedures that use compressed sensing theory, not only is the storage space reduced, but the demand on detector resolution is also greatly reduced. By exploiting the sparsity of the image signal and solving the mathematical model of the inverse reconstruction problem, super-resolution imaging is realized. The reconstruction algorithm is the most critical part of compressed sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and recovers edge information well. To verify the performance and stability of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes, and typical reconstruction algorithms are compared in the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian term is added and the optimum is found by the alternating direction method. Experimental results show that this reconstruction algorithm has great advantages over the traditional classical TV-based algorithms and can quickly and accurately recover the target image at low measurement rates.
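The TV-regularized recovery step can be sketched in one dimension. The sketch below is an illustrative assumption, not the paper's algorithm: instead of the augmented-Lagrangian/alternating-direction solver, it does plain gradient descent on a least-squares data term plus a smoothed total-variation penalty, recovering a piecewise-constant signal from fewer random measurements than unknowns:

```python
import random

random.seed(1)

n, m = 20, 12                      # 20 unknowns, 12 measurements
x_true = [1.0] * 8 + [0.0] * 12    # piecewise-constant 1-D "image"

# Random Gaussian sensing matrix and measurements y = A @ x_true.
A = [[random.gauss(0.0, 1.0) / m ** 0.5 for _ in range(n)] for _ in range(m)]
y = [sum(A[j][i] * x_true[i] for i in range(n)) for j in range(m)]

lam, eps, step = 0.05, 1e-3, 0.05  # TV weight, smoothing, gradient step size

def objective(x):
    data = sum((sum(A[j][i] * x[i] for i in range(n)) - y[j]) ** 2
               for j in range(m))
    tv = sum(((x[i + 1] - x[i]) ** 2 + eps) ** 0.5 for i in range(n - 1))
    return data + lam * tv

x = [0.0] * n
f0 = objective(x)
for _ in range(500):
    r = [sum(A[j][i] * x[i] for i in range(n)) - y[j] for j in range(m)]
    grad = [2.0 * sum(A[j][i] * r[j] for j in range(m)) for i in range(n)]
    for i in range(n - 1):                       # gradient of smoothed TV
        d = x[i + 1] - x[i]
        g = lam * d / (d * d + eps) ** 0.5
        grad[i] -= g
        grad[i + 1] += g
    x = [x[i] - step * grad[i] for i in range(n)]
f1 = objective(x)
```

The smoothing parameter `eps` makes the TV term differentiable; the alternating direction method in the paper avoids this smoothing and converges faster, but the objective being minimized has the same two-term structure.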
Conservative bin-to-bin fractional collisions
NASA Astrophysics Data System (ADS)
Martin, Robert
2016-11-01
Particle methods such as direct simulation Monte Carlo (DSMC) and particle-in-cell (PIC) are commonly used to model rarefied kinetic flows for engineering applications because of their ability to efficiently capture non-equilibrium behavior. The primary drawback of these methods is their poor convergence, due to their stochastic nature; they typically rely heavily on high degrees of non-equilibrium and on time averaging to compensate for poor signal-to-noise ratios. In standard implementations, each computational particle represents many physical particles, which further exacerbates statistical noise for flows with large species density variation, such as flow expansions and chemical reactions. The stochastic weighted particle method (SWPM) introduced by Rjasanow and Wagner overcomes this difficulty by allowing the ratio of real to computational particles to vary on a per-particle basis throughout the flow. The DSMC procedure must also be slightly modified to properly sample the Boltzmann collision integral, accounting for the variable particle weights and avoiding the creation of additional particles with negative weight. In this work, the SWPM, with the modifications necessary to incorporate the variable hard sphere (VHS) collision cross-section model commonly used in engineering applications, is first incorporated into an existing engineering code, the Thermophysics Universal Research Framework. The results and computational efficiency are compared on a few simple test cases against a standard validated implementation of the DSMC method, with the adapted SWPM/VHS collisions using an octree-based conservative phase-space reconstruction. The SWPM is then further extended to combine the collision and phase-space reconstruction into a single step, which avoids creating additional computational particles only to destroy them again during the particle merge.
This is particularly helpful when oversampling the collision integral relative to the standard DSMC method. It is found, however, that the more frequent phase-space reconstructions can cause added numerical thermalization at low particle-per-cell counts, due to the coarseness of the octree used. Nevertheless, the methods are expected to be of much greater utility for transient expansion flows and chemical reactions in the future.
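The conservative merge at the heart of such weighted-particle schemes can be illustrated in one velocity dimension. The following sketch is an assumption for illustration (the octree-based 3D reconstruction above is more involved): it replaces a cluster of weighted particles with two equal-weight particles that exactly preserve total weight, momentum, and kinetic energy:

```python
import math

def conservative_merge(particles):
    """Merge weighted 1-D particles [(weight, velocity), ...] into two
    particles that preserve total weight, momentum, and kinetic energy."""
    w_tot = sum(w for w, _ in particles)
    momentum = sum(w * v for w, v in particles)
    energy = sum(w * v * v for w, v in particles)
    v_mean = momentum / w_tot
    variance = max(energy / w_tot - v_mean * v_mean, 0.0)
    spread = math.sqrt(variance)
    # Two equal-weight particles at v_mean +/- spread reproduce the first
    # three velocity moments of the original cluster exactly.
    return [(0.5 * w_tot, v_mean + spread), (0.5 * w_tot, v_mean - spread)]

cloud = [(1.0, -1.2), (0.5, 0.3), (2.0, 0.9), (0.25, 2.0)]
merged = conservative_merge(cloud)
```

In three dimensions, preserving the full velocity covariance requires more output particles or additional structure, which is what the octree-based phase-space reconstruction provides.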
Improved image decompression for reduced transform coding artifacts
NASA Technical Reports Server (NTRS)
O'Rourke, Thomas P.; Stevenson, Robert L.
1994-01-01
The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
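The constrained estimation loop described above can be sketched with a simple smoothness prior standing in for the non-Gaussian MRF model. The sketch below is an assumption for illustration: it takes scalar-quantized 1-D "coefficients", starts from the cell centroids, and alternates a gradient step on a quadratic roughness objective with projection (clipping) back into each quantization cell:

```python
# Gradient projection onto quantization cells (1-D sketch).
delta = 0.5                                   # quantizer step size
x_true = [0.10, 0.24, 0.31, 0.58, 0.62, 0.91, 1.13, 1.24]
centroid = [round(v / delta) * delta for v in x_true]
lo = [c - delta / 2 for c in centroid]        # quantization cell bounds
hi = [c + delta / 2 for c in centroid]

def roughness(x):
    """Quadratic smoothness prior (stand-in for the non-Gaussian MRF model)."""
    return sum((x[i + 1] - x[i]) ** 2 for i in range(len(x) - 1))

x = centroid[:]                               # start at the cell centroids
j0 = roughness(x)
for _ in range(200):
    grad = [0.0] * len(x)
    for i in range(len(x) - 1):               # gradient of the prior
        d = x[i + 1] - x[i]
        grad[i] -= 2.0 * d
        grad[i + 1] += 2.0 * d
    x = [min(hi[i], max(lo[i], x[i] - 0.1 * grad[i]))  # step, then project
         for i in range(len(x))]
j1 = roughness(x)
```

The projection step guarantees the estimate stays consistent with the compressed bitstream (every coefficient remains in its quantization cell); only the prior being descended would change with the true MRF model.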
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, A. A., E-mail: aai@a5.kiam.ru; Martynov, A. A., E-mail: martynov@a5.kiam.ru; Medvedev, S. Yu., E-mail: medvedev@a5.kiam.ru
In MHD tokamak plasma theory, the plasma pressure is usually assumed to be isotropic. However, plasma heating by neutral beam injection and RF heating can lead to a strong anisotropy of plasma parameters and to rotation of the plasma. The development of MHD equilibrium theory taking into account the plasma inertia and anisotropic pressure began a long time ago, but until now it has not been consistently applied in computational codes for engineering calculations of the plasma equilibrium and evolution in tokamaks. This paper contains a detailed derivation of the axisymmetric plasma equilibrium equation in its most general form (with arbitrary rotation and anisotropic pressure) and a description of the specialized version of the SPIDER code. An original method for calculating the equilibrium with anisotropic pressure and a prescribed rotational-transform profile is proposed. Examples of calculations and a discussion of the results are also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirshman, S. P.; Shafer, M. W.; Seal, S. K.
The SIESTA magnetohydrodynamic (MHD) equilibrium code has been used to compute a sequence of ideally stable equilibria resulting from numerical variation of the helical resonant magnetic perturbation (RMP) applied to an axisymmetric DIII-D plasma equilibrium. Increasing the perturbation strength at the dominant m=2, n=-1 resonant surface leads to lower MHD energies and increases in the equilibrium island widths at the m=2 (and sidebands) surfaces, in agreement with theoretical expectations. Island overlap at large perturbation strengths leads to stochastic magnetic fields which correlate well with the experimentally inferred field structure. The magnitude and spatial phase (around the dominant rational surfaces) of the resonant (shielding) component of the parallel current are shown to change qualitatively with the magnetic island topology.
Hirshman, S. P.; Shafer, M. W.; Seal, S. K.; ...
2016-03-03
The SIESTA magnetohydrodynamic (MHD) equilibrium code has been used to compute a sequence of ideally stable equilibria resulting from numerical variation of the helical resonant magnetic perturbation (RMP) applied to an axisymmetric DIII-D plasma equilibrium. Increasing the perturbation strength at the dominant m=2, n=-1 resonant surface leads to lower MHD energies and increases in the equilibrium island widths at the m=2 (and sidebands) surfaces, in agreement with theoretical expectations. Island overlap at large perturbation strengths leads to stochastic magnetic fields which correlate well with the experimentally inferred field structure. The magnitude and spatial phase (around the dominant rational surfaces) of the resonant (shielding) component of the parallel current are shown to change qualitatively with the magnetic island topology.
Helliker, Brent R
2014-03-01
Using both oxygen isotope ratios of leaf water (δ(18)OL) and cellulose (δ(18)OC) of Tillandsia usneoides in situ, this paper examined how short- and long-term responses to environmental variation and model parameterization affected the reconstruction of atmospheric water vapour (δ(18)Oa). During sample-intensive field campaigns, predictions of δ(18)OL matched observations well using a non-steady-state model, but the model required data-rich parameterization. Predictions from the more easily parameterized maximum enrichment model (δ(18)OL-M) matched observed δ(18)OL and observed δ(18)Oa when leaf water turnover was less than 3.5 d. Using the δ(18)OL-M model and weekly samples of δ(18)OL across two growing seasons in Florida, USA, reconstructed δ(18)Oa was -12.6 ± 0.3‰. This is compared with δ(18)Oa of -12.4 ± 0.2‰ resolved from the growing-season-weighted δ(18)OC. Both of these values were similar to δ(18)Oa in equilibrium with precipitation, -12.9‰. δ(18)Oa was also reconstructed through a large-scale transect with δ(18)OL and the growing-season-integrated δ(18)OC across the southeastern United States. There was considerable large-scale variation, but there was regional, weather-induced coherence in δ(18)Oa when using δ(18)OL. The reconstruction of δ(18)Oa with δ(18)OC generally supported the assumption of δ(18)Oa being in equilibrium with precipitation δ(18)O (δ(18)Oppt), but the pool of δ(18)Oppt with which δ(18)Oa was in equilibrium - growing season versus annual δ(18)Oppt - changed with latitude. © 2013 John Wiley & Sons Ltd.
Reduced Equations for Calculating the Combustion Rates of Jet-A and Methane Fuel
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2003-01-01
Simplified kinetic schemes for Jet-A and methane fuels were developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) being developed at Glenn. The kinetic schemes presented here result in correlations that give the chemical kinetic time as a function of the initial overall cell fuel/air ratio, pressure, and temperature. The correlations are then used with the turbulent mixing times to determine the limiting properties and progress of the reaction. A similar correlation was also developed using data from NASA's Chemical Equilibrium with Applications (CEA) code to determine the equilibrium concentration of carbon monoxide as a function of fuel/air ratio, pressure, and temperature. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates and the values obtained from the equilibrium correlations were then used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide, and NOx were obtained for both Jet-A fuel and methane.
On scalable lossless video coding based on sub-pixel accurate MCTF
NASA Astrophysics Data System (ADS)
Yea, Sehoon; Pearlman, William A.
2006-01-01
We propose two approaches to scalable lossless coding of motion video. Both achieve an SNR-scalable bitstream, up to lossless reconstruction, based upon sub-pixel-accurate MCTF-based wavelet video coding. The first approach is based upon a two-stage encoding strategy in which a lossy reconstruction layer is augmented by a residual layer in order to obtain (nearly) lossless reconstruction. The key advantages of this approach include 'on-the-fly' determination of the bit-budget distribution between the lossy and residual layers, the freedom to use almost any progressive lossy video coding scheme as the first layer, and an added feature of near-lossless compression. The second approach capitalizes on the fact that the invertibility of MCTF can be maintained with arbitrary sub-pixel accuracy, even in the presence of an extra truncation step for lossless reconstruction, thanks to the lifting implementation. Experimental results show that the proposed schemes achieve compression ratios not obtainable by intra-frame coders such as Motion JPEG-2000, thanks to their inter-frame coding nature. They are also shown to outperform the state-of-the-art non-scalable inter-frame coder H.264 (JM) lossless mode, with the added benefit of bitstream embeddedness.
Rode, Ambadas B; Endoh, Tamaki; Sugimoto, Naoki
2016-11-07
Non-coding RNAs play important roles in cellular homeostasis and are involved in many human diseases including cancer. Intermolecular RNA-RNA interactions are the basis for the diverse functions of many non-coding RNAs. Herein, we show how the presence of tRNA influences the equilibrium between hairpin and G-quadruplex conformations in the 5' untranslated regions of oncogenes and model sequences. Kinetic and equilibrium analyses of the hairpin to G-quadruplex conformational transition of purified RNA as well as during co-transcriptional folding indicate that tRNA significantly shifts the equilibrium toward the hairpin conformer. The enhancement of relative translation efficiency in a reporter gene assay is shown to be due to the tRNA-mediated shift in the hairpin-G-quadruplex equilibrium of oncogenic mRNAs. Our findings suggest that tRNA is a possible therapeutic target in diseases in which RNA conformational equilibria are dysregulated. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation
NASA Astrophysics Data System (ADS)
Pinilla, Samuel; Poveda, Juan; Arguello, Henry
2018-03-01
Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. This modulation, applied before the diffraction operation, can be obtained with a phase coded aperture placed just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is modeled computationally as a matrix with complex entries, which requires changing the phase of the diffracted beams. Changing the phase implies finding a material that can deviate the direction of an X-ray beam, which can considerably increase implementation costs. Hence, this paper describes a low-cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture using the detour-phase method. The SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure, the Rhombic Dodecahedron, and several simulations were carried out to analyze how well the block-unblock approximations recover the phase from the simulated diffraction patterns. Reconstruction quality was measured in terms of the Peak Signal to Noise Ratio (PSNR). Results show that the performance of the block-unblock approximation decreases by at most 12.5% compared with the phase coded aperture, and that the quality of reconstructions using the Boolean approximation is at most 2.5 dB of PSNR below that of the phase coded aperture reconstructions.
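The PSNR metric used to compare the reconstructions can be computed in a few lines (a standard definition; the peak value and array layout here are assumptions, not details from the paper):

```python
import numpy as np

def psnr(reference, reconstruction, peak=1.0):
    """Peak Signal to Noise Ratio in dB for signals with the given peak value."""
    ref = np.asarray(reference, dtype=float)
    rec = np.asarray(reconstruction, dtype=float)
    mse = np.mean((ref - rec) ** 2)          # mean squared reconstruction error
    if mse == 0.0:
        return float("inf")                  # identical signals
    return 10.0 * np.log10(peak**2 / mse)
```

A uniform error of 0.1 against a unit peak gives 20 dB, so a "2.5 dB lower" result corresponds to roughly 1.3x larger RMS error.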
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.; Jones, Scott M.
1991-01-01
This analysis and this computer code apply to full, split, and dual expander cycles. Heat regeneration from the turbine exhaust to the pump exhaust is allowed. The combustion process is modeled as one of chemical equilibrium in an infinite-area or a finite-area combustor. Gas composition in the nozzle may be either equilibrium or frozen during expansion. This report, which serves as a users guide for the computer code, describes the system, the analysis methodology, and the program input and output. Sample calculations are included to show effects of key variables such as nozzle area ratio and oxidizer-to-fuel mass ratio.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rasouli, C.; Abbasi Davani, F., E-mail: fabbasidavani@gmail.com
A series of experiments and numerical calculations have been done on the Damavand tokamak for accurate determination of equilibrium parameters, such as the plasma boundary position and shape. For this work, the pickup coils of the Damavand tokamak were recalibrated, and a plasma boundary shape identification code was then developed for analyzing the experimental data, such as magnetic probe signals and coil currents. The plasma boundary position, shape and other parameters are determined by this identification code. A free-boundary equilibrium code was also written for comparison with the boundary shape identification results and for determination of the fields required to obtain elongated plasma in the Damavand tokamak.
From bed topography to ice thickness: GlaRe, a GIS tool to reconstruct the surface of palaeoglaciers
NASA Astrophysics Data System (ADS)
Pellitero, Ramon; Rea, Brice; Spagnolo, Matteo; Bakke, Jostein; Ivy-Ochs, Susan; Frew, Craig; Hughes, Philip; Ribolini, Adriano; Renssen, Hans; Lukas, Sven
2016-04-01
We present GlaRe, a GIS tool that automatically reconstructs the 3D geometry of palaeoglaciers given the bed topography. The tool takes a numerical approach and can work from a minimum of morphological evidence, i.e. the position of the palaeoglacier front. The numerical approach is based on an iterative solution to the perfect-plasticity assumption for ice rheology, explained in Benn and Hulton (2010). The tool runs in ArcGIS 10.1 (ArcInfo license) and later versions, and the toolset is written in Python. GlaRe implements a well-established approach for the determination of palaeoglacier equilibrium profiles. Significantly, it permits users to run multiple glacier reconstructions quickly, a task that was previously very laborious and time consuming (typically days for a single valley glacier). GlaRe will facilitate the reconstruction of large numbers of palaeoglaciers, providing opportunities to address at least two fundamental problems: 1. Investigation of the dynamics of palaeoglaciers. Glacier reconstructions are often based on a rigorous interpretation of glacial landforms, but sufficient attention and/or time has not always been given to the actual reconstruction of the glacier surface, which is crucial for the calculation of palaeoglacier ELAs and the subsequent derivation of quantitative palaeoclimatic data. 2. The ability to run large numbers of reconstructions over much larger spatial areas makes it possible to undertake palaeoglacier reconstructions across entire mountain ranges, regions or even continents, allowing climatic gradients and atmospheric circulation patterns to be elucidated. The tool's performance has been evaluated by comparing two extant glaciers, an icefield and a cirque/valley glacier whose subglacial topography is known, with basic reconstructions using GlaRe.
Comparisons between the extant and modelled glacier surfaces show very similar ELA values, with errors on the order of 10-20 m (equivalent to a 0.065-0.13 K variation for a typical -6.5 K km-1 altitudinal gradient), and these can be improved further by increasing the number of flowlines and using F factors where needed. GlaRe is able to quickly generate robust palaeoglacier surfaces from the very limited inputs often available in the geomorphological record.
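A minimal 1-D sketch of the perfect-plasticity profile stepping that GlaRe automates (after Benn and Hulton, 2010) might look like the following; the yield stress, node spacing and damped fixed-point iteration are illustrative choices, not GlaRe's actual defaults:

```python
# 1-D perfect-plasticity ice-surface stepping (after Benn and Hulton, 2010).
RHO, G = 900.0, 9.81        # ice density (kg m^-3), gravity (m s^-2)
TAU_Y = 100e3               # basal yield stress (Pa), a typical value

def glacier_surface(bed, dx, tau_y=TAU_Y):
    """Step the ice surface up-flowline from an ice-free front."""
    surf = [bed[0]]                               # zero thickness at the front
    for i in range(1, len(bed)):
        s = surf[-1]
        for _ in range(60):                       # damped fixed-point iteration
            h_mid = 0.5 * ((surf[-1] - bed[i - 1]) + (s - bed[i]))
            s = 0.5 * (s + surf[-1] + dx * tau_y / (RHO * G * max(h_mid, 1.0)))
        surf.append(max(s, bed[i]))
    return surf

bed = [0.0] * 40                                  # flat bed, 100 m node spacing
s = glacier_surface(bed, dx=100.0)                # ice surface elevations (m)
```

On a flat bed this reproduces the analytic perfect-plasticity profile H = sqrt(2 tau_y x / (rho g)); GlaRe extends the same stepping to real bed topography, multiple flowlines and shape (F) factors.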
NASA Astrophysics Data System (ADS)
Zhang, Yujia; Yilmaz, Alper
2016-06-01
Surface reconstruction using coded structured light is considered one of the most reliable techniques for high-quality 3D scanning. With a calibrated projector-camera stereo system, a light pattern is projected onto the scene and imaged by the camera. Correspondences between projected and recovered patterns are computed in the decoding process and then used to generate a 3D point cloud of the surface. However, indirect illumination effects on the surface, such as subsurface scattering and interreflections, raise difficulties in reconstruction. In this paper, we apply the maximum min-SW gray code to reduce the indirect illumination effects of specular surfaces. We also analyze the errors when comparing the maximum min-SW gray code and the conventional gray code, showing that the maximum min-SW gray code is significantly better at reducing indirect illumination effects. To achieve sub-pixel accuracy, we simultaneously project high-frequency sinusoidal patterns onto the scene. For specular surfaces, however, the high-frequency patterns are susceptible to decoding errors, and incorrect decoding of high-frequency patterns results in a loss of depth resolution. We resolve this problem by combining the low-frequency maximum min-SW gray code and the high-frequency phase-shifting code, which achieves dense 3D reconstruction of specular surfaces. Our contributions include: (i) a complete setup of the structured-light-based 3D scanning system; (ii) a novel combination of the maximum min-SW gray code and phase-shifting code, in which phase-shifting decoding provides sub-pixel accuracy and the maximum min-SW gray code resolves the phase ambiguity. According to the experimental results and data analysis, our structured-light-based 3D scanning system enables high-quality dense reconstruction of scenes with a small number of images. Qualitative and quantitative comparisons are performed to demonstrate the advantages of our new combined coding method.
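For reference, the conventional reflected binary Gray code that the maximum min-SW variant improves upon can be generated and decoded in a few lines (the min-SW construction itself, which additionally maximizes the minimum stripe width, is not reproduced here):

```python
def gray_encode(n):
    """Reflected binary Gray code: adjacent indices differ in exactly one bit."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code by cumulative XOR of the shifted bits."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

codes = [gray_encode(i) for i in range(8)]
# one-bit steps limit a decoding error at a stripe boundary to +/- 1 stripe
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
```

The single-bit transition property is what confines boundary decoding errors to one stripe, the same robustness argument that motivates Gray-coded stripe patterns in structured-light scanning.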
Analysis of kinematically redundant reaching movements using the equilibrium-point hypothesis.
Cesari, P; Shiratori, T; Olivato, P; Duarte, M
2001-03-01
Six subjects performed a planar reaching arm movement to a target while unpredictable perturbations were applied to the endpoint; the perturbations consisted of pulling springs of different stiffness. Two conditions were applied: in the first, subjects had to reach the target despite the perturbation; in the second, subjects were asked not to correct the motion when a perturbation was applied. We analyzed the kinematic profiles of the three arm segments and, by means of inverse dynamics, calculated the joint torques. The framework of the equilibrium-point (EP) hypothesis, the lambda model, allowed reconstruction of the control variables, the "equilibrium trajectories", in the "do not correct" condition for the wrist and elbow joints as well as for the endpoint final position, while for the other condition the reconstruction was less reliable. The findings support the paradigm of the EP hypothesis, along with the "do not correct" instruction, and extend it to multiple-joint planar movement.
LArSoft: toolkit for simulation, reconstruction and analysis of liquid argon TPC neutrino detectors
NASA Astrophysics Data System (ADS)
Snider, E. L.; Petrillo, G.
2017-10-01
LArSoft is a set of detector-independent software tools for the simulation, reconstruction and analysis of data from liquid argon (LAr) neutrino experiments. The common features of LAr time projection chambers (TPCs) enable sharing of algorithm code across detectors of very different size and configuration. LArSoft is currently used in production simulation and reconstruction by the ArgoNeuT, DUNE, LArIAT, MicroBooNE, and SBND experiments. The software suite offers a wide selection of algorithms and utilities, including those for associated photo-detectors and the handling of auxiliary detectors outside the TPCs. Available algorithms cover the full range of simulation and reconstruction, from raw waveforms to high-level reconstructed objects, event topologies and classification. The common code within LArSoft is contributed by adopting experiments, which also provide detector-specific geometry descriptions and code for the treatment of electronic signals. LArSoft is also a collaboration of experiments, Fermilab and associated software projects which cooperate in setting requirements, priorities, and schedules. In this talk, we outline the general architecture of the software and the interaction with external libraries and detector-specific code. We also describe the dynamics of LArSoft software development between the contributing experiments, the projects supporting the software infrastructure LArSoft relies on, and the core LArSoft support project.
Computer model of one-dimensional equilibrium controlled sorption processes
Grove, D.B.; Stollenwerk, K.G.
1984-01-01
A numerical solution to the one-dimensional solute-transport equation with equilibrium-controlled sorption and a first-order irreversible-rate reaction is presented. The computer code is written in FORTRAN, with a variety of options for input and output for user ease. Sorption reactions include Langmuir, Freundlich, and ion-exchange, with or without equal valence. General equations describing transport and reaction processes are solved by finite-difference methods, with nonlinearities accounted for by iteration. Complete documentation of the code, with examples, is included. (USGS)
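A minimal explicit finite-difference sketch of the governing idea, linear equilibrium sorption entering the transport equation as a retardation factor R, is shown below (illustrative scheme and parameters only; the documented code also handles Langmuir, Freundlich and ion-exchange isotherms and treats nonlinearities by iteration):

```python
import numpy as np

def step(c, v, D, R, lam, dx, dt):
    """One explicit step of 1-D advection-dispersion with retardation R
    (linear equilibrium sorption) and first-order decay rate lam."""
    adv = -v * (c[1:-1] - c[:-2]) / dx                    # upwind, v > 0
    disp = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    c_new = c.copy()
    c_new[1:-1] += dt * ((adv + disp) / R - lam * c[1:-1])
    return c_new

nx, dx, dt = 200, 0.5, 0.05
c = np.zeros(nx)
v, D, R, lam = 1.0, 0.1, 2.0, 0.0       # R = 1 + rho_b * Kd / theta for linear Kd
for _ in range(400):
    c[0] = 1.0                           # constant-concentration inlet boundary
    c = step(c, v, D, R, lam, dx, dt)
# after t = 20, the front has migrated only ~ v*t/R = 10 length units
```

The key physical point survives even in this toy version: sorption slows the front by the factor R without changing its shape qualitatively.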
Spinel cataclasites in 15445 and 72435 - Petrology and criteria for equilibrium
NASA Technical Reports Server (NTRS)
Baker, M. B.; Herzberg, C. T.
1980-01-01
The problem of establishing the existence of equilibrium among the coexisting phases in the rock is addressed by presenting petrographic and mineral chemistry data on a new spinel cataclasite from 15445 (clast H) and data more extensive than those previously available on two clasts in 72435. Criteria useful in reconstructing the original petrology of these and other spinel cataclasites are analyzed by considering equilibrium among the different phases, that is, the mono- or polymict nature of these cataclasized samples. Finally, the role of impact processes in disturbing the equilibria is discussed.
GENESIS: new self-consistent models of exoplanetary spectra
NASA Astrophysics Data System (ADS)
Gandhi, Siddharth; Madhusudhan, Nikku
2017-12-01
We are entering the era of high-precision and high-resolution spectroscopy of exoplanets. Such observations herald the need for robust self-consistent spectral models of exoplanetary atmospheres to investigate intricate atmospheric processes and to make observable predictions. Spectral models of plane-parallel exoplanetary atmospheres exist, mostly adapted from other astrophysical applications, with different levels of sophistication and accuracy. There is a growing need for a new generation of models custom-built for exoplanets and incorporating state-of-the-art numerical methods and opacities. The present work is a step in this direction. Here we introduce GENESIS, a plane-parallel, self-consistent, line-by-line exoplanetary atmospheric modelling code that includes (a) formal solution of radiative transfer using the Feautrier method, (b) radiative-convective equilibrium with temperature correction based on the Rybicki linearization scheme, (c) latest absorption cross-sections, and (d) internal flux and external irradiation, under the assumptions of hydrostatic equilibrium, local thermodynamic equilibrium and thermochemical equilibrium. We demonstrate the code here with cloud-free models of giant exoplanetary atmospheres over a range of equilibrium temperatures, metallicities, C/O ratios and spanning non-irradiated and irradiated planets, with and without thermal inversions. We provide the community with theoretical emergent spectra and pressure-temperature profiles over this range, along with those for several known hot Jupiters. The code can generate self-consistent spectra at high resolution and has the potential to be integrated into general circulation and non-equilibrium chemistry models as it is optimized for efficiency and convergence. GENESIS paves the way for high-fidelity remote sensing of exoplanetary atmospheres at high resolution with current and upcoming observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Shawn
This code consists of Matlab routines which enable the user to perform non-manifold surface reconstruction, via triangulation, from high-dimensional point cloud data. The code is based on an algorithm originally developed in [Freedman (2007), An Incremental Algorithm for Reconstruction of Surfaces of Arbitrary Codimension, Computational Geometry: Theory and Applications, 36(2):106-116]. This algorithm has been modified to accommodate non-manifold surfaces according to the work described in [S. Martin and J.-P. Watson (2009), Non-Manifold Surface Reconstruction from High Dimensional Point Cloud Data, SAND #5272610]. The motivation for developing the code was a point cloud describing the molecular conformation space of cyclooctane (C8H16). Cyclooctane conformation space was represented using points in 72 dimensions (3 coordinates for each atom). The code was used to triangulate the point cloud and thereby study the geometry and topology of cyclooctane. Future applications are envisioned for peptides and proteins.
A generalized chemistry version of SPARK
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.
1988-01-01
An extension of the reacting H2-air computer code SPARK is presented which enables the code to be used on any reacting flow problem. Routines are developed that calculate, in a general fashion, the reaction rates and chemical Jacobians of any reacting system. In addition, an equilibrium routine is added so that the code has frozen, finite-rate, and equilibrium capabilities. The reaction rate for each species is determined from the law of mass action, using Arrhenius expressions for the rate constants. The Jacobian routines are obtained by numerically or analytically differentiating the law of mass action for each species. The equilibrium routine is based on a Gibbs free energy minimization routine. The routines are written in FORTRAN 77, with special consideration given to vectorization. Run times for the generalized routines are generally 20 percent slower than reaction-specific routines. The numerical efficiency of the generalized analytical Jacobian, however, is nearly 300 percent better than that of the reaction-specific numerical Jacobian used in SPARK.
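The core evaluation the generalized routines perform, Arrhenius rate constants fed into the law of mass action, can be sketched as follows (the species, stoichiometry and coefficients here are invented for illustration, not SPARK's tables):

```python
import math

R_UNIV = 8.314  # universal gas constant, J mol^-1 K^-1

def arrhenius(A, b, Ea, T):
    """Rate constant k = A * T**b * exp(-Ea / (R*T))."""
    return A * T**b * math.exp(-Ea / (R_UNIV * T))

def net_rate(kf, kb, conc, nu_react, nu_prod):
    """Law of mass action: q = kf*prod(C_i**nu'_i) - kb*prod(C_i**nu''_i)."""
    fwd, bwd = kf, kb
    for c, nr, np_ in zip(conc, nu_react, nu_prod):
        fwd *= c ** nr
        bwd *= c ** np_
    return fwd - bwd

# toy reaction A + B <-> 2C at 1500 K (coefficients purely illustrative)
kf = arrhenius(1.0e8, 0.0, 5.0e4, 1500.0)
q = net_rate(kf, kb=0.0, conc=[2.0, 1.0, 0.5],
             nu_react=[1, 1, 0], nu_prod=[0, 0, 2])
```

Writing the rate this generically, looping over stoichiometric coefficients rather than hard-coding a mechanism, is exactly the trade the abstract quantifies: roughly 20 percent slower than reaction-specific code, in exchange for handling any reacting system.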
Modified Mean-Pyramid Coding Scheme
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Romer, Richard
1996-01-01
The modified mean-pyramid coding scheme requires transmission of slightly fewer data, reducing the data-expansion factor from 1/3 to 1/12. In schemes for progressive transmission, image data are transmitted in a sequence of frames in such a way that a coarse version of the image is reconstructed after receipt of the first frame and increasingly refined versions are reconstructed after receipt of each subsequent frame.
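A mean pyramid itself is straightforward to build; this sketch shows the coarse-to-fine level structure that progressive transmission exploits (the modified scheme's reduction of side information is not reproduced here):

```python
import numpy as np

def mean_pyramid(img):
    """Levels of 2x2 block means, coarsest first (square, power-of-two side)."""
    levels = [np.asarray(img, dtype=float)]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        levels.append(0.25 * (a[0::2, 0::2] + a[0::2, 1::2]
                              + a[1::2, 0::2] + a[1::2, 1::2]))
    return levels[::-1]

img = np.arange(16.0).reshape(4, 4)
pyr = mean_pyramid(img)   # 1x1, 2x2, 4x4: coarse frame first, refinements after
```

The first transmitted frame corresponds to the 1x1 (or other coarse) level; each later frame supplies what is needed to reconstruct the next finer level, which is where the two schemes differ in how much extra data must be sent.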
NASA Astrophysics Data System (ADS)
Xie, W.; Li, N.; Wu, J.-D.; Hao, X.-L.
2014-04-01
Disaster damages have negative effects on the economy, whereas reconstruction investment has positive effects. The aim of this study is to model the economic consequences of disasters and of recovery, including the positive effects of reconstruction activities. A computable general equilibrium (CGE) model is a promising approach because it can incorporate these two kinds of shocks into a unified framework while avoiding the double-counting problem. To factor both shocks into the CGE model, direct loss is set as the amount of capital stock reduced on the supply side of the economy; a portion of investment restores the capital stock in each period; an investment-driven dynamic model is formulated according to available reconstruction data; and the rest of the country's saving is set as an endogenous variable to balance the fixed investment. The 2008 Wenchuan Earthquake is selected as a case study to illustrate the model, and three scenarios are constructed: S0 (no disaster occurs), S1 (disaster occurs, with reconstruction investment) and S2 (disaster occurs, without reconstruction investment). S0 is taken as business as usual, and the differences between S1 and S0 and between S2 and S0 can be interpreted as economic losses including and excluding reconstruction, respectively. Output from S1 is found to be closer to the real data than that from S2, and economic loss under S2 is roughly 1.5 times that under S1. The gap in the economic aggregate between S1 and S0 is reduced to 3% by the end of the government-led reconstruction activity, a level that would take another four years to reach under S2.
FastChem: An ultra-fast equilibrium chemistry
NASA Astrophysics Data System (ADS)
Kitzmann, Daniel; Stock, Joachim
2018-04-01
FastChem is an equilibrium chemistry code that calculates the chemical composition of the gas phase for given temperatures and pressures. Written in C++, it is based on a semi-analytic approach, and is optimized for extremely fast and accurate calculations.
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.
A two-dimensional, TVD numerical scheme for inviscid, high Mach number flows in chemical equilibrium
NASA Technical Reports Server (NTRS)
Eberhardt, S.; Palmer, G.
1986-01-01
A new algorithm has been developed for hypervelocity flows in chemical equilibrium. Solutions have been achieved for Mach numbers up to 15 with no adverse effect on convergence. Two methods of coupling an equilibrium chemistry package have been tested, with the simpler method proving to be more robust. Improvements in boundary conditions are still required for a production-quality code.
RMP Enhanced Transport and Rotation Screening in DIII-D Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Izzo, V; Joseph, I; Moyer, R
The application of resonant magnetic perturbations (RMP) to DIII-D plasmas at low collisionality has achieved ELM suppression, primarily due to a pedestal density reduction. The mechanism of the enhanced particle transport is investigated in 3D MHD simulations with the NIMROD code. The simulations apply realistic vacuum fields from the DIII-D I-coils, C-coils and measured intrinsic error fields to an EFIT-reconstructed DIII-D equilibrium, and allow the plasma to respond to the applied fields while the fields are held fixed at the boundary, which lies in the vacuum region. A non-rotating plasma amplifies the resonant components of the applied fields by factors of 2-5. The poloidal velocity forms E x B convection cells crossing the separatrix, which push particles into the vacuum region and reduce the pedestal density. Low toroidal rotation at the separatrix reduces the resonant field amplitudes, but does not strongly affect the particle pumpout. At higher separatrix rotation, the poloidal E x B velocity is reduced by half, while the enhanced particle transport is entirely eliminated. A high-collisionality DIII-D equilibrium with an experimentally measured rotation profile serves as the starting point for a simulation with odd-parity I-coil fields that can ultimately be compared with experimental results. All of the NIMROD results are compared with analytic error field theory.
A new nuclide transport model in soil in the GENII-LIN health physics code
NASA Astrophysics Data System (ADS)
Teodori, F.
2017-11-01
The nuclide soil-transfer model originally included in the GENII-LIN software system was intended for residual contamination from long-term activities and from waste-form degradation. Short-lived nuclides were assumed to be absent or in equilibrium with long-lived parents. Here we present an enhanced soil transport model in which short-lived nuclide contributions are correctly accounted for. This improvement extends the code's capability to handle incidental releases of contaminant to soil, by evaluating exposure from the very beginning of the contamination event, before radioactive decay chain equilibrium is reached.
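The effect being corrected can be seen from the two-member Bateman solution: shortly after a release, a short-lived daughter's activity is far below the secular-equilibrium value that a long-term model would assume. The nuclides and half-lives below are illustrative, not GENII-LIN data:

```python
import math

LN2 = math.log(2.0)

def daughter_activity(A1_0, t_half_parent, t_half_daughter, t):
    """Activity of a daughter grown in from an initially pure parent
    (two-member Bateman solution)."""
    l1 = LN2 / t_half_parent
    l2 = LN2 / t_half_daughter
    return A1_0 * l2 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))

A_parent = 1.0                              # Bq of a long-lived parent
T_PARENT, T_DAUGHTER = 30.0 * 365.0, 3.8    # half-lives in days (illustrative)

early = daughter_activity(A_parent, T_PARENT, T_DAUGHTER, t=1.0)    # day 1
late = daughter_activity(A_parent, T_PARENT, T_DAUGHTER, t=40.0)    # ~equilibrium
```

One day after the release the daughter carries only a fraction of the parent's activity, while after several daughter half-lives it has essentially reached the parent's activity; an equilibrium-only model overestimates the early dose contribution of the daughter and misrepresents the transient.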
Thermodynamic equilibrium-air correlations for flowfield applications
NASA Technical Reports Server (NTRS)
Zoby, E. V.; Moss, J. N.
1981-01-01
Equilibrium-air thermodynamic correlations have been developed for flowfield calculation procedures. A comparison between the postshock results computed by the correlation equations and detailed chemistry calculations is very good. The thermodynamic correlations are incorporated in an approximate inviscid flowfield code with a convective heating capability for the purpose of defining the thermodynamic environment through the shock layer. Comparisons of heating rates computed by the approximate code and a viscous-shock-layer method are good. In addition to presenting the thermodynamic correlations, the impact of several viscosity models on the convective heat transfer is demonstrated.
Reconstructing free-energy landscapes for nonequilibrium periodic potentials
NASA Astrophysics Data System (ADS)
López-Alamilla, N. J.; Jack, Michael W.; Challis, K. J.
2018-03-01
We present a method for reconstructing the free-energy landscape of overdamped Brownian motion on a tilted periodic potential. Our approach exploits the periodicity of the system by using the k-space form of the Smoluchowski equation, and we employ an iterative approach to determine the nonequilibrium tilt. We reconstruct landscapes for a number of example potentials to show the applicability of the method to both deep and shallow wells and near-to- and far-from-equilibrium regimes. The method converges logarithmically with the number of Fourier terms in the potential.
Numerical solution of Space Shuttle Orbiter flow field including real gas effects
NASA Technical Reports Server (NTRS)
Prabhu, D. K.; Tannehill, J. C.
1984-01-01
The hypersonic, laminar flow around the Space Shuttle Orbiter has been computed for both an ideal gas (gamma = 1.2) and equilibrium air using a real-gas, parabolized Navier-Stokes code. This code employs a generalized coordinate transformation; hence, it places no restrictions on the orientation of the solution surfaces. The initial solution in the nose region was computed using a 3-D, real-gas, time-dependent Navier-Stokes code. The thermodynamic and transport properties of equilibrium air were obtained from either approximate curve fits or a table look-up procedure. Numerical results are presented for flight conditions corresponding to the STS-3 trajectory. The computed surface pressures and convective heating rates are compared with data from the STS-3 flight.
NASA Astrophysics Data System (ADS)
Papior, Nick; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads
2017-03-01
We present novel methods implemented within the non-equilibrium Green function (NEGF) code TRANSIESTA, based on density functional theory (DFT). Our flexible, next-generation DFT-NEGF code handles devices with one or multiple electrodes (Ne ≥ 1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as newly implemented algorithms for optimized and scalable matrix inversion, performance-critical pivoting, and hybrid parallelization. Additionally, a generic NEGF "post-processing" code (TBTRANS/PHTRANS) for electron and phonon transport is presented with several novelties, such as Hamiltonian interpolations, Ne ≥ 1 electrode capability, bond currents, a generalized interface for user-defined tight-binding transport, transmission projection using eigenstates of a projected Hamiltonian, and fast inversion algorithms for large-scale simulations easily exceeding 10^6 atoms on workstation computers. The new features of both codes are demonstrated and benchmarked for relevant test systems.
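The central quantity such a NEGF transport post-processor computes, the transmission T(E) = Tr[Gamma_L G^r Gamma_R G^a], reduces to a single line for one level between wide-band leads; this is a textbook sketch of the formula, not TBTRANS code:

```python
def transmission(E, eps, gamma_L, gamma_R):
    """T(E) for a single level with wide-band leads:
    T = Gamma_L * Gamma_R * |G^r|^2, G^r = 1 / (E - eps + i*(Gamma_L+Gamma_R)/2)."""
    G_r = 1.0 / (E - eps + 0.5j * (gamma_L + gamma_R))   # retarded Green function
    return gamma_L * gamma_R * abs(G_r) ** 2             # trace is scalar for 1x1

# symmetric coupling gives perfect transmission at resonance
T_res = transmission(E=0.0, eps=0.0, gamma_L=0.5, gamma_R=0.5)
T_off = transmission(E=1.0, eps=0.0, gamma_L=0.5, gamma_R=0.5)
```

Real codes evaluate the same trace with matrix Green functions and energy-dependent electrode self-energies, which is where the scalable inversion and pivoting algorithms mentioned above come in.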
NASA Astrophysics Data System (ADS)
Marini, Andrea
Density functional theory and many-body perturbation theory methods (such as GW and the Bethe-Salpeter equation) are standard approaches to the equilibrium ground- and excited-state properties of condensed matter systems, surfaces, molecules and several other kinds of materials. At the same time, ultrafast optical spectroscopy is becoming a widely used and powerful tool for the observation of out-of-equilibrium dynamical processes. In this case the theoretical tools (such as the Baym-Kadanoff equations) are well known but have only recently been merged with the ab initio approach; for this reason, highly parallel and efficient codes are lacking. Nevertheless, the combination of these two areas of research represents, for the ab initio community, a challenging perspective, as it requires the development of advanced theoretical, methodological and numerical tools. Yambo is a popular community software implementing the above methods using plane waves and pseudo-potentials. Yambo is available to the community as open-source software and is oriented to high-performance computing. The Yambo project aims at making the simulation of these equilibrium and out-of-equilibrium complex processes available to a wide community of users; indeed the code is used in many countries, well beyond the European borders. Yambo is a member of the suite of codes of the MAX European Center of Excellence (Materials design at the exascale). It is also used by the user facilities of the European Spectroscopy Facility and of the NFFA European Center (nanoscience foundries & fine analysis). In this talk I will discuss some recent numerical and methodological developments implemented in Yambo towards the exploitation of next-generation HPC supercomputers; in particular, I will present the hybrid MPI+OpenMP parallelization and the specific case of the response-function calculation. I will also discuss the future plans of the Yambo project and its potential use as a tool for science dissemination, including in developing countries.
High-SNR spectrum measurement based on Hadamard encoding and sparse reconstruction
NASA Astrophysics Data System (ADS)
Wang, Zhaoxin; Yue, Jiang; Han, Jing; Li, Long; Jin, Yong; Gao, Yuan; Li, Baoming
2017-12-01
The denoising capabilities of the H-matrix and the cyclic S-matrix with sparse reconstruction, as employed in the Pixel of Focal Plane Coded Visible Spectrometer, are investigated for spectrum measurement, where the spectrum is sparse in a known basis. In the measurement process, the digital micromirror device plays an important role, implementing the Hadamard coding. In contrast with Hadamard transform spectrometry, which is based on shift invariance, this spectrometer may have the advantage of high efficiency. Simulations and experiments show that the nonlinear solution with sparse reconstruction has a better signal-to-noise ratio than the linear solution, and that the H-matrix outperforms the cyclic S-matrix whether the reconstruction method is nonlinear or linear.
Equilibrium Free Energies from Nonequilibrium Metadynamics
NASA Astrophysics Data System (ADS)
Bussi, Giovanni; Laio, Alessandro; Parrinello, Michele
2006-03-01
In this Letter we propose a new formalism to map history-dependent metadynamics onto a Markovian process. We apply this formalism to model Langevin dynamics and determine the equilibrium distribution of a collection of simulations. We demonstrate that the reconstructed free energy is an unbiased estimate of the underlying free energy and analytically derive an expression for the error. The present results can be applied to other history-dependent stochastic processes, such as Wang-Landau sampling.
NASA Technical Reports Server (NTRS)
Bade, W. L.; Yos, J. M.
1975-01-01
A computer program for calculating quasi-one-dimensional gas flow in axisymmetric and two-dimensional nozzles and rectangular channels is presented. Flow is assumed to start from a state of thermochemical equilibrium at a high temperature in an upstream reservoir. The program provides solutions based on frozen chemistry, chemical equilibrium, and nonequilibrium flow with finite reaction rates. Electronic nonequilibrium effects can be included using a two-temperature model. An approximate laminar boundary layer calculation is given for the shear and heat flux on the nozzle wall. Boundary layer displacement effects on the inviscid flow are considered also. Chemical equilibrium and transport property calculations are provided by subroutines. The code contains precoded thermochemical, chemical kinetic, and transport cross section data for high-temperature air, CO2-N2-Ar mixtures, helium, and argon. It provides calculations of the stagnation conditions on axisymmetric or two-dimensional models, and of the conditions on the flat surface of a blunt wedge. The primary purpose of the code is to describe the flow conditions and test conditions in electric arc heated wind tunnels.
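The frozen-chemistry limit of such a quasi-one-dimensional solver reduces to the classical isentropic area-Mach relation. A small sketch, assuming a calorically perfect gas with gamma = 1.4 rather than the code's chemistry models:

```python
import numpy as np

gamma = 1.4

def area_ratio(M):
    # A/A* as a function of Mach number for a calorically perfect gas
    t = (2.0 / (gamma + 1)) * (1 + 0.5 * (gamma - 1) * M * M)
    return t ** ((gamma + 1) / (2 * (gamma - 1))) / M

def mach_supersonic(ar, lo=1.0, hi=50.0):
    # bisection on the supersonic branch, where A/A* increases with M
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid) < ar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

M_exit = mach_supersonic(10.0)             # supersonic Mach number at A/A* = 10
```

The full code replaces this closed-form gas model with equilibrium or finite-rate chemistry, but the marching structure through the nozzle is the same.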
Confinement properties of tokamak plasmas with extended regions of low magnetic shear
NASA Astrophysics Data System (ADS)
Graves, J. P.; Cooper, W. A.; Kleiner, A.; Raghunathan, M.; Neto, E.; Nicolas, T.; Lanthaler, S.; Patten, H.; Pfefferle, D.; Brunetti, D.; Lutjens, H.
2017-10-01
Extended regions of low magnetic shear can be advantageous to tokamak plasmas, but the core and edge can be susceptible to non-resonant ideal fluctuations due to the weakened restoring force associated with magnetic field line bending. This contribution shows how saturated non-linear phenomenology, such as 1/1 Long Lived Modes and the Edge Harmonic Oscillations associated with QH-modes, can be modelled accurately using the non-linear stability code XTOR, the free-boundary 3D equilibrium code VMEC, and non-linear analytic theory. That the equilibrium approach is valid is particularly valuable because it enables advanced particle confinement studies to be undertaken in the ordinarily difficult environment of strongly 3D magnetic fields. The VENUS-LEVIS code exploits the Fourier description of the VMEC equilibrium fields, such that full Lorentzian and guiding-centre approximated differential operators in curvilinear angular coordinates can be evaluated analytically. Consequently, the confinement properties of minority ions such as energetic particles and high-Z impurities can be calculated accurately over slowing-down timescales in experimentally relevant 3D plasmas.
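The analytic evaluation mentioned for VENUS-LEVIS rests on the Fourier form of the VMEC fields. A generic sketch with a made-up stellarator-symmetric mode table, not real VMEC output:

```python
import numpy as np

# hypothetical Fourier table {(m, n): B_mn} and field-period count; illustrative only
modes = {(0, 0): 1.0, (1, 0): 0.05, (1, 1): 0.02}
nfp = 5

def B_and_derivs(theta, zeta):
    """|B| and its analytic angular derivatives from the series
    |B|(theta, zeta) = sum_mn B_mn cos(m*theta - n*nfp*zeta)."""
    B = dB_dtheta = dB_dzeta = 0.0
    for (m, n), bmn in modes.items():
        arg = m * theta - n * nfp * zeta
        B += bmn * np.cos(arg)
        dB_dtheta += -m * bmn * np.sin(arg)        # d/dtheta
        dB_dzeta += n * nfp * bmn * np.sin(arg)    # d/dzeta
    return B, dB_dtheta, dB_dzeta

B0, dBt, dBz = B_and_derivs(0.3, 0.1)
```

Exact derivatives of this kind are what make long guiding-centre orbit integrations in 3D fields tractable without finite-difference noise.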
Charm: Cosmic history agnostic reconstruction method
NASA Astrophysics Data System (ADS)
Porqueres, Natalia; Ensslin, Torsten A.
2017-03-01
Charm (cosmic history agnostic reconstruction method) reconstructs the cosmic expansion history in the framework of Information Field Theory. The reconstruction is performed via the iterative Wiener filter from an agnostic or from an informative prior. The charm code allows one to test the compatibility of several different data sets with the LambdaCDM model in a non-parametric way.
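For a linear Gaussian model, the Wiener filter underlying such reconstructions reduces to one line of linear algebra, m = (S^-1 + R^T N^-1 R)^-1 R^T N^-1 d. The toy response, prior and noise covariances below are assumptions, not charm's actual data model.

```python
import numpy as np

rng = np.random.default_rng(5)
n_s, n_d = 30, 20                          # signal and data dimensions (made up)
R = rng.standard_normal((n_d, n_s))        # linear response of the data to the signal
S = np.eye(n_s)                            # signal prior covariance
N = 0.1 * np.eye(n_d)                      # noise covariance

s = rng.multivariate_normal(np.zeros(n_s), S)          # a random "true" signal
d = R @ s + rng.multivariate_normal(np.zeros(n_d), N)  # simulated data

Ninv = np.linalg.inv(N)
D = np.linalg.inv(np.linalg.inv(S) + R.T @ Ninv @ R)   # posterior covariance
m = D @ R.T @ Ninv @ d                                 # Wiener-filter mean
```

charm applies this filter iteratively within Information Field Theory rather than in this closed form, but the posterior-mean structure is the same.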
Thomson scattering diagnostic on the Compact Toroidal Hybrid Experiment
NASA Astrophysics Data System (ADS)
Traverso, Peter; Maurer, D. A.; Ennis, D. A.; Hartwell, G. J.
2016-10-01
A Thomson scattering system is being commissioned for the non-axisymmetric plasmas of the Compact Toroidal Hybrid (CTH), a five-field period current-carrying torsatron. The system takes a single point measurement at the magnetic axis to both calibrate the two-color soft x-ray Te system and serve as an additional diagnostic for the V3FIT 3D equilibrium reconstruction code. A single point measurement will reduce the uncertainty in the reconstructed peak pressure by an order of magnitude for both current-carrying plasmas and future gyrotron-heated stellarator plasmas. The beam, generated by a frequency-doubled 2 J Continuum Nd:YAG laser, is passed vertically through an entrance Brewster window and a two-aperture optical baffle system to minimize stray light. The beam line propagates 8 m to the CTH device mid-plane with the beam diameter < 3 mm inside the plasma volume. Thomson scattered light is collected by two adjacent f/2 plano-convex condenser lenses and focused onto a custom fiber bundle. The fiber is then re-bundled and routed to a Holospec f/1.8 spectrograph to collect the red-shifted scattered light from 535-565 nm. The system has been designed to measure plasmas with core Te of 100 to 200 eV and densities of 5 ×1018 to 5 ×1019 m-3. Work supported by USDOE Grant DE-FG02-00ER54610.
Thomson scattering diagnostic on the Compact Toroidal Hybrid Experiment
NASA Astrophysics Data System (ADS)
Traverso, P. J.; Ennis, D. A.; Hartwell, G. J.; Kring, J. D.; Maurer, D. A.
2017-10-01
A Thomson scattering system is being commissioned for the non-axisymmetric plasmas of the Compact Toroidal Hybrid (CTH), a five-field period current-carrying torsatron. The system takes a single point measurement at the magnetic axis to both calibrate the two-color soft x-ray Te system and serve as an additional diagnostic for the V3FIT 3D equilibrium reconstruction code. A single point measurement will reduce the uncertainty in the reconstructed peak pressure by an order of magnitude for both current-carrying plasmas and future gyrotron-heated stellarator plasmas. The beam, generated by a frequency doubled Continuum 2 J, Nd:YAG laser, is passed vertically through an entrance Brewster window and a two-aperture optical baffle system to minimize stray light. Thomson scattered light is collected by two adjacent f/2 plano-convex condenser lenses and routed via a fiber bundle through a Holospec f/1.8 spectrograph. The red-shifted scattered light from 533-563 nm will be collected by an array of Hamamatsu H11706-40 PMTs. The system has been designed to measure plasmas with core Te of 100 to 200 eV and densities of 5 ×1018 to 5 ×1019 m-3. Stray light and calibration data for a single wavelength channel will be presented. This work is supported by U.S. Department of Energy Grant No. DE-FG02-00ER54610.
NASA Astrophysics Data System (ADS)
Hu, Qiang
2017-09-01
We develop an approach of the Grad-Shafranov (GS) reconstruction for toroidal structures in space plasmas, based on in situ spacecraft measurements. The underlying theory is the GS equation that describes two-dimensional magnetohydrostatic equilibrium, as widely applied in fusion plasmas. The geometry is such that the arbitrary cross-section of the torus has rotational symmetry about the rotation axis, Z, with a major radius, r0. The magnetic field configuration is thus determined by a scalar flux function, Ψ, and a functional F that is a single-variable function of Ψ. The algorithm is implemented through a two-step approach: i) a trial-and-error process by minimizing the residue of the functional F(Ψ) to determine an optimal Z-axis orientation, and ii) for the chosen Z, a χ2 minimization process resulting in a range of r0. Benchmark studies of known analytic solutions to the toroidal GS equation with noise additions are presented to illustrate the two-step procedure and to demonstrate the performance of the numerical GS solver, separately. For the cases presented, the errors in Z and r0 are 9° and 22%, respectively, and the relative percent error in the numerical GS solutions is smaller than 10%. We also make public the computer codes for these implementations and benchmark studies.
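Step (i) can be illustrated with a synthetic one-parameter scan: the residue measures how badly F fails to be single-valued in Ψ under a trial orientation, and the optimum is where the folded inbound and outbound branches agree. The profiles below are made-up stand-ins for spacecraft data, not the paper's field model.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 401)             # parameter along the virtual crossing
alpha_true = 0.7                           # hypothetical "correct" axis orientation

def path_data(alpha):
    # Psi as seen under a trial orientation alpha; F is set by the true geometry,
    # so the two branches disagree in (Psi, F) unless alpha is right.
    psi_true = np.sin(np.pi * t)
    psi = psi_true + 0.5 * (alpha - alpha_true) * t
    F = psi_true**2                        # true single-valued functional F(Psi)
    return psi, F

def residue(alpha):
    psi, F = path_data(alpha)
    half = len(t) // 2
    p_in, F_in = psi[:half], F[:half]                  # inbound branch
    p_out, F_out = psi[half:][::-1], F[half:][::-1]    # outbound, Psi ascending
    return np.sqrt(np.mean((F_in - np.interp(p_in, p_out, F_out)) ** 2))

alphas = np.linspace(0.0, 1.4, 141)
best = alphas[int(np.argmin([residue(a) for a in alphas]))]
```

In the actual method the trial parameter is the Z-axis orientation and step (ii) follows with a chi-squared scan over the major radius r0; the folding logic is the same.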
Calculating Shocks In Flows At Chemical Equilibrium
NASA Technical Reports Server (NTRS)
Eberhardt, Scott; Palmer, Grant
1988-01-01
Boundary conditions prove critical. This conference paper describes an algorithm for the calculation of shocks in hypersonic flows of gases at chemical equilibrium. Although the algorithm represents an intermediate stage in the development of a reliable, accurate computer code for two-dimensional flow, the research leading up to it contributes to an understanding of what is needed to complete the task.
Beyond filtered backprojection: A reconstruction software package for ion beam microtomography data
NASA Astrophysics Data System (ADS)
Habchi, C.; Gordillo, N.; Bourret, S.; Barberet, Ph.; Jovet, C.; Moretto, Ph.; Seznec, H.
2013-01-01
A new version of the TomoRebuild data reduction software package is presented for the reconstruction of scanning transmission ion microscopy tomography (STIMT) and particle-induced X-ray emission tomography (PIXET) images. First, we present a review of the state of the art of reconstruction codes available for ion beam microtomography. The algorithm proposed here brings several advantages. It is a portable, multi-platform code, designed in C++ with well-separated classes for easier use and evolution. Data reduction is separated into distinct steps, and the intermediate results may be checked if necessary. Although no additional graphic library or numerical tool is required to run the program from the command line, a user-friendly interface was designed in Java as an ImageJ plugin. All experimental and reconstruction parameters may be entered either through this plugin or directly in text-format files. A simple standard format is proposed for the input of experimental data. Optional graphic applications using the ROOT interface may be used separately to display and fit energy spectra. Regarding the reconstruction process, the filtered backprojection (FBP) algorithm, already present in the previous version of the code, was optimized so that it is about 10 times as fast. In addition, the Maximum Likelihood Expectation Maximization (MLEM) algorithm and its accelerated version, Ordered Subsets Expectation Maximization (OSEM), were implemented. A detailed user guide in English is available. A reconstruction example of experimental data from a biological sample is given. It shows the capability of the code to reduce noise in the sinograms and to deal with incomplete data, which puts a new perspective on tomography using a low number of projections or a limited angular range.
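The MLEM update implemented in such packages is compact enough to state in full: x_{k+1} = x_k * A^T(y / (A x_k)) / (A^T 1). A sketch with a small random stand-in system matrix, not TomoRebuild's actual projector:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((80, 25)) + 0.01            # system matrix (rays x pixels), positive
x_true = rng.random(25) * 10
y = A @ x_true                             # noise-free sinogram for the sketch

def mlem(A, y, iters=2000):
    # multiplicative EM update; preserves positivity of the image at every step
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])       # sensitivity image A^T 1
    for _ in range(iters):
        x *= (A.T @ (y / (A @ x))) / sens
    return x

x_rec = mlem(A, y)
rel_res = np.linalg.norm(A @ x_rec - y) / np.linalg.norm(y)
```

OSEM accelerates this by cycling the same update over subsets of the rays; the per-subset arithmetic is identical.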
Joint reconstruction of multiview compressed images.
Thirumalai, Vijayaraghavan; Frossard, Pascal
2013-05-01
Distributed representation of correlated multiview images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem, where the distributively compressed images are decoded together in order to benefit from the inter-view image correlation. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG) with a balanced rate distribution among different cameras. A central decoder first estimates the inter-view image correlation from the independently compressed data. The joint reconstruction is then cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images, which comply with the estimated correlation model. At the same time, we add constraints that force the reconstructed images to be as close as possible to their compressed versions. We show through experiments that the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality for a given target bit rate. In addition, the decoding performance of our algorithm compares advantageously to state-of-the-art distributed coding schemes based on motion learning and on the DISCOVER algorithm.
NEBULAR: Spectrum synthesis for mixed hydrogen-helium gas in ionization equilibrium
NASA Astrophysics Data System (ADS)
Schirmer, Mischa
2016-08-01
NEBULAR synthesizes the spectrum of a mixed hydrogen helium gas in collisional ionization equilibrium. It is not a spectral fitting code, but it can be used to resample a model spectrum onto the wavelength grid of a real observation. It supports a wide range of temperatures and densities. NEBULAR includes free-free, free-bound, two-photon and line emission from HI, HeI and HeII. The code will either return the composite model spectrum, or, if desired, the unrescaled atomic emission coefficients. It is written in C++ and depends on the GNU Scientific Library (GSL).
A numerical code for a three-dimensional magnetospheric MHD equilibrium model
NASA Technical Reports Server (NTRS)
Voigt, G.-H.
1992-01-01
Two-dimensional and three-dimensional MHD equilibrium models were begun for Earth's magnetosphere. The original proposal was motivated by the realization that global, purely data-based models of Earth's magnetosphere are inadequate for studying the underlying plasma-physical principles according to which the magnetosphere evolves on the quasi-static convection time scale. Complex numerical grid generation schemes were established for a 3-D Poisson solver, and a robust Grad-Shafranov solver was coded for high-beta MHD equilibria. The effects of both the magnetopause geometry and the boundary conditions on the magnetotail current distribution were then calculated.
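The elliptic solves at the heart of such equilibrium models can be sketched with a textbook Gauss-Seidel Poisson solver on a square grid, verified against a known analytic solution. This is illustrative scaffolding only, not the magnetospheric model itself.

```python
import numpy as np

n = 17                                     # grid points per side (small for speed)
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = -2.0 * np.pi**2 * u_exact             # chosen so that laplacian(u_exact) = f

u = np.zeros((n, n))                       # homogeneous Dirichlet boundary
for _ in range(1000):                      # Gauss-Seidel sweeps
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                              + u[i, j - 1] + u[i, j + 1] - h * h * f[i, j])

err = np.max(np.abs(u - u_exact))          # dominated by O(h^2) discretization error
```

A Grad-Shafranov solver replaces the Laplacian with the elliptic operator Δ* and iterates on a nonlinear right-hand side, but the sweep structure is recognizably the same.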
Effect of scrape-off-layer current on reconstructed tokamak equilibrium
King, J. R.; Kruger, S. E.; Groebner, R. J.; ...
2017-01-13
Methods are described that extend fields from reconstructed equilibria to include scrape-off-layer current through extrapolated parametrized and experimental fits. The extrapolation includes the effects of both the toroidal-field and pressure gradients, which produce scrape-off-layer current after recomputation of the Grad-Shafranov solution. To quantify the degree to which inclusion of scrape-off-layer current modifies the equilibrium, the χ-squared goodness-of-fit parameter is calculated for cases with and without scrape-off-layer current. The change in χ-squared is found to be minor when scrape-off-layer current is included; however, flux surfaces are shifted by up to 3 cm. The impact of these scrape-off-layer modifications on edge modes is also found to be small, and the importance of these methods to nonlinear computation is discussed.
Equilibrium and non-equilibrium dynamics simultaneously operate in the Galápagos islands.
Valente, Luis M; Phillimore, Albert B; Etienne, Rampal S
2015-08-01
Island biotas emerge from the interplay between colonisation, speciation and extinction and are often the scene of spectacular adaptive radiations. A common assumption is that insular diversity is at a dynamic equilibrium, but for remote islands, such as Hawaii or Galápagos, this idea remains untested. Here, we reconstruct the temporal accumulation of terrestrial bird species of the Galápagos using a novel phylogenetic method that estimates rates of biota assembly for an entire community. We show that species richness on the archipelago is in an ascending phase and does not tend towards equilibrium. The majority of the avifauna diversifies at a slow rate, without detectable ecological limits. However, Darwin's finches form an exception: they rapidly reach a carrying capacity and subsequently follow a coalescent-like diversification process. Together, these results suggest that avian diversity of remote islands is rising, and challenge the mutual exclusivity of the non-equilibrium and equilibrium ecological paradigms. © 2015 The Authors Ecology Letters published by John Wiley & Sons Ltd and CNRS.
Code Verification Results of an LLNL ASC Code on Some Tri-Lab Verification Test Suite Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, S R; Bihari, B L; Salari, K
As scientific codes become more complex and involve larger numbers of developers and algorithms, chances for algorithmic implementation mistakes increase. In this environment, code verification becomes essential to building confidence in the code implementation. This paper will present first results of a new code verification effort within LLNL's B Division. In particular, we will show results of code verification of the LLNL ASC ARES code on the following test problems: the Su-Olson non-equilibrium radiation diffusion problem, the Sod shock tube, the Sedov point blast modeled with shock hydrodynamics, and the Noh implosion.
Ultra-narrow bandwidth voice coding
Holzrichter, John F [Berkeley, CA; Ng, Lawrence C [Danville, CA
2007-01-09
A system of removing excess information from a human speech signal and coding the remaining signal information, transmitting the coded signal, and reconstructing the coded signal. The system uses one or more EM wave sensors and one or more acoustic microphones to determine at least one characteristic of the human speech signal.
Coded mask telescopes for X-ray astronomy
NASA Astrophysics Data System (ADS)
Skinner, G. K.; Ponman, T. J.
1987-04-01
The principles of the coded mask technique are discussed, together with methods of image reconstruction. The coded mask telescopes built at the University of Birmingham are described, including the SL 1501 coded mask X-ray telescope flown on the Skylark rocket and the Coded Mask Imaging Spectrometer (COMIS) projected for the Soviet space station Mir. A diagram of a coded mask telescope and some designs for coded masks are included.
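The reconstruction principle can be shown with a one-dimensional uniformly redundant array built from quadratic residues, for which correlation decoding is exact. This is a textbook construction, not a description of the Birmingham instruments.

```python
import numpy as np

p = 11                                     # prime mask length with p = 3 (mod 4)
qr = {(i * i) % p for i in range(1, p)}    # quadratic residues mod p
A = np.array([1.0] + [1.0 if i in qr else 0.0 for i in range(1, p)])  # mask, open=1
G = 2.0 * A - 1.0                          # balanced decoding array

O = np.zeros(p)
O[3], O[7] = 2.0, 1.0                      # toy sky: two point sources

# each sky pixel projects a cyclically shifted mask pattern onto the detector
D = np.array([sum(O[i] * A[(i + j) % p] for i in range(p)) for j in range(p)])
# correlation with G recovers the sky exactly, up to the known factor (p+1)/2
S = np.array([sum(D[j] * G[(k + j) % p] for j in range(p)) for k in range(p)])
S /= (p + 1) / 2
```

The quadratic-residue pattern has a delta-function cross-correlation with its decoding array, which is why the sidelobes vanish identically here, unlike for a random mask.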
Implementation of Finite Rate Chemistry Capability in OVERFLOW
NASA Technical Reports Server (NTRS)
Olsen, M. E.; Venkateswaran, S.; Prabhu, D. K.
2004-01-01
An implementation of both finite rate and equilibrium chemistry have been completed for the OVERFLOW code, a chimera capable, complex geometry flow code widely used to predict transonic flow fields. The implementation builds on the computational efficiency and geometric generality of the solver.
Verification of the ideal magnetohydrodynamic response at rational surfaces in the VMEC code
Lazerson, Samuel A.; Loizu, Joaquim; Hirshman, Steven; ...
2016-01-13
The VMEC nonlinear ideal MHD equilibrium code [S. P. Hirshman and J. C. Whitson, Phys. Fluids 26, 3553 (1983)] is compared against analytic linear ideal MHD theory in a screw-pinch-like configuration. The focus of this analysis is to verify the ideal MHD response at magnetic surfaces whose rotational transform (ι) is resonant with spectral values of the perturbed boundary harmonics. A large-aspect-ratio, circular-cross-section, zero-beta equilibrium is considered. This equilibrium possesses a rational surface with safety factor q = 2 at a normalized flux value of 0.5. A small resonant boundary perturbation is introduced, exciting a response at the resonant rational surface. The code is found to capture the plasma response as predicted by a newly developed analytic theory that ensures the existence of nested flux surfaces by allowing for a jump in rotational transform (ι = 1/q). The VMEC code satisfactorily reproduces these theoretical results without the necessity of an explicit transform discontinuity (Δι) at the rational surface. It is found that the response across the rational surfaces depends upon both the radial grid resolution and the local shear (dι/dΦ, where ι is the rotational transform and Φ the enclosed toroidal flux). Calculations of an implicit Δι suggest that it does not arise from numerical artifacts (attributed to radial finite differences in VMEC) or from the existence conditions for flux surfaces predicted by linear theory (minimum values of Δι). Scans of the rotational transform profile indicate that for experimentally relevant levels of transform shear the response becomes increasingly localised. Furthermore, careful examination of a large experimental tokamak equilibrium with applied resonant fields indicates that this shielding response is present, suggesting the phenomenon is not limited to this verification exercise.
Spotted star mapping by light curve inversion: Tests and application to HD 12545
NASA Astrophysics Data System (ADS)
Kolbin, A. I.; Shimansky, V. V.
2013-06-01
A code for mapping the surfaces of spotted stars is developed, based on the analysis of rotationally modulated light curves. We simulate the reconstruction of the stellar surface and present the results. Reconstruction artifacts caused by the ill-posed nature of the problem are identified. The surface of the spotted component of the system HD 12545 is mapped using this procedure.
Maximising information recovery from rank-order codes
NASA Astrophysics Data System (ADS)
Sen, B.; Furber, S.
2007-04-01
The central nervous system encodes information in sequences of asynchronously generated voltage spikes, but the precise details of this encoding are not well understood. Thorpe proposed rank-order codes as an explanation of the observed speed of information processing in the human visual system. The work described in this paper is inspired by the performance of SpikeNET, a biologically inspired neural architecture using rank-order codes for information processing, and is based on the retinal model developed by VanRullen and Thorpe. This model mimics retinal information processing by passing an input image through a bank of Difference of Gaussian (DoG) filters and then encoding the resulting coefficients in rank-order. To test the effectiveness of this encoding in capturing the information content of an image, the rank-order representation is decoded to reconstruct an image that can be compared with the original. The reconstruction uses a look-up table to infer the filter coefficients from their rank in the encoded image. Since the DoG filters are approximately orthogonal functions, they are treated as their own inverses in the reconstruction process. We obtained a quantitative measure of the perceptually important information retained in the reconstructed image relative to the original using a slightly modified version of an objective metric proposed by Petrovic. It is observed that around 75% of the perceptually important information is retained in the reconstruction. In the present work we reconstruct the input using a pseudo-inverse of the DoG filter-bank with the aim of improving the reconstruction and thereby extracting more information from the rank-order encoded stimulus. We observe that there is an increase of 10 - 15% in the information retrieved from a reconstructed stimulus as a result of inverting the filter-bank.
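Rank-order encoding and lookup-table decoding can be sketched in a few lines. The geometric rank-to-magnitude table below is an assumption standing in for the measured lookup table of the retinal model.

```python
import numpy as np

rng = np.random.default_rng(3)
coeffs = rng.standard_normal(100)          # stand-in for DoG filter coefficients

# encode: transmit only the order of coefficients (largest magnitude first)
order = np.argsort(-np.abs(coeffs))
signs = np.sign(coeffs[order])             # plus one sign bit per coefficient

# decode: re-inflate magnitudes from an assumed rank -> magnitude lookup table
lookup = np.abs(coeffs[order[0]]) * 0.97 ** np.arange(coeffs.size)

decoded = np.zeros_like(coeffs)
decoded[order] = signs * lookup            # reconstruction from rank alone
```

The information loss of the scheme is exactly the mismatch between the lookup table and the true magnitude distribution, which is what the paper's perceptual metric quantifies for images.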
42 CFR 73.3 - HHS select agents and toxins.
Code of Federal Regulations, 2012 CFR
2012-10-01
... virus Monkeypox virus Reconstructed replication competent forms of the 1918 pandemic influenza virus containing any portion of the coding regions of all eight gene segments (Reconstructed 1918 Influenza virus...
Combined LAURA-UPS hypersonic solution procedure
NASA Technical Reports Server (NTRS)
Wood, William A.; Thompson, Richard A.
1993-01-01
A combined solution procedure for hypersonic flowfields around blunted slender bodies was implemented using a thin-layer Navier-Stokes code (LAURA) in the nose region and a parabolized Navier-Stokes code (UPS) on the after body region. Perfect gas, equilibrium air, and non-equilibrium air solutions to sharp cones and a sharp wedge were obtained using UPS alone as a preliminary step. Surface heating rates are presented for two slender bodies with blunted noses, having used LAURA to provide a starting solution to UPS downstream of the sonic line. These are an 8 deg sphere-cone in Mach 5, perfect gas, laminar flow at 0 and 4 deg angles of attack and the Reentry F body at Mach 20, 80,000 ft equilibrium gas conditions for 0 and 0.14 deg angles of attack. The results indicate that this procedure is a timely and accurate method for obtaining aerothermodynamic predictions on slender hypersonic vehicles.
Extension of the SIESTA MHD equilibrium code to free-plasma-boundary problems
Peraza-Rodriguez, Hugo; Reynolds-Barredo, J. M.; Sanchez, Raul; ...
2017-08-28
Here, SIESTA is a recently developed MHD equilibrium code designed to perform fast and accurate calculations of ideal MHD equilibria for three-dimensional magnetic configurations. Since SIESTA does not assume closed magnetic surfaces, the solution can exhibit magnetic islands and stochastic regions. In its original implementation SIESTA addressed only fixed-boundary problems; that is, the shape of the plasma edge, assumed to be a magnetic surface, was kept fixed as the solution iteratively converged to equilibrium. This condition somewhat restricts the possible applications of SIESTA. In this paper we discuss an extension that enables SIESTA to address free-plasma-boundary problems, opening up the possibility of investigating problems in which the plasma boundary is perturbed either externally or internally. As an illustration, SIESTA is applied to a configuration of the W7-X stellarator.
Noniterative MAP reconstruction using sparse matrix representations.
Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J
2009-09-01
We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations, which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, compared to linear iterative reconstruction methods.
Benchmarking gyrokinetic simulations in a toroidal flux-tube
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y.; Parker, S. E.; Wan, W.
2013-09-15
A flux-tube model is implemented in the global turbulence code GEM [Y. Chen and S. E. Parker, J. Comput. Phys. 220, 839 (2007)] in order to facilitate benchmarking with Eulerian codes. The global GEM assumes the magnetic equilibrium to be completely given. The initial flux-tube implementation simply selects a radial location as the center of the flux-tube and a radial size of the flux-tube, sets all equilibrium quantities (B, ∇B, etc.) equal to their values at the center of the flux-tube, and retains only a linear radial profile of the safety factor needed for boundary conditions. This implementation shows disagreement with Eulerian codes in linear simulations. An alternative flux-tube model based on a complete local equilibrium solution of the Grad-Shafranov equation [J. Candy, Plasma Phys. Controlled Fusion 51, 105009 (2009)] is then implemented. This results in better agreement between Eulerian codes and the particle-in-cell (PIC) method. The PIC algorithm based on the v∥ formalism [J. Reynders, Ph.D. dissertation, Princeton University, 1992] and the gyrokinetic-ion/fluid-electron hybrid model with kinetic electron closure [Y. Chen and S. E. Parker, Phys. Plasmas 18, 055703 (2011)] are also implemented in the flux-tube geometry and compared with the direct method for both the ion temperature gradient driven modes and the kinetic ballooning modes.
Rapid exploration of configuration space with diffusion-map-directed molecular dynamics.
Zheng, Wenwei; Rohrdanz, Mary A; Clementi, Cecilia
2013-10-24
The gap between the time scale of interesting behavior in macromolecular systems and that which our computational resources can afford often limits molecular dynamics (MD) from understanding experimental results and predicting what is inaccessible in experiments. In this paper, we introduce a new sampling scheme, named diffusion-map-directed MD (DM-d-MD), to rapidly explore molecular configuration space. The method uses a diffusion map to guide MD on the fly. DM-d-MD can be combined with other methods to reconstruct the equilibrium free energy, and here, we used umbrella sampling as an example. We present results from two systems: alanine dipeptide and alanine-12. In both systems, we gain tremendous speedup with respect to standard MD both in exploring the configuration space and reconstructing the equilibrium distribution. In particular, we obtain 3 orders of magnitude of speedup over standard MD in the exploration of the configurational space of alanine-12 at 300 K with DM-d-MD. The method is reaction coordinate free and minimally dependent on a priori knowledge of the system. We expect wide applications of DM-d-MD to other macromolecular systems in which equilibrium sampling is not affordable by standard MD.
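The diffusion-map ingredient can be sketched on a toy point cloud: a Gaussian kernel, Markov normalization, and the leading nontrivial eigenvectors as slow coordinates. The parameters are illustrative, not those used for alanine-12.

```python
import numpy as np

rng = np.random.default_rng(4)
theta = np.sort(rng.uniform(0.0, 2.0 * np.pi, 200))
pts = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * rng.standard_normal((200, 2))

eps = 0.1                                  # kernel bandwidth (assumed)
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / eps)                      # Gaussian affinity kernel
P = K / K.sum(axis=1, keepdims=True)       # row-normalized Markov matrix

vals, vecs = np.linalg.eig(P)
idx = np.argsort(-vals.real)
psi = vecs.real[:, idx[1:3]]               # first two nontrivial diffusion coordinates
```

In DM-d-MD the embedding is recomputed on the fly, and new trajectories are launched from the frames that sit at the sparsely sampled edges of the diffusion coordinates.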
Rapid Exploration of Configuration Space with Diffusion Map-directed-Molecular Dynamics
Zheng, Wenwei; Rohrdanz, Mary A.; Clementi, Cecilia
2013-01-01
The gap between the timescale of interesting behavior in macromolecular systems and that which our computational resources can afford oftentimes limits Molecular Dynamics (MD) from understanding experimental results and predicting what is inaccessible in experiments. In this paper, we introduce a new sampling scheme, named Diffusion Map-directed-MD (DM-d-MD), to rapidly explore molecular configuration space. The method uses diffusion map to guide MD on the fly. DM-d-MD can be combined with other methods to reconstruct the equilibrium free energy, and here we used umbrella sampling as an example. We present results from two systems: alanine dipeptide and alanine-12. In both systems we gain tremendous speedup with respect to standard MD both in exploring the configuration space and reconstructing the equilibrium distribution. In particular, we obtain 3 orders of magnitude of speedup over standard MD in the exploration of the configurational space of alanine-12 at 300K with DM-d-MD. The method is reaction coordinate free and minimally dependent on a priori knowledge of the system. We expect wide applications of DM-d-MD to other macromolecular systems in which equilibrium sampling is not affordable by standard MD. PMID:23865517
NASA Astrophysics Data System (ADS)
Fable, E.; Angioni, C.; Ivanov, A. A.; Lackner, K.; Maj, O.; Medvedev, S. Yu; Pautasso, G.; Pereverzev, G. V.; Treutterer, W.; the ASDEX Upgrade Team
2013-07-01
The modelling of tokamak scenarios requires the simultaneous solution of both the time evolution of the plasma kinetic profiles and of the magnetic equilibrium. Their dynamical coupling involves additional complications, which are not present when the two physical problems are solved separately. Difficulties arise in maintaining consistency in the time evolution among quantities which appear in both the transport and the Grad-Shafranov equations, specifically the poloidal and toroidal magnetic fluxes as a function of each other and of the geometry. The required consistency can be obtained by means of iteration cycles, which are performed outside the equilibrium code and which can have different convergence properties depending on the chosen numerical scheme. When these external iterations are performed, the stability of the coupled system becomes a concern. In contrast, if these iterations are not performed, the coupled system is numerically stable, but can become physically inconsistent. By employing a novel scheme (Fable E et al 2012 Nucl. Fusion submitted), which ensures stability and physical consistency among the same quantities that appear in both the transport and magnetic equilibrium equations, a newly developed version of the ASTRA transport code (Pereverzev G V et al 1991 IPP Report 5/42), which is coupled to the SPIDER equilibrium code (Ivanov A A et al 2005 32nd EPS Conf. on Plasma Physics (Tarragona, 27 June-1 July) vol 29C (ECA) P-5.063), in both prescribed- and free-boundary modes is presented here for the first time. The ASTRA-SPIDER coupled system is then applied to the specific study of the modelling of controlled current ramp-up in ASDEX Upgrade discharges.
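The consistency cycle described above can be caricatured as a fixed-point (Picard) iteration between a transport step and an equilibrium step. The function below is a generic sketch with scalar stand-ins and hypothetical solver callbacks; it is not the ASTRA or SPIDER interface:

```python
def coupled_step(profile, geometry, transport_update, equilibrium_update,
                 tol=1e-10, max_iter=50):
    """Picard iteration enforcing transport/equilibrium consistency.

    `transport_update` and `equilibrium_update` are placeholders for the
    physics solvers; iteration stops once the geometry no longer changes.
    """
    for _ in range(max_iter):
        new_profile = transport_update(profile, geometry)
        new_geometry = equilibrium_update(new_profile)
        if abs(new_geometry - geometry) < tol:
            return new_profile, new_geometry
        profile, geometry = new_profile, new_geometry
    raise RuntimeError("coupling iteration did not converge")

# Toy contraction mappings standing in for the two solvers
p, g = coupled_step(
    1.0, 1.0,
    transport_update=lambda p, g: 0.5 * p + 0.1 * g,
    equilibrium_update=lambda p: 0.2 * p + 0.3,
)
print(round(g, 6))  # 0.3125, the consistent fixed point
```

Whether such an outer loop converges depends on the contraction properties of the two maps, which is exactly the stability concern the abstract raises.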
An X-Ray Analysis Database of Photoionization Cross Sections Including Variable Ionization
NASA Technical Reports Server (NTRS)
Wang, Ping; Cohen, David H.; MacFarlane, Joseph J.; Cassinelli, Joseph P.
1997-01-01
Results of research efforts in the following areas are discussed: review of the major theoretical and experimental data of subshell photoionization cross sections and ionization edges of atomic ions to assess the accuracy of the data, and to compile the most reliable of these data in our own database; detailed atomic physics calculations to complement the database for all ions of 17 cosmically abundant elements; reconciling the data from various sources and our own calculations; and fitting cross sections with functional approximations and incorporating these functions into a compact computer code. Also, efforts included adapting an ionization equilibrium code, tabulating results, and incorporating them into the overall program, and testing the code (both ionization equilibrium and opacity codes) with existing observational data. The background and scientific applications of this work are discussed. Atomic physics cross section models and calculations are described. Calculation results are compared with available experimental data and other theoretical data. The functional approximations used for fitting cross sections are outlined and applications of the database are discussed.
NASA Astrophysics Data System (ADS)
Papadimitriou, P.; Skorek, T.
THESUS is a thermohydraulic code for the calculation of steady state and transient processes of two-phase cryogenic flows. The physical model is based on four conservation equations with separate liquid and gas phase mass conservation equations. The thermohydraulic non-equilibrium is calculated by means of evaporation and condensation models. The mechanical non-equilibrium is modeled by a full-range drift-flux model. Also heat conduction in solid structures and heat exchange for the full spectrum of heat transfer regimes can be simulated. Test analyses of two-channel chilldown experiments and comparisons with the measured data have been performed.
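The drift-flux closure mentioned above expresses the gas velocity through a distribution parameter and a drift velocity; in a full-range model both depend on the flow regime. A minimal sketch with assumed constant coefficients, not THESUS's correlations:

```python
def gas_velocity(j_mix, c0=1.2, v_drift=0.25):
    """Drift-flux closure u_g = C0 * j + v_gj.

    j_mix   : total volumetric flux of the mixture [m/s]
    c0      : distribution parameter (typical bubbly-flow value, assumed)
    v_drift : drift velocity [m/s]; a full-range model replaces both
              constants with flow-regime-dependent correlations
    """
    return c0 * j_mix + v_drift

print(gas_velocity(1.0))  # 1.45
```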
CAG12 - A CSCM based procedure for flow of an equilibrium chemically reacting gas
NASA Technical Reports Server (NTRS)
Green, M. J.; Davy, W. C.; Lombard, C. K.
1985-01-01
The Conservative Supra Characteristic Method (CSCM), an implicit upwind Navier-Stokes algorithm, is extended to the numerical simulation of flows in chemical equilibrium. The resulting computer code, known as Chemistry and Gasdynamics Implicit - Version 2 (CAG12), is described. First-order accurate results are presented for inviscid and viscous Mach 20 flows of air past a hemisphere-cylinder. The solution procedure captures the bow shock in a chemically reacting gas, a technique that is needed for simulating high-altitude, rarefied flows. In an initial effort to validate the code, the inviscid results are compared with published gasdynamic and chemistry solutions, and satisfactory agreement is obtained.
PARVMEC: An Efficient, Scalable Implementation of the Variational Moments Equilibrium Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seal, Sudip K; Hirshman, Steven Paul; Wingen, Andreas
The ability to sustain magnetically confined plasma in a state of stable equilibrium is crucial for optimal and cost-effective operation of fusion devices like tokamaks and stellarators. The Variational Moments Equilibrium Code (VMEC) is the de facto serial application used by fusion scientists to compute magnetohydrodynamics (MHD) equilibria and study the physics of three-dimensional plasmas in confined configurations. Modern fusion energy experiments have larger system scales and more interactive experimental workflows, both demanding faster analysis turnaround times on computational workloads that are stressing the capabilities of sequential VMEC. In this paper, we present PARVMEC, an efficient, parallel version of its sequential counterpart, capable of scaling to thousands of processors on distributed-memory machines. PARVMEC is a non-linear code with multiple numerical physics modules, each with its own computational complexity. A detailed speedup analysis supported by scaling results on 1,024 cores of a Cray XC30 supercomputer is presented. Depending on the mode of PARVMEC execution, speedup improvements of one to two orders of magnitude are reported. PARVMEC equips fusion scientists for the first time with a state-of-the-art capability for rapid, high-fidelity analyses of magnetically confined plasmas at unprecedented scales.
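The reported scaling can be framed, very roughly, by Amdahl's law, which bounds the speedup of a code with a fixed serial fraction; the serial fraction below is an assumed illustration, not a measured property of PARVMEC:

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Ideal speedup bound for a code whose serial fraction cannot be
    parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# With 0.5% serial work, 1,024 cores buy only ~167x, not 1,024x
print(round(amdahl_speedup(0.005, 1024), 1))
```

This is why a per-module speedup analysis matters: any module left serial quickly caps the whole code's scaling.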
Hidden Structural Codes in Protein Intrinsic Disorder.
Borkosky, Silvia S; Camporeale, Gabriela; Chemes, Lucía B; Risso, Marikena; Noval, María Gabriela; Sánchez, Ignacio E; Alonso, Leonardo G; de Prat Gay, Gonzalo
2017-10-17
Intrinsic disorder is a major structural category in biology, accounting for more than 30% of coding regions across the domains of life, yet consists of conformational ensembles in equilibrium, a major challenge in protein chemistry. Anciently evolved papillomavirus genomes constitute an unparalleled case for sequence to structure-function correlation in cases in which there are no folded structures. E7, the major transforming oncoprotein of human papillomaviruses, is a paradigmatic example among the intrinsically disordered proteins. Analysis of a large number of sequences of the same viral protein allowed for the identification of a handful of residues with absolute conservation, scattered along the sequence of its N-terminal intrinsically disordered domain, which intriguingly are mostly leucine residues. Mutation of these led to a pronounced increase in both α-helix and β-sheet structural content, reflected by drastic effects on equilibrium propensities and oligomerization kinetics, and uncovers the existence of local structural elements that oppose canonical folding. These folding relays suggest the existence of yet undefined hidden structural codes behind intrinsic disorder in this model protein. Thus, evolution pinpoints conformational hot spots that could have not been identified by direct experimental methods for analyzing or perturbing the equilibrium of an intrinsically disordered protein ensemble.
A New Method for Coronal Magnetic Field Reconstruction
NASA Astrophysics Data System (ADS)
Yi, Sibaek; Choe, Gwang-Son; Cho, Kyung-Suk; Kim, Kap-Sung
2017-08-01
A precise method of coronal magnetic field reconstruction (extrapolation) is an indispensable tool for understanding various solar activities. A variety of reconstruction codes have been developed and are now available to researchers, but each bears its own shortcomings. In this paper, a new efficient method for coronal magnetic field reconstruction is presented. The method imposes only the normal components of magnetic field and current density at the bottom boundary to avoid overspecification of the reconstruction problem, and employs vector potentials to guarantee divergence-freeness. In our method, the normal component of current density is imposed, not by adjusting the tangential components of A, but by adjusting its normal component. This allows us to avoid a possible numerical instability that intermittently arises in codes using A. In real reconstruction problems, information for the lateral and top boundaries is absent. The arbitrariness of the boundary conditions imposed there, as well as various preprocessing steps, brings about a diversity of resulting solutions. We impose the source-surface condition at the top boundary to accommodate the flux imbalance that always shows up in magnetograms. To enhance the convergence rate, we equip our code with a gradient-method type accelerator. Our code is tested on two analytical force-free solutions. When the solution is given only at the bottom boundary, our result surpasses competitors in most figures of merit devised by Schrijver et al. (2006). We have also applied our code to the real active region NOAA 11974, in which two M-class flares and a halo CME took place. The EUV observations show a sudden appearance of an erupting loop before the first flare. Our numerical solutions show that two entwining flux tubes exist before the flare and that their shackling is released after the CME, with one of them opened up.
We suggest that the erupting loop is created by magnetic reconnection between two entwining flux tubes and later appears in the coronagraph as the major constituent of the observed CME.
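The divergence-free guarantee of the vector-potential formulation survives discretization: because finite-difference operators along different axes commute, the discrete divergence of a discrete curl vanishes to round-off for any A. A small NumPy check (grid size and data are arbitrary):

```python
import numpy as np

def curl(A, h):
    """B = curl A by finite differences; A has shape (3, n, n, n),
    with axes ordered (x, y, z) and uniform grid spacing h."""
    dAx, dAy, dAz = (np.gradient(A[i], h) for i in range(3))
    return np.array([dAz[1] - dAy[2],   # Bx = dAz/dy - dAy/dz
                     dAx[2] - dAz[0],   # By = dAx/dz - dAz/dx
                     dAy[0] - dAx[1]])  # Bz = dAy/dx - dAx/dy

def divergence(B, h):
    return (np.gradient(B[0], h)[0] + np.gradient(B[1], h)[1]
            + np.gradient(B[2], h)[2])

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 16, 16, 16))   # any vector potential, even random
B = curl(A, h=1.0)
print(float(np.abs(divergence(B, 1.0)).max()) < 1e-10)  # True
```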
Reconstructing the equilibrium Boltzmann distribution from well-tempered metadynamics.
Bonomi, M; Barducci, A; Parrinello, M
2009-08-01
Metadynamics is a widely used and successful method for reconstructing the free-energy surface of complex systems as a function of a small number of suitably chosen collective variables. This is achieved by biasing the dynamics of the system. The bias acting on the collective variables distorts the probability distribution of the other variables. Here we present a simple reweighting algorithm for recovering the unbiased probability distribution of any variable from a well-tempered metadynamics simulation. We show the efficiency of the reweighting procedure by reconstructing the distribution of the four backbone dihedral angles of alanine dipeptide from two- and even one-dimensional metadynamics simulations.
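The essence of the reweighting can be sketched as follows: each frame gets a weight proportional to exp(V/kT), with V the bias acting on its collective-variable value. This minimal version omits the time-dependent offset c(t) of the full algorithm, which cancels on normalization once the bias is quasi-static:

```python
import numpy as np

def reweight(bias_values, kT):
    """Normalized unbiasing weights for frames of a metadynamics run,
    given the (quasi-static) bias V(s_i) felt by each frame."""
    w = np.exp(np.asarray(bias_values, dtype=float) / kT)
    return w / w.sum()

# Sanity check: a uniform bias must give uniform weights back
w = reweight([2.0, 2.0, 2.0, 2.0], kT=1.0)
print(w)  # [0.25 0.25 0.25 0.25]
```

Unbiased averages of any observable then follow as weighted averages over the trajectory frames.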
Latash, M L; Goodman, S R
1994-01-01
The purpose of this work has been to develop a model of electromyographic (EMG) patterns during single-joint movements based on a version of the equilibrium-point hypothesis, a method for experimental reconstruction of the joint compliant characteristics, the dual-strategy hypothesis, and a kinematic model of movement trajectory. EMG patterns are considered emergent properties of hypothetical control patterns that are equally affected by the control signals and by peripheral feedback reflecting the actual movement trajectory. A computer model generated the EMG patterns based on simulated movement kinematics and hypothetical control signals derived from the reconstructed joint compliant characteristics. The model predictions have been compared to published recordings of movement kinematics and EMG patterns in a variety of movement conditions, including movements over different distances, at different speeds, against different known inertial loads, and in conditions of possible unexpected decrease in the inertial load. Changes in task parameters within the model led to simulated EMG patterns qualitatively similar to the experimentally recorded EMG patterns. The model's predictive power compares favourably with that of existing models of EMG patterns.
Krusic, A.G.; Prentice, M.L.; Licciardi, J.M.
2009-01-01
Early-mid Pliocene moraines in the McMurdo Dry Valleys, Antarctica, are more extensive than the present alpine glaciers in this region, indicating substantial climatic differences between the early-mid Pliocene and the present. To quantify this difference in the glacier-climate regime, we estimated the equilibrium-line altitude (ELA) change since the early-mid Pliocene by calculating the modern ELA and reconstructing the ELAs of four alpine glaciers in Wright and Taylor Valleys at their early-mid Pliocene maxima. The area-altitude balance ratio method was used on modern and reconstructed early-mid Pliocene hypsometry. In Wright and Victoria Valleys, mass-balance data identify present-day ELAs of 800-1600 m a.s.l. and an average balance ratio of 1.1. The estimated ELAs of the much larger early-mid Pliocene glaciers in Wright and Taylor Valleys range from 600 to 950 ± 170 m a.s.l., and thus are 250-600 ± 170 m lower than modern ELAs in these valleys. The depressed ELAs during the early-mid Pliocene most likely indicate a wetter and therefore warmer climate in the Dry Valleys during this period than previous studies have recognized.
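The area-altitude balance ratio method picks the ELA at which elevation-weighted areas above and below it cancel, with one side scaled by the balance ratio. A simple scan over candidate altitudes is enough for illustration; the hypsometry below is a toy example, not the Dry Valleys data, and the sign convention for the ratio is an assumption:

```python
import numpy as np

def aabr_ela(z, area, balance_ratio=1.1):
    """ELA by the area-altitude balance ratio (AABR) method.

    z, area : elevation-band midpoints [m] and band areas of the glacier.
    Scans trial ELAs and returns the one at which the balance-ratio-scaled
    accumulation term cancels the ablation term.
    """
    z, area = np.asarray(z, float), np.asarray(area, float)
    best_ela, best_err = None, np.inf
    for ela in np.linspace(z.min(), z.max(), 2001):
        dz = z - ela
        acc = (area * dz)[dz > 0].sum()    # accumulation-side term (positive)
        abl = (area * dz)[dz <= 0].sum()   # ablation-side term (negative)
        err = abs(balance_ratio * acc + abl)
        if err < best_err:
            best_ela, best_err = ela, err
    return best_ela

# Symmetric toy hypsometry with ratio 1 puts the ELA at mid-elevation
ela = aabr_ela([700.0, 800.0, 900.0], [1.0, 1.0, 1.0], balance_ratio=1.0)
print(round(ela))  # 800
```

With a balance ratio above 1, the same hypsometry yields an ELA below mid-elevation, which is the sense in which the ratio encodes steeper ablation gradients.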
Jiansen Li; Jianqi Sun; Ying Song; Yanran Xu; Jun Zhao
2014-01-01
An effective way to improve the data acquisition speed of magnetic resonance imaging (MRI) is to use under-sampled k-space data, and dictionary learning can be used to maintain the reconstruction quality. A three-dimensional dictionary trains its atoms on blocks of data, which exploits the spatial correlation among slices. The dual-dictionary learning method includes a low-resolution dictionary and a high-resolution dictionary, for sparse coding and image updating respectively. However, the amount of data is huge for three-dimensional reconstruction, especially when the number of slices is large, so the procedure is time-consuming. In this paper, we first utilize NVIDIA's compute unified device architecture (CUDA) programming model to design parallel algorithms on the graphics processing unit (GPU) to accelerate the reconstruction procedure. The main optimizations target the dictionary learning algorithm and the image updating part, namely the orthogonal matching pursuit (OMP) algorithm and the k-singular value decomposition (K-SVD) algorithm. We then develop another version of the CUDA code with algorithmic optimization. Experimental results show that a speedup of more than 324× is achieved compared with the CPU-only code when the number of MRI slices is 24.
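The OMP sparse-coding step at the heart of the dictionary-learning reconstruction fits each signal with a few dictionary atoms. Below is a generic NumPy sketch of OMP, not the CUDA kernels described in the paper:

```python
import numpy as np

def omp(D, y, sparsity):
    """Orthogonal matching pursuit: y ≈ D @ x with at most `sparsity`
    non-zeros in x; D has unit-norm columns (atoms)."""
    residual, support = y.astype(float).copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(sparsity):
        # Pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit all selected atoms by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# Toy usage: a 1-sparse signal in a trivial (identity) dictionary
D = np.eye(4)
y = np.array([0.0, 3.0, 0.0, 0.0])
x = omp(D, y, sparsity=1)
print(x)  # [0. 3. 0. 0.]
```

The inner correlation (`D.T @ residual`) and least-squares steps are what map naturally onto GPU matrix kernels.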
Reconstruction of coded aperture images
NASA Technical Reports Server (NTRS)
Bielefeld, Michael J.; Yin, Lo I.
1987-01-01
The balanced correlation method and the Maximum Entropy Method (MEM) were implemented to reconstruct a laboratory X-ray source as imaged by a Uniformly Redundant Array (URA) system. Although the MEM method has advantages over the balanced correlation method, it is computationally time-consuming because of the iterative nature of its solution. Massively parallel processing, with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use the MEM method in future coded-aperture experiments with the help of the MPP.
Hierarchical image coding with diamond-shaped sub-bands
NASA Technical Reports Server (NTRS)
Li, Xiaohui; Wang, Jie; Bauer, Peter; Sauer, Ken
1992-01-01
We present a sub-band image coding/decoding system using a diamond-shaped pyramid frequency decomposition to more closely match visual sensitivities than conventional rectangular bands. Filter banks are composed of simple, low order IIR components. The coder is especially designed to function in a multiple resolution reconstruction setting, in situations such as variable capacity channels or receivers, where images must be reconstructed without the entire pyramid of sub-bands. We use a nonlinear interpolation technique for lost subbands to compensate for loss of aliasing cancellation.
Development and application of the GIM code for the Cyber 203 computer
NASA Technical Reports Server (NTRS)
Stainaker, J. F.; Robinson, M. A.; Rawlinson, E. G.; Anderson, P. G.; Mayne, A. W.; Spradley, L. W.
1982-01-01
The GIM computer code for fluid dynamics research was developed. Enhancement of the computer code, implicit algorithm development, turbulence model implementation, chemistry model development, interactive input module coding and wing/body flowfield computation are described. The GIM quasi-parabolic code development was completed, and the code used to compute a number of example cases. Turbulence models, algebraic and differential equations, were added to the basic viscous code. An equilibrium reacting chemistry model and implicit finite difference scheme were also added. Development was completed on the interactive module for generating the input data for GIM. Solutions for inviscid hypersonic flow over a wing/body configuration are also presented.
Deleyiannis, Frederic W-B; Porter, Andrew C
2007-07-01
The purpose of this study was to determine the relative financial value of providing the service of free-tissue transfer for head and neck reconstruction from the surgeons' and hospital's perspective. Medical and hospital accounting records of 58 consecutive patients undergoing head and neck resections and simultaneous free-flap reconstruction were reviewed. Software from the Center for Medicare and Medicaid Services was used to calculate anticipated Medicare payments to the surgeon based on Current Procedural Terminology codes and to the hospital based on diagnosis-related group codes. The mean actual payment to the surgeon for a free flap was $2300.60. This payment was 91.6 percent ($2300 out of $2510) of the calculated payment if all payments had been reimbursed by Medicare. Total charges and total payment to the hospital for the 58 patients were $19,148,852 and $2,765,552, respectively. After covering direct costs, total hospital revenue (i.e., margin) was $1,056,886. The most commonly assigned diagnosis-related group code was 482 (n = 35). According to the fee schedule for that code, if Medicare had been the insurance plan for these 35 patients, the mean payment to the hospital would have been $45,840. The actual mean hospital payment was $44,133. This actual hospital payment represents 96 percent of the calculated Medicare hospital payment ($44,133 of $45,840). Free-flap reconstruction of the head and neck generates substantial revenue for the hospital. For their mutual benefit, hospitals should join with physicians in contract negotiations of physician reimbursement with insurance companies. Bolstered reimbursement figures would better attract and retain skilled surgeons dedicated to microvascular reconstruction.
NASA Technical Reports Server (NTRS)
Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue
1993-01-01
A thermal NO(x) prediction model is developed to interface with a CFD, k-epsilon based code. A converged solution from the CFD code is the input to the postprocessing model for prediction of thermal NO(x). The model uses a decoupled analysis to estimate the equilibrium level of (NO(x))e which is the constant rate limit. This value is used to estimate the flame (NO(x)) and in turn predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is fixed on the NO(x) production rate plot by estimating the time to reach equilibrium by a differential analysis based on the reaction: O + N2 = NO + N. The rate is integrated in the nonequilibrium time space based on the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NO(x) level.
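The rate-limiting Zeldovich step can be sketched as below; the Arrhenius parameters are representative order-of-magnitude values taken as assumptions, not the paper's chemistry constants:

```python
import math

def thermal_no_rate(T, conc_O, conc_N2):
    """d[NO]/dt ≈ 2 k1 [O][N2] from the rate-limiting step O + N2 -> NO + N,
    assuming steady state for atomic N (two-step Zeldovich mechanism).
    k1 uses representative Arrhenius parameters (assumed)."""
    k1 = 1.8e14 * math.exp(-38370.0 / T)   # cm^3 mol^-1 s^-1
    return 2.0 * k1 * conc_O * conc_N2     # mol cm^-3 s^-1

# The steep exponential temperature dependence is the point of the model
r_hot = thermal_no_rate(2000.0, 1e-8, 1e-5)
r_cool = thermal_no_rate(1800.0, 1e-8, 1e-5)
print(r_hot / r_cool > 5)  # True
```

Integrating such a nodal rate over each cell's residence time and summing, as the abstract describes, yields the total NO(x) estimate.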
Class of near-perfect coded apertures
NASA Technical Reports Server (NTRS)
Cannon, T. M.; Fenimore, E. E.
1977-01-01
Coded aperture imaging of gamma ray sources has long promised an improvement in the sensitivity of various detector systems. The promise has remained largely unfulfilled, however, for either one of two reasons. First, the encoding/decoding method produces artifacts, which even in the absence of quantum noise, restrict the quality of the reconstructed image. This is true of most correlation-type methods. Second, if the decoding procedure is of the deconvolution variety, small terms in the transfer function of the aperture can lead to excessive noise in the reconstructed image. It is proposed to circumvent both of these problems by use of a uniformly redundant array (URA) as the coded aperture in conjunction with a special correlation decoding method.
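The key URA property, flat correlation sidelobes, can be demonstrated in one dimension with a quadratic-residue array: correlating the shadowgram with the balanced decoding array G = 2A - 1 yields a sharp peak at the source position over a constant background. A toy sketch (length-7 array, single point source), not a full two-dimensional URA:

```python
import numpy as np

def circ_conv(a, b):   # circular convolution via FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def circ_corr(a, b):   # circular cross-correlation via FFT
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

# 1-D URA analogue: open positions at the quadratic residues mod 7
p = 7
qr = {(i * i) % p for i in range(1, p)}
A = np.array([1.0 if i in qr else 0.0 for i in range(p)])  # aperture
G = 2.0 * A - 1.0                                          # decoding array

scene = np.zeros(p)
scene[3] = 1.0                      # point source at position 3
detector = circ_conv(scene, A)      # shadowgram cast by the mask
recon = circ_corr(G, detector)      # correlation decoding

print(int(np.argmax(recon)))  # 3: peak lands at the source position
```

The constant (here -1) background, rather than position-dependent sidelobes, is what distinguishes URA decoding from correlation with a random mask.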
A project based on multi-configuration Dirac-Fock calculations for plasma spectroscopy
NASA Astrophysics Data System (ADS)
Comet, M.; Pain, J.-C.; Gilleron, F.; Piron, R.
2017-09-01
We present a project dedicated to hot plasma spectroscopy based on a Multi-Configuration Dirac-Fock (MCDF) code, initially developed by J. Bruneau. The code is briefly described and the use of the transition state method for plasma spectroscopy is detailed. Then an opacity code for local-thermodynamic-equilibrium plasmas using MCDF data, named OPAMCDF, is presented. Transition arrays for which the number of lines is too large to be handled in a Detailed Line Accounting (DLA) calculation can be modeled within the Partially Resolved Transition Array method or using the Unresolved Transition Arrays formalism in jj-coupling. An improvement of the original Partially Resolved Transition Array method is presented which gives better agreement with DLA computations. Comparisons with some absorption and emission experimental spectra are shown. Finally, the capability of the MCDF code to compute atomic data required for collisional-radiative modeling of plasmas in non-local thermodynamic equilibrium is illustrated. In addition to photoexcitation, this code can be used to calculate photoionization, electron impact excitation and ionization cross-sections as well as autoionization rates in the Distorted-Wave or Close Coupling approximations. Comparisons with cross-sections and rates available in the literature are discussed.
Radiation calculation in non-equilibrium shock layer
NASA Astrophysics Data System (ADS)
Dubois, Joanne
2005-05-01
The purpose of the work was to investigate confidence in radiation predictions on an entry probe body in high-temperature conditions, taking the Huygens probe as an example. Existing engineering flowfield codes for shock tube and blunt-body simulations were used and updated when necessary to compute species molar fractions and flowfield parameters. An interface to the PARADE radiation code allowed estimates of radiative emission to the body surface to be made. A validation of the radiative models in equilibrium conditions was first made with published data and by comparison with shock tube test case data from the IUSTI TCM2 facility with a Titan-like atmosphere test gas. Further verifications were made in non-equilibrium against published computations. These comparisons were initially made using a Boltzmann assumption for the electronic states of CN. An attempt was also made to use pseudo-species for the individual electronic states of CN. Assumptions made in this analysis are described and a further comparison with shock tube data undertaken. Several CN radiation datasets have been used, and while improvements to the modelling tools have been made, considerable uncertainty remains in the modelling of the non-equilibrium emission using simple engineering methods.
An implicit flux-split algorithm to calculate hypersonic flowfields in chemical equilibrium
NASA Technical Reports Server (NTRS)
Palmer, Grant
1987-01-01
An implicit, finite-difference, shock-capturing algorithm that calculates inviscid, hypersonic flows in chemical equilibrium is presented. The flux vectors and flux Jacobians are differenced using a first-order, flux-split technique. The equilibrium composition of the gas is determined by minimizing the Gibbs free energy at every node point. The code is validated by comparing results over an axisymmetric hemisphere against previously published results. The algorithm is also applied to more practical configurations. The accuracy, stability, and versatility demonstrated by the algorithm are promising.
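For a single dissociation reaction, the per-node Gibbs minimization reduces to a law-of-mass-action equation that bisection solves; the sketch below is a toy stand-in for the multi-species minimization done at every node point (equilibrium constant and pressure are illustrative):

```python
def dissociation_fraction(Kp, P=1.0, tol=1e-12):
    """Degree of dissociation alpha for A2 <-> 2A at pressure P.

    The Gibbs minimum satisfies Kp = 4*alpha**2 / (1 - alpha**2) * P;
    the left side is monotonic in alpha, so bisection on (0, 1) works.
    """
    f = lambda a: 4.0 * a * a / (1.0 - a * a) * P - Kp
    lo, hi = 0.0, 1.0 - 1e-15
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Kp = 4/3 at P = 1 gives alpha = 0.5 exactly: 4*(1/4)/(3/4) = 4/3
print(round(dissociation_fraction(4.0 / 3.0), 6))  # 0.5
```

A real equilibrium-air solver minimizes Gibbs free energy over many species subject to elemental mass constraints, but each of its inner steps has this same root-finding flavor.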
Scalable Coding of Plenoptic Images by Using a Sparse Set and Disparities.
Li, Yun; Sjostrom, Marten; Olsson, Roger; Jennehag, Ulf
2016-01-01
Focused plenoptic capture is one technique for recording light fields. By placing a microlens array in front of the photosensor, focused plenoptic cameras capture both spatial and angular information of a scene in each microlens image and across microlens images. The capture produces a significant amount of redundant information, and the captured image is usually of large resolution. A coding scheme that removes the redundancy before coding can therefore be of advantage for efficient compression, transmission, and rendering. In this paper, we propose a lossy coding scheme to efficiently represent plenoptic images. The format contains a sparse image set and its associated disparities. The reconstruction is performed by disparity-based interpolation and inpainting, and the reconstructed image is later employed as a prediction reference for the coding of the full plenoptic image. As an outcome of the representation, the proposed scheme inherits a scalable structure with three layers. The results show that plenoptic images are compressed efficiently, with over 60 percent bit rate reduction compared with High Efficiency Video Coding intra coding, and over 20 percent compared with a High Efficiency Video Coding block-copying mode.
Galián, José A; Rosato, Marcela; Rosselló, Josep A
2014-03-01
Multigene families have provided opportunities for evolutionary biologists to assess molecular evolution processes and phylogenetic reconstructions at deep and shallow systematic levels. However, the use of these markers is not free of technical and analytical challenges. Many evolutionary studies that used the nuclear 5S rDNA gene family rarely used contiguous 5S coding sequences, owing to the routine use of head-to-tail polymerase chain reaction primers that are anchored to the coding region. Moreover, the 5S coding sequences have been concatenated with independent, adjacent gene units in many studies, creating simulated chimeric genes as the raw data for evolutionary analysis. This practice is based on the tacitly assumed, but rarely tested, hypothesis that strict intra-locus concerted evolution processes are operating in 5S rDNA genes, without any empirical evidence as to whether it holds for the recovered data. The potential pitfalls of analysing the patterns of molecular evolution and reconstructing phylogenies based on these chimeric genes have not been assessed to date. Here, we compared the sequence integrity and phylogenetic behavior of entire versus concatenated 5S coding regions from a real data set obtained from closely related plant species (Medicago, Fabaceae). Our results suggest that within-array sequence homogenization is only partially operating in the 5S coding region, which is traditionally assumed to be highly conserved. Consequently, concatenating 5S genes increases haplotype diversity, generating novel chimeric genotypes that most likely do not exist within the genome. In addition, the patterns of gene evolution are distorted, leading to incorrect haplotype relationships in some evolutionary reconstructions.
Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.
2016-01-01
A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice. PMID:26962543
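The quoted voxel-level figures follow directly from a confusion matrix; the counts below are hypothetical, chosen only to land near the reported percentages:

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from voxel counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts per 2,000 voxels, mimicking the reported 92.4/91.9/92.0%
metrics = classification_metrics(tp=924, fp=81, tn=919, fn=76)
print(metrics)  # (0.924, 0.919, 0.9215)
```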
Cellular nonlinear networks for strike-point localization at JET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arena, P.; Fortuna, L.; Bruno, M.
2005-11-15
At JET, the potential of fast image processing for real-time purposes is thoroughly investigated. Particular attention is devoted to smart sensors based on system on chip technology. The data of the infrared cameras were processed with a chip implementing a cellular nonlinear network (CNN) structure so as to support and complement the magnetic diagnostics in the real-time localization of the strike-point position in the divertor. The circuit consists of two layers of complementary metal-oxide semiconductor components, the first being the sensor and the second implementing the actual CNN. This innovative hardware has made it possible to determine the position of the maximum thermal load with a time resolution of the order of 30 ms. Good congruency has been found with the measurement from the thermocouples in the divertor, proving the potential of the infrared data in locating the region of the maximum thermal load. The results are also confirmed by JET magnetic codes, both those used for the equilibrium reconstructions and those devoted to the identification of the plasma boundary.
NASA Astrophysics Data System (ADS)
Visnjevic, Vjeran; Herman, Frédéric; Licul, Aleksandar
2016-04-01
The end of the Last Glacial Maximum (LGM), about 20,000 years ago, marked the close of the most recent long-lasting cold phase in Earth's history. We recently developed a model that describes large-scale erosion and its response to climatic and dynamical changes, with application to the Alps during the LGM. Here we present an inverse approach we have recently developed to infer the LGM mass balance from known ice extent data, focusing on a glacier or ice cap. The ice flow model uses the shallow ice approximation, and the codes are accelerated on GPUs. The mass balance field is the constrained variable, defined by the balance rate β, the equilibrium line altitude (ELA), and a cutoff value c: b = max(β·(S(z) − ELA), c). We show that such a mass balance can be constrained from the observed past ice extent and ice thickness. We are also investigating several geostatistical methods to constrain a spatially variable mass balance and to derive uncertainties on each of the mass balance parameters.
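The mass-balance parameterization in the abstract above can be sketched in a few lines (a hypothetical Python illustration, not code from the authors' GPU solver; the cutoff is applied exactly as the formula is written, i.e. via max):

```python
def mass_balance(surface_elevations, beta, ela, cutoff):
    """Altitude-dependent mass balance b = max(beta * (S(z) - ELA), cutoff),
    taking the formula as written in the abstract: beta is the balance-rate
    gradient, ELA the equilibrium line altitude, and the max applies the
    cutoff c."""
    return [max(beta * (s - ela), cutoff) for s in surface_elevations]

# Two surface points, one above and one well below a 2500 m ELA:
b = mass_balance([3000.0, 2000.0], beta=0.005, ela=2500.0, cutoff=-2.0)
```

Inverting for β and the ELA then amounts to adjusting these two parameters until the modeled ice extent matches the observed one.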
ELM Behavior in High- βp EAST-Demonstration Plasmas on DIII-D
NASA Astrophysics Data System (ADS)
Li, G. Q.; Gong, X. Z.; Garofalo, A. M.; Lao, L. L.; Meneghini, O.; Snyder, P. B.; Ren, Q. L.; Ding, S. Y.; Guo, W. F.; Qian, J. P.; Wan, B. N.; Xu, G. S.; Holcomb, C. T.; Solomon, W. M.
2015-11-01
In the DIII-D high-βp EAST-demonstration experiment, small variations in experimental parameters such as the toroidal magnetic field or ECH power between otherwise similar discharges are observed to produce much larger changes in the ELM frequency. Kinetic EFIT equilibrium reconstructions for these discharges have been performed, which suggest that the ELM frequency changes are likely due to variations of the pedestal width, height, and edge current density. Kinetic profile analyses further indicate that the strong ITBs located at large minor radii (rho = 0.6-0.7) in these discharges affect the pedestal structure. The ITB could broaden the pedestal width and decrease the pedestal height, thus changing the ELM frequency and size. With the GATO and ELITE MHD codes, the linear growth rates and mode structures of these ELMs are analyzed. The impact of the ITB on ELM behavior will be discussed. Work supported by China MOST under 2014GB106001 and 2015GB102001 and US DOE under DE-FC02-04ER54698 and DE-FG03-95ER54309.
Real-time diamagnetic flux measurements on ASDEX Upgrade.
Giannone, L; Geiger, B; Bilato, R; Maraschek, M; Odstrčil, T; Fischer, R; Fuchs, J C; McCarthy, P J; Mertens, V; Schuhbeck, K H
2016-05-01
Real-time diamagnetic flux measurements are now available on ASDEX Upgrade. In contrast to the majority of diamagnetic flux measurements on other tokamaks, no analog summation of signals is necessary for measuring the change in toroidal flux or for removing contributions arising from unwanted coupling to the plasma and poloidal field coil currents. To achieve the highest possible sensitivity, the diamagnetic measurement and compensation coil integrators are triggered shortly before plasma initiation when the toroidal field coil current is close to its maximum. In this way, the integration time can be chosen to measure only the small changes in flux due to the presence of plasma. Two identical plasma discharges with positive and negative magnetic field have shown that the alignment error with respect to the plasma current is negligible. The measured diamagnetic flux is compared to that predicted by TRANSP simulations. The poloidal beta inferred from the diamagnetic flux measurement is compared to the values calculated from magnetic equilibrium reconstruction codes. The diamagnetic flux measurement and TRANSP simulation can be used together to estimate the coupled power in discharges with dominant ion cyclotron resonance heating.
NIMROD Modeling of Sawtooth Modes Using Hot-Particle Closures
NASA Astrophysics Data System (ADS)
Kruger, Scott; Jenkins, T. G.; Held, E. D.; King, J. R.
2015-11-01
In DIII-D shot 96043, RF heating gives rise to an energetic ion population that alters the sawtooth stability boundary, replacing conventional sawtooth cycles by longer-period, larger-amplitude `giant sawtooth' oscillations. We explore the use of particle-in-cell closures within the NIMROD code to numerically represent the RF-induced hot-particle distribution, and investigate the role of this distribution in determining the altered mode onset threshold and subsequent nonlinear evolution. Equilibrium reconstructions from the experimental data are used to enable these detailed validation studies. Effects of other parameters on the sawtooth behavior, such as the plasma Lundquist number and hot-particle beta-fraction, are also considered. The fast energetic particles present many challenges for the PIC closure. We review new algorithm and performance improvements to address these challenges, and provide a preliminary assessment of the efficacy of the PIC closure versus a continuum model for energetic particle modeling. We also compare our results with those of, and discuss plans for a more complete validation campaign for this discharge. Supported by US Department of Energy via the SciDAC Center for Extended MHD Modeling (CEMM).
Visually lossless compression of digital hologram sequences
NASA Astrophysics Data System (ADS)
Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.
2010-01-01
Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved, while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that up to 4 to 7.5 fold compression can be obtained with the above methods without any perceptible change in the appearance of video sequences.
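The staircase threshold-estimation procedure mentioned above can be illustrated with a toy adaptive up-down track (a sketch under assumed rules — 1-up/1-down with step halving at reversals — not necessarily the authors' exact protocol):

```python
def staircase_threshold(detects, start=1.0, step=2.0, n_reversals=8):
    """Adaptive staircase: raise the compression ratio while the change is
    imperceptible, lower it once the observer detects a difference, and
    halve the step at every reversal. The threshold estimate is the mean
    of the last four reversal points."""
    x, going_up = start, True
    reversals = []
    while len(reversals) < n_reversals:
        move_up = not detects(x)        # keep compressing while invisible
        if move_up != going_up:         # direction change -> a reversal
            reversals.append(x)
            step /= 2.0                 # refine near the threshold
            going_up = move_up
        x = x + step if move_up else x - step
    return sum(reversals[-4:]) / 4.0

# Idealized observer who notices artifacts above a 6x compression ratio:
estimate = staircase_threshold(lambda ratio: ratio > 6.0)
```

The track converges onto the detection boundary, which here recovers the assumed 6x threshold to within a fraction of the final step size.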
Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment.
Meng, Bowen; Pratx, Guillem; Xing, Lei
2011-12-01
Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. In this work, we accelerated the Feldkamp-Davis-Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate the partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. Speedup of reconstruction time is found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10^-7.
Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. An ultrafast, reliable and scalable 4D CBCT∕CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment.
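The Map/Reduce decomposition described above — filter-and-backproject in Map, voxel-wise summation in Reduce — can be sketched with Python stand-ins (a hypothetical toy, not the authors' Hadoop implementation):

```python
from functools import reduce

def map_backproject(projection, n_voxels):
    """Map step: in the real pipeline this ramp-filters one cone-beam
    projection and backprojects it into the volume; here each 'projection'
    is a scalar smeared uniformly into a toy 1-D volume."""
    return [projection] * n_voxels

def reduce_volumes(v1, v2):
    """Reduce step: aggregate two partial backprojections voxel by voxel."""
    return [a + b for a, b in zip(v1, v2)]

projections = [1.0, 2.0, 3.0]                            # stand-in frames
partials = [map_backproject(p, 4) for p in projections]  # parallelizable
volume = reduce(reduce_volumes, partials)                # final volume
```

Because each Map call is independent and the Reduce is associative, the framework can distribute the Map calls over many nodes and combine partial volumes in any order, which is what makes the reported near-linear speedup possible.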
A Newton method for the magnetohydrodynamic equilibrium equations
NASA Astrophysics Data System (ADS)
Oliver, Hilary James
We have developed and implemented a (J, B) space Newton method to solve the full nonlinear three-dimensional magnetohydrodynamic equilibrium equations in toroidal geometry. Various cases have been run successfully, demonstrating significant improvement over Picard iteration, including a 3D stellarator equilibrium at β = 2%. The algorithm first solves the equilibrium force balance equation for the current density J, given a guess for the magnetic field B. This step is taken from the Picard-iterative PIES 3D equilibrium code. Next, we apply Newton's method to Ampere's Law by expansion of the functional J(B), which is defined by the first step. An analytic calculation in magnetic coordinates, of how the Pfirsch-Schlüter currents vary in the plasma in response to a small change in the magnetic field, yields the Newton gradient term (analogous to ∇f · δx in Newton's method for f(x) = 0). The algorithm is computationally feasible because we do this analytically, and because the gradient term is flux-surface local when expressed in terms of a vector potential in an A_r = 0 gauge. The equations are discretized by a hybrid spectral/offset-grid finite difference technique, and leading-order radial dependence is factored from Fourier coefficients to improve finite-difference accuracy near the polar-like origin. After calculating the Newton gradient term we transfer the equation from the magnetic grid to a fixed background grid, which greatly improves the code's performance.
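The advantage of Newton over Picard iteration that this work reports can be seen even in a scalar analogue (an illustrative sketch, unrelated to the actual (J, B) space solver):

```python
import math

def picard(x0, tol=1e-12, max_iter=1000):
    """Fixed-point (Picard) iteration for x = cos(x): converges linearly."""
    x, n = x0, 0
    while abs(math.cos(x) - x) > tol and n < max_iter:
        x, n = math.cos(x), n + 1
    return x, n

def newton(x0, tol=1e-12, max_iter=100):
    """Newton's method for f(x) = x - cos(x), using the analytic gradient
    f'(x) = 1 + sin(x): converges quadratically near the root."""
    x, n = x0, 0
    while abs(x - math.cos(x)) > tol and n < max_iter:
        x, n = x - (x - math.cos(x)) / (1.0 + math.sin(x)), n + 1
    return x, n

x_p, n_p = picard(0.5)
x_n, n_n = newton(0.5)   # same root, far fewer iterations
```

As in the thesis, the price of the faster convergence is having the gradient term available analytically rather than by finite differencing.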
NASA Technical Reports Server (NTRS)
Gordon, Sanford; Mcbride, Bonnie J.
1994-01-01
This report presents the latest in a number of versions of chemical equilibrium and applications programs developed at the NASA Lewis Research Center over more than 40 years. These programs have changed over the years to include additional features and improved calculation techniques and to take advantage of constantly improving computer capabilities. The minimization-of-free-energy approach to chemical equilibrium calculations has been used in all versions of the program since 1967. The two principal purposes of this report are presented in two parts. The first purpose, which is accomplished here in part 1, is to present in detail a number of topics of general interest in complex equilibrium calculations. These topics include mathematical analyses and techniques for obtaining chemical equilibrium; formulas for obtaining thermodynamic and transport mixture properties and thermodynamic derivatives; criteria for inclusion of condensed phases; calculations at a triple point; inclusion of ionized species; and various applications, such as constant-pressure or constant-volume combustion, rocket performance based on either a finite- or infinite-chamber-area model, shock wave calculations, and Chapman-Jouguet detonations. The second purpose of this report, to facilitate the use of the computer code, is accomplished in part 2, entitled 'Users Manual and Program Description'. Various aspects of the computer code are discussed, and a number of examples are given to illustrate its versatility.
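The minimization-of-free-energy approach can be illustrated on the simplest possible system, a single isomerization A ⇌ B in an ideal mixture (a toy Python sketch, not the CEA code itself; the analytic check x = K/(1+K) follows from ΔG = −RT ln K):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def gibbs(x, mu_a, mu_b, T):
    """Mixture Gibbs energy for A <-> B at mole fraction x of B,
    including the ideal entropy-of-mixing term."""
    return ((1 - x) * mu_a + x * mu_b
            + R * T * ((1 - x) * math.log(1 - x) + x * math.log(x)))

def equilibrium_fraction(mu_a, mu_b, T, n=100000):
    """Find the composition minimizing G by a fine grid scan -- the toy
    analogue of minimizing free energy over all compositions."""
    best = min(range(1, n), key=lambda i: gibbs(i / n, mu_a, mu_b, T))
    return best / n

T, dG = 1000.0, -5000.0               # A -> B releases 5 kJ/mol
x = equilibrium_fraction(0.0, dG, T)  # numerical minimizer
K = math.exp(-dG / (R * T))           # analytic check: x = K / (1 + K)
```

The production code of course replaces the grid scan with the iterative techniques described in part 1, and handles many species, condensed phases, and constraints simultaneously.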
NASA Astrophysics Data System (ADS)
Arndt, S.; Merkel, P.; Monticello, D. A.; Reiman, A. H.
1999-04-01
Fixed- and free-boundary equilibria for Wendelstein 7-X (W7-X) [W. Lotz et al., Plasma Physics and Controlled Nuclear Fusion Research 1990 (Proc. 13th Int. Conf. Washington, DC, 1990), (International Atomic Energy Agency, Vienna, 1991), Vol. 2, p. 603] configurations are calculated using the Princeton Iterative Equilibrium Solver (PIES) [A. H. Reiman et al., Comput. Phys. Commun. 43, 157 (1986)] to deal with magnetic islands and stochastic regions. Usually, these W7-X configurations require a large number of iterations for PIES convergence. Here, two methods have been successfully tested in an attempt to decrease the number of iterations needed for convergence. First, periodic sequences of different blending parameters are used. Second, the initial guess is vastly improved by using results of the Variational Moments Equilibrium Code (VMEC) [S. P. Hirshman et al., Phys. Fluids 26, 3553 (1983)]. Use of these two methods has allowed verification of the Hamada condition, and a tendency toward "self-healing" of islands has been observed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan; Piro, Markus H.A.
Thermochimica is a software library that determines a unique combination of phases and their compositions at thermochemical equilibrium. Thermochimica can be used for stand-alone calculations or it can be directly coupled to other codes. This release of the software does not have a graphical user interface (GUI) and it can be executed from the command line or from an Application Programming Interface (API). Also, it is not intended for thermodynamic model development or for constructing phase diagrams. The main purpose of the software is to be directly coupled with a multi-physics code to provide material properties and boundary conditions for various physical phenomena. Significant research efforts have been dedicated to enhance computational performance through advanced algorithm development, such as improved estimation techniques and non-linear solvers. Various useful parameters can be provided as output from Thermochimica, such as: determination of which phases are stable at equilibrium, the mass of solution species and phases at equilibrium, mole fractions of solution phase constituents, thermochemical activities (which are related to partial pressures for gaseous species), chemical potentials of solution species and phases, and integral Gibbs energy (referenced relative to standard state). The overall goal is to provide an open source computational tool to enhance the predictive capability of multi-physics codes without significantly impeding computational performance.
Yeu, In Won; Park, Jaehong; Han, Gyuseung; Hwang, Cheol Seong; Choi, Jung-Hae
2017-09-06
A detailed understanding of the atomic configuration of the compound semiconductor surface, especially after reconstruction, is very important for the device fabrication and performance. While there have been numerous experimental studies using the scanning probe techniques, further theoretical studies on surface reconstruction are necessary to promote the clear understanding of the origins and development of such subtle surface structures. In this work, therefore, a pressure-temperature surface reconstruction diagram was constructed for the model case of the InAs (001) surface considering both the vibrational entropy and configurational entropy based on the density functional theory. Notably, the equilibrium fraction of various reconstructions was determined as a function of the pressure and temperature, not as a function of the chemical potential, which largely facilitated the direct comparison with the experiments. By taking into account the entropy effects, the coexistence of the multiple reconstructions and the fractional change of each reconstruction by the thermodynamic condition were predicted and were in agreement with the previous experimental observations. This work provides the community with a useful framework for such type of theoretical studies.
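The key output — an equilibrium fraction for each competing reconstruction at a given (p, T) — reduces, once the pressure- and temperature-dependent surface free energies are known, to Boltzmann weighting (a schematic sketch with made-up energies; the paper's actual free energies include the vibrational and configurational entropy from DFT):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def reconstruction_fractions(free_energies_ev, T):
    """Equilibrium fractions of competing surface reconstructions from
    Boltzmann weights of their free energies (eV) at temperature T (K).
    Coexistence of multiple reconstructions appears whenever several
    weights are of comparable size."""
    w = [math.exp(-g / (K_B * T)) for g in free_energies_ev]
    z = sum(w)
    return [wi / z for wi in w]

# Two hypothetical reconstructions separated by 50 meV at 600 K:
f = reconstruction_fractions([0.00, 0.05], 600.0)
```

At low temperature the lowest-energy reconstruction dominates; as T rises the fractions approach one another, reproducing the coexistence and fractional changes that the paper compares against experiment.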
Coding and transmission of subband coded images on the Internet
NASA Astrophysics Data System (ADS)
Wah, Benjamin W.; Su, Xiao
2001-09-01
Subband-coded images can be transmitted in the Internet using either the TCP or the UDP protocol. Delivery by TCP gives superior decoding quality but with very long delays when the network is unreliable, whereas delivery by UDP has negligible delays but with degraded quality when packets are lost. Although images are delivered currently over the Internet by TCP, we study in this paper the use of UDP to deliver multi-description reconstruction-based subband-coded images. First, in order to facilitate recovery from UDP packet losses, we propose a joint sender-receiver approach for designing optimized reconstruction-based subband transform (ORB-ST) in multi-description coding (MDC). Second, we carefully evaluate the delay-quality trade-offs between the TCP delivery of SDC images and the UDP and combined TCP/UDP delivery of MDC images. Experimental results show that our proposed ORB-ST performs well in real Internet tests, and UDP and combined TCP/UDP delivery of MDC images provide a range of attractive alternatives to TCP delivery.
Observations and Thermochemical Calculations for Hot-Jupiter Atmospheres
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver; Cubillos, Patricio; Stemm, Madison
2015-01-01
I present Spitzer eclipse observations for WASP-14b and WASP-43b, an open source tool for thermochemical equilibrium calculations, and components of an open source tool for atmospheric parameter retrieval from spectroscopic data. WASP-14b is a planet that receives high irradiation from its host star, yet, although theory does not predict it, the planet hosts a thermal inversion. The WASP-43b eclipses have signal-to-noise ratios of ~25, one of the largest among exoplanets. To assess these planets' atmospheric composition and thermal structure, we developed an open-source Bayesian Atmospheric Radiative Transfer (BART) code. My dissertation tasks included developing a Thermochemical Equilibrium Abundances (TEA) code, implementing the eclipse geometry calculation in BART's radiative transfer module, and generating parameterized pressure and temperature profiles so the radiative-transfer module can be driven by the statistical module. To initialize the radiative-transfer calculation in BART, TEA calculates the equilibrium abundances of gaseous molecular species at a given temperature and pressure. It uses the Gibbs-free-energy minimization method with an iterative Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. The code is tested against the original method developed by White et al. (1958), the analytic method developed by Burrows and Sharp (1999), and the Newton-Raphson method implemented in the open-source Chemical Equilibrium with Applications (CEA) code.
TEA, written in Python, is modular, documented, and available to the community via the open-source development site GitHub.com. Support for this work was provided by NASA Headquarters under the NASA Earth and Space Science Fellowship Program, grant NNX12AL83H, by NASA through an award issued by JPL/Caltech, and through the Science Mission Directorate's Planetary Atmospheres Program, grant NNX12AI69G.
Heenan, Patrick R; Yu, Hao; Siewny, Matthew G W; Perkins, Thomas T
2018-03-28
Precisely quantifying the energetics that drive the folding of membrane proteins into a lipid bilayer remains challenging. More than 15 years ago, atomic force microscopy (AFM) emerged as a powerful tool to mechanically extract individual membrane proteins from a lipid bilayer. Concurrently, fluctuation theorems, such as the Jarzynski equality, were applied to deduce equilibrium free energies (ΔG0) from non-equilibrium single-molecule force spectroscopy records. The combination of these two advances in single-molecule studies deduced the free-energy of the model membrane protein bacteriorhodopsin in its native lipid bilayer. To elucidate this free-energy landscape at a higher resolution, we applied two recent developments. First, as an input to the reconstruction, we used force-extension curves acquired with a 100-fold higher time resolution and 10-fold higher force precision than traditional AFM studies of membrane proteins. Next, by using an inverse Weierstrass transform and the Jarzynski equality, we removed the free energy associated with the force probe and determined the molecular free-energy landscape of the molecule under study, bacteriorhodopsin. The resulting landscape yielded an average unfolding free energy per amino acid (aa) of 1.0 ± 0.1 kcal/mol, in agreement with past single-molecule studies. Moreover, on a smaller spatial scale, this high-resolution landscape also agreed with an equilibrium measurement of a particular three-aa transition in bacteriorhodopsin that yielded 2.7 kcal/mol/aa, an unexpectedly high value. Hence, while average unfolding ΔG0 per aa is a useful metric, the derived high-resolution landscape details significant local variation from the mean. More generally, we demonstrated that, as anticipated, the inverse Weierstrass transform is an efficient means to reconstruct free-energy landscapes from AFM data.
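The Jarzynski equality invoked above relates an exponential average of non-equilibrium work to the equilibrium free-energy difference, ΔG = −kT ln⟨exp(−W/kT)⟩ (a minimal Python sketch of the bare estimator, not the inverse Weierstrass machinery):

```python
import math

def jarzynski_free_energy(work_values, kT):
    """Jarzynski estimator dG = -kT * ln(<exp(-W/kT)>) over repeated
    non-equilibrium pulls. Rare low-work trajectories dominate the
    exponential average, which is why many pulls are needed in practice."""
    avg = sum(math.exp(-w / kT) for w in work_values) / len(work_values)
    return -kT * math.log(avg)

# With dissipation, the estimate lies below the mean work (second law):
dG = jarzynski_free_energy([1.0, 3.0], kT=1.0)
```

The inverse Weierstrass transform then corrects this estimate for the stiffness of the force probe, yielding the molecular landscape alone.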
On Asymptotically Good Ramp Secret Sharing Schemes
NASA Astrophysics Data System (ADS)
Geil, Olav; Martin, Stefano; Martínez-Peñas, Umberto; Matsumoto, Ryutaroh; Ruano, Diego
Asymptotically good sequences of linear ramp secret sharing schemes have been intensively studied by Cramer et al. in terms of sequences of pairs of nested algebraic geometric codes. In those works the focus is on full privacy and full reconstruction. In this paper we analyze additional parameters describing the asymptotic behavior of partial information leakage and possibly also partial reconstruction giving a more complete picture of the access structure for sequences of linear ramp secret sharing schemes. Our study involves a detailed treatment of the (relative) generalized Hamming weights of the considered codes.
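A ramp scheme's defining feature — full reconstruction from enough shares, only partial leakage below that — can be illustrated with a polynomial-based toy over a small prime field (an illustrative sketch, not the algebraic-geometric construction analyzed in the paper; the point placements are arbitrary choices):

```python
import random

P = 257  # a small prime; arithmetic is over the field GF(257)

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at x, over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def make_shares(secrets, n_shares, degree, rng=random.Random(0)):
    """Hide the secret symbols at reserved points of a random
    degree-`degree` polynomial; shares are its values at x = 1..n.
    Any degree+1 shares reconstruct fully; fewer leak only partially."""
    pts = [(100 + j, s) for j, s in enumerate(secrets)]
    pts += [(200 + k, rng.randrange(P)) for k in range(degree + 1 - len(pts))]
    return [(x, lagrange_eval(pts, x)) for x in range(1, n_shares + 1)]

def recover(shares, n_secrets):
    """Interpolate the shares back to the reserved secret points."""
    return [lagrange_eval(shares, 100 + j) for j in range(n_secrets)]

shares = make_shares([42, 7], n_shares=6, degree=3)
secrets = recover(shares[:4], 2)   # any 4 of the 6 shares suffice
```

The (relative) generalized Hamming weights studied in the paper quantify exactly how much information intermediate-sized share sets leak in such schemes.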
NASA Astrophysics Data System (ADS)
Woitke, P.; Helling, Ch.; Hunter, G. H.; Millard, J. D.; Turner, G. E.; Worters, M.; Blecic, J.; Stock, J. W.
2018-06-01
We have introduced a fast and versatile computer code, GGCHEM, to determine the chemical composition of gases in thermo-chemical equilibrium down to 100 K, with or without equilibrium condensation. We have reviewed the data for molecular equilibrium constants, kp(T), from several sources and discussed which functional fits are most suitable for low temperatures. We benchmarked our results against another chemical equilibrium code. We collected Gibbs free energies, ΔGf⊖, for about 200 solid and liquid species from the NIST-JANAF database and the geophysical database SUPCRTBL. We discussed the condensation sequence of the elements with solar abundances in phase equilibrium down to 100 K. Once the major magnesium silicates Mg2SiO4[s] and MgSiO3[s] have formed, the dust to gas mass ratio jumps to a value of about 0.0045, which is significantly lower than the often assumed value of 0.01. Silicate condensation is found to increase the carbon to oxygen ratio (C/O) in the gas from its solar value of 0.55 up to 0.71, and, by the additional intake of water and hydroxyl into the solid matrix, the formation of phyllosilicates at temperatures below 400 K increases the gaseous C/O further to about 0.83. Metallic tungsten (W) is the first condensate found to become thermodynamically stable around 1600-2200 K (depending on pressure), several hundreds of Kelvin before subsequent materials such as zirconium dioxide (ZrO2) or corundum (Al2O3) can condense. We briefly discuss whether tungsten, despite its low abundance of 2 × 10^-7 times the silicon abundance, could provide the first seed particles for astrophysical dust formation. The GGCHEM code is publicly available at https://github.com/pw31/GGchem. Table D.1 is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/614/A1.
Dual-camera design for coded aperture snapshot spectral imaging.
Wang, Lizhi; Xiong, Zhiwei; Gao, Dahua; Shi, Guangming; Wu, Feng
2015-02-01
Coded aperture snapshot spectral imaging (CASSI) provides an efficient mechanism for recovering 3D spectral data from a single 2D measurement. However, since the reconstruction problem is severely underdetermined, the quality of recovered spectral data is usually limited. In this paper we propose a novel dual-camera design to improve the performance of CASSI while maintaining its snapshot advantage. Specifically, a beam splitter is placed in front of the objective lens of CASSI, which allows the same scene to be simultaneously captured by a grayscale camera. This uncoded grayscale measurement, in conjunction with the coded CASSI measurement, greatly eases the reconstruction problem and yields high-quality 3D spectral data. Both simulation and experimental results demonstrate the effectiveness of the proposed method.
Global magnetosphere simulations using constrained-transport Hall-MHD with CWENO reconstruction
NASA Astrophysics Data System (ADS)
Lin, L.; Germaschewski, K.; Maynard, K. M.; Abbott, S.; Bhattacharjee, A.; Raeder, J.
2013-12-01
We present a new CWENO (Centrally-Weighted Essentially Non-Oscillatory) reconstruction based MHD solver for the OpenGGCM global magnetosphere code. The solver was built using libMRC, a library for creating efficient parallel PDE solvers on structured grids. The use of libMRC gives us access to its core functionality: an automated code generation framework which takes a user-provided PDE right-hand side in symbolic form and generates efficient, architecture-specific parallel code. libMRC also supports block-structured adaptive mesh refinement and implicit time stepping through integration with the PETSc library. We validate the new CWENO Hall-MHD solver against existing solvers both in standard test problems and in global magnetosphere simulations.
Adaptive temporal compressive sensing for video with motion estimation
NASA Astrophysics Data System (ADS)
Wang, Yeru; Tang, Chaoying; Chen, Yueting; Feng, Huajun; Xu, Zhihai; Li, Qi
2018-04-01
In this paper, we present an adaptive reconstruction method for temporal compressive imaging with pixel-wise exposure. The motion of objects is first estimated from interpolated images obtained with a designed coding mask. With the help of the motion estimation, image blocks are classified according to their degree of motion and reconstructed with the corresponding dictionary, which was trained beforehand. Both simulation and experimental results show that the proposed method can obtain accurate motion information before reconstruction and efficiently reconstruct compressive video.
Vertical Position and Current Profile Measurements by Faraday-effect Polarimetry On EAST tokamak
NASA Astrophysics Data System (ADS)
Ding, Weixing; Liu, H. Q.; Jie, Y. X.; Brower, D. L.; Qian, J. P.; Zou, Z. Y.; Lian, H.; Wang, S. X.; Luo, Z. P.; Xiao, B. J.; Ucla Team; Asipp Team
2017-10-01
A primary goal for ITER and prospective fusion power reactors is to achieve controlled long-pulse/steady-state burning plasmas. For elongated divertor plasmas, both the vertical position and the current profile have to be precisely controlled to optimize performance and prevent disruptions. An eleven-channel laser-based POlarimeter-INTerferometer (POINT) system has been developed for measuring the internal magnetic field in the EAST tokamak and can be used to obtain the plasma current profile and vertical position. Current profiles are determined from equilibrium reconstruction including internal magnetic field measurements as internal constraints. Horizontally viewing chords at/near the mid-plane allow us to determine the plasma vertical position non-inductively with subcentimeter spatial resolution and time response up to 1 s. The polarimeter-based position measurement, which does not require equilibrium reconstruction, is benchmarked against conventional flux loop measurements and can be exploited for feedback control. Work supported by US DOE through Grants No. DE-FG02-01ER54615 and No. DE-SC0010469.
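A heavily simplified sketch of how a chord array can yield a position without equilibrium reconstruction: if the chord-integrated polarimetry signal is (approximately) antisymmetric about the magnetic-axis height, the axis position follows from the zero crossing of the signal profile across chord heights. The function name, the linear interpolation, and the single-crossing assumption are all illustrative, not the POINT analysis.

```python
import numpy as np

def vertical_position(z_chords, signals):
    """Estimate the magnetic-axis height as the zero crossing of a
    polarimetry signal profile across horizontally viewing chords.
    Toy sketch: assumes exactly one sign change in `signals`."""
    s = np.asarray(signals, float)
    z = np.asarray(z_chords, float)
    i = np.where(np.sign(s[:-1]) != np.sign(s[1:]))[0][0]
    # linear interpolation between the two chords bracketing the crossing
    return z[i] - s[i] * (z[i + 1] - z[i]) / (s[i + 1] - s[i])
```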
Booth, D.B.
1986-01-01
An estimate of the sliding velocity and basal meltwater discharge of the Puget lobe of the Cordilleran ice sheet can be calculated from its reconstructed extent, altitude, and mass balance. Lobe dimensions and surface altitudes are inferred from ice limits and flow-direction indicators. Net annual mass balance and total ablation are calculated from relations empirically derived from modern maritime glaciers. An equilibrium-line altitude between 1200 and 1250 m is calculated for the maximum glacial advance (ca. 15,000 yr B.P.) during the Vashon Stade of the Fraser Glaciation. This estimate is in accord with geologic data and is insensitive to plausible variability in the parameters used in the reconstruction. Resultant sliding velocities are as much as 650 m/a at the equilibrium line, decreasing both up- and downglacier. Such velocities for an ice sheet of this size are consistent with nonsurging behavior. Average meltwater discharge increases monotonically downglacier to 3000 m3/sec at the terminus and is of a comparable magnitude to ice discharge over much of the glacier's ablation area. Paleoclimatic inferences derived from this reconstruction are consistent with previous, independently derived studies of late Pleistocene temperature and precipitation in the Pacific Northwest. © 1986.
Tomographic diagnostics of nonthermal plasmas
NASA Astrophysics Data System (ADS)
Denisova, Natalia
2009-10-01
In previous work [1], we discussed the "technology" of the tomographic method and the relations between tomographic diagnostics in thermal (equilibrium) and nonthermal (nonequilibrium) plasma sources. The conclusion was that tomographic reconstruction in thermal plasma sources is at present a standard procedure, which can provide much useful information on the plasma structure and its evolution in time, while tomographic reconstruction of nonthermal plasma has great potential to contribute to understanding the fundamental problem of the behavior of matter under strongly nonequilibrium conditions. In medical terms, one could say that tomographic diagnostics of equilibrium plasma sources studies their "anatomic" structure, while reconstruction of nonequilibrium plasma is similar to a "physiological" examination: it is directed at the physical mechanisms and processes. The present work focuses on nonthermal plasma research. The tomographic diagnostics are directed at spatial structures formed in gas-discharge plasmas under the influence of electric and gravitational fields. The ways of plasma "self-organization" in changing and extreme conditions are analyzed. The analysis uses examples from our practical tomographic diagnostics of nonthermal plasma sources, such as low-pressure capacitive and inductive discharges. [1] N. Denisova, "Plasma diagnostics using computed tomography method," IEEE Trans. Plasma Sci. 37(4), 502 (2009).
NASA Technical Reports Server (NTRS)
Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.
1991-01-01
The computer codes developed here provide self-consistent thermodynamic and transport properties for equilibrium air for temperatures from 500 to 30000 K over a pressure range of 10^-4 to 10^-2 atm. These properties are computed through the use of temperature-dependent curve fits for discrete values of pressure. Interpolation is employed for intermediate values of pressure. The curve fits are based on mixture values calculated from an 11-species air model. Individual species properties used in the mixture relations are obtained from a recent study by the present authors. A review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1260.
NASA Astrophysics Data System (ADS)
Morgan, K. D.; Jarboe, T. R.; Hossack, A. C.; Chandra, R. N.; Everson, C. J.
2017-12-01
The HIT-SI3 experiment uses a set of inductively driven helicity injectors to apply a non-axisymmetric current drive on the edge of the plasma, driving an axisymmetric spheromak equilibrium in a central confinement volume. These helicity injectors drive a non-axisymmetric perturbation that oscillates in time, with the relative temporal phasing of the injectors modifying the mode structure of the applied perturbation. A set of three experimental discharges with different perturbation spectra are modelled using the NIMROD extended magnetohydrodynamics code, and comparisons are made to both magnetic and fluid measurements. These models successfully capture the bulk dynamics of both the perturbation and the equilibrium, though disagreements remain with the experimentally measured pressure gradients.
Integrated modeling applications for tokamak experiments with OMFIT
NASA Astrophysics Data System (ADS)
Meneghini, O.; Smith, S. P.; Lao, L. L.; Izacard, O.; Ren, Q.; Park, J. M.; Candy, J.; Wang, Z.; Luna, C. J.; Izzo, V. A.; Grierson, B. A.; Snyder, P. B.; Holland, C.; Penna, J.; Lu, G.; Raum, P.; McCubbin, A.; Orlov, D. M.; Belli, E. A.; Ferraro, N. M.; Prater, R.; Osborne, T. H.; Turnbull, A. D.; Staebler, G. M.
2015-08-01
One Modeling Framework for Integrated Tasks (OMFIT) is a comprehensive integrated modeling framework which has been developed to enable physics codes to interact in complicated workflows, and to support scientists at all stages of the modeling cycle. The OMFIT development follows a unique bottom-up approach, where the framework design and capabilities organically evolve to support progressive integration of the components that are required to accomplish physics goals of increasing complexity. OMFIT provides a workflow for easily generating full kinetic equilibrium reconstructions that are constrained by magnetic and motional Stark effect measurements, and kinetic profile information that includes fast-ion pressure modeled by a transport code. It was found that magnetic measurements can be used to quantify the amount of anomalous fast-ion diffusion that is present in DIII-D discharges, and provide an estimate that is consistent with what would be needed for transport simulations to match the measured neutron rates. OMFIT was used to streamline edge-stability analyses, and to evaluate the effect of resonant magnetic perturbations (RMPs) on pedestal stability, which has been found to be consistent with the experimental observations. The framework also supported the development of a five-dimensional numerical fluid model for estimating the effects of the interaction between magnetohydrodynamics (MHD) and microturbulence, and its systematic verification against analytic models. OMFIT was used for optimizing an innovative high-harmonic fast wave system proposed for DIII-D. For a parallel refractive index n∥ > 3, the conditions for strong electron-Landau damping were found to be independent of the launched n∥ and poloidal angle. OMFIT has been the platform of choice for developing a neural-network based approach to efficiently perform a non-linear multivariate regression of local transport fluxes as a function of local dimensionless parameters.
Transport predictions for thousands of DIII-D discharges showed excellent agreement with the power balance calculations across the whole plasma radius and over a broad range of operating regimes. Concerning predictive transport simulations, the framework made possible the design and automation of a workflow that enables self-consistent predictions of kinetic profiles and the plasma equilibrium. It is found that the feedback between the transport fluxes and plasma equilibrium can significantly affect the kinetic profiles predictions. Such a rich set of results provide tangible evidence of how bottom-up approaches can potentially provide a fast track to integrated modeling solutions that are functional, cost-effective, and in sync with the research effort of the community.
Regenerable biocide delivery unit, volume 2
NASA Technical Reports Server (NTRS)
Atwater, James E.; Wheeler, Richard R., Jr.
1992-01-01
Source code for programs dealing with the following topics is presented: (1) life cycle test stand-parametric test stand control (in BASIC); (2) simultaneous aqueous iodine equilibria-true equilibrium (in C); (3) simultaneous aqueous iodine equilibria-pseudo-equilibrium (in C); (4) pseudo-(fast)-equilibrium with iodide initially present (in C); (5) solution of simultaneous iodine rate expressions (Mathematica); (6) 2nd order kinetics of I2-formic acid in humidity condensate (Mathematica); (7) prototype RMCV onboard microcontroller (CAMBASIC); (8) prototype RAM data dump to PC (in BASIC); and (9) prototype real time data transfer to PC (in BASIC).
A robust coding scheme for packet video
NASA Technical Reports Server (NTRS)
Chen, Y. C.; Sayood, Khalid; Nelson, D. J.
1991-01-01
We present a layered packet video coding algorithm based on a progressive transmission scheme. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.
A robust coding scheme for packet video
NASA Technical Reports Server (NTRS)
Chen, Yun-Chung; Sayood, Khalid; Nelson, Don J.
1992-01-01
A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolery, T.J.
1992-09-14
EQ3NR is an aqueous solution speciation-solubility modeling code. It is part of the EQ3/6 software package for geochemical modeling. It computes the thermodynamic state of an aqueous solution by determining the distribution of chemical species, including simple ions, ion pairs, and complexes, using standard state thermodynamic data and various equations which describe the thermodynamic activity coefficients of these species. The input to the code describes the aqueous solution in terms of analytical data, including total (analytical) concentrations of dissolved components and such other parameters as the pH, pHCl, Eh, pe, and oxygen fugacity. The input may also include a desired electrical balancing adjustment and various constraints which impose equilibrium with special pure minerals, solid solution end-member components (of specified mole fractions), and gases (of specified fugacities). The code evaluates the degree of disequilibrium in terms of the saturation index (SI = log Q/K) and the thermodynamic affinity (A = −2.303 RT log Q/K) for various reactions, such as mineral dissolution or oxidation-reduction in the aqueous solution itself. Individual values of Eh, pe, oxygen fugacity, and Ah (redox affinity) are computed for aqueous redox couples. Equilibrium fugacities are computed for gas species. The code is highly flexible in dealing with various parameters as either model inputs or outputs. The user can specify modification or substitution of equilibrium constants at run time by using options on the input file.
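The two disequilibrium measures quoted in the abstract translate directly into code. A minimal sketch (the SI gas constant value is assumed here; units and conventions are not taken from EQ3NR itself):

```python
import math

R = 8.31446  # gas constant, J/(mol K)

def saturation_index(Q, K):
    # SI = log10(Q/K): 0 at equilibrium, >0 supersaturated, <0 undersaturated
    return math.log10(Q / K)

def affinity(Q, K, T):
    # A = -2.303 R T log10(Q/K), in J/mol; positive A favors the forward
    # reaction (e.g. mineral precipitation for a dissolution reaction)
    return -2.303 * R * T * math.log10(Q / K)
```

For example, a reaction with activity product Q ten times its equilibrium constant K at 298.15 K has SI = 1 and a negative affinity of a few kJ/mol, i.e. the solution is supersaturated with respect to that mineral.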
Nada: A new code for studying self-gravitating tori around black holes
NASA Astrophysics Data System (ADS)
Montero, Pedro J.; Font, José A.; Shibata, Masaru
2008-09-01
We present a new two-dimensional numerical code called Nada, designed to solve the full Einstein equations coupled to the general relativistic hydrodynamics equations. The code is mainly intended for studies of self-gravitating accretion disks (or tori) around black holes, although it is also suitable for regular spacetimes. Concerning technical aspects, the Einstein equations are formulated and solved using the Baumgarte-Shapiro-Shibata-Nakamura reformulation of the standard 3+1 Arnowitt-Deser-Misner system. A key feature of the code is that derivative terms in the spacetime evolution equations are computed using a fourth-order centered finite difference approximation, in conjunction with the Cartoon method to impose the axisymmetry condition under Cartesian coordinates (the choice in Nada), and the puncture/moving puncture approach to carry out black hole evolutions. Correspondingly, the general relativistic hydrodynamics equations are written in flux-conservative form and solved with high-resolution shock-capturing schemes. We perform and discuss a number of tests to assess the accuracy and expected convergence of the code, namely (single) black hole evolutions, shock tubes, evolutions of both spherical and rotating relativistic stars in equilibrium, and the gravitational collapse of a spherical relativistic star leading to the formation of a black hole. In addition, paving the way for specific applications of the code, we also present results from fully general relativistic numerical simulations of a system formed by a black hole surrounded by a self-gravitating torus in equilibrium.
From GCode to STL: Reconstruct Models from 3D Printing as a Service
NASA Astrophysics Data System (ADS)
Baumann, Felix W.; Schuermann, Martin; Odefey, Ulrich; Pfeil, Markus
2017-12-01
The authors present a method to reverse engineer 3D printer specific machine instructions (GCode) to a point cloud representation and then to the STL (Stereolithography) file format. GCode is a machine code that is used for 3D printing among other applications, such as CNC routers. Such code files contain instructions for the 3D printer to move and control its actuator; in the case of Fused Deposition Modeling (FDM), this is the printhead that extrudes semi-molten plastic. The reverse engineering method presented here is based on digital simulation of the extrusion process of FDM-type 3D printing. The reconstructed models and point clouds do not account for hollow structures, such as holes or cavities. The implementation is performed in Python and relies on open source software and libraries, such as Matplotlib and OpenCV. The reconstruction is performed on the model's extrusion boundary and considers mechanical imprecision. The complete reconstruction mechanism is available as a RESTful (Representational State Transfer) Web service.
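The first step of such a reverse-engineering pipeline, turning GCode moves into a point cloud, can be sketched as below. This is a minimal illustration assuming a simple GCode dialect (modal G0/G1 coordinates, semicolon comments); the paper's actual implementation additionally simulates extrusion geometry and mechanical imprecision.

```python
import re

def gcode_to_points(lines):
    """Extract printhead positions from G0/G1 moves into a point cloud.
    Minimal sketch: tracks modal X/Y/Z, ignores extrusion, arcs, offsets."""
    pos = {"X": 0.0, "Y": 0.0, "Z": 0.0}
    points = []
    for line in lines:
        line = line.split(";")[0].strip()   # strip end-of-line comments
        if not line.startswith(("G0", "G1")):
            continue
        # axes not mentioned on a line keep their previous (modal) value
        for axis, val in re.findall(r"([XYZ])(-?\d+\.?\d*)", line):
            pos[axis] = float(val)
        points.append((pos["X"], pos["Y"], pos["Z"]))
    return points
```

From such a point cloud, a surface mesh (and hence an STL file) can then be reconstructed along the extrusion boundary.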
Development of a 1.5D plasma transport code for coupling to full orbit runaway electron simulations
NASA Astrophysics Data System (ADS)
Lore, J. D.; Del Castillo-Negrete, D.; Baylor, L.; Carbajal, L.
2017-10-01
A 1.5D (1D radial transport + 2D equilibrium geometry) plasma transport code is being developed to simulate runaway electron generation, mitigation, and avoidance by coupling to the full-orbit kinetic electron transport code KORC. The 1.5D code solves the time-dependent 1D flux surface averaged transport equations with sources for plasma density, pressure, and poloidal magnetic flux, along with the Grad-Shafranov equilibrium equation for the 2D flux surface geometry. Disruption mitigation is simulated by introducing an impurity neutral gas "pellet", with impurity densities and electron cooling calculated from ionization, recombination, and line emission rate coefficients. Rapid cooling of the electrons increases the resistivity, inducing an electric field which can be used as an input to KORC. The runaway electron current is then included in the parallel Ohm's law in the transport equations. The 1.5D solver will act as a driver for coupled simulations to model effects such as timescales for thermal quench, runaway electron generation, and pellet impurity mixtures for runaway avoidance. Current progress on the code and details of the numerical algorithms will be presented. Work supported by the US DOE under DE-AC05-00OR22725.
Kumar, Manoj; Vijayakumar, A; Rosen, Joseph
2017-09-14
We present a lensless, interferenceless incoherent digital holography technique based on the principle of coded aperture correlation holography. The digital hologram acquired by this technique contains a three-dimensional image of the observed scene. Light diffracted by a point object (pinhole) is modulated using a random-like coded phase mask (CPM), and the intensity pattern is recorded and composed as a point spread hologram (PSH). A library of PSHs is created using the same CPM by moving the pinhole to all possible axial locations. Intensity diffracted through the same CPM from an object placed within the axial limits of the PSH library is recorded by a digital camera. The recorded intensity this time is composed as the object hologram. The image of the object at any axial plane is reconstructed by cross-correlating the object hologram with the corresponding component of the PSH library. The reconstruction noise attached to the image is suppressed by various methods. The reconstruction results of multiplane and thick objects by this technique are compared with regular lens-based imaging.
Methods of evaluating the effects of coding on SAR data
NASA Technical Reports Server (NTRS)
Dutkiewicz, Melanie; Cumming, Ian
1993-01-01
It is recognized that mean square error (MSE) is not a sufficient criterion for determining the acceptability of an image reconstructed from data that has been compressed and decompressed using an encoding algorithm. In the case of Synthetic Aperture Radar (SAR) data, it is also deemed insufficient to display the reconstructed image (and perhaps the error image) alongside the original and make a (subjective) judgment as to the quality of the reconstructed data. In this paper we suggest a number of additional evaluation criteria which we feel should be included as evaluation metrics in SAR data encoding experiments. These criteria have been specifically chosen to provide a means of ensuring that the important information in the SAR data is preserved. The paper also presents the results of an investigation into the effects of coding on SAR data fidelity when the coding is applied in (1) the signal data domain, and (2) the image domain. An analysis of the results highlights the shortcomings of the MSE criterion, and shows which of the suggested additional criteria have been found to be most important.
Investigation of the n = 1 resistive wall modes in the ITER high-mode confinement
NASA Astrophysics Data System (ADS)
Zheng, L. J.; Kotschenreuther, M. T.; Valanju, P.
2017-06-01
The n = 1 resistive wall mode (RWM) stability of the ITER high-confinement mode is investigated with the bootstrap current included in the equilibrium, together with rotation and diamagnetic drift effects for stability. Here, n is the toroidal mode number. We use the CORSICA code for computing the free boundary equilibrium and the AEGIS code for stability. We find that the inclusion of the bootstrap current in the equilibrium is critical. It can reduce the local magnetic shear in the pedestal, so that infernal mode branches can develop. Consequently, the n = 1 modes become unstable without a stabilizing wall at a considerably lower beta limit, driven by the steep pressure gradient in the pedestal. Typical values of the wall position stabilize the ideal mode, but give rise to 'pedestal' resistive wall modes. We find that rotation can contribute a stabilizing effect on RWMs and that diamagnetic drift effects can further improve the stability in the co-current rotation case. But, generally speaking, the rotation stabilization effects are not as effective as in the case without the bootstrap current effects on the equilibrium. We also find that the diamagnetic drift effects are actually destabilizing when there is counter-current rotation.
radEq Add-On Module for CFD Solver Loci-CHEM
NASA Technical Reports Server (NTRS)
McCloud, Peter
2013-01-01
The radEq add-on module enables Loci-CHEM to be applied at flow velocities where surface radiation due to heating from compression and friction becomes significant. The module adds a radiation equilibrium boundary condition to the computational fluid dynamics (CFD) code to produce accurate results. The module expanded the upper limit for accurate CFD solutions of Loci-CHEM from Mach 4 to Mach 10, based on Space Shuttle Orbiter re-entry trajectories. Loci-CHEM already had a very promising architecture and performance, but the absence of a radiation equilibrium boundary condition limited its application to below Mach 4. The immediate advantage of the add-on module is that it allows Loci-CHEM to work with supersonic flows up to Mach 10. This transformed Loci-CHEM from a rocket-engine-heritage CFD code with general subsonic and low-supersonic applications into an aeroheating code with hypersonic applications. The follow-on advantage of the module is that it is a building block for additional add-on modules that will solve for the heating generated at Mach numbers higher than 10.
Energy spectrum of 208Pb(n,x) reactions
NASA Astrophysics Data System (ADS)
Tel, E.; Kavun, Y.; Özdoǧan, H.; Kaplan, A.
2018-02-01
Fission and fusion reactor technologies have been investigated worldwide since the 1950s. For reactor technology, studies of fission and fusion reactions play an important role in developing new-generation designs. In particular, neutron reaction studies have an important place in the development of nuclear materials, so neutron effects on materials should be studied both theoretically and experimentally to improve reactor design. Nuclear reaction codes are very useful tools when experimental data are unavailable; for such circumstances scientists have created many nuclear reaction codes, such as ALICE/ASH, CEM95, PCROSS, TALYS, GEANT, and FLUKA. In this study we used the ALICE/ASH, PCROSS and CEM95 codes to calculate the energy spectra of particles emitted from Pb bombarded by neutrons. While the Weisskopf-Ewing model has been used for the equilibrium process in the calculations, the full exciton, hybrid, and geometry-dependent hybrid nuclear reaction models have been used for the pre-equilibrium process. The calculated results are discussed and compared with experimental data taken from EXFOR.
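The Weisskopf-Ewing equilibrium emission mentioned above has the familiar evaporation form N(E) ∝ σ_inv(E) E exp(−E/T), where T is the nuclear temperature and σ_inv the inverse-reaction cross section. A toy sketch with an assumed constant inverse cross section (real codes use energy-dependent σ_inv and level-density systematics):

```python
import math

def weisskopf_spectrum(energies, T, sigma_inv=lambda e: 1.0):
    """Relative Weisskopf-Ewing evaporation spectrum
    N(E) ∝ sigma_inv(E) * E * exp(-E / T), T in the same energy units.
    Toy sketch; normalized so the peak value is 1."""
    n = [sigma_inv(e) * e * math.exp(-e / T) for e in energies]
    peak = max(n)
    return [v / peak for v in n]
```

With constant σ_inv the spectrum peaks at E = T, which is why measured evaporation spectra are often used to read off an effective nuclear temperature.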
Step patterns on vicinal reconstructed surfaces
NASA Astrophysics Data System (ADS)
Vilfan, Igor
1996-04-01
Step patterns on vicinal (2 × 1) reconstructed surfaces of the noble metals Au(110) and Pt(110), miscut towards the (100) orientation, are investigated. The free energy of the reconstructed surface with a network of crossing opposite steps is calculated in the strong chirality regime, when the steps cannot make overhangs. It is explained why the steps are not perpendicular to the direction of the miscut but instead form in equilibrium a network of crossing steps which makes the surface look like fish skin. The network formation is a consequence of competition between the (predominantly elastic) energy loss and the entropy gain. It is in agreement with recent scanning tunnelling microscopy observations on vicinal Au(110) and Pt(110) surfaces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lazerson, Samuel A.; Loizu, Joaquim; Hirshman, Steven
The VMEC nonlinear ideal MHD equilibrium code [S. P. Hirshman and J. C. Whitson, Phys. Fluids 26, 3553 (1983)] is compared against analytic linear ideal MHD theory in a screw-pinch-like configuration. The focus of this analysis is to verify the ideal MHD response at magnetic surfaces which possess magnetic transform (ι) that is resonant with spectral values of the perturbed boundary harmonics. A large-aspect-ratio, circular cross section, zero-beta equilibrium is considered. This equilibrium possesses a rational surface with safety factor q = 2 at a normalized flux value of 0.5. A small resonant boundary perturbation is introduced, exciting a response at the resonant rational surface. The code is found to capture the plasma response as predicted by a newly developed analytic theory that ensures the existence of nested flux surfaces by allowing for a jump in rotational transform (ι = 1/q). The VMEC code satisfactorily reproduces these theoretical results without the necessity of an explicit transform discontinuity (Δι) at the rational surface. It is found that the response across the rational surface depends upon both the radial grid resolution and the local shear (dι/dΦ, where ι is the rotational transform and Φ the enclosed toroidal flux). Calculations of an implicit Δι suggest that it does not arise from numerical artifacts (attributed to radial finite differences in VMEC) or from the existence conditions for flux surfaces predicted by linear theory (minimum values of Δι). Scans of the rotational transform profile indicate that for experimentally relevant levels of transform shear the response becomes increasingly localised. Furthermore, careful examination of a large experimental tokamak equilibrium with applied resonant fields indicates that this shielding response is present, suggesting the phenomenon is not limited to this verification exercise.
NASA Astrophysics Data System (ADS)
Malik, Matej; Grosheintz, Luc; Mendonça, João M.; Grimm, Simon L.; Lavie, Baptiste; Kitzmann, Daniel; Tsai, Shang-Min; Burrows, Adam; Kreidberg, Laura; Bedell, Megan; Bean, Jacob L.; Stevenson, Kevin B.; Heng, Kevin
2017-02-01
We present the open-source radiative transfer code named HELIOS, which is constructed for studying exoplanetary atmospheres. In its initial version, the model atmospheres of HELIOS are one-dimensional and plane-parallel, and the equation of radiative transfer is solved in the two-stream approximation with nonisotropic scattering. A small set of the main infrared absorbers is employed, computed with the opacity calculator HELIOS-K and combined using a correlated-k approximation. The molecular abundances originate from validated analytical formulae for equilibrium chemistry. We compare HELIOS with the work of Miller-Ricci & Fortney using a model of GJ 1214b, and perform several tests, where we find: model atmospheres with single-temperature layers struggle to converge to radiative equilibrium; k-distribution tables constructed with ≳ 0.01 cm^-1 resolution in the opacity function (≲ 10^3 points per wavenumber bin) may result in errors of ≳ 1%-10% in the synthetic spectra; and a diffusivity factor of 2 approximates well the exact radiative transfer solution in the limit of pure absorption. We construct "null-hypothesis" models (chemical equilibrium, radiative equilibrium, and solar elemental abundances) for six hot Jupiters. We find that the dayside emission spectra of HD 189733b and WASP-43b are consistent with the null hypothesis, while the null-hypothesis models consistently underpredict the observed fluxes of WASP-8b, WASP-12b, WASP-14b, and WASP-33b. We demonstrate that our results are somewhat insensitive to the choice of stellar models (blackbody, Kurucz, or PHOENIX) and metallicity, but are strongly affected by higher carbon-to-oxygen ratios. The code is publicly available as part of the Exoclimes Simulation Platform (exoclime.net).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shafer, Morgan W; Battaglia, D. J.; Unterberg, Ezekial A
A new tangential 2D Soft X-Ray Imaging System (SXRIS) is being designed to examine the edge magnetic island structure in the lower X-point region of DIII-D. A synthetic diagnostic calculation coupled to 3D emissivity estimates is used to generate phantom images. Phillips-Tikhonov regularization is used to invert the phantom images for comparison to the original emissivity model. Noise level, island size, and equilibrium accuracy are scanned to assess the feasibility of detecting edge island structures. Models of typical DIII-D discharges indicate integration times > 1 ms with accurate equilibrium reconstruction are needed for small island (< 3 cm) detection.
Song, Yang; Hamtaei, Ehsan; Sethi, Sean K; Yang, Guang; Xie, Haibin; Mark Haacke, E
2017-12-01
To introduce a new approach to reconstruct high-definition vascular images using COnstrained Data Extrapolation (CODE) and evaluate its capability in estimating vessel area and stenosis. CODE is based on the constraint that the full width at half maximum of a vessel can be accurately estimated and, since it represents the best estimate for the width of the object, higher k-space data can be generated from this information. To demonstrate the potential of extracting high-definition vessel edges using low resolution data, both simulated and human data were analyzed to better visualize the vessels and to quantify both area and stenosis measurements. The results from CODE using one-fourth of the fully sampled k-space data were compared with a compressed sensing (CS) reconstruction approach using the same total amount of data but spread out between the center of k-space and the outer portions of the original k-space to accelerate data acquisition by a factor of four. For a sufficiently high signal-to-noise ratio (SNR) such as 16 (8), we found that objects as small as 3 voxels in the 25% under-sampled data (6 voxels when zero-filled) could be used for CODE and CS and provide an estimate of area with an error <5% (10%). For estimating up to a 70% stenosis with an SNR of 4, CODE was found to be more robust to noise than CS, having a smaller variance albeit a larger bias. Reconstruction times were >200 (30) times faster for CODE compared to CS in the simulated (human) data. CODE was capable of producing sharp sub-voxel edges and accurately estimating stenosis to within 5% for clinically relevant studies of vessels with a width of at least 3 pixels in the low resolution images.
High resolution x-ray CMT: Reconstruction methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, J.K.
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically more computationally efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
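As a concrete instance of the iterative family the abstract describes, here is a minimal ART (Kaczmarz) sketch: the image estimate is repeatedly projected onto the hyperplane defined by each measured projection. The toy system matrix is an assumption for illustration; real codes model the full imaging geometry.

```python
# Hedged sketch of ART/Kaczmarz iteration: for each measurement a_i . x = b_i,
# project the current estimate x onto that hyperplane and repeat.
def art(A, b, sweeps=50, relax=1.0):
    x = [0.0] * len(A[0])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            norm2 = sum(a * a for a in a_i)
            resid = b_i - sum(a * v for a, v in zip(a_i, x))
            step = relax * resid / norm2
            x = [v + step * a for v, a in zip(x, a_i)]
    return x
```

This is the "mechanism to iteratively improve image estimates" in its simplest form; production codes add system modelling, relaxation schedules, and regularization.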
NASA Astrophysics Data System (ADS)
Mann, Stephen
2009-10-01
Understanding how chemically derived processes control the construction and organization of matter across extended and multiple length scales is of growing interest in many areas of materials research. Here we review present equilibrium and non-equilibrium self-assembly approaches to the synthetic construction of discrete hybrid (inorganic-organic) nano-objects and higher-level nanostructured networks. We examine a range of synthetic modalities under equilibrium conditions that give rise to integrative self-assembly (supramolecular wrapping, nanoscale incarceration and nanostructure templating) or higher-order self-assembly (programmed/directed aggregation). We contrast these strategies with processes of transformative self-assembly that use self-organizing media, reaction-diffusion systems and coupled mesophases to produce higher-level hybrid structures under non-equilibrium conditions. Key elements of the constructional codes associated with these processes are identified with regard to existing theoretical knowledge, and presented as a heuristic guideline for the rational design of hybrid nano-objects and nanomaterials.
Fundamental Limits of Delay and Security in Device-to-Device Communication
2013-01-01
systematic MDS (maximum distance separable) codes and random binning strategies that achieve a Pareto optimal delay-reconstruction tradeoff. The erasure MD setup is then used to propose a… …file, and a coding scheme based on erasure compression and Slepian-Wolf binning is presented. The coding scheme is shown to provide a Pareto optimal…
Intermittent Fermi-Pasta-Ulam Dynamics at Equilibrium
NASA Astrophysics Data System (ADS)
Campbell, David; Danieli, Carlo; Flach, Sergej
The equilibrium value of an observable defines a manifold in the phase space of an ergodic and equipartitioned many-body system. A typical trajectory pierces that manifold infinitely often as time goes to infinity. We use these piercings to measure both the relaxation time of the lowest-frequency eigenmode of the Fermi-Pasta-Ulam chain and the fluctuations of the subsequent dynamics in equilibrium. We show that previously obtained scaling laws for equipartition times are modified at low energy density due to an unexpected slowing down of the relaxation. The dynamics in equilibrium is characterized by a power-law distribution of excursion times far from equilibrium, with diverging variance. The long excursions arise from sticky dynamics close to regular orbits in phase space. Our method is generalizable to large classes of many-body systems. The authors acknowledge financial support from IBS (Project Code IBS-R024-D1).
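The piercing idea above can be sketched numerically: given a time series of an observable, record the times at which it crosses its equilibrium (long-time mean) value; the gaps between crossings are the excursion times whose statistics the abstract analyzes. The sinusoidal series below is purely illustrative, not FPU dynamics.

```python
import math

# Hedged sketch: detect crossings of an observable through its equilibrium
# value by sign changes, refining each crossing time by linear interpolation.
def piercings(times, values, mean):
    cross = []
    for i in range(len(values) - 1):
        v0, v1 = values[i] - mean, values[i + 1] - mean
        if v0 * v1 < 0:                       # sign change -> a piercing
            frac = -v0 / (v1 - v0)            # linear interpolation to zero
            cross.append(times[i] + frac * (times[i + 1] - times[i]))
    return cross

# Illustrative series: sin(t) pierces its mean (0) at multiples of pi.
ts = [0.01 * k for k in range(1, 1001)]       # t in (0, 10]
vs = [math.sin(t) for t in ts]
cr = piercings(ts, vs, 0.0)
```

Differences of successive entries of `cr` give the excursion times whose distribution is studied above.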
Morphology of the utricular otolith organ in the toadfish, Opsanus tau.
Boyle, Richard; Ehsanian, Reza; Mofrad, Alireza; Popova, Yekaterina; Varelas, Joseph
2018-06-15
The utricle provides the vestibular reflex pathways with the sensory codes of inertial acceleration of self-motion and head orientation with respect to gravity to control balance and equilibrium. Here we present an anatomical description of this structure in the adult oyster toadfish and establish a morphological basis for interpretation of subsequent functional studies. Light, scanning, and transmission electron microscopy techniques were applied to visualize the sensory epithelium at varying levels of detail, its neural innervation, and its synaptic organization. Scanning electron microscopy was used to visualize the otolith mass and the morphological polarization patterns of hair cells. Afferent nerve fibers were visualized following labeling with biocytin, and light microscope images were used to make three-dimensional (3-D) reconstructions of individual labeled afferents to identify dendritic morphology with respect to epithelial location. Transmission electron micrographs were compiled to create a serial 3-D reconstruction of a labeled afferent over a segment of its dendritic field and to examine the cell-afferent synaptic contacts. The major observations are: a well-defined striola, with medial and lateral extra-striolar regions showing a zonal organization of hair bundles; a prominent lacinia projecting laterally; dependence of hair cell density on macular location; narrow afferent dendritic fields that follow the hair bundle polarization; synaptic specializations issued by afferents that are typically directed towards a limited number of 7-13 hair cells, although larger dendritic fields in the medial extra-striola can be associated with more than 20 hair cells; and hair cell synaptic bodies that can be confined to an individual afferent or can synapse upon several afferents. © 2018 Wiley Periodicals, Inc.
Epoch of Reionization : An Investigation of the Semi-Analytic 21CMMC Code
NASA Astrophysics Data System (ADS)
Miller, Michelle
2018-01-01
After the Big Bang the universe was filled with neutral hydrogen that began to cool and collapse into the first structures. These first stars and galaxies began to emit radiation that eventually ionized all of the neutral hydrogen in the universe. 21CMMC is a semi-numerical code that takes simulated boxes of this ionized universe from another code called 21cmFAST. Mock measurements are taken from the simulated boxes in 21cmFAST. These measurements are fed into 21CMMC to determine three major parameters of this simulated universe: virial temperature, mean free path, and ionization efficiency. My project tests the robustness of 21CMMC on universe simulations other than 21cmFAST, to see whether 21CMMC can properly reconstruct early-universe parameters given a mock "measurement" in the form of power spectra. We determine that while two of the three EoR parameters (virial temperature and ionization efficiency) can be reconstructed reasonably well, the mean free path parameter is the least robust. This motivates further development of the 21CMMC code.
Tracking Equilibrium and Nonequilibrium Shifts in Data with TREND.
Xu, Jia; Van Doren, Steven R
2017-01-24
Principal component analysis (PCA) discovers patterns in multivariate data that include spectra, microscopy, and other biophysical measurements. Direct application of PCA to crowded spectra, images, and movies (without selecting peaks or features) was shown recently to identify their equilibrium or temporal changes. To enable the community to utilize these capabilities with a wide range of measurements, we have developed multiplatform software named TREND to Track Equilibrium and Nonequilibrium population shifts among two-dimensional Data frames. TREND can also carry this out by independent component analysis. We highlight a few examples of finding concurrent processes. TREND extracts dual phases of binding to two sites directly from the NMR spectra of the titrations. In a cardiac movie from magnetic resonance imaging, TREND resolves principal components (PCs) representing breathing and the cardiac cycle. TREND can also reconstruct the series of measurements from selected PCs, as illustrated for a biphasic, NMR-detected titration and the cardiac MRI movie. Fidelity of reconstruction of series of NMR spectra or images requires more PCs than needed to plot the largest population shifts. TREND reads spectra from many spectroscopies in the most common formats (JCAMP-DX and NMR) and multiple movie formats. The TREND package thus provides convenient tools to resolve the processes recorded by diverse biophysical methods. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
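The core of a TREND-style analysis can be sketched directly: stack the series of data frames as rows, centre them, extract the leading principal component, and project each frame onto it to read off the population shift. Power iteration stands in here for a full SVD, and the two-state synthetic frames are an assumption for illustration, not spectra.

```python
# Hedged sketch of tracking a population shift across a series of frames
# via the leading principal component (power iteration on X^T X).
def leading_pc_scores(frames, iters=100):
    n, d = len(frames), len(frames[0])
    means = [sum(f[j] for f in frames) / n for j in range(d)]
    X = [[f[j] - means[j] for j in range(d)] for f in frames]   # centred data
    v = [float(j + 1) for j in range(d)]                        # arbitrary start
    for _ in range(iters):
        y = [sum(X[i][j] * v[j] for j in range(d)) for i in range(n)]
        w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    # scores: projection of each centred frame onto the leading PC
    return [sum(X[i][j] * v[j] for j in range(d)) for i in range(n)]

# Synthetic titration: frames interpolate between two pure states s1 -> s2.
s1, s2 = [1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]
frames = [[(1 - k / 9) * a + (k / 9) * b for a, b in zip(s1, s2)]
          for k in range(10)]
scores = leading_pc_scores(frames)
```

For a clean two-state transition the scores vary monotonically along the series, which is the "population shift" curve the software plots; sign is arbitrary, as with any PC.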
NASA Technical Reports Server (NTRS)
Chambers, Lin Hartung
1994-01-01
The theory for radiation emission, absorption, and transfer in a thermochemical nonequilibrium flow is presented. The expressions developed reduce correctly to the limit at equilibrium. To implement the theory in a practical computer code, some approximations are used, particularly the smearing of molecular radiation. Details of these approximations are presented and helpful information is included concerning the use of the computer code. This user's manual should benefit both occasional users of the Langley Optimized Radiative Nonequilibrium (LORAN) code and those who wish to use it to experiment with improved models or properties.
Experimental Study of Super-Resolution Using a Compressive Sensing Architecture
2015-03-01
…Intelligence 24(9), 1167–1183 (2002). [3] Lin, Z. and Shum, H.-Y., "Fundamental limits of reconstruction-based superresolution algorithms under local…" …IEEE Transactions on…, 52, 1289–1306 (April 2006). [9] Marcia, R. and Willett, R., "Compressive coded aperture superresolution image reconstruction," in…
Dharmaraj, Christopher D; Thadikonda, Kishan; Fletcher, Anthony R; Doan, Phuc N; Devasahayam, Nallathamby; Matsumoto, Shingo; Johnson, Calvin A; Cook, John A; Mitchell, James B; Subramanian, Sankaran; Krishna, Murali C
2009-01-01
Three-dimensional Oximetric Electron Paramagnetic Resonance Imaging using the Single Point Imaging modality generates unpaired spin density and oxygen images that can readily distinguish between normal and tumor tissues in small animals. It is also possible with fast imaging to track the changes in tissue oxygenation in response to the oxygen content in the breathing air. However, this involves dealing with gigabytes of data for each 3D oximetric imaging experiment, involving digital band-pass filtering and background noise subtraction, followed by 3D Fourier reconstruction. This process is rather slow in a conventional uniprocessor system. This paper presents a parallelization framework using OpenMP runtime support and parallel MATLAB to execute such computationally intensive programs. The Intel compiler is used to develop a parallel C++ code based on OpenMP. The code is executed on four Dual-Core AMD Opteron shared-memory processors to reduce the computational burden of the filtration task significantly. The results show that the parallel code for filtration achieves a speedup factor of 46.66 compared with the equivalent serial MATLAB code. In addition, a parallel MATLAB code has been developed to perform 3D Fourier reconstruction. Speedup factors of 4.57 and 4.25 have been achieved during the reconstruction process and the oximetry computation, for a data set with 23 x 23 x 23 gradient steps. The execution time has been computed for both the serial and parallel implementations using different dimensions of the data and presented for comparison. The reported system has been designed to be easily accessible even from low-cost personal computers through the local network (NIHnet). The experimental results demonstrate that parallel computing provides a source of high computational power to obtain biophysical parameters from 3D EPR oximetric imaging, almost in real time.
HO-CHUNK: Radiation Transfer code
NASA Astrophysics Data System (ADS)
Whitney, Barbara A.; Wood, Kenneth; Bjorkman, J. E.; Cohen, Martin; Wolff, Michael J.
2017-11-01
HO-CHUNK calculates radiative equilibrium temperature solution, thermal and PAH/vsg emission, scattering and polarization in protostellar geometries. It is useful for computing spectral energy distributions (SEDs), polarization spectra, and images.
Hoffman, John; Young, Stefano; Noo, Frédéric; McNitt-Gray, Michael
2016-03-01
With growing interest in quantitative imaging, radiomics, and CAD using CT imaging, the need to explore the impacts of acquisition and reconstruction parameters has grown. Doing so usually requires extensive access to the scanner on which the data were acquired, and scanner workflows are not designed for large-scale reconstruction projects. Therefore, the authors have developed a freely available, open-source software package implementing a common reconstruction method, weighted filtered backprojection (wFBP), for helical fan-beam CT applications. FreeCT_wFBP is a low-dependency, GPU-based reconstruction program utilizing C for the host code and Nvidia CUDA C for the GPU code. The software is capable of reconstructing helical scans acquired with arbitrary pitch values, and sampling techniques such as flying focal spots and a quarter-detector offset. In this work, the software is described and evaluated for reconstruction speed, image quality, and accuracy. Speed was evaluated based on acquisitions of the ACR CT accreditation phantom under four different flying-focal-spot configurations. Image quality was assessed using the same phantom by evaluating CT number accuracy, uniformity, and contrast-to-noise ratio (CNR). Finally, reconstructed mass-attenuation coefficient accuracy was evaluated using a simulated scan of a FORBILD thorax phantom and comparing reconstructed values to the known phantom values. The average reconstruction time evaluated under all flying-focal-spot configurations was found to be 17.4 ± 1.0 s for a 512 row × 512 column × 32 slice volume. Reconstructions of the ACR phantom were found to meet all CT Accreditation Program criteria, including the CT number, CNR, and uniformity tests. Finally, reconstructed mass-attenuation coefficient values of water within the FORBILD thorax phantom agreed with original phantom values to within 0.0001 mm(2)/g (0.01%).
FreeCT_wFBP is a fast, highly configurable reconstruction package for third-generation CT available under the GNU GPL. It shows good performance with both clinical and simulated data.
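wFBP itself is a weighted method for helical cone-beam geometry; as a hedged sketch of the filtered-backprojection principle it builds on, here is a tiny 2-D parallel-beam version with a Ram-Lak convolution kernel and nearest-neighbour backprojection. The point-impulse sinogram is synthetic, and none of this reflects the FreeCT_wFBP implementation.

```python
import math

def ramlak(half):
    """Discrete Ram-Lak (ramp) convolution kernel, unit detector spacing."""
    h = [0.0] * (2 * half + 1)
    h[half] = 0.25
    for k in range(1, half + 1, 2):              # odd offsets; even ones are 0
        h[half - k] = h[half + k] = -1.0 / (math.pi * k) ** 2
    return h

def fbp(sino, angles):
    """Filter each projection, then backproject; returns a pixel(x, y) function."""
    n_det = len(sino[0])
    half = n_det
    h = ramlak(half)
    filt = [[sum(p[k] * h[s - k + half] for k in range(n_det))
             for s in range(n_det)] for p in sino]
    c = n_det // 2                               # detector bin of rotation axis
    def pixel(x, y):
        acc = 0.0
        for th, q in zip(angles, filt):
            s = int(round(x * math.cos(th) + y * math.sin(th))) + c
            if 0 <= s < n_det:
                acc += q[s]
        return acc * math.pi / len(angles)
    return pixel

# Synthetic sinogram of a point at the rotation axis: a spike in every view.
n_det = 33
angles = [k * math.pi / 18 for k in range(18)]
sino = [[1.0 if s == n_det // 2 else 0.0 for s in range(n_det)] for _ in angles]
pixel = fbp(sino, angles)
```

The reconstruction peaks at the origin, where the impulse lives; a production code like the one above adds helical weighting, rebinning, and GPU parallelism around this same core.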
Chung, Kevin C.; Song, Jae W.; Shauver, Melissa J.; Cullison, Terry M.; Noone, R. Barrett
2011-01-01
Background: To evaluate the case mix of plastic surgeons in their early years of practice by examining candidate case logs submitted for the Oral Examination. Methods: De-identified data from 2000–2009, consisting of case logs submitted by young plastic surgery candidates for the Oral Examination, were analyzed. Data consisted of exam year, CPT (Current Procedural Terminology) codes, the designation of each CPT code as cosmetic or reconstructive by the candidate, and patient age and gender. Subgroup analyses for the comprehensive, cosmetic, craniomaxillofacial, and hand surgery modules were performed using the CPT code list designated by the American Board of Plastic Surgery Maintenance of Certification in Plastic Surgery module framework. Results: We examined case logs from a yearly average of 261 candidates over 10 years. Wider variations in yearly percent change in median cosmetic surgery case volumes (−62.5% to 30%) were observed when compared to the reconstructive surgery case volumes (−18.0% to 25.7%). Compared to cosmetic surgery cases per candidate, which varied significantly from year to year (p<0.0001), reconstructive surgery cases per candidate did not vary significantly (p=0.954). Subgroup analyses of the proportions of types of surgical procedures, based on CPT code categories, revealed hand surgery to be the least performed relative to comprehensive, craniomaxillofacial, and cosmetic surgery procedures. Conclusions: Graduates of plastic surgery training programs are committed to performing a broad spectrum of reconstructive and cosmetic surgical procedures in their first year of practice. However, hand surgery continues to have a small presence in the practice profiles of young plastic surgeons. PMID:21788850
Liquid rocket combustor computer code development
NASA Technical Reports Server (NTRS)
Liang, P. Y.
1985-01-01
The Advanced Rocket Injector/Combustor Code (ARICC), developed to model the complete chemical/fluid/thermal processes occurring inside rocket combustion chambers, is highlighted. The code, derived from the CONCHAS-SPRAY code originally developed at Los Alamos National Laboratory, incorporates powerful features such as the ability to model complex injector combustion chamber geometries, Lagrangian tracking of droplets, full chemical equilibrium and kinetic reactions for multiple species, a fractional volume of fluid (VOF) description of liquid jet injection in addition to the gaseous-phase fluid dynamics, and turbulent mass, energy, and momentum transport. Atomization and droplet dynamics models from earlier-generation codes are transplanted into the present code. Currently, ARICC is specialized for liquid oxygen/hydrogen propellants, although other fuel/oxidizer pairs can be easily substituted.
Funding analysis of bilateral autologous free-flap breast reconstructions in Australia.
Sinha, Shiba; Ruskin, Olivia; McCombe, David; Morrison, Wayne; Webb, Angela
2015-08-01
Bilateral breast reconstructions are being increasingly performed. Autologous free-flap reconstructions represent the gold standard for post-mastectomy breast reconstruction but are resource intensive. This study aims to investigate the difference between hospital reimbursement and true cost of bilateral autologous free-flap reconstructions. Retrospective analysis of patients who underwent bilateral autologous free-flap reconstructions at a single Australian tertiary referral centre was performed. Hospital reimbursement was determined from coding analysis. A true cost analysis was also performed. Comparisons were made considering the effect of timing, indication and complications of the procedure. Forty-six bilateral autologous free-flap procedures were performed (87 deep inferior epigastric perforators (DIEPs), four superficial inferior epigastric artery perforator flaps (SIEAs) and one muscle-sparing free transverse rectus abdominis myocutaneous flap (MS-TRAM)). The mean funding discrepancy between hospital reimbursement and actual cost was $12,137 ± $8539 (mean ± standard deviation (SD)) (n = 46). Twenty-four per cent (n = 11) of the cases had been coded inaccurately. If these cases were excluded from analysis, the mean funding discrepancy per case was $9168 ± $7453 (n = 35). Minor and major complications significantly increased the true cost and funding discrepancy (p = 0.02). Bilateral free-flap breast reconstructions performed in Australian public hospitals result in a funding discrepancy. Failure to be economically viable threatens the provision of this procedure in the public system. Plastic surgeons and hospital managers need to adopt measures in order to make these gold-standard procedures cost neutral. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Development of a 3-D upwind PNS code for chemically reacting hypersonic flowfields
NASA Technical Reports Server (NTRS)
Tannehill, J. C.; Wadawadigi, G.
1992-01-01
Two new parabolized Navier-Stokes (PNS) codes were developed to compute the three-dimensional, viscous, chemically reacting flow of air around hypersonic vehicles such as the National Aero-Space Plane (NASP). The first code (TONIC) solves the gas dynamic and species conservation equations in a fully coupled manner using an implicit, approximately factored, central-difference algorithm. This code was upgraded to include shock fitting and the capability of computing the flow around complex body shapes. The revised TONIC code was validated by computing the chemically reacting (M∞ = 25.3) flow around a 10 deg half-angle cone at various angles of attack and the Ames All-Body model at 0 deg angle of attack. The results of these calculations were in good agreement with the results from the UPS code. One of the major drawbacks of the TONIC code is that the central differencing of fluxes across interior flowfield discontinuities tends to introduce errors into the solution in the form of local flow property oscillations. The second code (UPS), originally developed for a perfect gas, has been extended to permit perfect gas, equilibrium air, or nonequilibrium air computations. The code solves the PNS equations using a finite-volume, upwind TVD method based on Roe's approximate Riemann solver, modified to account for real-gas effects. The dissipation term associated with this algorithm is sufficiently adaptive to flow conditions that, even when attempting to capture very strong shock waves, no additional smoothing is required. For nonequilibrium calculations, the code solves the fluid dynamic and species continuity equations in a loosely coupled manner. This code was used to calculate the hypersonic, laminar flow of chemically reacting air over cones at various angles of attack.
In addition, the flow around the McDonnell Douglas generic option blended-wing-body was computed, and comparisons were made between the perfect gas, equilibrium air, and nonequilibrium air results.
Bahlman, Joseph W.; Swartz, Sharon M.; Riskin, Daniel K.; Breuer, Kenneth S.
2013-01-01
Gliding is an efficient form of travel found in every major group of terrestrial vertebrates. Gliding is often modelled in equilibrium, where aerodynamic forces exactly balance body weight resulting in constant velocity. Although the equilibrium model is relevant for long-distance gliding, such as soaring by birds, it may not be realistic for shorter distances between trees. To understand the aerodynamics of inter-tree gliding, we used direct observation and mathematical modelling. We used videography (60–125 fps) to track and reconstruct the three-dimensional trajectories of northern flying squirrels (Glaucomys sabrinus) in nature. From their trajectories, we calculated velocities, aerodynamic forces and force coefficients. We determined that flying squirrels do not glide at equilibrium, and instead demonstrate continuously changing velocities, forces and force coefficients, and generate more lift than needed to balance body weight. We compared observed glide performance with mathematical simulations that use constant force coefficients, a characteristic of equilibrium glides. Simulations with varying force coefficients, such as those of live squirrels, demonstrated better whole-glide performance compared with the theoretical equilibrium state. Using results from both the observed glides and the simulation, we describe the mechanics and execution of inter-tree glides, and then discuss how gliding behaviour may relate to the evolution of flapping flight. PMID:23256188
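The force-reconstruction step described above can be sketched simply: differentiate the tracked positions twice (central differences) and subtract gravity, so that the aerodynamic force is F_aero = m(a − g). The mass, frame interval, and ballistic test trajectory below are made-up numbers, not the squirrel data.

```python
# Hedged sketch of recovering aerodynamic force from a tracked 2-D trajectory.
G = -9.81  # m/s^2, z axis up

def aero_forces(xs, zs, dt, mass):
    """Central-difference accelerations minus gravity -> per-frame aero force."""
    out = []
    for i in range(1, len(xs) - 1):
        ax = (xs[i + 1] - 2 * xs[i] + xs[i - 1]) / dt ** 2
        az = (zs[i + 1] - 2 * zs[i] + zs[i - 1]) / dt ** 2
        out.append((mass * ax, mass * (az - G)))
    return out

# Sanity check: a purely ballistic trajectory has zero aerodynamic force.
dt, m = 0.01, 0.2
ts = [dt * k for k in range(101)]
xs = [3.0 * t for t in ts]
zs = [0.5 * G * t * t for t in ts]
forces = aero_forces(xs, zs, dt, m)
```

For a real glide, the per-frame force (rotated into lift and drag along the velocity) and dynamic pressure give the time-varying force coefficients the study analyzes.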
High dynamic range coding imaging system
NASA Astrophysics Data System (ADS)
Wu, Renfan; Huang, Yifan; Hou, Guangqi
2014-10-01
We present a high dynamic range (HDR) imaging system design scheme based on the coded aperture technique. This scheme yields HDR images with extended depth of field. We adopt a sparse coding algorithm to design the coded patterns. We then use the sensor unit to acquire coded images under different exposure settings. Guided by the multiple exposure parameters, a series of low dynamic range (LDR) coded images is reconstructed. We use existing algorithms to fuse those LDR images into an HDR image for display. We build an optical simulation model and obtain simulation images to verify the novel system.
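The multi-exposure fusion step can be sketched generically: each LDR pixel observed at exposure time t estimates scene radiance as I/t, and the estimates are blended with a weight favouring mid-range (well-exposed) values. The linear camera model and hat weight below are illustrative assumptions, not the paper's pipeline.

```python
# Hedged sketch of weighted multi-exposure HDR fusion.
def fuse(exposures):
    """exposures: list of (intensity_image, exposure_time); images same size,
    intensities normalized to [0, 1]. Returns a radiance estimate per pixel."""
    h, w = len(exposures[0][0]), len(exposures[0][0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for img, t in exposures:
                v = img[y][x]
                wgt = 1.0 - abs(2.0 * v - 1.0)   # hat weight: peak at mid-gray
                num += wgt * v / t               # radiance estimate from this LDR
                den += wgt
            out[y][x] = num / den if den > 0 else 0.0
    return out

# Two exposures of the same toy scene, linear camera clipped at 1.0.
ldr = lambda L, t: min(L * t, 1.0)
scene = [[0.3, 0.45]]
img1 = [[ldr(v, 1.0) for v in row] for row in scene]
img2 = [[ldr(v, 2.0) for v in row] for row in scene]
hdr = fuse([(img1, 1.0), (img2, 2.0)])
```

With no clipping the fused values reproduce the scene radiances exactly; with clipping, the hat weight suppresses saturated pixels so the shorter exposure dominates there.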
NASA Technical Reports Server (NTRS)
Bertin, J. J.; Graumann, B. W.
1973-01-01
Numerical codes were developed to calculate the two-dimensional flow field that results when supersonic flow encounters double-wedge configurations whose angles are such that a Type IV shock-interference pattern occurs. The flow field model included the shock interaction phenomena for a delta-wing orbiter. Two numerical codes were developed: one which used the perfect gas relations, and a second which incorporated a Mollier table to define equilibrium air properties. The two codes were used to generate theoretical surface pressure and heat transfer distributions for velocities from 3,821 feet per second to an entry condition of 25,000 feet per second.
Data Parallel Line Relaxation (DPLR) Code User Manual: Acadia - Version 4.01.1
NASA Technical Reports Server (NTRS)
Wright, Michael J.; White, Todd; Mangini, Nancy
2009-01-01
Data-Parallel Line Relaxation (DPLR) code is a computational fluid dynamic (CFD) solver that was developed at NASA Ames Research Center to help mission support teams generate high-value predictive solutions for hypersonic flow field problems. The DPLR Code Package is an MPI-based, parallel, full three-dimensional Navier-Stokes CFD solver with generalized models for finite-rate reaction kinetics, thermal and chemical non-equilibrium, accurate high-temperature transport coefficients, and ionized flow physics incorporated into the code. DPLR also includes a large selection of generalized realistic surface boundary conditions and links to enable loose coupling with external thermal protection system (TPS) material response and shock layer radiation codes.
Evolutionary computation applied to the reconstruction of 3-D surface topography in the SEM.
Kodama, Tetsuji; Li, Xiaoyuan; Nakahira, Kenji; Ito, Dai
2005-10-01
A genetic algorithm has been applied to the line profile reconstruction from the signals of the standard secondary electron (SE) and/or backscattered electron detectors in a scanning electron microscope. This method solves the topographical surface reconstruction problem as one of combinatorial optimization. To extend this optimization approach for three-dimensional (3-D) surface topography, this paper considers the use of a string coding where a 3-D surface topography is represented by a set of coordinates of vertices. We introduce the Delaunay triangulation, which attains the minimum roughness for any set of height data to capture the fundamental features of the surface being probed by an electron beam. With this coding, the strings are processed with a class of hybrid optimization algorithms that combine genetic algorithms and simulated annealing algorithms. Experimental results on SE images are presented.
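The GA formulation can be sketched as follows: a candidate surface is a string of vertex heights, evolved by selection, one-point crossover, and point mutation. The fitness here compares heights directly against a target profile standing in for the simulated SE signal, which is an assumption for illustration; the real method scores candidates through an electron-signal model, and the hybrid version adds simulated annealing.

```python
import random

# Hedged sketch of a GA over height strings (all parameters are made up).
random.seed(0)
TARGET = [3, 1, 4, 1, 5, 9, 2, 6]   # stand-in for the profile implied by the signal
LEVELS = 10                          # quantized height levels per vertex

def fitness(ind):
    """Lower is better: squared mismatch against the target profile."""
    return sum((a - b) ** 2 for a, b in zip(ind, TARGET))

def evolve(pop_size=40, gens=60, mut=0.1):
    pop = [[random.randrange(LEVELS) for _ in TARGET] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    first_err = fitness(best)
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]         # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, len(TARGET))
            child = p1[:cut] + p2[cut:]          # one-point crossover
            child = [random.randrange(LEVELS) if random.random() < mut else g
                     for g in child]             # point mutation
            children.append(child)
        pop = survivors + children
        best = min(best, min(pop, key=fitness), key=fitness)
    return first_err, best
```

Because the best individual is tracked across generations, the final error never exceeds the initial one; replacing the direct comparison with a simulated-SE-signal mismatch recovers the paper's formulation.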
Post-flight trajectory reconstruction of suborbital free-flyers using GPS raw data
NASA Astrophysics Data System (ADS)
Ivchenko, N.; Yuan, Y.; Linden, E.
2017-08-01
This paper describes the reconstruction of post-flight trajectories of suborbital free-flying units using logged GPS raw data. We formulated the reconstruction as a global least-squares optimization problem, using both the pseudo-range and Doppler observables, and solved it with the trust-region-reflective algorithm, which enabled navigation solutions of high accuracy. The code tracking was implemented with a large number of correlators and least-squares curve fitting, in order to improve the precision of the code start times, while a more conventional phase-locked loop was used for Doppler tracking. We propose a weighting scheme to account for fast signal-strength variation due to the free-flyer's fast rotation, and a jerk penalty to achieve a smooth solution. We applied these methods to flight data from two suborbital free-flying units launched on the REXUS 12 sounding rocket, reconstructing the trajectory, receiver clock error, and wind-up rates. The trajectory exhibits a parabola with apogee around 80 km, and the velocity profile shows the details of payload wobbling. The wind-up rates obtained match the measurements from onboard angular rate sensors.
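The pseudo-range part of the least-squares problem can be sketched in isolation: solve for receiver position and clock bias from ranges rho_i = |s_i − p| + c·dt. The satellite positions and measurements below are fabricated (units: km, with the clock bias folded into range units), and a plain Gauss-Newton loop stands in for the trust-region-reflective solver used in the paper.

```python
import math

# Hedged sketch: snapshot pseudo-range positioning by Gauss-Newton.
SATS = [(15600.0, 7540.0, 20140.0),
        (18760.0, 2750.0, 18610.0),
        (17610.0, 14630.0, 13480.0),
        (19170.0, 610.0, 18390.0)]          # fabricated satellite positions, km
TRUE_POS, TRUE_BIAS = (1.0, 2.0, 3.0), 0.5  # fabricated truth (km, range units)

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def solve_lin(M, v):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def fit_position(rho, iters=12):
    est = [0.0, 0.0, 0.0, 0.0]              # x, y, z, clock bias
    for _ in range(iters):
        J, r = [], []
        for s, m in zip(SATS, rho):
            d = dist(s, est[:3])
            J.append([(est[k] - s[k]) / d for k in range(3)] + [1.0])
            r.append(m - (d + est[3]))
        JT = [list(col) for col in zip(*J)]
        JTJ = [[sum(a * b for a, b in zip(r1, r2)) for r2 in JT] for r1 in JT]
        JTr = [sum(a * b for a, b in zip(row, r)) for row in JT]
        est = [e + d for e, d in zip(est, solve_lin(JTJ, JTr))]
    return est

rho = [dist(s, TRUE_POS) + TRUE_BIAS for s in SATS]   # noise-free measurements
est = fit_position(rho)
```

The full problem in the paper additionally stacks Doppler residuals and the jerk penalty over the whole flight into one global least-squares system.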
High-accuracy 3D measurement system based on multi-view and structured light
NASA Astrophysics Data System (ADS)
Li, Mingyue; Weng, Dongdong; Li, Yufeng; Zhang, Longbin; Zhou, Haiyun
2013-12-01
3D surface reconstruction is one of the most important topics in Spatial Augmented Reality (SAR). Structured light offers a simple and rapid way to reconstruct objects. In order to improve the precision of 3D reconstruction, we present a high-accuracy multi-view 3D measurement system based on Gray code and phase shift. We use a camera and a light projector that casts structured light patterns onto the objects. In this system, we use only one camera to take photos of the left and right sides of the object, respectively. In addition, we use VisualSFM to recover the relationships between the viewpoints, so explicit camera calibration can be omitted and camera placement is no longer restricted. We also set an appropriate exposure time to make the scenes covered by the Gray-code patterns more recognizable. All of the points above make the reconstruction more precise. We ran experiments on different kinds of objects, and a large number of experimental results verify the feasibility and high accuracy of the system.
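The two decoding steps behind a Gray-code + phase-shift system can be sketched directly: the Gray-coded stripe bits give a coarse absolute stripe index, and a four-step phase shift refines the position within the stripe. Pattern parameters below are illustrative assumptions.

```python
import math

# Hedged sketch of structured-light decoding.
def gray_to_index(bits):
    """Decode a Gray-code bit list (MSB first) to a stripe index."""
    idx = 0
    for b in bits:
        idx = (idx << 1) | (b ^ (idx & 1))   # b_i = g_i XOR previous binary bit
    return idx

def four_step_phase(i1, i2, i3, i4):
    """Recover phase from four shifted intensities I_k = A + B*cos(phi + (k-1)*pi/2)."""
    return math.atan2(i4 - i2, i1 - i3)
```

The stripe index anchors the wrapped phase absolutely, so the combined code gives sub-stripe projector coordinates for triangulation against the calibrated (or SfM-recovered) camera poses.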
Validation of the Chemistry Module for the Euler Solver in Unified Flow Solver
2012-03-01
…traveling through the atmosphere, there are three flow regimes: the first is the continuum regime, the second is the rarefied regime, and… The second method has been used in a program called Unified Flow Solver (UFS). UFS is currently being developed under collaborative efforts with the Air… …thermal non-equilibrium case and finally to a thermo-chemical non-equilibrium case. The data from the simulations will be compared to a second code…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilcox, R. S.; Wingen, Andreas; Cianciosa, Mark R.
Some recent experimental observations have found turbulent fluctuation structures that are non-axisymmetric in a tokamak with applied 3D fields. Here, two fluid resistive effects are shown to produce changes relevant to turbulent transport in the modeled 3D magnetohydrodynamic (MHD) equilibrium of tokamak pedestals with these 3D fields applied. Ideal MHD models are insufficient to reproduce the relevant effects. By calculating the ideal 3D equilibrium using the VMEC code, the geometric shaping parameters that determine linear turbulence stability, including the normal curvature and local magnetic shear, are shown to be only weakly modified by applied 3D fields in the DIII-D tokamak.more » These ideal MHD effects are therefore not sufficient to explain the observed changes to fluctuations and transport. Using the M3D-C1 code to model the 3D equilibrium, density is shown to be redistributed on flux surfaces in the pedestal when resistive two fluid effects are included, while islands are screened by rotation in this region. Furthermore, the redistribution of density results in density and pressure gradient scale lengths that vary within pedestal flux surfaces between different helically localized flux tubes. This would produce different drive terms for trapped electron mode and kinetic ballooning mode turbulence, the latter of which is expected to be the limiting factor for pedestal pressure gradients in DIII-D.« less
2017-07-28
Research on compression performance of ultrahigh-definition videos
NASA Astrophysics Data System (ADS)
Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di
2017-11-01
With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), 4K (3840×2160) television signals and 8K (8192×4320) ultrahigh-definition videos are now available. The demand for HD images and videos, along with the corresponding data volume, is increasing continuously. Storage and transmission cannot be addressed solely by expanding hard-disk capacity and upgrading transmission devices. Making full use of the High Efficiency Video Coding (HEVC) standard, super-resolution reconstruction technology, and the correlation between intra- and inter-prediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance of a single image and of frame I. Then, using this idea together with the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Last, the reconstructed video quality is further improved with super-resolution reconstruction technology. Experiments show that the proposed compression method for a single image (frame I) and for video sequences outperforms HEVC in low-bit-rate environments.
Post-tsunami beach recovery in Thailand: A case for punctuated equilibrium in coastal dynamics
NASA Astrophysics Data System (ADS)
Switzer, Adam D.; Gouramanis, Chris; Bristow, Charles; Yeo, Jeffrey; Kruawun, Jankaew; Rubin, Charles; Sin Lee, Ying; Tien Dat, Pham
2017-04-01
A morpho-geophysical investigation of two beaches in Thailand over the last decade shows that they have completely recovered from the 2004 Indian Ocean tsunami (IOT) without any human intervention. Although the beach systems show contrasting styles of recovery, in both cases natural processes have reconstructed the beaches to comparable pre-tsunami morphologies in under a decade, demonstrating the existence of punctuated equilibrium in coastal systems and the resilience of natural systems to catastrophic events. Through a combination of remote sensing, field surveys and shallow geophysics, we reconstruct the post-event recovery of beaches at Phra Thong Island, a remote, near-pristine site that was severely impacted by the IOT. We identify periods of aggradation, progradation and washover sedimentation that match local events, including a storm in November 2007. The rapid recovery of these systems implies that the majority of sediment scoured by the tsunami was not transported far offshore but remained in the littoral zone, within reach of the fair-weather waves that returned it to the beach naturally.
Nonlinear climate sensitivity and its implications for future greenhouse warming.
Friedrich, Tobias; Timmermann, Axel; Tigchelaar, Michelle; Elison Timm, Oliver; Ganopolski, Andrey
2016-11-01
Global mean surface temperatures are rising in response to anthropogenic greenhouse gas emissions. The magnitude of this warming at equilibrium for a given radiative forcing-referred to as specific equilibrium climate sensitivity ( S )-is still subject to uncertainties. We estimate global mean temperature variations and S using a 784,000-year-long field reconstruction of sea surface temperatures and a transient paleoclimate model simulation. Our results reveal that S is strongly dependent on the climate background state, with significantly larger values attained during warm phases. Using the Representative Concentration Pathway 8.5 for future greenhouse radiative forcing, we find that the range of paleo-based estimates of Earth's future warming by 2100 CE overlaps with the upper range of climate simulations conducted as part of the Coupled Model Intercomparison Project Phase 5 (CMIP5). Furthermore, we find that within the 21st century, global mean temperatures will very likely exceed maximum levels reconstructed for the last 784,000 years. On the basis of temperature data from eight glacial cycles, our results provide an independent validation of the magnitude of current CMIP5 warming projections.
PMID: 28861462
Dependency of Tearing Mode Stability on Current and Pressure Profiles in DIII-D Hybrid Discharges
NASA Astrophysics Data System (ADS)
Kim, K.; Park, J. M.; Murakami, M.; La Haye, R. J.; Na, Y.-S.; SNU/ORAU; ORNL; General Atomics; SNU; DIII-D Team
2016-10-01
Understanding the physics of the onset and evolution of tearing modes (TMs) in tokamak plasmas is important for high-β steady-state operation. Based on DIII-D steady-state hybrid experiments with accurate equilibrium reconstruction and well-measured plasma profiles, the 2/1 tearing mode is found to be more stable with increasing local current and pressure gradient at the rational surface, and with lower pressure peaking and plasma inductance. The tearing stability index Δ', estimated from the Rutherford equation with the experimental mode growth rate, was validated against Δ' calculated by a linear eigenvalue solver (PEST3); preliminary comprehensive MHD modeling with NIMROD reproduced the TM onset reasonably well. We present a novel integrated modeling approach for predicting TM onset in experiment by combining model equilibrium reconstruction using IPS/FASTRAN, linear stability Δ' calculation using PEST3, and a fitting formula for critical Δ' from NIMROD. Work supported in part by the US DOE under DE-AC05-06OR23100, DE-AC05-00OR22725, and DE-FC02-04ER54698.
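The island-width evolution estimated in this abstract is governed by the Rutherford equation. A minimal sketch of its simplest form, dw/dt = 1.22 (η/μ0) Δ', integrated with forward Euler, is shown below; all parameter values are invented for illustration and are not DIII-D data.

```python
# Hypothetical sketch of the (unmodified) Rutherford island-width equation,
# dw/dt = 1.22 * (eta / mu0) * Delta', with a constant tearing index Delta'.
MU0 = 4e-7 * 3.141592653589793  # vacuum permeability [H/m]

def island_width_history(delta_prime, eta, w0, dt, n_steps):
    """Integrate dw/dt = 1.22*(eta/mu0)*delta_prime by forward Euler."""
    w = w0
    history = [w]
    for _ in range(n_steps):
        w += dt * 1.22 * (eta / MU0) * delta_prime
        history.append(w)
    return history

# A positive Delta' (tearing-unstable) makes the island width grow in time;
# inverting the same relation gives Delta' from a measured growth rate dw/dt.
hist = island_width_history(delta_prime=2.0, eta=1e-7, w0=0.01, dt=1e-4, n_steps=100)
```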
Numerical Modeling of Mixing and Venting from Explosions in Bunkers
NASA Astrophysics Data System (ADS)
Liu, Benjamin
2005-07-01
2D and 3D numerical simulations were performed to study the dynamic interaction of explosion products in a concrete bunker with ambient air, stored chemical or biological warfare (CBW) agent simulant, and the surrounding walls and structure. The simulations were carried out with GEODYN, a multi-material, Godunov-based Eulerian code that employs adaptive mesh refinement and runs efficiently on massively parallel computer platforms. Tabular equations of state were used for all materials with the exception of any high explosives employed, which were characterized with conventional JWL models. An appropriate constitutive model was used to describe the concrete. Interfaces between materials were either tracked with a volume-of-fluid method that used high-order reconstruction to specify the interface location and orientation, or a capturing approach was employed with the assumption of local thermal and mechanical equilibrium. A major focus of the study was to estimate the extent of agent heating that could be obtained prior to venting of the bunker and resultant agent dispersal. Parameters investigated included the bunker construction, agent layout, energy density in the bunker and the yield-to-agent mass ratio. Turbulent mixing was found to be the dominant heat transfer mechanism for heating the agent.
Simulation of MST tokamak discharges with resonant magnetic perturbations
NASA Astrophysics Data System (ADS)
Cornille, B. S.; Sovinec, C. R.; Chapman, B. E.; Dubois, A.; McCollam, K. J.; Munaretto, S.
2016-10-01
Nonlinear MHD modeling of MST tokamak plasmas with an applied resonant magnetic perturbation (RMP) reveals degradation of flux surfaces that may account for the experimentally observed suppression of runaway electrons with the RMP. Runaway electrons are routinely generated in MST tokamak discharges with low plasma density. When an m = 3 RMP is applied these electrons are strongly suppressed, while an m = 1 RMP of comparable amplitude has little effect. The computations are performed using the NIMROD code and use reconstructed equilibrium states of MST tokamak plasmas with q(0) < 1 and q(a) = 2.2. Linear computations show that the (1,1)-kink and (2,2)-tearing modes are unstable, and nonlinear simulations produce sawtoothing with a period of approximately 0.5 ms, which is comparable to the period of MHD activity observed experimentally. Adding an m = 3 RMP in the computation degrades flux surfaces in the outer region of the plasma, while no degradation occurs with an m = 1 RMP. The outer flux surface degradation with the m = 3 RMP, combined with the sawtooth-induced distortion of flux surfaces in the core, may account for the observed suppression of runaway electrons. Work supported by DOE Grant DE-FC02-08ER54975.
Well-balanced Schemes for Gravitationally Stratified Media
NASA Astrophysics Data System (ADS)
Käppeli, R.; Mishra, S.
2015-10-01
We present a well-balanced scheme for the Euler equations with gravitation. The scheme is capable of maintaining exactly (up to machine precision) a discrete hydrostatic equilibrium without any assumption on a thermodynamic variable such as specific entropy or temperature. The well-balanced scheme is based on a local hydrostatic pressure reconstruction. Moreover, it is computationally efficient and can be incorporated into any existing algorithm in a straightforward manner. The presented scheme improves over standard ones especially when flows close to a hydrostatic equilibrium have to be simulated. The performance of the well-balanced scheme is demonstrated on an astrophysically relevant application: a toy model for core-collapse supernovae.
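The local hydrostatic pressure reconstruction described above can be sketched in 1D: within each cell, pressure is extrapolated to the interfaces using the discrete balance dp/dx = -ρg, so a discretely hydrostatic state yields identical left/right interface pressures and hence no spurious flux. The unit scalings and the particular discrete equilibrium below are illustrative assumptions, not the paper's scheme.

```python
# Sketch (assumed form) of a local hydrostatic pressure reconstruction.
def interface_pressures(p, rho, g, dx):
    """Return (p_left, p_right) reconstructions at each interior interface,
    extrapolating with the in-cell hydrostatic gradient dp/dx = -rho*g."""
    p_left, p_right = [], []
    for i in range(len(p) - 1):
        # cell i extrapolated to its right face, cell i+1 to its left face
        p_left.append(p[i] - 0.5 * dx * rho[i] * g)
        p_right.append(p[i + 1] + 0.5 * dx * rho[i + 1] * g)
    return p_left, p_right

# Build a discretely hydrostatic isothermal column (rho = p in scaled units):
# the recurrence enforces p[i] - 0.5*dx*rho[i]*g == p[i+1] + 0.5*dx*rho[i+1]*g.
dx, g = 0.1, 1.0
ratio = (1.0 - 0.5 * dx * g) / (1.0 + 0.5 * dx * g)
p = [ratio ** i for i in range(10)]
rho = p[:]
pl, pr = interface_pressures(p, rho, g, dx)
mismatch = max(abs(a - b) for a, b in zip(pl, pr))  # machine precision
```

For this discrete equilibrium the interface pressure jump vanishes to machine precision, which is the sense in which such schemes are "well-balanced".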
Jones, P D. [University of East Anglia, Norwich, United Kingdom; Wigley, T. M. L. [University of East Anglia, Norwich, United Kingdom; Briffa, K. R. [University of East Anglia, Norwich, United Kingdom
2012-01-01
Real and reconstructed monthly mean pressure data have been compiled for Europe for 1780 through 1980 and for North America for 1858 through 1980. The reconstructions use early pressure, temperature, and precipitation data from a variety of sources, including World Weather Records, meteorological and national archives, circulation maps, and daily chart series. Each record contains the year, monthly mean pressures, a quality code, and the annual mean pressure. These reconstructed gridded monthly pressures provide a reliable historical record of mean sea-level pressures for Europe and North America. The data are in two files: pressure reconstructions for Europe (1.47 MB) and for North America (0.72 MB).
NASA Technical Reports Server (NTRS)
Weilmuenster, K. J.; Hamilton, H. H., II
1983-01-01
A computer code HALIS, designed to compute the three dimensional flow about shuttle like configurations at angles of attack greater than 25 deg, is described. Results from HALIS are compared where possible with an existing flow field code; such comparisons show excellent agreement. Also, HALIS results are compared with experimental pressure distributions on shuttle models over a wide range of angle of attack. These comparisons are excellent. It is demonstrated that the HALIS code can incorporate equilibrium air chemistry in flow field computations.
Development of the general interpolants method for the CYBER 200 series of supercomputers
NASA Technical Reports Server (NTRS)
Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.
1988-01-01
The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamic research applications as implemented for the CYBER 200 series of supercomputers. Elliptic and quasi-parabolic versions of the GIM code are discussed. Turbulence models, both algebraic and differential, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme are also included.
Neutron displacement cross-sections for tantalum and tungsten at energies up to 1 GeV
NASA Astrophysics Data System (ADS)
Broeders, C. H. M.; Konobeyev, A. Yu.; Villagrasa, C.
2005-06-01
The neutron displacement cross-section has been evaluated for tantalum and tungsten at energies from 10^-5 eV up to 1 GeV. The nuclear optical model and the intranuclear cascade model, combined with pre-equilibrium and evaporation models, were used for the calculations. The number of defects produced by recoil nuclei in the materials was calculated with the Norgett-Robinson-Torrens model and with an approach combining binary collision approximation calculations with the results of molecular dynamics simulations. The numerical calculations were done using the NJOY, ECIS96, MCNPX, and IOTA codes.
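The Norgett-Robinson-Torrens (NRT) model used here for counting defects has a standard closed form; a sketch follows, with the displacement threshold energy E_d an assumed illustrative value rather than the paper's evaluated one.

```python
def nrt_displacements(t_dam_eV, e_d_eV=90.0):
    """NRT number of Frenkel pairs from damage energy T_dam:
    0 below E_d, 1 for E_d <= T_dam < 2*E_d/0.8, else 0.8*T_dam/(2*E_d).
    The default threshold e_d_eV = 90 eV is an assumed value for tungsten."""
    if t_dam_eV < e_d_eV:
        return 0.0          # recoil too weak to displace an atom
    if t_dam_eV < 2.0 * e_d_eV / 0.8:
        return 1.0          # single-displacement plateau
    return 0.8 * t_dam_eV / (2.0 * e_d_eV)  # linear cascade regime
```

For example, a 1.8 keV damage energy with E_d = 90 eV gives 0.8 × 1800 / 180 = 8 displacements.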
Upwind MacCormack Euler solver with non-equilibrium chemistry
NASA Technical Reports Server (NTRS)
Sherer, Scott E.; Scott, James N.
1993-01-01
A computer code, designated UMPIRE, is currently under development to solve the Euler equations in two dimensions with non-equilibrium chemistry. UMPIRE employs an explicit MacCormack algorithm with dissipation introduced via Roe's flux-difference-split upwind method. The code also has the capability to employ a point-implicit methodology for flows where stiffness is introduced through the chemical source term. A technique consisting of diagonal sweeps across the computational domain from each corner is presented, which is used to reduce storage and execution requirements. Results depicting one-dimensional shock tube flow for both a calorically perfect gas and thermally perfect, dissociating nitrogen are presented to verify the current capabilities of the program. Also, computational results from a chemical reactor vessel with no fluid dynamic effects are presented to check the chemistry capability and to verify the point-implicit strategy.
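The explicit MacCormack predictor-corrector at the heart of such solvers can be illustrated on scalar linear advection (this sketch omits the Roe flux-difference-split dissipation and the chemical source terms the abstract describes).

```python
def maccormack_step(u, c):
    """One MacCormack predictor-corrector step for u_t + a*u_x = 0 with
    c = a*dt/dx (CFL number) and periodic boundaries.
    Predictor uses a forward difference, corrector a backward difference."""
    n = len(u)
    # predictor: forward-differenced provisional state
    up = [u[i] - c * (u[(i + 1) % n] - u[i]) for i in range(n)]
    # corrector: average of old state and backward-differenced update of up
    return [0.5 * (u[i] + up[i] - c * (up[i] - up[i - 1])) for i in range(n)]

# With c = 1 the scheme reduces to an exact one-cell shift, a handy check.
u0 = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
u1 = maccormack_step(u0, 1.0)
```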
Glynn, P.D.
1991-01-01
The computer code MBSSAS uses two-parameter Margules-type excess-free-energy-of-mixing equations to calculate thermodynamic equilibrium, pure-phase saturation, and stoichiometric saturation states in binary solid-solution aqueous-solution (SSAS) systems. Lippmann phase diagrams, Roozeboom diagrams, and distribution-coefficient diagrams can be constructed from the output data files, and can also be displayed by MBSSAS (on IBM-PC compatible computers). MBSSAS will also calculate accessory information, such as the location of miscibility gaps, spinodal gaps, critical-mixing points, alyotropic extrema, Henry's law solid-phase activity coefficients, and limiting distribution coefficients. Alternatively, MBSSAS can use such information (instead of the Margules, Guggenheim, or Thompson and Waldbaum excess-free-energy parameters) to calculate the appropriate excess-free-energy-of-mixing equation for any given SSAS system.
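The two-parameter Margules formulation can be sketched as follows. This uses one common textbook convention for the activity-coefficient expressions; MBSSAS's internal parameterization (Margules, Guggenheim, or Thompson-Waldbaum forms) may differ.

```python
import math

def margules_activity_coeffs(x1, a12, a21):
    """Two-parameter Margules activity coefficients for a binary mixture:
    ln g1 = x2^2 * (a12 + 2*(a21 - a12)*x1), and symmetrically for g2.
    a12, a21 are the dimensionless Margules parameters (G_E/RT form)."""
    x2 = 1.0 - x1
    ln_g1 = x2 ** 2 * (a12 + 2.0 * (a21 - a12) * x1)
    ln_g2 = x1 ** 2 * (a21 + 2.0 * (a12 - a21) * x2)
    return math.exp(ln_g1), math.exp(ln_g2)

# At infinite dilution of component 1, g1 -> exp(a12) while g2 -> 1,
# which is the usual consistency check for this parameterization.
g1_inf, g2_pure = margules_activity_coeffs(0.0, 0.5, 0.3)
```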
Simulation of a hydrocarbon fueled scramjet exhaust
NASA Technical Reports Server (NTRS)
Leng, J.
1982-01-01
Exhaust nozzle flow fields for a fully integrated, hydrocarbon-burning scramjet were calculated for flight conditions of M (undisturbed free stream) = 4 at 6.1 km altitude and M (undisturbed free stream) = 6 at 30.5 km altitude. Equilibrium flow, frozen flow, and finite-rate chemistry effects are considered. All flow fields were calculated by the method of characteristics. Finite-rate chemistry results were evaluated by a one-dimensional code (Bittker) using streamtube area distributions extracted from the equilibrium flow field, and compared to very slow artificial-rate cases for the same streamtube area distribution. Several candidate substitute gas mixtures, designed to simulate the gas dynamics of the real engine exhaust flow, were examined. Two mixtures are found to give excellent simulations of the specified exhaust flow fields when evaluated by the same method-of-characteristics computer code.
Stabilization of the SIESTA MHD Equilibrium Code Using Rapid Cholesky Factorization
NASA Astrophysics Data System (ADS)
Hirshman, S. P.; D'Azevedo, E. A.; Seal, S. K.
2016-10-01
The SIESTA MHD equilibrium code solves the discretized nonlinear MHD force F ≡ J × B - ∇p for a 3D plasma which may contain islands and stochastic regions. At each nonlinear evolution step, it solves a set of linearized MHD equations which can be written r ≡ Ax - b = 0, where A is the linearized MHD Hessian matrix. When the solution norm |x| is small enough, the nonlinear force norm will be close to the linearized force norm |r| ≈ 0 obtained using preconditioned GMRES. In many cases, this procedure works well and leads to a vanishing nonlinear residual (equilibrium) after several iterations in SIESTA. In some cases, however, |x| > 1 results and the SIESTA code has to be restarted to obtain nonlinear convergence. In order to make SIESTA more robust and avoid such restarts, we have implemented a new rapid QR factorization of the Hessian which allows us to rapidly and accurately solve the least-squares problem A^T r = 0, subject to the condition |x| < 1. This avoids large contributions to the nonlinear force terms and in general makes the convergence sequence of SIESTA much more stable. The innovative rapid QR method is based on a pairwise row factorization of the tri-diagonal Hessian. It provides a complete Cholesky factorization while preserving the memory allocation of A. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
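The least-squares condition A^T r = 0 (with r = Ax - b) that the abstract solves via QR can be illustrated generically; NumPy's dense QR stands in here for SIESTA's rapid pairwise row factorization of the tri-diagonal Hessian, which is not reproduced.

```python
import numpy as np

# Illustrative sketch: a QR factorization A = QR turns the least-squares
# condition A^T (A x - b) = 0 into the triangular solve R x = Q^T b.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))   # stand-in for the (tri-diagonal) Hessian
b = rng.standard_normal(20)

Q, R = np.linalg.qr(A)             # Q has orthonormal columns, R upper-triangular
x = np.linalg.solve(R, Q.T @ b)    # R x = Q^T b  <=>  A^T r = 0

residual = A @ x - b
orthogonality = np.linalg.norm(A.T @ residual)  # should be ~machine precision
```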
Numerical simulation of hypersonic inlet flows with equilibrium or finite rate chemistry
NASA Technical Reports Server (NTRS)
Yu, Sheng-Tao; Hsieh, Kwang-Chung; Shuen, Jian-Shun; Mcbride, Bonnie J.
1988-01-01
An efficient numerical program incorporated with comprehensive high-temperature gas property models has been developed to simulate hypersonic inlet flows. The computer program employs an implicit lower-upper time-marching scheme to solve the two-dimensional Navier-Stokes equations with variable thermodynamic and transport properties. Both finite-rate and local-equilibrium approaches are adopted in the chemical reaction model for dissociation and ionization of the inlet air. In the finite-rate approach, eleven species equations coupled with the fluid dynamic equations are solved simultaneously. In the local-equilibrium approach, instead of solving species equations, an efficient chemical equilibrium package has been developed and incorporated into the flow code to obtain chemical compositions directly. Gas properties for the reaction-product species are calculated by methods of statistical mechanics and fitted to a polynomial form for Cp. In the present study, since the chemical reaction time is comparable to the flow residence time, the local-equilibrium model underpredicts the temperature in the shock layer. Significant differences in predicted chemical compositions in the shock layer between the finite-rate and local-equilibrium approaches have been observed.
Equilibrium β-limits in classical stellarators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loizu, Joaquim; Hudson, S. R.; Nuhrenberg, C.
Here, a numerical investigation is carried out to understand the equilibrium β-limit in a classical stellarator. The stepped-pressure equilibrium code is used in order to assess whether or not magnetic islands and stochastic field-lines can emerge at high β. Two modes of operation are considered: a zero-net-current stellarator and a fixed-iota stellarator. Despite the fact that relaxation is allowed, the former is shown to maintain good flux surfaces up to the equilibrium β-limit predicted by ideal magnetohydrodynamics (MHD), above which a separatrix forms. The latter, which has no ideal equilibrium β-limit, is shown to develop regions of magnetic islands and chaos at sufficiently high β, thereby providing a 'non-ideal β-limit'. Perhaps surprisingly, however, the value of β at which the Shafranov shift of the axis reaches a fraction of the minor radius follows in all cases the scaling laws predicted by ideal MHD. We compare our results to the High-Beta-Stellarator theory of Freidberg and derive a new prediction for the non-ideal equilibrium β-limit above which chaos emerges.
2017-11-17
Teaching an Old Dog an Old Trick: FREE-FIX and Free-Boundary Axisymmetric MHD Equilibrium
NASA Astrophysics Data System (ADS)
Guazzotto, Luca
2015-11-01
A common task in plasma physics research is the calculation of an axisymmetric equilibrium for tokamak modeling. The main unknown of the problem is the magnetic poloidal flux ψ. The easiest approach is to assign the shape of the plasma and only solve the equilibrium problem in the plasma / closed-field-lines region (the "fixed-boundary approach"). Often, one may also need the vacuum fields, i.e. the equilibrium in the open-field-lines region, requiring either coil currents or ψ on some closed curve outside the plasma to be assigned (the "free-boundary approach"). Going from one approach to the other is a textbook problem, involving the calculation of Green's functions and surface integrals in the plasma. However, no tools are readily available to perform this task. Here we present a code (FREE-FIX) to compute a boundary condition for a free-boundary equilibrium given only the corresponding fixed-boundary equilibrium. An improvement to the standard solution method, allowing for much faster calculations, is presented. Applications are discussed. PPPL fund 245139 and DOE grant G00009102.
Zooming in on vibronic structure by lowest-value projection reconstructed 4D coherent spectroscopy
NASA Astrophysics Data System (ADS)
Harel, Elad
2018-05-01
A fundamental goal of chemical physics is an understanding of microscopic interactions in liquids at and away from equilibrium. In principle, this microscopic information is accessible by high-order and high-dimensionality nonlinear optical measurements. Unfortunately, the time required to execute such experiments increases exponentially with the dimensionality, while the signal decreases exponentially with the order of the nonlinearity. Recently, we demonstrated a non-uniform acquisition method based on radial sampling of the time-domain signal [W. O. Hutson et al., J. Phys. Chem. Lett. 9, 1034 (2018)]. The four-dimensional spectrum was then reconstructed by filtered back-projection using an inverse Radon transform. Here, we demonstrate an alternative reconstruction method based on the statistical analysis of different back-projected spectra which results in a dramatic increase in sensitivity and at least a 100-fold increase in dynamic range compared to conventional uniform sampling and Fourier reconstruction. These results demonstrate that alternative sampling and reconstruction methods enable applications of increasingly high-order and high-dimensionality methods toward deeper insights into the vibronic structure of liquids.
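The "lowest-value" combination of back-projections can be illustrated with a toy 2D example: instead of summing back-projections (which spreads streak artifacts everywhere), an elementwise minimum keeps only features supported by every projection. The geometry and signals below are invented and far simpler than the 4D radially sampled spectra in the paper.

```python
import math

# Toy sketch of a minimum-value back-projection reconstruction of a single
# point feature on a small grid, from n_angles 1D projections. Assumed setup.
N, n_angles = 17, 8
true_peak = (8, 8)  # point feature at the grid center

def backproject(angle, n=N):
    """Smear the point's projection along one angle back across the grid."""
    c, s = math.cos(angle), math.sin(angle)
    t0 = (true_peak[0] - n // 2) * c + (true_peak[1] - n // 2) * s
    img = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            t = (i - n // 2) * c + (j - n // 2) * s
            if abs(t - t0) < 0.5:   # pixel lies on the back-projection ray
                img[i][j] = 1.0
    return img

bps = [backproject(k * math.pi / n_angles) for k in range(n_angles)]
# elementwise minimum across all back-projections: streaks are suppressed,
# only the pixel consistent with every projection survives
recon = [[min(bp[i][j] for bp in bps) for j in range(N)] for i in range(N)]
```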
Compression of computer generated phase-shifting hologram sequence using AVC and HEVC
NASA Astrophysics Data System (ADS)
Xing, Yafei; Pesquet-Popescu, Béatrice; Dufaux, Frederic
2013-09-01
With the capability of achieving twice the compression ratio of Advanced Video Coding (AVC) with similar reconstruction quality, High Efficiency Video Coding (HEVC) is expected to become the new leading video coding technique. In order to reduce the storage and transmission burden of digital holograms, in this paper we propose to use HEVC for compressing phase-shifting digital hologram sequences (PSDHS). By simulating phase-shifting digital holography (PSDH) interferometry, interference patterns between illuminated three-dimensional (3D) virtual objects and the stepwise phase-changed reference wave are generated as digital holograms. The hologram sequences are obtained by the movement of the virtual objects and compressed by AVC and HEVC. The experimental results show that AVC and HEVC are efficient for compressing PSDHS, with HEVC giving better performance. Good compression rate and reconstruction quality can be obtained at bitrates above 15000 kbps.
Non-equilibrium oxidation states of zirconium during early stages of metal oxidation
Ma, Wen; Senanayake, Sanjaya D.; Herbert, F. William; ...
2015-03-11
The chemical state of Zr during the initial, self-limiting stage of oxidation on single-crystal zirconium (0001), with oxide thickness on the order of 1 nm, was probed by synchrotron x-ray photoelectron spectroscopy. Quantitative analysis of the Zr 3d spectrum by the spectrum reconstruction method demonstrated the formation of Zr1+, Zr2+, and Zr3+ as non-equilibrium oxidation states, in addition to Zr4+ in the stoichiometric ZrO2. This finding resolves the long-debated question of whether it is possible to form any valence states between Zr0 and Zr4+ at the metal-oxide interface. As a result, the presence of local strong electric fields and the minimization of interfacial energy are assessed and demonstrated as mechanisms that can drive the formation of these non-equilibrium valence states of Zr.
2017-05-04
Naval Research Laboratory, Washington, DC 20375-5320, NRL/MR/6390--17-9723: Equilibrium Structures and Absorption Spectra for SixOy-nH2O Molecular Clusters using Density Functional Theory. L. Huang, S. G. Lambrakos, and L. Massa, Naval Research Laboratory, Code ... and time-dependent density functional theory (TD-DFT). The size of the clusters considered is relatively large compared to those considered in ...
Fu, Rose; Chang, Michelle Milee; Chen, Margaret; Rohde, Christine Hsu
2017-02-01
Despite research supporting improved psychosocial well-being, quality of life, and survival for patients undergoing postmastectomy breast reconstruction, Asian patients remain one-fifth as likely as Caucasians to choose reconstruction. This study investigates cultural factors, values, and perceptions held by Asian women that might impact breast reconstruction rates. The authors conducted semistructured interviews of immigrant East Asian women treated for breast cancer in the New York metropolitan area, investigating social structure, culture, attitudes toward surgery, and body image. Three investigators independently coded transcribed interviews, and then collectively evaluated them through axial coding of recurring themes. Thirty-five immigrant East Asian women who underwent surgical treatment for breast cancer were interviewed. Emerging themes include functionality, age, perceptions of plastic surgery, inconvenience, community/family, fear of implants, language, and information. Patients spoke about breasts as a function of their roles as a wife or mother, eliminating the need for breasts when these roles were fulfilled. Many addressed the fear of multiple operations. Quality and quantity of information, and communication with practitioners, impacted perceptions about treatment. Reconstructive surgery was often viewed as cosmetic. Community and family played a significant role in decision-making. Asian women are statistically less likely than Caucasians to pursue breast reconstruction. This is the first study to investigate culture-specific perceptions of breast reconstruction. Results from this study can be used to improve cultural competency in addressing patient concerns. Improving access to information regarding treatment options and surgical outcomes may improve informed decision-making among immigrant Asian women.
NON-EQUILIBRIUM HELIUM IONIZATION IN AN MHD SIMULATION OF THE SOLAR ATMOSPHERE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golding, Thomas Peter; Carlsson, Mats; Leenaarts, Jorrit, E-mail: thomas.golding@astro.uio.no, E-mail: mats.carlsson@astro.uio.no, E-mail: jorrit.leenaarts@astro.su.se
The ionization state of the gas in the dynamic solar chromosphere can depart strongly from the instantaneous statistical equilibrium commonly assumed in numerical modeling. We improve on earlier simulations of the solar atmosphere that only included non-equilibrium hydrogen ionization by performing a 2D radiation-magnetohydrodynamics simulation featuring non-equilibrium ionization of both hydrogen and helium. The simulation includes the effect of hydrogen Lyα and the EUV radiation from the corona on the ionization and heating of the atmosphere. Details on code implementation are given. We obtain helium ion fractions that are far from their equilibrium values. Comparison with models with local thermodynamic equilibrium (LTE) ionization shows that non-equilibrium helium ionization leads to higher temperatures in wavefronts and lower temperatures in the gas between shocks. Assuming LTE ionization results in a thermostat-like behavior with matter accumulating around the temperatures where the LTE ionization fractions change rapidly. Comparison of DEM curves computed from our models shows that non-equilibrium ionization leads to more radiating material in the temperature range 11–18 kK, compared to models with LTE helium ionization. We conclude that non-equilibrium helium ionization is important for the dynamics and thermal structure of the upper chromosphere and transition region. It might also help resolve the problem that intensities of chromospheric lines computed from current models are smaller than those observed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matenine, D; Cote, G; Mascolo-Fortin, J
2016-06-15
Purpose: Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersections between the photons’ trajectories and the object, also called ray-tracing or system matrix computation. This work evaluates different ways to store the system matrix, aiming to reconstruct dense image grids in reasonable time. Methods: We propose an optimized implementation of Siddon’s algorithm using graphics processing units (GPUs) with a novel data storage scheme. The algorithm computes a part of the system matrix on demand, typically for one projection angle. The proposed method was enhanced with accelerating options: storage of larger subsets of the system matrix, systematic reuse of data via geometric symmetries, an arithmetic-rich parallel code and code configuration via machine learning. It was tested on geometries mimicking a cone beam CT acquisition of a human head. To realistically assess the execution time, the ray-tracing routines were integrated into a regularized Poisson-based reconstruction algorithm. The proposed scheme was also compared to a different approach, where the system matrix is fully pre-computed and loaded at reconstruction time. Results: Fast ray-tracing of realistic acquisition geometries, which often lack spatial symmetry properties, was enabled via the proposed method. Ray-tracing interleaved with projection and backprojection operations required significant additional time. In most cases, ray-tracing was shown to use about 66% of the total reconstruction time. In absolute terms, tracing times varied from 3.6 s to 7.5 min, depending on the problem size. The presence of geometrical symmetries allowed for non-negligible ray-tracing and reconstruction time reduction. Arithmetic-rich parallel code and machine learning permitted a modest reconstruction time reduction, on the order of 1%.
Conclusion: Partial system matrix storage permitted the reconstruction of higher 3D image grid sizes and larger projection datasets at the cost of additional time, when compared to the fully pre-computed approach. This work was supported in part by the Fonds de recherche du Quebec - Nature et technologies (FRQ-NT). The authors acknowledge partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council of Canada (Grant No. 432290).
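The on-demand system matrix computation above is built on Siddon's ray-tracing algorithm. Below is a minimal 2D Python sketch of the core idea, computing parametric grid-plane crossings and per-pixel intersection lengths; the GPU parallelization, symmetry reuse and storage options described in the abstract are omitted, and the function name and interface are illustrative, not the authors' code.

```python
import math

def siddon_2d(p0, p1, nx, ny, dx=1.0, dy=1.0):
    """Return (pixel index, intersection length) pairs for a ray from p0 to p1
    crossing an nx-by-ny grid of dx-by-dy pixels with its origin at (0, 0).
    A minimal 2D sketch of Siddon's algorithm."""
    x0, y0 = p0
    x1, y1 = p1
    length = math.hypot(x1 - x0, y1 - y0)
    # Parametric values alpha in [0, 1] where the ray crosses grid planes.
    ax = [(i * dx - x0) / (x1 - x0) for i in range(nx + 1)] if x1 != x0 else []
    ay = [(j * dy - y0) / (y1 - y0) for j in range(ny + 1)] if y1 != y0 else []
    alphas = sorted(a for a in set([0.0, 1.0] + ax + ay) if 0.0 <= a <= 1.0)
    out = []
    for a, b in zip(alphas[:-1], alphas[1:]):
        mid = 0.5 * (a + b)                    # segment midpoint picks the pixel
        i = int((x0 + mid * (x1 - x0)) / dx)
        j = int((y0 + mid * (y1 - y0)) / dy)
        if 0 <= i < nx and 0 <= j < ny:
            out.append(((i, j), (b - a) * length))
    return out
```

A horizontal ray across a 4x4 grid of unit pixels yields four unit-length segments, one per traversed pixel, and the segment lengths always sum to the in-grid path length.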
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arndt, S.; Merkel, P.; Monticello, D.A.
Fixed- and free-boundary equilibria for Wendelstein 7-X (W7-X) [W. Lotz et al., Plasma Physics and Controlled Nuclear Fusion Research 1990 (Proc. 13th Int. Conf. Washington, DC, 1990), (International Atomic Energy Agency, Vienna, 1991), Vol. 2, p. 603] configurations are calculated using the Princeton Iterative Equilibrium Solver (PIES) [A. H. Reiman et al., Comput. Phys. Commun. 43, 157 (1986)] to deal with magnetic islands and stochastic regions. Usually, these W7-X configurations require a large number of iterations for PIES convergence. Here, two methods have been successfully tested in an attempt to decrease the number of iterations needed for convergence. First, periodic sequences of different blending parameters are used. Second, the initial guess is vastly improved by using results of the Variational Moments Equilibrium Code (VMEC) [S. P. Hirshman et al., Phys. Fluids 26, 3553 (1983)]. Use of these two methods has allowed verification of the Hamada condition, and a tendency toward "self-healing" of islands has been observed. © 1999 American Institute of Physics.
Inferring the parameters of a Markov process from snapshots of the steady state
NASA Astrophysics Data System (ADS)
Dettmer, Simon L.; Berg, Johannes
2018-02-01
We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
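The propagator-likelihood construction above can be illustrated on a toy two-state Markov process. The rates, the fixed second parameter, and the grid search below are hypothetical choices made so that the steady state is known exactly; maximising the propagator likelihood over the free parameter recovers its true value.

```python
import math

def propagator_likelihood(p_emp, T):
    """Propagator likelihood: propagate the empirical distribution one step
    with the model propagator T, then score the samples against the result."""
    n = len(p_emp)
    q = [sum(T[x][y] * p_emp[y] for y in range(n)) for x in range(n)]
    return sum(p * math.log(q[x]) for x, p in enumerate(p_emp) if p > 0)

def two_state_T(a, b):
    """Column-stochastic one-step propagator; a is the 0->1 rate, b the 1->0 rate."""
    return [[1.0 - a, b], [a, 1.0 - b]]

# Hypothetical ground truth a = 0.3, b = 0.1 gives steady state p = (0.25, 0.75).
p_emp = [0.25, 0.75]
# With b held fixed, a grid search over a maximises the propagator likelihood.
a_hat = max((i / 100.0 for i in range(1, 100)),
            key=lambda a: propagator_likelihood(p_emp, two_state_T(a, 0.1)))
```

The maximum sits exactly where the model propagator leaves the empirical distribution invariant, which is the inference principle described in the abstract.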
Glacier-derived climate for the Younger Dryas in Europe
NASA Astrophysics Data System (ADS)
Pellitero, Ramon; Rea, Brice R.; Spagnolo, Matteo; Hughes, Philip; Braithwaite, Roger; Renssen, Hans; Ivy-Ochs, Susan; Ribolini, Adriano; Bakke, Jostein; Lukas, Sven
2016-04-01
We have reconstructed and calculated the glacier equilibrium line altitudes (ELAs) for 120 Younger Dryas palaeoglaciers from Morocco in the south to Svalbard in the north and from Ireland in the west to Turkey in the east. The chronology of these landforms was checked and, where derived from cosmogenic dates, ages were recalculated based on newer production rates. Frontal moraines/limits for the palaeoglaciers were used to reconstruct palaeoglacier extent using a GIS tool which implements a discretised solution under the assumption of perfect-plasticity ice rheology for a single flowline and extends this to a 3D ice surface. From the resulting equilibrium profile, palaeo-ELAs were calculated using another GIS tool. Where several glaciers were reconstructed in a region, a single ELA value was generated following the methodology of Osmaston (2005). In order to utilise these ELAs for quantitative palaeo-precipitation reconstructions an independent regional temperature analysis was undertaken. A database of 121 sites was compiled where the temperature was determined from palaeoproxies other than glaciers (e.g. pollen, diatoms, coleoptera, chironomids) in both terrestrial and offshore environments. These proxy data provide estimates of average annual, summer and winter temperatures. These data were merged and interpolated to generate maps of average temperature for the warmest and coldest months and annual average temperature. From these maps the temperature at the ELA was obtained using a lapse rate of 0.65 °C/100 m. Using the ELA temperature range and summer maximum in a degree-day model allows determination of the potential melt, which can be taken as equivalent to precipitation given the assumption that a glacier is in equilibrium with climate. Results show that during the coldest part of the Younger Dryas precipitation was high in the British Isles, the NW of the Iberian Peninsula and the Vosges.
There is a general trend for declining precipitation to the east with some regional exceptions. Local rain shadow effects can be seen in NW Scotland, NW Iberian Peninsula, the Balkans and the Alps. Precipitation is lowest for glaciers in N Norway, which appear to have had their Younger Dryas maxima later in the stadial. This is interpreted to be the result of limited precipitation north of the polar front due to the presence of a near permanent sea ice cover.
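The melt-equals-precipitation step above can be sketched with a simple positive-degree-day model. The sinusoidal annual temperature cycle, the day of peak summer temperature, and the degree-day factor value below are illustrative assumptions, not the study's calibrated choices.

```python
import math

def annual_melt_at_ela(t_mean, t_summer, ddf=4.1, days=365):
    """Positive-degree-day melt (mm w.e./yr) at the ELA, assuming a sinusoidal
    annual cycle between the annual mean t_mean and summer maximum t_summer
    (deg C). ddf is an assumed degree-day factor in mm w.e. per positive
    degree-day. For a glacier in equilibrium, this melt approximates the
    accumulation, i.e. the precipitation at the ELA."""
    amplitude = t_summer - t_mean
    melt = 0.0
    for d in range(days):
        # Peak temperature placed near day 196 (mid-July, northern hemisphere).
        t = t_mean + amplitude * math.cos(2.0 * math.pi * (d - 196) / days)
        if t > 0.0:
            melt += ddf * t
    return melt
```

Warmer summers at the ELA imply more potential melt and hence, under the equilibrium assumption, higher inferred precipitation.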
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pusateri, Elise N.; Morris, Heidi E.; Nelson, Eric
2016-10-17
Here, atmospheric electromagnetic pulse (EMP) events are important physical phenomena that occur through both man-made and natural processes. Radiation-induced currents and voltages in EMP can couple with electrical systems, such as those found in satellites, and cause significant damage. Due to the disruptive nature of EMP, it is important to accurately predict EMP evolution and propagation with computational models. CHAP-LA (Compton High Altitude Pulse-Los Alamos) is a state-of-the-art EMP code that solves Maxwell's equations for gamma source-induced electromagnetic fields in the atmosphere. In EMP, low-energy, conduction electrons constitute a conduction current that limits the EMP by opposing the Compton current. CHAP-LA calculates the conduction current using an equilibrium ohmic model. The equilibrium model works well at low altitudes, where the electron energy equilibration time is short compared to the rise time or duration of the EMP. At high altitudes, the equilibration time increases beyond the EMP rise time and the predicted equilibrium ionization rate becomes very large. The ohmic model predicts an unphysically large production of conduction electrons which prematurely and abruptly shorts the EMP in the simulation code. An electron swarm model, which implicitly accounts for the time evolution of the conduction electron energy distribution, can be used to overcome the limitations exhibited by the equilibrium ohmic model. We have developed and validated an electron swarm model previously in Pusateri et al. (2015). Here we demonstrate EMP damping behavior caused by the ohmic model at high altitudes and show improvements on high-altitude, upward EMP modeling obtained by integrating a swarm model into CHAP-LA.
A new ART code for tomographic interferometry
NASA Technical Reports Server (NTRS)
Tan, H.; Modarress, D.
1987-01-01
A new algebraic reconstruction technique (ART) code, based on the iterative refinement method for least-squares solutions, is presented for tomographic reconstruction. Its accuracy and convergence are evaluated through application to numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to the solution for which the residual was minimum. The effects of increased data were also studied. The inversion error was found to be a function of the input data error only, whereas the convergence rate was affected by all of the parameters studied. Finally, the technique was applied to experimental data, and the results are reported.
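As an illustration of the ART family this code belongs to, here is a minimal Kaczmarz-style ART sweep in Python. This is the generic textbook row-projection variant, not the paper's iterative-refinement least-squares formulation, and the toy 2x2 system is made up for demonstration.

```python
import numpy as np

def art_reconstruct(A, b, n_iter=200, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz iterations): repeatedly
    sweep the rows of the system matrix A, projecting the current estimate
    onto each hyperplane A[i] @ x = b[i]."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            row = A[i]
            denom = row @ row
            if denom > 0.0:
                x += relax * (b[i] - row @ x) / denom * row
    return x

# Tiny consistent 'projection' system: recover x_true from A @ x_true.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
x_true = np.array([2.0, 1.0])
x_rec = art_reconstruct(A, A @ x_true)
```

For a consistent system the iterates converge to the exact solution; for noisy data the relaxation parameter trades convergence speed against noise amplification.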
Franco, Ademir; Thevissen, Patrick; Coudyzer, Walter; Develter, Wim; Van de Voorde, Wim; Oyen, Raymond; Vandermeulen, Dirk; Jacobs, Reinhilde; Willems, Guy
2013-05-01
Virtual autopsy is a medical imaging technique, using full-body computed tomography (CT), allowing for a noninvasive and permanent observation of all body parts. For dental identification, clinically and radiologically observed ante-mortem (AM) and post-mortem (PM) oral identifiers are compared. The study aimed to verify whether PM dental charting can be performed on virtual reconstructions of full-body CTs using the Interpol dental codes. A sample of 103 PM full-body CTs was collected from the forensic autopsy files of the Department of Forensic Medicine, University Hospitals, KU Leuven, Belgium. For validation purposes, 3 of these bodies underwent a complete dental autopsy, a dental radiological examination and a full-body CT examination. The bodies were scanned in a Siemens Definition Flash CT scanner (Siemens Medical Solutions, Germany). The images were examined at 8- and 12-bit screen resolution as three-dimensional (3D) reconstructions and as axial, coronal and sagittal slices. InSpace(®) (Siemens Medical Solutions, Germany) software was used for 3D reconstruction. The dental identifiers were charted on pink PM Interpol forms (F1, F2), using the related dental codes. Optimal dental charting was obtained by combining observations on 3D reconstructions and CT slices. It was not feasible to differentiate between different kinds of dental restoration materials. The 12-bit resolution enabled the collection of more detailed evidence, mainly related to positions within a tooth. Oral identifiers not implemented in the Interpol dental coding were observed. Amongst these, the observed 3D morphological features of dental and maxillofacial structures are important identifiers. The latter may become particularly relevant in the future, not only because of their inherent spatial features, but also because of increasing preventive dental treatment and the decreasing application of dental restorations.
In conclusion, PM full-body CT examinations need to be implemented in the PM dental charting protocols and the Interpol dental codes should be adapted accordingly. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
NASA Astrophysics Data System (ADS)
Bradshaw, S. J.
2009-07-01
Context: The effects of non-equilibrium processes on the ionisation state of strongly emitting elements in the solar corona can be extremely difficult to assess and yet they are critically important. For example, there is much interest in dynamic heating events localised in the solar corona because they are believed to be responsible for its high temperature, and yet recent work has shown that the hottest (≥10⁷ K) emission predicted to be associated with these events can be observationally elusive due to the difficulty of creating the highly ionised states from which the expected emission arises. This leads to the possibility of observing instruments missing such heating events entirely. Aims: The equations describing the evolution of the ionisation state are a very stiff system of coupled, partial differential equations whose solution can be numerically challenging and time-consuming. Without access to specialised codes and significant computational resources it is extremely difficult to avoid the assumption of an equilibrium ionisation state even when it clearly cannot be justified. The aim of the current work is to develop a computational tool to allow straightforward calculation of the time-dependent ionisation state for a wide variety of physical circumstances. Methods: A numerical model is developed comprising the system of time-dependent ionisation equations for a particular element and tabulated values of plasma temperature as a function of time. The tabulated values can be the solutions of an analytical model, the output from a numerical code or a set of observational measurements. An efficient numerical method to solve the ionisation equations is implemented. Results: A suite of tests is designed and run to demonstrate that the code provides reliable and accurate solutions for a number of scenarios including equilibration of the ion population and rapid heating followed by thermal conductive cooling.
It is found that the solver can evolve the ionisation state to recover exactly the equilibrium state found by an independent, steady-state solver for all temperatures, resolve the extremely small ionisation/recombination timescales associated with rapid temperature changes at high densities, and provide stable and accurate solutions for both dominant and minor ion population fractions. Rapid heating and cooling of low to moderate density plasma is characterised by significant non-equilibrium ionisation conditions. The effective ionisation temperatures are significantly lower than the electron temperature and the values found are in close agreement with the previous work of others. At the very highest densities included in the present study an assumption of equilibrium ionisation is found to be robust. Conclusions: The computational tool presented here provides a straightforward and reliable way to calculate ionisation states for a wide variety of physical circumstances. The numerical code gives results that are accurate and consistent with previous studies, has relatively undemanding computational requirements and is freely available from the author.
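The stiffness issue motivating such a solver can be illustrated with a two-state (neutral/ionised) balance integrated by backward Euler, which remains stable however large the rates are relative to the time step. The rate values below are hypothetical, and a real solver handles many coupled ionisation stages rather than two.

```python
def evolve_ion_fraction(n1, rate_ion, rate_rec, dt, n_steps):
    """Backward-Euler integration of the two-state ionisation balance
    dn1/dt = -I*n1 + R*(1 - n1), with n1 the neutral fraction, I the
    ionisation rate and R the recombination rate. The implicit update
    is unconditionally stable, so stiff (large I + R) cases relax
    smoothly to the equilibrium fraction R/(I + R)."""
    for _ in range(n_steps):
        n1 = (n1 + dt * rate_rec) / (1.0 + dt * (rate_ion + rate_rec))
    return n1
```

Starting from any initial fraction, the iteration contracts toward the equilibrium value; with explicit Euler the same rates and time step would diverge.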
Stability properties and fast ion confinement of hybrid tokamak plasma configurations
NASA Astrophysics Data System (ADS)
Graves, J. P.; Brunetti, D.; Pfefferle, D.; Faustin, J. M. P.; Cooper, W. A.; Kleiner, A.; Lanthaler, S.; Patten, H. W.; Raghunathan, M.
2015-11-01
In hybrid scenarios with flat q just above unity, extremely fast growing tearing modes are born from toroidal sidebands of the near-resonant ideal internal kink mode. New scalings of the growth rate with the magnetic Reynolds number arise from two-fluid effects and sheared toroidal flow. Non-linear saturated 1/1 dominant modes obtained from initial-value stability calculations agree with the amplitude of the 1/1 component of a 3D VMEC equilibrium calculation. A viable and realistic equilibrium representation of such internal kink modes allows fast ion studies to be accurately established. Calculations of MAST neutral beam ion distributions using the VENUS-LEVIS code show very good agreement with the observed impaired core fast ion confinement when long-lived modes occur. The 3D ICRH code SCENIC also enables the establishment of minority RF distributions in hybrid plasmas susceptible to saturated near-resonant internal kink modes.
Size and density distribution of very small dust grains in the Barnard 5 cloud
NASA Technical Reports Server (NTRS)
Lis, Dariusz C.; Leung, Chun Ming
1991-01-01
The effects of the temperature fluctuations in small graphite grains on the energy spectrum and the IR surface brightness of an isolated dust cloud heated externally by the interstellar radiation field were investigated using a series of models based on a radiation transport computer code. This code treats self-consistently the thermal coupling between the transient heating of very small dust grains and the equilibrium heating of conventional large grains. The model results were compared with the IRAS observations of the Barnard 5 (B5) cloud, showing that the 25-micron emission of the cloud must be produced by small grains with a 6-10 Å radius, which also contribute about 50 percent to the observed 12-micron emission. The remaining 12-micron flux may be produced by polycyclic aromatic hydrocarbons. The 60- and 100-micron radiation is dominated by emission from large grains heated under equilibrium conditions.
NASA Astrophysics Data System (ADS)
Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi
2014-06-01
This paper deals with verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP for a large fast breeder reactor. The reactor is a 750 MWe electric power sodium-cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of the initial core and at the beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.
NASA Astrophysics Data System (ADS)
Thaler, Caroline; Millo, Christian; Ader, Magali; Chaduteau, Carine; Guyot, François; Ménez, Bénédicte
2017-02-01
Carbon and oxygen stable isotope compositions of carbonates are widely used to retrieve paleoenvironmental information. However, bias may exist in such reconstructions as carbonate precipitation is often associated with biological activity. Several skeleton-forming eukaryotes have been shown to precipitate carbonates with significant offsets from isotopic equilibrium with water. Although poorly understood, the origin of these biologically-induced isotopic shifts in biogenic carbonates, commonly referred to as "vital effects", could be related to metabolic effects that may not be restricted to mineralizing eukaryotes. The aim of our study was to determine whether microbially-mediated carbonate precipitation can also produce offsets from equilibrium for oxygen isotopes. We present here δ18O values of calcium carbonates formed by the activity of Sporosarcina pasteurii, a carbonatogenic bacterium whose ureolytic activity produces ammonia (thus increasing pH) and dissolved inorganic carbon (DIC) that precipitates as solid carbonates in the presence of Ca2+. We show that the 1000 ln α(CaCO3-H2O) values for these bacterially-precipitated carbonates are up to 24.7‰ smaller than those expected for precipitation at isotopic equilibrium. A similar experiment run in the presence of carbonic anhydrase (an enzyme able to accelerate oxygen isotope equilibration between DIC and water) resulted in δ18O values of microbial carbonates in line with values expected at isotopic equilibrium with water. These results demonstrate for the first time that bacteria can induce calcium carbonate precipitation in strong oxygen isotope disequilibrium with water, similarly to what is observed for eukaryotes. This disequilibrium effect can be unambiguously ascribed to oxygen isotope disequilibrium between DIC and water inherited from the oxygen isotope composition of the ureolytically produced CO2, probably combined with a kinetic isotope effect during CO2 hydration/hydroxylation.
The fact that both disequilibrium effects are triggered by the metabolic production of CO2, which is common in many microbially-mediated carbonation processes, leads us to propose that metabolically-induced offsets from isotopic equilibrium in microbial carbonates may be more common than previously considered. Therefore, precaution should be taken when using the oxygen isotope signature of microbial carbonates for diagenetic and paleoenvironmental reconstructions.
NASA Astrophysics Data System (ADS)
Kohman, T. P.
1995-05-01
The design of a cosmic X- or gamma-ray telescope with least-squares image reconstruction and its simulated operation have been described (Rev. Sci. Instrum. 60, 3396 and 3410 (1989)). Use of an auxiliary open aperture ("limiter") ahead of the coded aperture limits the object field to fewer pixels than detector elements, permitting least-squares reconstruction with improved accuracy in the imaged field; it also yields a uniformly sensitive ("flat") central field. The design has been enhanced to provide for mask-antimask operation. This cancels and eliminates uncertainties in the detector background, and the simulated results have virtually the same statistical accuracy (pixel-by-pixel output-input RMSD) as with a single mask alone. The simulations have been made more realistic by incorporating instrumental blurring of sources. A second-stage least-squares procedure has been developed to determine the precise positions and total fluxes of point sources responsible for clusters of above-background pixels in the field resulting from the first-stage reconstruction. Another program converts source positions in the image plane to celestial coordinates and vice versa, the image being a gnomonic projection of a region of the sky.
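The limiter's role (no more object pixels than detector elements, so the system is overdetermined) and the mask-antimask background cancellation can be sketched as follows. The 6x4 mask matrix and flux values are a made-up toy, not the instrument's actual coded aperture.

```python
import numpy as np

def least_squares_image(mask_matrix, detector_counts):
    """Least-squares estimate of object-pixel fluxes from coded-aperture
    detector counts; feasible with improved accuracy when the object field
    has fewer pixels than there are detector elements."""
    flux, *_ = np.linalg.lstsq(mask_matrix, detector_counts, rcond=None)
    return flux

def mask_antimask(counts_mask, counts_antimask):
    """Subtracting the antimask exposure cancels the (unknown) detector
    background common to both exposures."""
    return counts_mask - counts_antimask

# Hypothetical 6-element detector viewing 4 object pixels through a 0/1 mask.
M = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
flux_true = np.array([1.0, 2.0, 3.0, 4.0])
counts = M @ flux_true
flux_rec = least_squares_image(M, counts)
```

With noiseless counts and a full-column-rank mask the least-squares solution recovers the fluxes exactly; a uniform background added to both mask and antimask exposures drops out of their difference.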
Combustion chamber analysis code
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.
1993-01-01
A three-dimensional, time-dependent, Favre-averaged, finite-volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinate (BFC) capability. Higher-order differencing methodologies such as the MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulence models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite-rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.
Python Radiative Transfer Emission code (PyRaTE): non-LTE spectral lines simulations
NASA Astrophysics Data System (ADS)
Tritsis, A.; Yorke, H.; Tassis, K.
2018-05-01
We describe PyRaTE, a new, non-local thermodynamic equilibrium (non-LTE) line radiative transfer code developed specifically for post-processing astrochemical simulations. Population densities are estimated using the escape probability method. When computing the escape probability, the optical depth is calculated towards all directions, with density, molecular abundance, temperature and velocity variations all taken into account. A very easy-to-use interface, capable of importing data from simulation outputs produced with all major astrophysical codes, is also developed. The code is written in PYTHON using an "embarrassingly parallel" strategy and can handle all geometries and projection angles. We benchmark the code by comparing our results with those from RADEX (van der Tak et al. 2007) and against analytical solutions, and present case studies using hydrochemical simulations. The code will be released for public use.
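The escape-probability step can be sketched for a two-level system. The beta(tau) form below is the common uniform-slab/LVG-style expression and is an assumption here, since the abstract does not state which geometry factor PyRaTE uses; the background radiation field is also neglected for brevity.

```python
import math

def escape_probability(tau):
    """beta(tau) = (1 - exp(-tau)) / tau: the fraction of line photons that
    escape a region of optical depth tau. Tends to 1 as tau -> 0 (optically
    thin) and to 1/tau as tau -> infinity (heavy photon trapping)."""
    if abs(tau) < 1e-8:
        return 1.0 - 0.5 * tau  # series expansion avoids 0/0
    return (1.0 - math.exp(-tau)) / tau

def upper_level_ratio(c_lu, c_ul, a_ul, tau):
    """Two-level statistical equilibrium with photon escape, neglecting the
    background field: n_l * C_lu = n_u * (C_ul + beta * A_ul), so the
    population ratio n_u/n_l rises as trapping (tau) increases."""
    return c_lu / (c_ul + escape_probability(tau) * a_ul)
```

Because trapped photons re-excite the upper level, larger optical depths push the level populations toward their LTE ratio even at low densities.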
Thermodynamic Analysis of the Combustion of Metallic Materials
NASA Technical Reports Server (NTRS)
Wilson, D. Bruce; Stoltzfus, Joel M.
2000-01-01
Two types of computer codes are available to assist in the thermodynamic analysis of metallic materials combustion. One type calculates phase equilibrium data and is represented by CALPHAD. The other type calculates chemical reaction equilibria and is represented by the Gordon-McBride code. The first has seen significant application for alloy-phase diagrams, but only recently has it been considered for oxidation systems. The Gordon-McBride code has been applied to the combustion of metallic materials. Both codes are limited by their treatment of non-ideal solutions and by the fact that they treat volatile and gaseous species as ideal. This paper examines the significance of these limitations for the combustion of metallic materials. In addition, the applicability of linear free-energy relationships to solid-phase oxidation, and their possible extension to liquid-phase systems, is examined.
Chroma intra prediction based on inter-channel correlation for HEVC.
Zhang, Xingyu; Gisquet, Christophe; François, Edouard; Zou, Feng; Au, Oscar C
2014-01-01
In this paper, we investigate a new inter-channel coding mode called LM mode, proposed for the next-generation video coding standard known as High Efficiency Video Coding. This mode exploits inter-channel correlation by using reconstructed luma to predict chroma linearly, with parameters derived from neighboring reconstructed luma and chroma pixels at both encoder and decoder to avoid overhead signaling. We analyze the LM mode and prove that the LM parameters for predicting original chroma and reconstructed chroma are statistically the same. We also analyze the error sensitivity of the LM parameters. We identify some problematic situations for the LM mode and propose three novel LM-like modes, called LMA, LML, and LMO, to address them. To limit the increase in complexity due to the LM-like modes, we propose some fast algorithms with the help of some new cost functions. We further identify some potentially problematic conditions in the parameter estimation (including the regression dilution problem) and introduce a novel model correction technique to detect and correct those conditions. Simulation results suggest that considerable BD-rate reduction can be achieved by the proposed LM-like modes and model correction technique. In addition, the performance gains of the two techniques appear to be essentially additive when combined.
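The core of the LM mode, deriving a linear luma-to-chroma model from neighbouring reconstructed pixels, can be sketched as an ordinary least-squares fit. The actual HEVC design uses integer arithmetic and down-sampled luma, both omitted here, so this is a floating-point illustration rather than the standard's exact derivation.

```python
def lm_parameters(luma, chroma):
    """Least-squares fit chroma ~= alpha * luma + beta from the neighbouring
    reconstructed luma/chroma pixel pairs, as both encoder and decoder can
    compute it without any signalled side information."""
    n = len(luma)
    sx, sy = sum(luma), sum(chroma)
    sxx = sum(l * l for l in luma)
    sxy = sum(l * c for l, c in zip(luma, chroma))
    denom = n * sxx - sx * sx
    alpha = (n * sxy - sx * sy) / denom if denom else 0.0
    beta = (sy - alpha * sx) / n
    return alpha, beta

def predict_chroma(luma_block, alpha, beta):
    """Predict the chroma block linearly from the reconstructed luma block."""
    return [alpha * l + beta for l in luma_block]
```

Because the fit uses only previously reconstructed neighbours, the decoder derives identical alpha and beta, which is how the mode avoids transmitting the model parameters.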
NASA Astrophysics Data System (ADS)
Kudryavtsev, Alexey N.; Kashkovsky, Alexander V.; Borisov, Semyon P.; Shershnev, Anton A.
2017-10-01
In the present work a computer code RCFS for numerical simulation of chemically reacting compressible flows on hybrid CPU/GPU supercomputers is developed. It solves the 3D unsteady Euler equations for multispecies chemically reacting flows in general curvilinear coordinates using shock-capturing TVD schemes. Time advancement is carried out using explicit Runge-Kutta TVD schemes. The program implementation uses the CUDA application programming interface to perform GPU computations. Data are distributed between GPUs via a domain decomposition technique. The developed code is verified on a number of test cases including supersonic flow over a cylinder.
NASA Astrophysics Data System (ADS)
Stratakis, D.; Kishek, R. A.; Li, H.; Bernal, S.; Walter, M.; Tobin, J.; Quinn, B.; Reiser, M.; O'Shea, P. G.
2006-11-01
Tomography is the technique of reconstructing an image from its projections. It is widely used in the medical community to observe the interior of the human body by processing multiple x-ray images taken at different angles. A few pioneering researchers have adapted tomography to reconstruct detailed phase space maps of charged particle beams. Some questions arise regarding the limitations of the tomography technique for space-charge-dominated beams. For instance, is the linear space charge force a valid approximation? Does tomography equally reproduce phase space for complex, experimentally observed, initial particle distributions? Does tomography make any assumptions about the initial distribution? This study explores the use of accurate modeling with the particle-in-cell code WARP to address these questions, using a wide range of different initial distributions in the code. The study also includes a number of experimental results on tomographic phase space mapping performed on the University of Maryland Electron Ring (UMER).
Ampanozi, Garyfalia; Zimmermann, David; Hatch, Gary M; Ruder, Thomas D; Ross, Steffen; Flach, Patricia M; Thali, Michael J; Ebert, Lars C
2012-05-01
The objective of this study was to explore the perception of the legal authorities regarding different report types and visualization techniques for post-mortem radiological findings. A standardized digital questionnaire was developed and the district attorneys in the catchment area of the affiliated Forensic Institute were requested to evaluate four different types of forensic imaging reports based on four case examples. Each case was described in four different report types (short written report only, gray-scale CT image with figure caption, color-coded CT image with figure caption, 3D reconstruction with figure caption). The survey participants were asked to evaluate these types of reports regarding understandability, cost effectiveness and overall appropriateness for the courtroom. 3D reconstructions and color-coded CT images accompanied by a written report were preferred regarding understandability and cost effectiveness. 3D reconstructions of the forensic findings were viewed as most adequate for court. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Overview of the NASA Glenn Flux Reconstruction Based High-Order Unstructured Grid Code
NASA Technical Reports Server (NTRS)
Spiegel, Seth C.; DeBonis, James R.; Huynh, H. T.
2016-01-01
A computational fluid dynamics code based on the flux reconstruction (FR) method is currently being developed at NASA Glenn Research Center to ultimately provide a large-eddy simulation capability that is both accurate and efficient for complex aeropropulsion flows. The FR approach offers a simple and efficient method that is easy to implement and accurate to an arbitrary order on common grid cell geometries. The governing compressible Navier-Stokes equations are discretized in time using various explicit Runge-Kutta schemes, with the default being the 3-stage/3rd-order strong stability preserving scheme. The code is written in modern Fortran (i.e., Fortran 2008) and parallelization is attained through MPI for execution on distributed-memory high-performance computing systems. An h-refinement study of the isentropic Euler vortex problem is able to empirically demonstrate the capability of the FR method to achieve super-accuracy for inviscid flows. Additionally, the code is applied to the Taylor-Green vortex problem, performing numerous implicit large-eddy simulations across a range of grid resolutions and solution orders. The solution found by a pseudo-spectral code is commonly used as a reference solution to this problem, and the FR code is able to reproduce this solution using approximately the same grid resolution. Finally, an examination of the code's performance demonstrates good parallel scaling, as well as an implementation of the FR method with a computational cost/degree-of-freedom/time-step that is essentially independent of the solution order of accuracy for structured geometries.
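The default 3-stage/3rd-order strong-stability-preserving Runge-Kutta scheme mentioned above is commonly written as the Shu-Osher convex combination of forward-Euler stages, sketched here in Python on a scalar decay problem (the test problem is illustrative; the actual code applies this update to the discretized Navier-Stokes residual).

```python
def ssp_rk3_step(u, t, dt, f):
    """One step of the 3-stage, 3rd-order strong-stability-preserving
    Runge-Kutta scheme in Shu-Osher form: each stage is a convex combination
    of forward-Euler updates, which preserves stability bounds of the
    underlying spatial discretization."""
    u1 = u + dt * f(t, u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(t + 0.5 * dt, u2))

# Scalar decay problem du/dt = -u: 100 steps of dt = 0.01 from u(0) = 1.
u, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    u = ssp_rk3_step(u, t, dt, lambda t, y: -y)
    t += dt
```

For this linear problem the result matches exp(-1) to well within the scheme's third-order global error.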
TORUS: Radiation transport and hydrodynamics code
NASA Astrophysics Data System (ADS)
Harries, Tim
2014-04-01
TORUS is a flexible radiation transfer and radiation-hydrodynamics code. The code has a basic infrastructure that includes the AMR mesh scheme that is used by several physics modules including atomic line transfer in a moving medium, molecular line transfer, photoionization, radiation hydrodynamics and radiative equilibrium. TORUS is useful for a variety of problems, including magnetospheric accretion onto T Tauri stars, spiral nebulae around Wolf-Rayet stars, discs around Herbig AeBe stars, structured winds of O supergiants and Raman-scattered line formation in symbiotic binaries, and dust emission and molecular line formation in star forming clusters. The code is written in Fortran 2003 and is compiled using a standard Gnu makefile. The code is parallelized using both MPI and OMP, and can use these parallel sections either separately or in a hybrid mode.
Markovitz, Craig D.; Tang, Tien T.; Edge, David P.; Lim, Hubert H.
2012-01-01
The brain is a densely interconnected network that relies on populations of neurons within and across multiple nuclei to code for features leading to perception and action. However, the neurophysiology field is still dominated by the characterization of individual neurons, rather than simultaneous recordings across multiple regions, without consistent spatial reconstruction of their locations for comparisons across studies. There are sophisticated histological and imaging techniques for performing brain reconstructions. However, what is needed is a method that is relatively easy and inexpensive to implement in a typical neurophysiology lab and provides consistent identification of electrode locations to make it widely used for pooling data across studies and research groups. This paper presents our initial development of such an approach for reconstructing electrode tracks and site locations within the guinea pig inferior colliculus (IC) to identify its functional organization for frequency coding relevant for a new auditory midbrain implant (AMI). Encouragingly, the spatial error associated with different individuals reconstructing electrode tracks for the same midbrain was less than 65 μm, corresponding to an error of ~1.5% relative to the entire IC structure (~4–5 mm diameter sphere). Furthermore, the reconstructed frequency laminae of the IC were consistently aligned across three sampled midbrains, demonstrating the ability to use our method to combine location data across animals. Hopefully, through further improvements in our reconstruction method, it can be used as a standard protocol across neurophysiology labs to characterize neural data not only within the IC but also within other brain regions to help bridge the gap between cellular activity and network function. Clinically, correlating function with location within and across multiple brain regions can guide optimal placement of electrodes for the growing field of neural prosthetics. PMID:22754502
Equilibrium 𝛽-limits in classical stellarators
NASA Astrophysics Data System (ADS)
Loizu, J.; Hudson, S. R.; Nührenberg, C.; Geiger, J.; Helander, P.
2017-12-01
A numerical investigation is carried out to understand the equilibrium β-limit in a classical stellarator. The stepped-pressure equilibrium code (Hudson et al., Phys. Plasmas, vol. 19 (11), 2012) is used in order to assess whether or not magnetic islands and stochastic field-lines can emerge at high β. Two modes of operation are considered: a zero-net-current stellarator and a fixed-iota stellarator. Despite the fact that relaxation is allowed (Taylor, Rev. Mod. Phys., vol. 58 (3), 1986, pp. 741-763), the former is shown to maintain good flux surfaces up to the equilibrium β-limit predicted by ideal magnetohydrodynamics (MHD), above which a separatrix forms. The latter, which has no ideal equilibrium β-limit, is shown to develop regions of magnetic islands and chaos at sufficiently high β, thereby providing a `non-ideal β-limit'. Perhaps surprisingly, however, the value of β at which the Shafranov shift of the axis reaches a fraction of the minor radius follows in all cases the scaling laws predicted by ideal MHD. We compare our results to the High-Beta-Stellarator theory of Freidberg (Ideal MHD, 2014, Cambridge University Press) and derive a new prediction for the non-ideal equilibrium β-limit above which chaos emerges.
Plasma boundary shape control and real-time equilibrium reconstruction on NSTX-U
Boyer, M. D.; Battaglia, D. J.; Mueller, D.; ...
2018-01-25
Here, the upgrade to the National Spherical Torus eXperiment (NSTX-U) included two main improvements: a larger center-stack, enabling higher toroidal field and longer pulse duration, and the addition of three new tangentially aimed neutral beam sources, which increase available heating and current drive, and allow for flexibility in shaping power, torque, current, and particle deposition profiles. To best use these new capabilities and meet the high-performance operational goals of NSTX-U, major upgrades to the NSTX-U control system (NCS) hardware and software have been made. Several control algorithms, including those used for real-time equilibrium reconstruction and shape control, have been upgraded to improve and extend plasma control capabilities. As part of the commissioning phase of first plasma operations, the shape control system was tuned to control the boundary in both inner-wall limited and diverted discharges. It has been used to accurately track the requested evolution of the boundary (including the size of the inner gap between the plasma and central solenoid, which is a challenge for the ST configuration), X-point locations, and strike point locations, enabling repeatable discharge evolutions for scenario development and diagnostic commissioning.
Test of bootstrap current models using high-βp EAST-demonstration plasmas on DIII-D
Ren, Qilong; Lao, Lang L.; Garofalo, Andrea M.; ...
2015-01-12
Magnetic measurements together with kinetic profile and motional Stark effect measurements are used in full kinetic equilibrium reconstructions to test the Sauter and NEO bootstrap current models in a DIII-D high-βp EAST-demonstration experiment. This aims at developing on DIII-D a high bootstrap current scenario to be extended on EAST for a demonstration of true steady-state at high performance and uses EAST-similar operational conditions: plasma shape, plasma current, toroidal magnetic field, total heating power and current ramp-up rate. It is found that the large edge bootstrap current in these high-βp plasmas allows the use of magnetic measurements to clearly distinguish the two bootstrap current models. In these high collisionality and high-βp plasmas, the Sauter model overpredicts the peak of the edge current density by about 30%, while the first-principle kinetic NEO model is in close agreement with the edge current density of the reconstructed equilibrium. Furthermore, these results are consistent with recent work showing that the Sauter model largely overestimates the edge bootstrap current at high collisionality.
Plasma boundary shape control and real-time equilibrium reconstruction on NSTX-U
NASA Astrophysics Data System (ADS)
Boyer, M. D.; Battaglia, D. J.; Mueller, D.; Eidietis, N.; Erickson, K.; Ferron, J.; Gates, D. A.; Gerhardt, S.; Johnson, R.; Kolemen, E.; Menard, J.; Myers, C. E.; Sabbagh, S. A.; Scotti, F.; Vail, P.
2018-03-01
The upgrade to the National Spherical Torus eXperiment (NSTX-U) included two main improvements: a larger center-stack, enabling higher toroidal field and longer pulse duration, and the addition of three new tangentially aimed neutral beam sources, which increase available heating and current drive, and allow for flexibility in shaping power, torque, current, and particle deposition profiles. To best use these new capabilities and meet the high-performance operational goals of NSTX-U, major upgrades to the NSTX-U control system (NCS) hardware and software have been made. Several control algorithms, including those used for real-time equilibrium reconstruction and shape control, have been upgraded to improve and extend plasma control capabilities. As part of the commissioning phase of first plasma operations, the shape control system was tuned to control the boundary in both inner-wall limited and diverted discharges. It has been used to accurately track the requested evolution of the boundary (including the size of the inner gap between the plasma and central solenoid, which is a challenge for the ST configuration), X-point locations, and strike point locations, enabling repeatable discharge evolutions for scenario development and diagnostic commissioning.
Progress in understanding heavy-ion stopping
NASA Astrophysics Data System (ADS)
Sigmund, P.; Schinner, A.
2016-09-01
We report some highlights of our work with heavy-ion stopping in the energy range where Bethe stopping theory breaks down. Main tools are our binary stopping theory (PASS code), the reciprocity principle, and Paul's data base. Comparisons are made between PASS and three alternative theoretical schemes (CasP, HISTOP and SLPA). In addition to equilibrium stopping we discuss frozen-charge stopping, deviations from linear velocity dependence below the Bragg peak, application of the reciprocity principle in low-velocity stopping, modeling of equilibrium charges, and the significance of the so-called effective charge.
Proton bombarded reactions of Calcium target nuclei
NASA Astrophysics Data System (ADS)
Tel, Eyyup; Sahan, Muhittin; Sarpün, Ismail Hakki; Kavun, Yusuf; Gök, Ali Armagan; Depedelen, Mesut
2017-09-01
In this study, proton-induced nuclear reaction calculations for calcium target nuclei have been investigated over the incident proton energy range of 1-50 MeV. The excitation functions for reactions on the 40Ca target nucleus have been calculated using the PCROSS nuclear reaction code. The Weisskopf-Ewing and full exciton models were used for the equilibrium and pre-equilibrium calculations, respectively. The excitation functions for the 40Ca (p,α), (p,n) and (p,p) reactions have also been calculated using the semi-empirical formula of Tel et al. [5].
Stable Spheromaks with Profile Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fowler, T K; Jayakumar, R
A spheromak equilibrium with zero edge current is shown to be stable to both ideal MHD and tearing modes that normally produce Taylor relaxation in gun-injected spheromaks. This stable equilibrium differs from the stable Taylor state in that the current density j falls to zero at the wall. Estimates indicate that this current profile could be sustained by non-inductive current drive at acceptable power levels. Stability is determined using the NIMROD code for linear stability analysis. Non-linear NIMROD calculations with non-inductive current drive could point the way to improved fusion reactors.
Reconstructing Folding Energy Landscapes by Single-Molecule Force Spectroscopy
Woodside, Michael T.; Block, Steven M.
2015-01-01
Folding may be described conceptually in terms of trajectories over a landscape of free energies corresponding to different molecular configurations. In practice, energy landscapes can be difficult to measure. Single-molecule force spectroscopy (SMFS), whereby structural changes are monitored in molecules subjected to controlled forces, has emerged as a powerful tool for probing energy landscapes. We summarize methods for reconstructing landscapes from force spectroscopy measurements under both equilibrium and nonequilibrium conditions. Other complementary, but technically less demanding, methods provide a model-dependent characterization of key features of the landscape. Once reconstructed, energy landscapes can be used to study critical folding parameters, such as the characteristic transition times required for structural changes and the effective diffusion coefficient setting the timescale for motions over the landscape. We also discuss issues that complicate measurement and interpretation, including the possibility of multiple states or pathways and the effects of projecting multiple dimensions onto a single coordinate. PMID:24895850
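One standard route to free-energy reconstruction from nonequilibrium pulling data, covered by this literature, is Jarzynski's equality, ΔF = -kT ln⟨exp(-W/kT)⟩ over repeated work measurements W. A minimal numerical sketch follows, using synthetic Gaussian work values, for which the equality has the closed-form answer ΔF = μ - σ²/2kT; all numbers are illustrative, not measured data:

```python
import numpy as np

kT = 4.11  # pN*nm, thermal energy near room temperature
rng = np.random.default_rng(3)

# Synthetic pulling experiment: work values from repeated
# nonequilibrium trajectories, modeled as Gaussian with mean mu and
# std sigma. For a Gaussian work distribution Jarzynski's equality
# gives dF = mu - sigma^2 / (2 kT) exactly.
mu, sigma, n = 60.0, 2.0, 200000
W = rng.normal(mu, sigma, n)

# Jarzynski estimator dF = -kT ln < exp(-W/kT) >, computed with a
# shift by the minimum work value for numerical stability.
w0 = W.min()
dF = w0 - kT * np.log(np.mean(np.exp(-(W - w0) / kT)))

print(dF, mu - sigma**2 / (2 * kT))  # estimator vs analytic value
```

Note the well-known caveat: the exponential average is dominated by rare low-work trajectories, so the estimator is biased for small sample sizes, which is one reason the review discusses complementary model-dependent methods.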
Free energy reconstruction from steered dynamics without post-processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athenes, Manuel, E-mail: Manuel.Athenes@cea.f; Condensed Matter and Materials Division, Physics and Life Sciences Directorate, LLNL, Livermore, CA 94551; Marinica, Mihai-Cosmin
2010-09-20
Various methods achieving importance sampling in ensembles of nonequilibrium trajectories enable one to estimate free energy differences and, by maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes' theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in Fe-α. Second, using an autonomous scheduling related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the solid-solid structural transition temperature of the cluster and without maximum-likelihood post-processing.
Nonequilibrium chemistry boundary layer integral matrix procedure
NASA Technical Reports Server (NTRS)
Tong, H.; Buckingham, A. C.; Morse, H. L.
1973-01-01
The development of an analytic procedure for the calculation of nonequilibrium boundary layer flows over surfaces of arbitrary catalycities is described. An existing equilibrium boundary layer integral matrix code was extended to include nonequilibrium chemistry while retaining all of the general boundary condition features built into the original code. For particular application to the pitch-plane of shuttle type vehicles, an approximate procedure was developed to estimate the nonequilibrium and nonisentropic state at the edge of the boundary layer.
Gilroy, Kyle D.; Elnabawy, Ahmed O.; Yang, Tung -Han; ...
2017-04-27
Despite the remarkable success in controlling the synthesis of metal nanocrystals, it still remains a grand challenge to stabilize and preserve the shapes or internal structures of metastable kinetic products. In this work, we address this issue by systematically investigating the surface and bulk reconstructions experienced by a Pd concave icosahedron when subjected to heating up to 600 °C in vacuum. We used in situ high-resolution transmission electron microscopy to identify the equilibration pathways of this far-from-equilibrium structure. We were able to capture key structural transformations occurring during the thermal annealing process, which were mechanistically rationalized by implementing self-consistent plane-wave density functional theory (DFT) calculations. Specifically, the concave icosahedron was found to evolve into a regular icosahedron via surface reconstruction in the range of 200–400 °C, and then transform into a pseudospherical crystalline structure through bulk reconstruction when further heated to 600 °C. As a result, the mechanistic understanding may lead to the development of strategies for enhancing the thermal stability of metal nanocrystals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poirier, M.; Gaufridy de Dortan, F. de
A collisional-radiative model describing nonlocal-thermodynamic-equilibrium plasmas is developed. It is based on the HULLAC (Hebrew University Lawrence Livermore Atomic Code) suite for the transition rates, in the zero-temperature radiation field hypothesis. Two variants of the model are presented: the first one is configuration averaged, while the second one is a detailed level version. Comparisons are made between them in the case of a carbon plasma; they show that the configuration-averaged code gives correct results for an electronic temperature Te = 10 eV (or higher) but fails at lower temperatures such as Te = 1 eV. The validity of the configuration-averaged approximation is discussed: the intuitive criterion requiring that the average configuration-energy dispersion must be less than the electron thermal energy turns out to be a necessary but far from sufficient condition. Another condition based on the resolution of a modified rate-equation system is proposed. Its efficiency is emphasized in the case of low-temperature plasmas. Finally, it is shown that near-threshold autoionization cascade processes may induce a severe failure of the configuration-average formalism.
NASA Astrophysics Data System (ADS)
Bates, Jason; Schmitt, Andrew; Klapisch, Marcel; Karasik, Max; Obenschain, Steve
2013-10-01
Modifications to the FAST3D code have been made to enhance its ability to simulate the dynamics of plastic ICF targets with high-Z overcoats. This class of problems is challenging computationally due in part to plasma conditions that are not in a state of local thermodynamic equilibrium and to the presence of mixed computational cells containing more than one material. Recently, new opacity tables for gold, palladium and plastic have been generated with an improved version of the STA code. These improved tables provide smoother, higher-fidelity opacity data over a wider range of temperature and density states than before, and contribute to a more accurate treatment of radiative transfer processes in FAST3D simulations. Furthermore, a new, more efficient subroutine known as "MMEOS" has been installed in the FAST3D code for determining pressure and temperature equilibrium conditions within cells containing multiple materials. We will discuss these topics, and present new simulation results for high-Z planar-target experiments performed recently on the NIKE Laser Facility. Work supported by DOE/NNSA.
Biological forcing controls the chemistry of the coral exoskeleton
NASA Astrophysics Data System (ADS)
Meibom, A.; Mostefaoui, S.; Cuif, J.; Yurimoto, H.; Dauphin, Y.; Houlbreque, F.; Dunbar, R.; Constantz, B.
2006-12-01
A multitude of marine organisms produce calcium carbonate skeletons that are used extensively to reconstruct water temperature variability of the tropical and subtropical oceans - a key parameter in global climate-change models. Such paleo-climate reconstructions are based on the notion that skeletal oxygen isotopic composition and certain trace-element abundances (e.g., Sr/Ca and Mg/Ca ratios) vary in response to changes in the water temperature. However, it is a fundamental problem that poorly understood biological processes introduce large compositional deviations from thermodynamic equilibrium and hinder precise calibrations of many paleo-climate proxies. Indeed, the role of water temperature in controlling the composition of the skeleton is far from understood. We have studied trace-element abundances as well as oxygen and carbon isotopic compositions of individual skeletal components in the zooxanthellate and non-zooxanthellate corals at ultra-structural, i.e. micrometer to sub-micrometer length scales. From this body of work we draw the following, generalized conclusions: 1) Centers of calcification (COC) are not in equilibrium with seawater. Notably, the Sr/Ca ratio is higher than expected for aragonite equilibrium with seawater at the temperature at which the skeleton was formed. Furthermore, the COC are further away from equilibrium with seawater than fibrous skeleton in terms of stable isotope composition. 2) COC are dramatically different from the fibrous aragonite skeleton in terms of trace element composition. 3) Neither trace element nor stable isotope variations in the fibrous (bulk) part of the skeleton are directly related to changes in SST. In fact, changes in SST can have very little to do with the observed compositional variations. 4) Trace element variations in the fibrous (bulk) part of the skeleton are not related to the activity of zooxanthellae. 
These observations are directly relevant to the issue of biological versus non-biological control over skeleton composition and will be discussed.
Light field reconstruction robust to signal dependent noise
NASA Astrophysics Data System (ADS)
Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai
2014-11-01
Capturing four-dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from a low signal-to-noise ratio. Although multiplexing can help raise the acquisition quality, noise remains a serious issue, especially for fast acquisition. To address this problem, this paper proposes a noise-robust light field reconstruction method. First, a scene-dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We build a prototype by hacking an off-the-shelf camera for data capture and prove the concept. The effectiveness of this method is validated with experiments on the real captured data.
Differential Binary Encoding Method for Calibrating Image Sensors Based on IOFBs
Fernández, Pedro R.; Lázaro-Galilea, José Luis; Gardel, Alfredo; Espinosa, Felipe; Bravo, Ignacio; Cano, Ángel
2012-01-01
Image transmission using incoherent optical fiber bundles (IOFBs) requires prior calibration to obtain the spatial in-out fiber correspondence necessary to reconstruct the image captured by the pseudo-sensor. This information is recorded in a Look-Up Table called the Reconstruction Table (RT), used later for reordering the fiber positions and reconstructing the original image. This paper presents a very fast method based on image-scanning using spaces encoded by a weighted binary code to obtain the in-out correspondence. The results demonstrate that this technique yields a remarkable reduction in processing time and the image reconstruction quality is very good compared to previous techniques based on spot or line scanning, for example. PMID:22666023
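The weighted-binary-code idea can be illustrated with a simplified 1-D sketch (not the authors' implementation): projecting log2(N) binary stripe patterns lets each output fiber read off its input position one bit per captured image, which is exactly the in-out correspondence stored in the Reconstruction Table. All names and sizes below are illustrative:

```python
import numpy as np

def encode_patterns(n_positions, n_bits):
    """Binary stripe patterns: pattern b is bright at input position p
    iff bit b of p is 1, so n_bits captured images identify each of
    n_positions input positions."""
    return np.array([[(p >> b) & 1 for p in range(n_positions)]
                     for b in range(n_bits)])

def decode_positions(captures):
    """Recover each output fiber's input position from its per-pattern
    bright/dark readings (one row per pattern, one column per fiber)
    by summing the bits with their binary weights."""
    weights = 1 << np.arange(captures.shape[0])
    return weights @ captures

# Toy bundle: 16 input positions scrambled by a random permutation,
# standing in for the unknown in-out fiber correspondence.
rng = np.random.default_rng(0)
perm = rng.permutation(16)
patterns = encode_patterns(16, 4)   # 4 = log2(16) captured images
captures = patterns[:, perm]        # what the pseudo-sensor records
recovered = decode_positions(captures)
print(np.array_equal(recovered, perm))  # True: reconstruction table found
```

The key property, as the abstract notes, is that only log2(N) captures are needed instead of one capture per fiber, which is where the large reduction in processing time comes from.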
NIMROD modeling of quiescent H-mode: reconstruction considerations and saturation mechanism
NASA Astrophysics Data System (ADS)
King, J. R.; Burrell, K. H.; Garofalo, A. M.; Groebner, R. J.; Kruger, S. E.; Pankin, A. Y.; Snyder, P. B.
2017-02-01
The extended-MHD NIMROD code (Sovinec and King 2010 J. Comput. Phys. 229 5803) models broadband-MHD activity from a reconstruction of a quiescent H-mode shot on the DIII-D tokamak (Luxon 2002 Nucl. Fusion 42 614). Computations with the reconstructed toroidal and poloidal ion flows exhibit low-nφ perturbations (nφ ≃ 1-5) that grow and saturate into a turbulent-like MHD state. The workflow used to project the reconstructed state onto the NIMROD basis functions re-solves the Grad-Shafranov equation and extrapolates profiles to include scrape-off-layer currents. Evaluation of the transport from the turbulent-like MHD state leads to a relaxation of the density and temperature profiles.
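For reference, the Grad-Shafranov equation that the workflow re-solves is the standard axisymmetric force-balance relation for the poloidal flux ψ(R, Z), written here in its textbook form; the profile functions p(ψ) and F(ψ) are those supplied by the reconstruction:

```latex
\Delta^{*}\psi \equiv R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\frac{\partial \psi}{\partial R}\right) + \frac{\partial^{2}\psi}{\partial Z^{2}} = -\mu_{0} R^{2}\,\frac{dp}{d\psi} - F\,\frac{dF}{d\psi}
```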
Baad, Rajendra K.; Belgaumi, Uzma; Vibhute, Nupura; Kadashetti, Vidya; Chandrappa, Pramod Redder; Gugwad, Sushma
2015-01-01
The proper identification of a decedent is important not only for humanitarian and emotional reasons, but also for legal and administrative purposes. During the reconstructive identification process, all necessary information is gathered from the unknown body of the victim so that an objective reconstructed profile can be established. Denture marking systems are used in various situations, and a number of direct and indirect methods have been reported. We propose that national identification numbers be incorporated in all removable and fixed prostheses, so as to adopt a single and definitive universal personal identification code, with the aim of achieving a uniform, standardized, easy, and fast identification method worldwide for forensic identification. PMID:26005294
Temporal compressive imaging for video
NASA Astrophysics Data System (ADS)
Zhou, Qun; Zhang, Linxia; Ke, Jun
2018-01-01
In many situations, imagers are required to have higher imaging speed, for example in gunpowder blasting analysis and in observing high-speed biological phenomena. However, measuring high-speed video is a challenge to camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on the reconstruction is discussed, and the reconstruction qualities using TwIST and GMM are compared.
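The TCI forward model described above (a single coded snapshot integrating T = 8 mask-modulated frames) and the 8×8 patch splitting can be sketched as follows; the TwIST/GMM reconstruction step itself is omitted, and all array contents are synthetic:

```python
import numpy as np

T, H, W = 8, 256, 256
rng = np.random.default_rng(1)

video = rng.random((T, H, W))           # high-speed frames to recover
masks = rng.integers(0, 2, (T, H, W))   # per-frame binary coded masks

# TCI forward model: the camera integrates T mask-modulated frames
# into one compressive snapshot (temporal compression ratio T = 8).
snapshot = (masks * video).sum(axis=0)

# Patch splitting as in the paper: each 256x256 measurement is cut
# into non-overlapping 8x8 patches so reconstruction runs per patch.
patches = snapshot.reshape(H // 8, 8, W // 8, 8).swapaxes(1, 2)
print(snapshot.shape, patches.shape)  # (256, 256) (32, 32, 8, 8)
```

Reconstruction then inverts this linear model per patch, either iteratively (TwIST) or in closed form under a Gaussian mixture prior (GMM).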
Gai, Jiading; Obeid, Nady; Holtrop, Joseph L.; Wu, Xiao-Long; Lam, Fan; Fu, Maojing; Haldar, Justin P.; Hwu, Wen-mei W.; Liang, Zhi-Pei; Sutton, Bradley P.
2013-01-01
Several recent methods have been proposed to obtain significant speed-ups in MRI image reconstruction by leveraging the computational power of GPUs. Previously, we implemented a GPU-based image reconstruction technique called the Illinois Massively Parallel Acquisition Toolkit for Image reconstruction with ENhanced Throughput in MRI (IMPATIENT MRI) for reconstructing data collected along arbitrary 3D trajectories. In this paper, we improve IMPATIENT by removing computational bottlenecks by using a gridding approach to accelerate the computation of various data structures needed by the previous routine. Further, we enhance the routine with capabilities for off-resonance correction and multi-sensor parallel imaging reconstruction. Through implementation of optimized gridding into our iterative reconstruction scheme, speed-ups of more than a factor of 200 are provided in the improved GPU implementation compared to the previous accelerated GPU code. PMID:23682203
Toward a CFD nose-to-tail capability - Hypersonic unsteady Navier-Stokes code validation
NASA Technical Reports Server (NTRS)
Edwards, Thomas A.; Flores, Jolen
1989-01-01
Computational fluid dynamics (CFD) research for hypersonic flows presents new problems in code validation because of the added complexity of the physical models. This paper surveys code validation procedures applicable to hypersonic flow models that include real gas effects. The current status of hypersonic CFD flow analysis is assessed with the Compressible Navier-Stokes (CNS) code as a case study. The methods of code validation discussed go beyond comparison with experimental data to include comparisons with other codes and formulations, component analyses, and estimation of numerical errors. Current results indicate that predicting hypersonic flows of perfect gases and equilibrium air is well in hand. Pressure, shock location, and integrated quantities are relatively easy to predict accurately, while surface quantities such as heat transfer are more sensitive to the solution procedure. Modeling of transition to turbulence needs refinement, though preliminary results are promising.
PARC Navier-Stokes code upgrade and validation for high speed aeroheating predictions
NASA Technical Reports Server (NTRS)
Liver, Peter A.; Praharaj, Sarat C.; Seaford, C. Mark
1990-01-01
Applications of the PARC full Navier-Stokes code for hypersonic flowfield and aeroheating predictions around blunt bodies such as the Aeroassist Flight Experiment (AFE) and Aeroassisted Orbital Transfer Vehicle (AOTV) are evaluated. Two-dimensional/axisymmetric and three-dimensional perfect gas versions of the code were upgraded and tested against benchmark wind tunnel cases of hemisphere-cylinder, three-dimensional AFE forebody, and axisymmetric AFE and AOTV aerobrake/wake flowfields. PARC calculations are in good agreement with experimental data and results of similar computer codes. Difficulties encountered in flowfield and heat transfer predictions due to effects of grid density, boundary conditions such as singular stagnation line axis and artificial dissipation terms are presented together with subsequent improvements made to the code. The experience gained with the perfect gas code is being currently utilized in applications of an equilibrium air real gas PARC version developed at REMTECH.
An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).
Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling
2018-04-17
Aimed at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
Evolutionary Construction of Block-Based Neural Networks in Consideration of Failure
NASA Astrophysics Data System (ADS)
Takamori, Masahito; Koakutsu, Seiichi; Hamagami, Tomoki; Hirata, Hironori
In this paper we propose a modified gene coding and a failure-aware evolutionary construction for Block-Based Neural Networks (BBNNs). In the modified gene coding, the weight genes are arranged on the chromosome according to the positional relation between the weight and structure genes, which increases the efficiency of the crossover search and is therefore expected to improve the convergence rate of construction and shorten construction time. In the failure-aware evolutionary construction, a structure adapted to a failure is built in the state where the failure occurred, so that the BBNN can be reconstructed in a short time when a failure arises. To evaluate the proposed method, we apply it to pattern classification and autonomous mobile robot control problems. The computational experiments indicate that the proposed method can improve the convergence rate of construction and shorten both construction and reconstruction times.
Real-time feedback control of the plasma density profile on ASDEX Upgrade
NASA Astrophysics Data System (ADS)
Mlynek, A.; Reich, M.; Giannone, L.; Treutterer, W.; Behler, K.; Blank, H.; Buhler, A.; Cole, R.; Eixenberger, H.; Fischer, R.; Lohs, A.; Lüddecke, K.; Merkel, R.; Neu, G.; Ryter, F.; Zasche, D.; ASDEX Upgrade Team
2011-04-01
The spatial distribution of density in a fusion experiment is of significant importance, as it enters into numerous analyses and contributes to the fusion performance. Reconstruction of the density profile is therefore commonly done in offline data analysis. In this paper, we present an algorithm that reconstructs the density profile in real time from the data of the submillimetre interferometer and the magnetic equilibrium. We compare the obtained results to the profiles yielded by a numerically more complex offline algorithm. Furthermore, we present recent ASDEX Upgrade experiments in which the real-time density profile was used for active feedback control of the shape of the density profile.
Labor union members play an OLG repeated game
Kandori, Michihiro; Obayashi, Shinya
2014-01-01
Humans are capable of cooperating with one another even when it is costly and a deviation provides an immediate gain. An important reason is that cooperation is reciprocated or rewarded and deviations are penalized in later stages. For cooperation to be sustainable, not only must rewards and penalties be strong enough but individuals should also have the right incentives to provide rewards and punishments. Codes of conduct with such properties have been studied extensively in game theory (as repeated game equilibria), and the literature on the evolution of cooperation shows how equilibrium behavior might emerge and proliferate in society. We found that community unions, a subclass of labor unions that admits individual affiliations, are ideal to corroborate these theories with reality, because (i) their activities are simple and (ii) they have a structure that closely resembles a theoretical model, the overlapping generations repeated game. A detailed case study of a community union revealed a possible equilibrium that can function under the very limited observability in the union. The equilibrium code of conduct appears to be a natural focal point based on simple heuristic reasoning. The union we studied was created out of necessity for cooperation, without knowing or anticipating how cooperation might be sustained. The union has successfully resolved about 3,000 labor disputes and created a number of offspring. PMID:25024211
Liu, Hui; Chen, Fu; Sun, Huiyong; Li, Dan; Hou, Tingjun
2017-04-11
By means of estimators based on non-equilibrium work, equilibrium free energy differences or potentials of mean force (PMFs) of a system of interest can be computed from biased molecular dynamics (MD) simulations. The approach, however, is often plagued by slow conformational sampling and poor convergence, especially when the solvent effects are taken into account. Here, as a possible way to alleviate the problem, several widely used implicit-solvent models, which are derived from the analytic generalized Born (GB) equation and implemented in the AMBER suite of programs, were employed in free energy calculations based on non-equilibrium work and evaluated for their abilities to emulate explicit water. As a test case, pulling MD simulations were carried out on an alanine polypeptide with different solvent models and protocols, followed by comparisons of the reconstructed PMF profiles along the unfolding coordinate. The results show that when employing the non-equilibrium work method, sampling with an implicit-solvent model is several times faster and, more importantly, converges more rapidly than that with explicit water due to reduction of dissipation. Among the assessed GB models, the Neck variants outperform the OBC and HCT variants in terms of accuracy, whereas their computational costs are comparable. In addition, for the best-performing models, the impact of the solvent-accessible surface area (SASA) dependent nonpolar solvation term was also examined. The present study highlights the advantages of implicit-solvent models for non-equilibrium sampling.
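The work-based estimators referred to here are of the Jarzynski type; a minimal sketch (unitless energies, kT = 1 by default) of the exponential work average is:

```python
# Minimal sketch of the Jarzynski estimator:
# ΔF = -kT * ln < exp(-W/kT) > over nonequilibrium work values W.

import math

def jarzynski_free_energy(work_values, kT=1.0):
    """Exponential-average estimator of the free energy difference."""
    n = len(work_values)
    # log-sum-exp for numerical stability
    m = min(work_values)
    s = sum(math.exp(-(w - m) / kT) for w in work_values)
    return m - kT * math.log(s / n)

# For an equilibrium (zero-dissipation) process all W equal ΔF exactly:
print(jarzynski_free_energy([2.0, 2.0, 2.0]))  # → 2.0
```

Because the exponential average is dominated by rare low-work trajectories, reduced dissipation, as reported for the implicit-solvent runs, directly improves the convergence of this estimator.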
NASA Astrophysics Data System (ADS)
Chen, Yihang; Xiao, Chijie; Yang, Xiaoyi; Wang, Tianbo; Xu, Tianchao; Yu, Yi; Xu, Min; Wang, Long; Lin, Chen; Wang, Xiaogang
2017-10-01
The Laser-driven Ion beam Trace Probe (LITP) is a new diagnostic method for measuring the poloidal magnetic field (Bp) and radial electric field (Er) in tokamaks. LITP injects a laser-driven ion beam into the tokamak, and Bp and Er profiles can be reconstructed using tomography methods. A reconstruction code has been developed to validate the LITP theory, and both 2D reconstruction of Bp and simultaneous reconstruction of Bp and Er have been attained. To reconstruct from experimental data with noise, Maximum Entropy and Gaussian-Bayesian tomography methods were applied and improved according to the characteristics of the LITP problem. With these improved methods, a reconstruction error level below 15% has been attained with a data noise level of 10%. These methods will be further tested and applied in forthcoming LITP experiments. Supported by the ITER-China program 2015GB120001, China MOST under 2012YQ030142, and the National Natural Science Foundation of China under 11575014 and 11375053.
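Tomography of line-integrated beam data ultimately amounts to inverting a linear system. As an illustration of the algebraic family of methods (not the improved MaxEnt/Gaussian-Bayesian algorithms used here), a Kaczmarz iteration on a toy two-chord system looks like:

```python
# Illustrative sketch of an algebraic reconstruction (Kaczmarz)
# iteration of the kind used to invert line-integrated measurements
# for a field profile.

def kaczmarz(rows, b, n_unknowns, sweeps=50):
    """Solve A x = b by cyclically projecting onto each row's hyperplane."""
    x = [0.0] * n_unknowns
    for _ in range(sweeps):
        for a, bi in zip(rows, b):
            dot = sum(ai * xi for ai, xi in zip(a, x))
            norm2 = sum(ai * ai for ai in a)
            if norm2 == 0.0:
                continue
            lam = (bi - dot) / norm2
            x = [xi + lam * ai for xi, ai in zip(x, a)]
    return x

# Two chords sampling two cells; exact solution x = [2, 1]:
A = [[1.0, 1.0], [1.0, -1.0]]
b = [3.0, 1.0]
x = kaczmarz(A, b, 2)
print([round(v, 6) for v in x])  # → [2.0, 1.0]
```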
NASA Astrophysics Data System (ADS)
Teplukhina, A. A.; Sauter, O.; Felici, F.; Merle, A.; Kim, D.; the TCV Team; the ASDEX Upgrade Team; the EUROfusion MST1 Team
2017-12-01
The present work demonstrates the capabilities of the transport code RAPTOR as a fast and reliable simulator of plasma profiles for the entire plasma discharge, i.e. from ramp-up to ramp-down. At this stage, the code focuses on the simulation of electron temperature and poloidal flux profiles using a prescribed equilibrium and some prescribed kinetic profiles. In this work we extend the RAPTOR transport model to include a time-varying plasma equilibrium geometry and verify the changes via comparison with ASTRA code simulations. In addition, a new ad hoc transport model based on constant gradients and suitable for simulations of L-H and H-L mode transitions has been incorporated into the RAPTOR code and validated with rapid simulations of the time evolution of the safety factor and the electron temperature over entire AUG and TCV discharges. An optimization procedure for the plasma termination phase has also been developed in this work. We define the goal of the optimization as ramping down the plasma current as fast as possible while avoiding any disruptions caused by reaching physical or technical limits. Our numerical study of this problem shows that a fast decrease of plasma elongation during the current ramp-down can help reduce the plasma internal inductance. An early transition from H- to L-mode reduces the drop in poloidal beta, which is also important for plasma MHD stability and control. This work shows how these complex nonlinear interactions can be optimized automatically using relevant cost functions and constraints. Preliminary experimental results for TCV are presented.
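Schematically, the termination-phase optimization is a constrained search for the fastest safe ramp rate. A deliberately toy sketch (the linear internal-inductance model and all numbers are invented for illustration, not RAPTOR's physics) is:

```python
# Toy illustration: pick the fastest current ramp-down rate whose
# (crudely modeled) internal-inductance rise stays below a
# disruption-avoidance limit.

def li_rise(ramp_rate, k=0.5):
    """Hypothetical monotone model: faster ramp-down -> larger li rise."""
    return k * ramp_rate

def fastest_safe_ramp(rates, li_limit):
    """Return the largest candidate rate satisfying the li constraint."""
    safe = [r for r in rates if li_rise(r) <= li_limit]
    return max(safe) if safe else None

rates = [0.5, 1.0, 1.5, 2.0, 2.5]   # MA/s, illustrative
print(fastest_safe_ramp(rates, li_limit=1.0))  # → 2.0
```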
An international comparison of reimbursement for DIEAP flap breast reconstruction.
Reid, A W N; Szpalski, C; Sheppard, N N; Morrison, C M; Blondeel, P N
2015-11-01
The deep inferior epigastric artery perforator (DIEAP) flap is currently considered the gold standard for autologous breast reconstruction. With the current economic climate and health cutbacks, we decided to survey reimbursement for DIEAP flaps performed at the main international centres in order to assess whether they are funded consistently. Data were collected confidentially from the main international centres by an anonymous questionnaire. Our results illustrate the wide disparity in international DIEAP flap breast reconstruction reimbursement: a unilateral DIEAP flap performed in New York, USA, attracts €20,759, whereas the same operation in Madrid, Spain, will only be reimbursed for €300. Only 35.7% of the surgeons can set up their own fee. Moreover, 85.7% of the participants estimated that the current fees are insufficient, and most of them feel that we are evolving towards an even lower reimbursement rate. In 55.8% of the countries represented, there is no DIEAP-specific coding; in comparison, 74.4% of the represented countries have a specific coding for transverse rectus abdominis (TRAM) flaps. Finally, despite the fact that DIEAP flaps have become the gold standard for breast reconstruction, they comprise only a small percentage of the total number of breast reconstruction procedures performed (7-15%), the only exception being Belgium (40%). Our results demonstrate that DIEAP flap breast reconstruction is inconsistently funded. Unfortunately, it appears that the current reimbursement offered by many countries may dissuade institutions and surgeons from offering this procedure. However, substantial evidence exists supporting the cost-effectiveness of perforator flaps for breast reconstruction, and, in our opinion, the long-term clinical benefits for our patients are so important that this investment of time and money is absolutely essential. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Qin, Z.; Zhao, J. M.; Liu, L. H.
2018-05-01
The level energies of diatomic molecules calculated by the frequently used Dunham expansion will become less accurate for high-lying vibrational and rotational levels. In this paper, the potential curves for the lower-lying electronic states with accurate spectroscopic constants are reconstructed using the Rydberg-Klein-Rees (RKR) method, which are extrapolated to the dissociation limits by fitting of the theoretical potentials, and the rest of the potential curves are obtained from the ab-initio results in the literature. Solving the rotational dependence of the radial Schrödinger equation over the obtained potential curves, we determine the rovibrational level energies, which are then used to calculate the equilibrium and non-equilibrium thermodynamic properties of N2, N2+, NO, O2, CN, C2, CO and CO+. The partition functions and the specific heats are systematically validated by available data in the literature. Finally, we calculate the radiative source strengths of diatomic molecules in thermodynamic equilibrium, which agree well with the available values in the literature. The spectral radiative intensities for some diatomic molecules in thermodynamic non-equilibrium are calculated and validated by available experimental data.
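Once the rovibrational level energies are known, equilibrium thermodynamic quantities follow from direct Boltzmann sums. A minimal sketch with an invented two-level system (not the paper's RKR data) is:

```python
# Minimal sketch: partition function and mean internal energy from a
# set of level energies, as (degeneracy, energy_eV) pairs. The two-level
# system below is illustrative only.

import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def partition_function(levels, T):
    """Q(T) = sum_i g_i exp(-E_i / kT)."""
    return sum(g * math.exp(-e / (K_B * T)) for g, e in levels)

def mean_energy(levels, T):
    """<E> = sum_i g_i E_i exp(-E_i/kT) / Q."""
    q = partition_function(levels, T)
    return sum(g * e * math.exp(-e / (K_B * T)) for g, e in levels) / q

levels = [(1, 0.0), (1, 0.2)]   # ground + one excited level (eV)
print(round(partition_function(levels, 3000.0), 4))
```

Specific heats then follow from temperature derivatives of the mean energy, and the same sums over accurate level energies are what make the high-lying levels matter.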
Individual-based models for adaptive diversification in high-dimensional phenotype spaces.
Ispolatov, Iaroslav; Madhok, Vaibhav; Doebeli, Michael
2016-02-07
Most theories of evolutionary diversification are based on equilibrium assumptions: they are either based on optimality arguments involving static fitness landscapes, or they assume that populations first evolve to an equilibrium state before diversification occurs, as exemplified by the concept of evolutionary branching points in adaptive dynamics theory. Recent results indicate that adaptive dynamics may often not converge to equilibrium points and instead generate complicated trajectories if evolution takes place in high-dimensional phenotype spaces. Even though some analytical results on diversification in complex phenotype spaces are available, to study this problem in general we need to reconstruct individual-based models from the adaptive dynamics generating the non-equilibrium dynamics. Here we first provide a method to construct individual-based models such that they faithfully reproduce the given adaptive dynamics attractor without diversification. We then show that a propensity to diversify can be introduced by adding Gaussian competition terms that generate frequency dependence while still preserving the same adaptive dynamics. For sufficiently strong competition, the disruptive selection generated by frequency-dependence overcomes the directional evolution along the selection gradient and leads to diversification in phenotypic directions that are orthogonal to the selection gradient. Copyright © 2015 Elsevier Ltd. All rights reserved.
Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai
2015-12-01
Local coordinate coding (LCC) is a framework for approximating a Lipschitz smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that heavily determines the nonlinear approximation ability, posing two main challenges: 1) locality, i.e. faraway anchors should have smaller influence on the current datum; and 2) flexibility, i.e. a good balance between reconstruction of the current datum and locality. In this paper, we address the problem through a theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local Student coding, and propose local Laplacian coding (LPC) to achieve both locality and flexibility. We apply LPC in locally linear classifiers to solve diverse classification tasks. Performance comparable to or exceeding that of state-of-the-art methods demonstrates the effectiveness of the proposed method.
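The locality requirement can be made concrete with a toy 1D weighting sketch (illustrative kernels and numbers, not the paper's LPC formulation): anchors far from the current datum should receive vanishing weight.

```python
# Toy sketch of the locality idea in local coordinate coding:
# anchor weights decay with distance, here for Gaussian and
# Laplacian kernels in 1D.

import math

def gaussian_weight(x, anchor, sigma=1.0):
    return math.exp(-((x - anchor) ** 2) / (2.0 * sigma ** 2))

def laplacian_weight(x, anchor, b=1.0):
    return math.exp(-abs(x - anchor) / b)

def normalized_weights(x, anchors, kernel):
    w = [kernel(x, a) for a in anchors]
    s = sum(w)
    return [wi / s for wi in w]

anchors = [0.0, 1.0, 5.0]
w = normalized_weights(0.1, anchors, gaussian_weight)
# the faraway anchor (5.0) gets a near-zero weight
print([round(v, 4) for v in w])
```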
Sustainable Strategies for the Dynamic Equilibrium of the Urban Stream, Cheonggyecheon
NASA Astrophysics Data System (ADS)
Seo, D.; Kwon, Y.
2018-04-01
Cheonggyecheon, which had been transformed into a 14-lane urban highway and a large underground sewer system, was finally converted back into an urban stream. Its transformation has been praised as a successful example of urban downtown regeneration and beautification. It is, however, clear that ecological considerations were not paramount, since the project's principal goals were to provide public recreational use and to achieve maximum flood-control capacity via the use of embankments. For a healthier and sustainable stream environment, Cheonggyecheon should be ecologically restored again, based on a dynamic equilibrium model. This would primarily require establishing a corridor of vegetation and an aquatic transitional zone, and installing constructed wetlands nearby to support the water source. The upper reaches of Cheonggyecheon should be further restored to supply natural waters. Ultimately, de-channelization is needed for hydrological sustainability; this could range from merely increasing the sinuosity to thoroughly reconstructing a naturalized stream. Complete dynamic equilibrium of Cheonggyecheon can be accomplished through these more fundamental sustainable strategies.
NASA Astrophysics Data System (ADS)
Shahzad, M.; Rizvi, H.; Panwar, A.; Ryu, C. M.
2017-06-01
We have revisited the existence criterion of reverse shear Alfvén eigenmodes (RSAEs) in the presence of a parallel equilibrium current by numerically solving the eigenvalue equation using the fast eigenvalue solver code KAES. The parallel equilibrium current brings in the kink effect and is known to be strongly unfavorable for the RSAE. We have numerically estimated the critical value of the toroidicity factor Qtor in a circular tokamak plasma, above which RSAEs can exist, and compared it to the analytical value. The difference between the numerical and analytical critical values is small for low-frequency RSAEs, but it increases with the mode frequency, becoming greater for higher poloidal harmonic modes.
Students’ misconceptions on solubility equilibrium
NASA Astrophysics Data System (ADS)
Setiowati, H.; Utomo, S. B.; Ashadi
2018-05-01
This study investigated students' misconceptions about solubility equilibrium. The participants consisted of 164 students in second-year high school science classes. The instrument used was a two-tier diagnostic test consisting of 15 items. Responses were marked and coded into four categories: understanding, misconception, partial understanding without misconception, and not understanding. Semi-structured interviews were carried out with 45 students, selected according to written responses reflecting different perspectives, to obtain more elaborated data. Data collected from these multiple methods were analyzed qualitatively and quantitatively. The analysis showed that students held misconceptions in all areas of solubility equilibrium, most frequently concerning the relation between solubility and the solubility product, the common-ion effect, the effect of pH on solubility, and precipitation.
A Simple and Accurate Network for Hydrogen and Carbon Chemistry in the Interstellar Medium
NASA Astrophysics Data System (ADS)
Gong, Munan; Ostriker, Eve C.; Wolfire, Mark G.
2017-07-01
Chemistry plays an important role in the interstellar medium (ISM), regulating the heating and cooling of the gas and determining abundances of molecular species that trace gas properties in observations. Although solving the time-dependent equations is necessary for accurate abundances and temperature in the dynamic ISM, a full chemical network is too computationally expensive to incorporate into numerical simulations. In this paper, we propose a new simplified chemical network for hydrogen and carbon chemistry in the atomic and molecular ISM. We compare results from our chemical network in detail with results from a full photodissociation region (PDR) code, and also with the Nelson & Langer (NL99) network previously adopted in the simulation literature. We show that our chemical network gives similar results to the PDR code in the equilibrium abundances of all species over a wide range of densities, temperature, and metallicities, whereas the NL99 network shows significant disagreement. Applying our network to 1D models, we find that the CO-dominated regime delimits the coldest gas and that the corresponding temperature tracks the cosmic-ray ionization rate in molecular clouds. We provide a simple fit for the locus of CO-dominated regions as a function of gas density and column. We also compare with observations of diffuse and translucent clouds. We find that the CO, CHx, and OHx abundances are consistent with equilibrium predictions for densities n = 100-1000 cm^-3, but the predicted equilibrium C abundance is higher than that seen in observations, signaling the potential importance of non-equilibrium/dynamical effects.
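The flavor of such equilibrium abundance calculations can be conveyed by a single formation-destruction balance. The sketch below uses an invented one-reaction hydrogen balance with placeholder rate coefficients, far simpler than the proposed network:

```python
# Toy sketch (not the paper's network): equilibrium H2 fraction from a
# single balance, formation R*n*n_HI = destruction D*n_H2, with
# conservation n_HI + 2*n_H2 = n. Rates are illustrative placeholders.

def h2_equilibrium_fraction(n, R, D):
    """Return the molecular fraction 2*n_H2/n at equilibrium."""
    # R*n*(n - 2*n_H2) = D*n_H2  ->  n_H2 = R*n*n / (D + 2*R*n)
    n_h2 = R * n * n / (D + 2.0 * R * n)
    return 2.0 * n_h2 / n

# Dense gas with weak dissociation is almost fully molecular:
print(round(h2_equilibrium_fraction(n=1000.0, R=3e-17, D=1e-15), 3))  # → 0.984
```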
Design and Testing of a Liquid Nitrous Oxide and Ethanol Fueled Rocket Engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Youngblood, Stewart
A small-scale, bi-propellant, liquid-fueled rocket engine and supporting test infrastructure were designed and constructed at the Energetic Materials Research and Testing Center (EMRTC). This facility was used to evaluate liquid nitrous oxide and ethanol as potential rocket propellants. Thrust and pressure measurements along with high-speed digital imaging of the rocket exhaust plume were made. These experimental data were used for validation of a computational model of the rocket engine tested. The developed computational model was utilized to analyze rocket engine performance across a range of operating pressures, fuel-oxidizer mixture ratios, and outlet nozzle configurations. A comparative study of the modeling of a liquid rocket engine was performed using NASA CEA and Cantera, an open-source equilibrium code capable of being interfaced with MATLAB. One goal of this modeling was to demonstrate the ability of Cantera to accurately model the basic chemical equilibrium, thermodynamics, and transport properties for varied fuel and oxidizer operating conditions. Once validated for basic equilibrium, an expanded MATLAB code, referencing Cantera, was advanced beyond CEA's capabilities to predict rocket engine performance as a function of supplied propellant flow rate and rocket engine nozzle dimensions. Cantera was found to compare favorably to CEA for equilibrium calculations, supporting its use as an alternative to CEA. The developed rocket engine performed as predicted, demonstrating that the developed MATLAB rocket engine model successfully predicts real-world rocket engine performance. Finally, nitrous oxide and ethanol were shown to perform well as rocket propellants, with specific impulses experimentally recorded in the range of 250 to 260 seconds.
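The reported 250-260 s specific impulse range is consistent with standard ideal-rocket relations. A back-of-the-envelope sketch (frozen ideal-gas flow; the chamber values below are chosen illustratively, not taken from the thesis) is:

```python
# Back-of-the-envelope sketch using standard ideal-rocket relations
# (not the author's Cantera/MATLAB model): specific impulse from
# chamber conditions, assuming frozen, ideal-gas, isentropic flow.

import math

G0 = 9.80665          # standard gravity, m/s^2
R_UNIV = 8.314462618  # universal gas constant, J/(mol K)

def ideal_isp(T_c, M, gamma, p_e_over_p_c):
    """Isp = v_e/g0 for isentropic expansion from chamber to exit pressure."""
    term = 1.0 - p_e_over_p_c ** ((gamma - 1.0) / gamma)
    v_e = math.sqrt(2.0 * gamma / (gamma - 1.0) * (R_UNIV / M) * T_c * term)
    return v_e / G0

# Illustrative N2O/ethanol-like chamber: 3000 K, 25 g/mol, gamma = 1.2,
# expansion to ~1/70 of chamber pressure.
print(round(ideal_isp(3000.0, 0.025, 1.2, 1.0 / 70.0)))  # → 251
```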
Effects of 2D and 3D Error Fields on the SAS Divertor Magnetic Topology
NASA Astrophysics Data System (ADS)
Trevisan, G. L.; Lao, L. L.; Strait, E. J.; Guo, H. Y.; Wu, W.; Evans, T. E.
2016-10-01
The successful design of plasma-facing components in fusion experiments is of paramount importance in both the operation of future reactors and in the modification of operating machines. Indeed, the Small Angle Slot (SAS) divertor concept, proposed for application on the DIII-D experiment, combines a small incident angle at the plasma strike point with a progressively opening slot, so as to better control heat flux and erosion in high-performance tokamak plasmas. Uncertainty quantification of the error fields expected around the striking point provides additional useful information in both the design and the modeling phases of the new divertor, in part due to the particular geometric requirement of the striking flux surfaces. The presented work involves both 2D and 3D magnetic error field analysis on the SAS strike point carried out using the EFIT code for 2D equilibrium reconstruction, V3POST for vacuum 3D computations and the OMFIT integrated modeling framework for data analysis. An uncertainty in the magnetic probes' signals is found to propagate non-linearly as an uncertainty in the striking point and angle, which can be quantified through statistical analysis to yield robust estimates. Work supported by contracts DE-FG02-95ER54309 and DE-FC02-04ER54698.
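The nonlinear uncertainty propagation described here can be sketched with a generic Monte Carlo approach (the signal-to-angle map and noise level below are toys, not the EFIT/V3POST chain):

```python
# Conceptual sketch: propagate Gaussian magnetic-probe noise through a
# nonlinear mapping to a strike-point quantity and summarize the
# resulting spread statistically.

import math
import random

def strike_angle(signal):
    """Hypothetical nonlinear probe-signal -> incidence-angle map (degrees)."""
    return 2.0 + 3.0 * math.tanh(signal)

def monte_carlo_spread(signal, sigma, n=20000, seed=1):
    """Mean and standard deviation of the mapped quantity under probe noise."""
    rng = random.Random(seed)
    samples = [strike_angle(signal + rng.gauss(0.0, sigma)) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var)

mean, std = monte_carlo_spread(signal=0.5, sigma=0.05)
print(round(mean, 2), round(std, 3))
```

Because the map is nonlinear, the output spread is not simply the input noise scaled; sampling captures the local slope (and any bias) automatically, which is the point of the statistical analysis mentioned above.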
Hiro and Evans currents in Vertical Disruption Event
NASA Astrophysics Data System (ADS)
Zakharov, Leonid; Xujing Li Team; Sergei Galkin Team
2014-10-01
The notion of Tokamak Magneto-Hydrodynamics (TMHD), which explicitly reflects the anisotropy of a high-temperature tokamak plasma, is introduced. The set of TMHD equations is formulated for simulations of macroscopic plasma dynamics and disruptions in tokamaks. Free from the Courant restriction on the time step, this set of equations is appropriate for high-performance plasmas and does not require any extension of the MHD plasma model. At the same time, TMHD requires the use of magnetic-field-aligned numerical grids. The TMHD model was used for the creation of the theory of the Wall Touching Kink and Vertical Modes (WTKM and WTVM), the prediction of Hiro and Evans currents, and the design of an innovative diagnostic for Hiro current measurements, installed on the EAST device. While Hiro currents have explained the toroidal asymmetry in the plasma current measurements in JET disruptions, the Evans currents explain the tile current measurements in tokamaks. The recently developed Vertical Disruption Code (VDE) has demonstrated five regimes of VDE and confirmed the generation of both Hiro and Evans currents. The results challenge the 24-year-long misinterpretation of the tile currents in tokamaks as ``halo'' currents, a product of the misuse of equilibrium reconstruction for VDE. This work is supported by US DoE Contract No. DE-AC02-09-CH1146.
Modeling Giant Sawtooth Modes in DIII-D using the NIMROD code
NASA Astrophysics Data System (ADS)
Kruger, Scott; Jenkins, Thomas; Held, Eric; King, Jacob; NIMROD Team
2014-10-01
Ongoing efforts to model giant sawtooth cycles in DIII-D shot 96043 using NIMROD are summarized. In this discharge, an energetic ion population induced by RF heating modifies the sawtooth stability boundary, supplanting the conventional sawtooth cycle with longer-period giant sawtooth oscillations of much larger amplitude. NIMROD has the unique capability of being able to use both continuum kinetic and particle-in-cell numerical schemes to model the RF-induced hot-particle distribution effects on the sawtooth stability. This capability is used to numerically investigate the role played by the form of the energetic particle distribution, including a possible high-energy tail drawn out by the RF, to study the sawtooth threshold and subsequent nonlinear evolution. Equilibrium reconstructions from the experimental data are used to enable these detailed validation studies. Effects of other parameters on the sawtooth behavior (such as the plasma Lundquist number and hot-particle β-fraction) are also considered. Ultimately, we hope to assess the degree to which NIMROD's extended MHD model correctly simulates the observed linear onset and nonlinear behavior of the giant sawtooth, and to establish its reliability as a predictive modeling tool for these modes. This work was initiated by the late Dr. Dalton Schnack. Equilibria were provided by Dr. A. Turnbull of General Atomics.
Correcting anthropogenic ocean heat uptake estimates for the Little Ice Age
NASA Astrophysics Data System (ADS)
Gebbie, Geoffrey
2017-04-01
Estimates of anthropogenic ocean heat uptake typically assume that the ocean was in equilibrium during the pre-industrial era. Recent reconstructions of the Common Era, however, show a multi-century surface cooling trend before the Industrial Revolution. Using a time-evolving state estimation method, we find that the 1750 C.E. ocean must have been out of equilibrium in order to fit the H.M.S. Challenger, WOCE, and Argo hydrographic data. When the disequilibrated ocean conditions are taken into account, the inferred ocean heat uptake from 1750-2014 C.E. is revised due to the deep ocean memory of Little Ice Age surface forcing. These effects of ocean disequilibrium should also be considered when interpreting climate sensitivity estimates.
Diffusion and Equilibrium Swelling of Macromolecular Networks by Their Linear Homologs.
1982-10-01
Naval Weapons Center Plume Radar Frequency Interference Code
1982-10-01
ppm sodium. Both equilibrium and finite-rate chemistry during the expansion from the chamber were tried as initial conditions for the plume. In ... was too large. The difference between these two sets of initial conditions diminished downstream as the chemistry in the plume mixing region began to ...
RAVE—a Detector-independent vertex reconstruction toolkit
NASA Astrophysics Data System (ADS)
Waltenberger, Wolfgang; Mitaroff, Winfried; Moser, Fabian
2007-10-01
A detector-independent toolkit for vertex reconstruction (RAVE ) is being developed, along with a standalone framework (VERTIGO ) for testing, analyzing and debugging. The core algorithms represent state of the art for geometric vertex finding and fitting by both linear (Kalman filter) and robust estimation methods. Main design goals are ease of use, flexibility for embedding into existing software frameworks, extensibility, and openness. The implementation is based on modern object-oriented techniques, is coded in C++ with interfaces for Java and Python, and follows an open-source approach. A beta release is available. VERTIGO = "vertex reconstruction toolkit and interface to generic objects".
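Stripped of detector details and covariances, geometric vertex fitting of the kind RAVE's linear estimators perform reduces to finding the point closest to a set of tracks in the least-squares sense. A 2D straight-line sketch (real tracks are helices with full error matrices) is:

```python
# Simplified sketch of geometric vertex fitting: the least-squares
# point closest to a set of straight-line tracks in 2D, from the
# normal equations sum(I - d d^T) v = sum(I - d d^T) p.

def fit_vertex(tracks):
    """Each track is ((px, py), (dx, dy)) with a unit direction vector."""
    A = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for (px, py), (dx, dy) in tracks:
        # projector orthogonal to the track direction
        P = [[1.0 - dx * dx, -dx * dy],
             [-dx * dy, 1.0 - dy * dy]]
        A = [[A[i][j] + P[i][j] for j in range(2)] for i in range(2)]
        b = [b[0] + P[0][0] * px + P[0][1] * py,
             b[1] + P[1][0] * px + P[1][1] * py]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return ((A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det)

# Two tracks crossing at (1, 2):
tracks = [((0.0, 1.0), (0.7071067811865476, 0.7071067811865476)),  # along y = x + 1
          ((1.0, 0.0), (0.0, 1.0))]                                # vertical line x = 1
print(tuple(round(v, 6) for v in fit_vertex(tracks)))  # → (1.0, 2.0)
```

Robust estimation, as in RAVE, would down-weight outlier tracks instead of treating all of them equally.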
Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook
2015-01-01
Image super-resolution (SR) plays a vital role in medical imaging, enabling a more efficient and effective diagnosis process. Diagnosis from low-resolution (LR) and noisy images is usually difficult and inaccurate, and resolution enhancement through conventional interpolation methods strongly affects the precision of subsequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse-coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of the sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhances the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting ROMP for OMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally an optimization step yields a high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is better than that of other state-of-the-art schemes.
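The greedy selection at the heart of OMP/ROMP can be illustrated with plain matching pursuit over a tiny orthonormal dictionary (ROMP's regularized multi-atom selection and the K-SVD training are more involved):

```python
# Bare-bones sketch of greedy sparse coding in the spirit of (R)OMP:
# plain matching pursuit over a tiny hand-made dictionary.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, atoms, n_iter=2):
    """Greedily pick the atom most correlated with the residual (atoms unit-norm)."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iter):
        corr = [dot(residual, a) for a in atoms]
        k = max(range(len(atoms)), key=lambda i: abs(corr[i]))
        coeffs[k] += corr[k]
        residual = [r - corr[k] * a for r, a in zip(residual, atoms[k])]
    return coeffs, residual

atoms = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
coeffs, residual = matching_pursuit((3.0, 0.0, 1.0), atoms)
print(coeffs)  # → [3.0, 0.0, 1.0]
```

OMP additionally re-solves a least-squares problem over all selected atoms at each step, and ROMP regularizes which group of atoms gets selected; both reduce to the above in this orthonormal toy case.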
3D tomographic reconstruction using geometrical models
NASA Astrophysics Data System (ADS)
Battle, Xavier L.; Cunningham, Gregory S.; Hanson, Kenneth M.
1997-04-01
We address the issue of reconstructing an object of constant interior density in the context of 3D tomography where there is prior knowledge about the unknown shape. We explore the direct estimation of the parameters of a chosen geometrical model from a set of radiographic measurements, rather than performing operations (segmentation for example) on a reconstructed volume. The inverse problem is posed in the Bayesian framework. A triangulated surface describes the unknown shape and the reconstruction is computed with a maximum a posteriori (MAP) estimate. The adjoint differentiation technique computes the derivatives needed for the optimization of the model parameters. We demonstrate the usefulness of the approach and emphasize the techniques of designing forward and adjoint codes. We use the system response of the University of Arizona Fast SPECT imager to illustrate this method by reconstructing the shape of a heart phantom.
NASA Astrophysics Data System (ADS)
Duan, Aiying; Jiang, Chaowei; Hu, Qiang; Zhang, Huai; Gary, G. Allen; Wu, S. T.; Cao, Jinbin
2017-06-01
Magnetic field extrapolation is an important tool to study the three-dimensional (3D) solar coronal magnetic field, which is difficult to directly measure. Various analytic models and numerical codes exist, but their results often drastically differ. Thus, a critical comparison of the modeled magnetic field lines with the observed coronal loops is strongly required to establish the credibility of the model. Here we compare two different non-potential extrapolation codes, a nonlinear force-free field code (CESE-MHD-NLFFF) and a non-force-free field (NFFF) code, in modeling a solar active region (AR) that has a sigmoidal configuration just before a major flare erupted from the region. A 2D coronal-loop tracing and fitting method is employed to study the 3D misalignment angles between the extrapolated magnetic field lines and the EUV loops as imaged by SDO/AIA. It is found that the CESE-MHD-NLFFF code with preprocessed magnetogram performs the best, outputting a field that matches the coronal loops in the AR core imaged in AIA 94 Å with a misalignment angle of ~10°. This suggests that the CESE-MHD-NLFFF code, even without using the information of the coronal loops in constraining the magnetic field, performs as well as some coronal-loop forward-fitting models. For the loops as imaged by AIA 171 Å in the outskirts of the AR, all the codes including the potential field give comparable results of the mean misalignment angle (~30°). Thus, further improvement of the codes is needed for a better reconstruction of the long loops enveloping the core region.
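The misalignment metric itself is simply the angle between the unit tangent of a modeled field line and that of an observed loop; a minimal sketch:

```python
# Simple sketch of the misalignment metric: the angle between a modeled
# field-line tangent and an observed loop tangent (unit 3D vectors).

import math

def misalignment_deg(u, v):
    """Angle in degrees between two unit vectors."""
    c = sum(a * b for a, b in zip(u, v))
    c = max(-1.0, min(1.0, c))  # clamp floating-point rounding noise
    return math.degrees(math.acos(c))

field = (1.0, 0.0, 0.0)
loop = (math.cos(math.radians(10)), math.sin(math.radians(10)), 0.0)
print(round(misalignment_deg(field, loop), 1))  # → 10.0
```

Averaging this quantity over many traced loops gives the ~10° and ~30° figures quoted above.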
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos-Villalobos, Hector J; Gregor, Jens; Bingham, Philip R
2014-01-01
At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of its modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
Csuros, Miklos; Rogozin, Igor B.; Koonin, Eugene V.
2011-01-01
Protein-coding genes in eukaryotes are interrupted by introns, but intron densities widely differ between eukaryotic lineages. Vertebrates, some invertebrates and green plants have intron-rich genes, with 6–7 introns per kilobase of coding sequence, whereas most of the other eukaryotes have intron-poor genes. We reconstructed the history of intron gain and loss using a probabilistic Markov model (Markov Chain Monte Carlo, MCMC) on 245 orthologous genes from 99 genomes representing three of the five supergroups of eukaryotes for which multiple genome sequences are available. Intron-rich ancestors are confidently reconstructed for each major group, with 53 to 74% of the human intron density inferred with 95% confidence for the Last Eukaryotic Common Ancestor (LECA). The results of the MCMC reconstruction are compared with the reconstructions obtained using Maximum Likelihood (ML) and Dollo parsimony methods. An excellent agreement between the MCMC and ML inferences is demonstrated, whereas Dollo parsimony introduces a noticeable bias in the estimates, typically yielding lower ancestral intron densities than MCMC and ML. Evolution of eukaryotic genes was dominated by intron loss, with substantial gain only at the bases of several major branches including plants and animals. The highest intron density, 120 to 130% of the human value, is inferred for the last common ancestor of animals. The reconstruction shows that the entire line of descent from LECA to mammals was intron-rich, a state conducive to the evolution of alternative splicing. PMID:21935348
Event Reconstruction in the PandaRoot framework
NASA Astrophysics Data System (ADS)
Spataro, Stefano
2012-12-01
The PANDA experiment will study the collisions of beams of anti-protons, with momenta ranging from 2 to 15 GeV/c, with fixed proton and nuclear targets in the charm energy range, and will be built at the FAIR facility. In preparation for the experiment, the PandaRoot software framework is under development for detector simulation, reconstruction and data analysis, running on an Alien2-based grid. The basic features are handled by the FairRoot framework, based on ROOT and Virtual Monte Carlo, while the PANDA detector specifics and reconstruction code are implemented inside PandaRoot. The realization of Technical Design Reports for the tracking detectors has pushed the finalization of the tracking reconstruction code, which is complete for the Target Spectrometer, and of the analysis tools. Particle Identification algorithms are currently implemented using a Bayesian approach and compared to Multivariate Analysis methods. Moreover, the PANDA data acquisition foresees a triggerless operation in which events are not defined by a hardware first-level trigger decision; instead, all the signals are stored with time stamps, requiring a deconvolution by the software. This has led to a redesign of the software from an event basis to a time-ordered structure. In this contribution, the reconstruction capabilities of the PANDA spectrometer will be reported, focusing on the performance of the tracking system and the results for the analysis of physics benchmark channels, as well as the new (and challenging) concept of time-based simulation and its implementation.
Light element opacities of astrophysical interest from ATOMIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colgan, J.; Kilcrease, D. P.; Magee, N. H. Jr.
We present new calculations of local-thermodynamic-equilibrium (LTE) light element opacities from the Los Alamos ATOMIC code for systems of astrophysical interest. ATOMIC is a multi-purpose code that can generate LTE or non-LTE quantities of interest at various levels of approximation. Our calculations, which include fine-structure detail, represent a systematic improvement over previous Los Alamos opacity calculations using the LEDCOP legacy code. The ATOMIC code uses ab initio atomic structure data computed from the CATS code, which is based on Cowan's atomic structure codes, and photoionization cross section data computed from the Los Alamos ionization code GIPPER. ATOMIC also incorporates a new equation-of-state (EOS) model based on the chemical picture. ATOMIC incorporates some physics packages from LEDCOP and also includes additional physical processes, such as improved free-free cross sections and additional scattering mechanisms. Our new calculations are made for elements of astrophysical interest and for a wide range of temperatures and densities.
NASA Astrophysics Data System (ADS)
Keeler, D. G.; Rupper, S.; Schaefer, J. M.; Finkel, R. C.; Maurer, J. M.
2016-12-01
Alpine glaciers constitute an important component of terrestrial paleoclimate records due to, among other characteristics, their high sensitivity to climate change, near global extent, and their integration of myriad climate variables into a single, easily detected signal. Because the glacier equilibrium line altitude (ELA) provides a more explicit representation of climate than many other glacier properties, ELA methods allow for more direct comparisons of multiple glaciers within or between regions. Such comparisons allow for more complete investigations of the ultimate causes of mountain glaciation during specific events. Many studies, however, tend to focus on a limited number of sites and employ a wide variety of techniques for ELA reconstruction, making wider climate implications more tenuous. Methods of ELA reconstruction that can be rapidly and consistently applied to an arbitrary number of paleo-glaciers would provide a more accurate portrayal of the changes in climate across a given region. Here we present ELA reconstructions from Egesen Stadial moraines across the European Alps using an ELA model accounting for differences in glacier width, glacier shape, bed topography, ice thickness, and glacier length, including several glaciers constrained to the Younger Dryas using surface exposure dating techniques. We compare reconstructed Younger Dryas ELA values to modern ELA values using the same model, or using end of summer snowline estimates where no glacier is currently present. We further provide uncertainty estimates on the ΔELA using bootstrapped Monte Carlo simulations for the various input parameters. Preliminary results compare favorably to previous glacier studies of the European Younger Dryas, but provide greater context from many glaciers across the region as a whole.
Such results allow for a more thorough investigation of the spatial variability and trends in climate during the Younger Dryas across the European Alps, and comparisons of other regions in the future.
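The bootstrapped Monte Carlo uncertainty on ΔELA can be sketched as follows. This is a simplified stand-in: the full model propagates several input parameters (glacier geometry, thickness, length), here collapsed into independent normal errors on the paleo and modern ELA values; the function name and numbers are illustrative:

```python
import random
import statistics

def delta_ela_uncertainty(paleo_ela, modern_ela, paleo_sd, modern_sd,
                          n_boot=10000, seed=0):
    """Monte Carlo bootstrap of the ELA depression, ΔELA = modern - paleo.
    Input uncertainties are treated as independent normal errors.
    Returns (mean ΔELA, 2.5th percentile, 97.5th percentile) in metres."""
    rng = random.Random(seed)
    samples = sorted(
        (modern_ela + rng.gauss(0.0, modern_sd))
        - (paleo_ela + rng.gauss(0.0, paleo_sd))
        for _ in range(n_boot))
    mean = statistics.fmean(samples)
    lo = samples[int(0.025 * n_boot)]   # empirical 95% interval
    hi = samples[int(0.975 * n_boot)]
    return mean, lo, hi
```

For example, a paleo ELA of 2400 ± 50 m against a modern ELA of 2800 ± 30 m yields a ΔELA near 400 m with a roughly ±115 m 95% interval.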
NIMROD modeling of quiescent H-mode: Reconstruction considerations and saturation mechanism
King, Jacob R.; Burrell, Keith H.; Garofalo, Andrea M.; ...
2016-09-30
The extended-MHD NIMROD code (Sovinec and King 2010 J. Comput. Phys. 229 5803) models broadband-MHD activity from a reconstruction of a quiescent H-mode shot on the DIII-D tokamak (Luxon 2002 Nucl. Fusion 42 614). Computations with the reconstructed toroidal and poloidal ion flows exhibit low-n_Φ perturbations (n_Φ ≃ 1–5) that grow and saturate into a turbulent-like MHD state. The workflow used to project the reconstructed state onto the NIMROD basis functions re-solves the Grad–Shafranov equation and extrapolates profiles to include scrape-off-layer currents. Evaluation of the transport from the turbulent-like MHD state leads to a relaxation of the density and temperature profiles.
Equilibrium reconstruction in an iron core tokamak using a deterministic magnetisation model
NASA Astrophysics Data System (ADS)
Appel, L. C.; Lupelli, I.; JET Contributors
2018-02-01
In many tokamaks ferromagnetic material, usually referred to as an iron-core, is present in order to improve the magnetic coupling between the solenoid and the plasma. The presence of the iron core in proximity to the plasma changes the magnetic topology with consequent effects on the magnetic field structure and the plasma boundary. This paper considers the problem of obtaining the free-boundary plasma equilibrium solution in the presence of ferromagnetic material based on measured constraints. The current approach employs a model described by O'Brien et al. (1992) in which the magnetisation currents at the iron-air boundary are represented by a set of free parameters and appropriate boundary conditions are enforced via a set of quasi-measurements on the material boundary. This can lead to the possibility of overfitting the data and hiding underlying issues with the measured signals. Although the model typically achieves good fits to measured magnetic signals there are significant discrepancies in the inferred magnetic topology compared with other plasma diagnostic measurements that are independent of the magnetic field. An alternative approach for equilibrium reconstruction in iron-core tokamaks, termed the deterministic magnetisation model is developed and implemented in EFIT++. The iron is represented by a boundary current with the gradients in the magnetisation dipole state generating macroscopic internal magnetisation currents. A model for the boundary magnetisation currents at the iron-air interface is developed using B-Splines enabling continuity to arbitrary order; internal magnetisation currents are allocated to triangulated regions within the iron, and a method to enable adaptive refinement is implemented. The deterministic model has been validated by comparing it with a synthetic 2-D electromagnetic model of JET. It is established that the maximum field discrepancy is less than 1.5 mT throughout the vacuum region enclosing the plasma. 
Simulated magnetic probe signals agree to within 1% for signals with absolute magnitude greater than 100 mT; in all other cases agreement is to within 1 mT. Neglecting the internal magnetisation currents increases the maximum discrepancy in the vacuum region to more than 20 mT, resulting in errors of 5%–10% in the simulated probe signals. The fact that the previous model neglects the internal magnetisation currents (and also has additional free parameters when fitting the measured data) makes it unsuitable for analysing data in the absence of plasma current. The discrepancy of the poloidal magnetic flux within the vacuum vessel is within 0.1 Wb. Finally, the deterministic model is applied to an equilibrium force-balance solution of a JET discharge using experimental data. It is shown that the discrepancies of the outboard separatrix position and the outer strike-point position inferred from Thomson scattering and infrared camera data are much improved over the routine equilibrium reconstruction, whereas the discrepancy of the inner strike-point position is similar.
Xia, Yidong; Lou, Jialin; Luo, Hong; ...
2015-02-09
Here, an OpenACC directive-based graphics processing unit (GPU) parallel scheme is presented for solving the compressible Navier–Stokes equations on 3D hybrid unstructured grids with a third-order reconstructed discontinuous Galerkin method. The developed scheme requires the minimum code intrusion and algorithm alteration for upgrading a legacy solver with the GPU computing capability at very little extra effort in programming, which leads to a unified and portable code development strategy. A face coloring algorithm is adopted to eliminate the memory contention because of the threading of internal and boundary face integrals. A number of flow problems are presented to verify the implementation of the developed scheme. Timing measurements were obtained by running the resulting GPU code on one Nvidia Tesla K20c GPU card (Nvidia Corporation, Santa Clara, CA, USA) and compared with those obtained by running the equivalent Message Passing Interface (MPI) parallel CPU code on a compute node (consisting of two AMD Opteron 6128 eight-core CPUs (Advanced Micro Devices, Inc., Sunnyvale, CA, USA)). Speedup factors of up to 24× and 1.6× for the GPU code were achieved with respect to one and 16 CPU cores, respectively. The numerical results indicate that this OpenACC-based parallel scheme is an effective and extensible approach to port unstructured high-order CFD solvers to GPU computing.
MHD thrust vectoring of a rocket engine
NASA Astrophysics Data System (ADS)
Labaune, Julien; Packan, Denis; Tholin, Fabien; Chemartin, Laurent; Stillace, Thierry; Masson, Frederic
2016-09-01
In this work, the possibility to use MagnetoHydroDynamics (MHD) to vectorize the thrust of a solid propellant rocket engine exhaust is investigated. Using a magnetic field for vectoring offers a mass gain and a reusability advantage compared to standard gimbaled, elastomer-joint systems. Analytical and numerical models were used to evaluate the flow deviation with a 1 Tesla magnetic field inside the nozzle. The fluid flow in the resistive MHD approximation is calculated using the KRONOS code from ONERA, coupling the hypersonic CFD platform CEDRE and the electrical code SATURNE from EDF. A critical parameter of these simulations is the electrical conductivity, which was evaluated using a set of equilibrium calculations with 25 species. Two models were used: local thermodynamic equilibrium and frozen flow. In both cases, chlorine captures a large fraction of free electrons, limiting the electrical conductivity to a value inadequate for thrust vectoring applications. However, when using chlorine-free propellants with 1% by mass of alkali, an MHD thrust vectoring of several degrees was obtained.
An interactive computer code for calculation of gas-phase chemical equilibrium (EQLBRM)
NASA Technical Reports Server (NTRS)
Pratt, B. S.; Pratt, D. T.
1984-01-01
A user-friendly, menu-driven, interactive computer program known as EQLBRM, which calculates the adiabatic equilibrium temperature and product composition resulting from the combustion of hydrocarbon fuels with air at specified constant pressure and enthalpy, is discussed. The program is developed primarily as an instructional tool to be run on small computers, allowing the user to economically and efficiently explore the effects of varying fuel type, air/fuel ratio, inlet air and/or fuel temperature, and operating pressure on the performance of continuous combustion devices such as gas turbine combustors, Stirling engine burners, and power generation furnaces.
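The core of such a constant-pressure adiabatic calculation is an enthalpy balance solved for temperature. A minimal sketch, not EQLBRM itself: product composition is held fixed and the mixture heat capacity is a linear function cp(T) = cp_a + cp_b·T, so the sensible-enthalpy integral has a closed form and the balance can be solved by bisection. All parameter values are illustrative:

```python
def adiabatic_flame_temp(t_in, heat_release, cp_a=1000.0, cp_b=0.2,
                         tol=1e-6):
    """Solve the constant-pressure enthalpy balance
        integral from t_in to T of cp(T') dT' = heat_release  (J/kg)
    with cp(T) = cp_a + cp_b*T (J/kg/K), by bisection on T."""
    def sensible(t):
        # closed-form integral of the linear cp
        return cp_a * (t - t_in) + 0.5 * cp_b * (t*t - t_in*t_in)
    lo, hi = t_in, 6000.0          # bracket: sensible() is increasing
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sensible(mid) < heat_release:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The real code iterates this against an equilibrium composition solve, since the product mix (and hence the released heat and cp) itself depends on temperature.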
Electron-Impact Excitation Cross Sections for Modeling Non-Equilibrium Gas
NASA Technical Reports Server (NTRS)
Huo, Winifred M.; Liu, Yen; Panesi, Marco; Munafo, Alessandro; Wray, Alan; Carbon, Duane F.
2015-01-01
In order to provide a database for modeling hypersonic entry in a partially ionized gas under non-equilibrium, the electron-impact excitation cross sections of atoms have been calculated using perturbation theory. The energy levels covered in the calculation are retrieved from the level list in the HyperRad code. The downstream flow-field is determined by solving a set of continuity equations for each component. The individual structure of each energy level is included. These equations are then complemented by the Euler system of equations. Finally, the radiation field is modeled by solving the radiative transfer equation.
Meshless method for solving fixed boundary problem of plasma equilibrium
NASA Astrophysics Data System (ADS)
Imazawa, Ryota; Kawano, Yasunori; Itami, Kiyoshi
2015-07-01
This study solves the Grad-Shafranov equation with a fixed plasma boundary by utilizing a meshless method for the first time. Previous studies have utilized a finite element method (FEM) to solve an equilibrium inside the fixed separatrix. In order to avoid the difficulties of FEM (such as mesh generation, coding difficulty, and computational cost), this study focuses on meshless methods, especially the RBF-MFS and Kansa's method, to solve the fixed boundary problem. The results showed that the CPU time of the meshless methods was ten to one hundred times shorter than that of FEM to obtain the same accuracy.
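Kansa's method collocates a radial-basis-function expansion directly on scattered nodes: PDE rows are written at interior points and boundary-condition rows at boundary points, giving one (generally nonsymmetric) linear system for the expansion weights. A 1D toy version for u'' = f with homogeneous Dirichlet conditions, using a multiquadric basis; the node count and shape parameter are illustrative, and the real problem is the 2D Grad-Shafranov operator:

```python
import math

def mq(r, c):      # multiquadric RBF, r = x - x_j
    return math.sqrt(r*r + c*c)

def mq_dxx(r, c):  # its 1D second derivative: c^2 / (r^2 + c^2)^(3/2)
    return c*c / (r*r + c*c)**1.5

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting (dense, small systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j]
                              for j in range(i + 1, n))) / M[i][i]
    return x

def kansa_1d(f, n=11, c=0.3):
    """Kansa collocation for u'' = f on [0,1] with u(0) = u(1) = 0."""
    xs = [i / (n - 1) for i in range(n)]
    A, b = [], []
    for i, x in enumerate(xs):
        if i in (0, n - 1):    # boundary rows: enforce u = 0
            A.append([mq(x - xj, c) for xj in xs]); b.append(0.0)
        else:                  # interior rows: enforce u'' = f
            A.append([mq_dxx(x - xj, c) for xj in xs]); b.append(f(x))
    return xs, solve_linear(A, b)

def u_eval(x, xs, lam, c=0.3):
    return sum(l * mq(x - xj, c) for l, xj in zip(lam, xs))
```

No mesh is ever built: only pairwise node distances enter the matrix, which is what makes the approach attractive for irregular separatrix shapes.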
NASA Astrophysics Data System (ADS)
Lin, K. H. E.; Wang, P. K.; Lee, S. Y.; Liao, Y. C.; Fan, I. C.; Liao, H. M.
2017-12-01
The Little Ice Age (LIA) is one of the most prominent epochs in paleoclimate reconstruction of the Common Era. While signals of the LIA were discovered across hemispheres, wide arrays of regional variability were found, and the reconstructed anomalies were sometimes inconsistent across studies using various proxy data or historical records. This inconsistency is mainly attributed to limited data coverage at fine resolution that can support high-resolution climate reconstruction of continuous spatiotemporal trends. The Qing dynasty (1644-1911 CE) of China existed in the coldest period of the LIA. Owing to a long-standing tradition that required local officials to record unusual social and meteorological events, thousands of local chronicles were left. Zhang eds. (2004) took two decades to compile all these meteorological records into a compendium, which we then digitized and coded into our REACHS database system for reconstructing climate. There were in total 1,435 points (sites) in our database, covering over 80,000 events in this period. After implementing two rounds of coding checks for data quality control (accuracy rate 87.2%), multiple indexes were retrieved for reconstructing annually and seasonally resolved temperature and precipitation series for North, Central, and South China. The reconstruction methods include frequency counting and grading, with multiple regression models used to test sensitivity and to calculate correlations among several reconstructed series. Validation was also conducted through comparison with instrumental data and with other reconstructed series in previous studies. Major research results reveal interannual (3-5 years), decadal (8-12 years), and interdecadal (≈30 years) variabilities with strong regional expressions across East China. The cooling effect was not homogeneously distributed in space and time.
Flood and drought conditions frequently recurred, but the spatiotemporal pattern varied, indicating likely different climate regimes that can be linked to the dynamics of large-scale atmospheric circulation and the East Asian monsoon. Spatiotemporal analysis of extreme events such as typhoons and extreme droughts also indicated similar patterns. More detailed analyses are being undertaken to explain the physical mechanisms that may drive these changes.
Effect of Non-Equilibrium Surface Thermochemistry in Simulation of Carbon Based Ablators
NASA Technical Reports Server (NTRS)
Chen, Yih-Kanq; Gokcen, Tahir
2012-01-01
This study demonstrates that coupling of a material thermal response code and a flow solver using non-equilibrium gas/surface interaction model provides time-accurate solutions for the multidimensional ablation of carbon based charring ablators. The material thermal response code used in this study is the Two-dimensional Implicit Thermal-response and AblatioN Program (TITAN), which predicts charring material thermal response and shape change on hypersonic space vehicles. Its governing equations include total energy balance, pyrolysis gas mass conservation, and a three-component decomposition model. The flow code solves the reacting Navier-Stokes equations using Data Parallel Line Relaxation (DPLR) method. Loose coupling between the material response and flow codes is performed by solving the surface mass balance in DPLR and the surface energy balance in TITAN. Thus, the material surface recession is predicted by finite-rate gas/surface interaction boundary conditions implemented in DPLR, and the surface temperature and pyrolysis gas injection rate are computed in TITAN. Two sets of nonequilibrium gas/surface interaction chemistry between air and the carbon surface developed by Park and Zhluktov, respectively, are studied. Coupled fluid-material response analyses of stagnation tests conducted in NASA Ames Research Center arc-jet facilities are considered. The ablating material used in these arc-jet tests was Phenolic Impregnated Carbon Ablator (PICA). Computational predictions of in-depth material thermal response and surface recession are compared with the experimental measurements for stagnation cold wall heat flux ranging from 107 to 1100 Watts per square centimeter.
SPAMCART: a code for smoothed particle Monte Carlo radiative transfer
NASA Astrophysics Data System (ADS)
Lomax, O.; Whitworth, A. P.
2016-10-01
We present a code for generating synthetic spectral energy distributions and intensity maps from smoothed particle hydrodynamics simulation snapshots. The code is based on the Lucy Monte Carlo radiative transfer method, i.e. it follows discrete luminosity packets as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The sources can be extended and/or embedded, and discrete and/or diffuse. The density is not mapped onto a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Secondly, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.
Benchmarking of Improved DPAC Transient Deflagration Analysis Code
Laurinat, James E.; Hensel, Steve J.
2017-09-27
The deflagration pressure analysis code (DPAC) has been upgraded for use in modeling hydrogen deflagration transients. The upgraded code is benchmarked using data from vented hydrogen deflagration tests conducted at the HYDRO-SC Test Facility at the University of Pisa. DPAC originally was written to calculate peak pressures for deflagrations in radioactive waste storage tanks and process facilities at the Savannah River Site. Upgrades include the addition of a laminar flame speed correlation for hydrogen deflagrations and a mechanistic model for turbulent flame propagation, incorporation of inertial effects during venting, and inclusion of the effect of water vapor condensation on vessel walls. In addition, DPAC has been coupled with chemical equilibrium with applications (CEA), a NASA combustion chemistry code. The deflagration tests are modeled as end-to-end deflagrations. As a result, the improved DPAC code successfully predicts both the peak pressures during the deflagration tests and the times at which the pressure peaks.
42 CFR 73.3 - HHS select agents and toxins.
Code of Federal Regulations, 2013 CFR
2013-10-01
... replication competent forms of the 1918 pandemic influenza virus containing any portion of the coding regions of all eight gene segments (Reconstructed 1918 Influenza virus) Ricin Rickettsia prowazekii SARS...
42 CFR 73.3 - HHS select agents and toxins.
Code of Federal Regulations, 2014 CFR
2014-10-01
... replication competent forms of the 1918 pandemic influenza virus containing any portion of the coding regions of all eight gene segments (Reconstructed 1918 Influenza virus) Ricin Rickettsia prowazekii SARS...
NASA Astrophysics Data System (ADS)
Ait-Oubba, A.; Coupeau, C.; Durinck, J.; Talea, M.; Grilhé, J.
2018-06-01
In the framework of the continuum elastic theory, the equilibrium positions of Shockley partial dislocations have been determined as a function of their distance from the free surface. It is found that the dissociation width decreases with decreasing depth, except for a depth range very close to the free surface for which the dissociation width is enlarged. A similar behaviour is also predicted when Shockley dislocation pairs are regularly arranged, whatever the wavelength. These results derived from the elastic theory are compared to STM observations of the reconstructed (1 1 1) surface in gold, which is usually described by a Shockley dislocations network.
3D Material Response Analysis of PICA Pyrolysis Experiments
NASA Technical Reports Server (NTRS)
Oliver, Brandon A.
2017-01-01
This work is primarily interested in improving ablation modeling for use in inverse reconstruction of flight environments on ablative heat shields. The ablation model is essentially a component of the heat flux sensor, so model uncertainties lead to measurement uncertainties. Non-equilibrium processes have long been known to be significant in low-density ablators, but the increased accuracy requirements of the reconstruction process necessitate incorporating this physical effect. We attempt to develop a pyrolysis model, for implementation in material response codes, based on the PICA data produced by Bessire and Minton: pyrolysis gas species molar yields as a function of temperature and heating rate. Several problems encountered while trying to fit Arrhenius models to the data led to further investigation of the experimental setup.
Driven Metadynamics: Reconstructing Equilibrium Free Energies from Driven Adaptive-Bias Simulations
2013-01-01
We present a novel free-energy calculation method that constructively integrates two distinct classes of nonequilibrium sampling techniques, namely, driven (e.g., steered molecular dynamics) and adaptive-bias (e.g., metadynamics) methods. By employing nonequilibrium work relations, we design a biasing protocol with an explicitly time- and history-dependent bias that uses on-the-fly work measurements to gradually flatten the free-energy surface. The asymptotic convergence of the method is discussed, and several relations are derived for free-energy reconstruction and error estimation. Isomerization reaction of an atomistic polyproline peptide model is used to numerically illustrate the superior efficiency and faster convergence of the method compared with its adaptive-bias and driven components in isolation. PMID:23795244
Continuous analog of multiplicative algebraic reconstruction technique for computed tomography
NASA Astrophysics Data System (ADS)
Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya
2016-03-01
We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter acting as the time step of the numerical discretization. The present paper is the first to reveal that a kind of iterative image reconstruction algorithm is constructed by the discretization of a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms obtained by discretizing the continuous-time system not only with the Euler method but also with lower-order Runge–Kutta methods can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
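The multiplicative update in question can be illustrated with a plain simultaneous MART iteration, i.e. a single block, which simplifies the paper's block-iterative scheme; here the relaxation parameter `step` plays the role of the discretization time step. This is a generic sketch, not the authors' switched-system formulation:

```python
import math

def smart(A, y, n_iter=200, step=1.0):
    """Simultaneous multiplicative ART for y = A x, x > 0:
        x_j <- x_j * prod_i (y_i / (Ax)_i)^(step * a_ij / s_j),
    where s_j is the j-th column sum. Equivalent to an Euler step of
    size `step` in the geometric (multiplicative) calculus."""
    m, n = len(A), len(A[0])
    s = [sum(A[i][j] for i in range(m)) for j in range(n)]  # column sums
    x = [1.0] * n                                            # positive start
    for _ in range(n_iter):
        Ax = [sum(a * xj for a, xj in zip(row, x)) for row in A]
        for j in range(n):
            expo = sum(A[i][j] * math.log(y[i] / Ax[i]) for i in range(m))
            x[j] *= math.exp(step * expo / s[j])
    return x
```

For a consistent non-negative system the iterates converge to an exact solution, consistent with the Kullback-Leibler Lyapunov argument above; shrinking `step` tracks the continuous-time trajectory more closely at the cost of more iterations.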
NASA Astrophysics Data System (ADS)
Mukherjee, Sayak; Stewart, David; Stewart, William; Lanier, Lewis L.; Das, Jayajit
2017-08-01
Single-cell responses are shaped by the geometry of signalling kinetic trajectories carved in a multidimensional space spanned by signalling protein abundances. It is, however, challenging to assay a large number (more than 3) of signalling species in live-cell imaging, which makes it difficult to probe single-cell signalling kinetic trajectories in large dimensions. Flow and mass cytometry techniques can measure a large number (4 to more than 40) of signalling species but are unable to track single cells. Thus, cytometry experiments provide detailed time-stamped snapshots of single-cell signalling kinetics, but not trajectories. Is it possible to use the time-stamped cytometry data to reconstruct single-cell signalling trajectories? Borrowing concepts of conserved and slow variables from non-equilibrium statistical physics, we develop an approach to reconstruct signalling trajectories from snapshot data by creating new variables that remain invariant or vary slowly during the signalling kinetics. We apply this approach to reconstruct trajectories using snapshot data obtained from in silico simulations, live-cell imaging measurements, and synthetic flow cytometry datasets. The application of invariants and slow variables to reconstruct trajectories provides a radically different way to track objects using snapshot data. The approach is likely to have implications for solving matching problems in a wide range of disciplines.
Analysis of Island Formation Due to RMPs in D3D Plasmas Using SIESTA
NASA Astrophysics Data System (ADS)
Hirshman, Steven; Shafer, Morgan; Seal, Sudip; Canik, John
2015-11-01
By varying the initial helical perturbation amplitude of Resonant Magnetic Perturbations (RMPs) applied to a Doublet III-D (DIII-D) plasma, a variety of metastable equilibria are scanned using the SIESTA MHD equilibrium code. It is found that increasing the perturbation strength at the dominant m = 2 resonant surface leads to lower MHD energies and significant increases in the equilibrium island widths at the m = 2 (and sideband) surfaces. Island overlap eventually leads to stochastic magnetic fields which correlate well with the experimentally inferred field line structure. The magnitude and spatial phase (around the associated rational surfaces) of the resonant (shielding) components of the parallel current are shown to be correlated with the magnetic island topology. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.
NASA Technical Reports Server (NTRS)
Miner, E. W.; Anderson, E. C.; Lewis, C. H.
1971-01-01
A computer program is described in detail for laminar, transitional, and/or turbulent boundary-layer flows of non-reacting (perfect gas) and reacting gas mixtures in chemical equilibrium. An implicit finite difference scheme was developed for both two dimensional and axisymmetric flows over bodies, and in rocket nozzles and hypervelocity wind tunnel nozzles. The program, program subroutines, variables, and input and output data are described. Also included is the output from a sample calculation of fully developed turbulent, perfect gas flow over a flat plate. Input data coding forms and a FORTRAN source listing of the program are included. A method is discussed for obtaining thermodynamic and transport property data which are required to perform boundary-layer calculations for reacting gases in chemical equilibrium.
Non-equilibrium condensation of supercritical carbon dioxide in a converging-diverging nozzle
NASA Astrophysics Data System (ADS)
Ameli, Alireza; Afzalifar, Ali; Turunen-Saaresti, Teemu
2017-03-01
Carbon dioxide (CO2) is a promising alternative working fluid for future energy conversion and refrigeration cycles. CO2 has low global warming potential compared to conventional refrigerants, and the supercritical CO2 Brayton cycle is expected to have better efficiency than today's counterparts. However, there are several issues concerning the behaviour of supercritical CO2 in the aforementioned applications. One of these issues arises from the non-equilibrium condensation of CO2 under some operating conditions in supercritical compressors. This paper investigates the non-equilibrium condensation of carbon dioxide in the course of an expansion from supercritical stagnation conditions in a converging-diverging nozzle. An external look-up table, implemented with an in-house FORTRAN code, is used to calculate the fluid properties in the supercritical, metastable and saturated regions. This look-up table is coupled with the flow solver, and the non-equilibrium condensation model is introduced to the solver through user-defined expressions. Numerical results are compared with experimental measurements. In agreement with the experiment, the distribution of Mach number in the nozzle shows that the flow becomes supersonic upstream of the throat, where the speed of sound is at a minimum, and that equilibrium is re-established at the outlet boundary.
qtcm 0.1.2: A Python Implementation of the Neelin-Zeng Quasi-Equilibrium Tropical Circulation model
NASA Astrophysics Data System (ADS)
Lin, J. W.-B.
2008-10-01
Historically, climate models have been developed incrementally and in compiled languages like Fortran. While the use of legacy compiled languages results in fast, time-tested code, the resulting model is limited in its modularity and cannot take advantage of functionality available with modern computer languages. Here we describe an effort at using the open-source, object-oriented language Python to create more flexible climate models: the package qtcm, a Python implementation of the intermediate-level Neelin-Zeng Quasi-Equilibrium Tropical Circulation model (QTCM1) of the atmosphere. The qtcm package retains the core numerics of QTCM1, written in Fortran to optimize model performance, but uses Python structures and utilities to wrap the QTCM1 Fortran routines and manage model execution. The resulting "mixed language" modeling package allows order and choice of subroutine execution to be altered at run time, and model analysis and visualization to be integrated interactively with model execution at run time. This flexibility facilitates more complex scientific analysis using less complex code than would be possible using traditional languages alone, and provides tools to transform the traditional "formulate hypothesis → write and test code → run model → analyze results" sequence into a feedback loop that can be executed automatically by the computer.
A simple model for molecular hydrogen chemistry coupled to radiation hydrodynamics
NASA Astrophysics Data System (ADS)
Nickerson, Sarah; Teyssier, Romain; Rosdahl, Joakim
2018-06-01
We introduce non-equilibrium molecular hydrogen chemistry into the radiation-hydrodynamics code RAMSES-RT. This is an adaptive mesh refinement grid code with radiation hydrodynamics that couples the thermal chemistry of hydrogen and helium to moment-based radiative transfer with the Eddington tensor closure model. The H2 physics that we include are formation on dust grains, gas phase formation, formation by three-body collisions, collisional destruction, photodissociation, photoionisation, cosmic ray ionisation and self-shielding. In particular, we implement the first model for H2 self-shielding that is tied locally to moment-based radiative transfer by enhancing photo-destruction. This self-shielding from Lyman-Werner line overlap is critical to H2 formation and gas cooling. We can now track the non-equilibrium evolution of molecular, atomic, and ionised hydrogen species with their corresponding dissociating and ionising photon groups. Over a series of tests we show that our model works well compared to specialised photodissociation region codes. We successfully reproduce the transition depth between molecular and atomic hydrogen, molecular cooling of the gas, and a realistic Strömgren sphere embedded in a molecular medium. In this paper we focus on test cases to demonstrate the validity of our model on small scales. Our ultimate goal is to implement this in large-scale galactic simulations.
High-Performance 3D Compressive Sensing MRI Reconstruction Using Many-Core Architectures.
Kim, Daehyun; Trzasko, Joshua; Smelyanskiy, Mikhail; Haider, Clifton; Dubey, Pradeep; Manduca, Armando
2011-01-01
Compressive sensing (CS) describes how sparse signals can be accurately reconstructed from many fewer samples than required by the Nyquist criterion. Since MRI scan duration is proportional to the number of acquired samples, CS has been gaining significant attention in MRI. However, the computationally intensive nature of CS reconstructions has precluded their use in routine clinical practice. In this work, we investigate how different throughput-oriented architectures can benefit one CS algorithm and what levels of acceleration are feasible on different modern platforms. We demonstrate that a CUDA-based code running on an NVIDIA Tesla C2050 GPU can reconstruct a 256 × 160 × 80 volume from an 8-channel acquisition in 19 seconds, which is in itself a significant improvement over the state of the art. We then show that Intel's Knights Ferry can perform the same 3D MRI reconstruction in only 12 seconds, bringing CS methods even closer to clinical viability.
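The abstract does not name the reconstruction algorithm that was accelerated, so as a stand-in here is a sketch of iterative soft-thresholding (ISTA), a common baseline for the ℓ1-regularized least-squares problems underlying CS reconstruction (problem sizes and names are illustrative, not the MRI pipeline):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Phi, y, lam, n_iter=1000):
    """Iterative soft-thresholding for min_x 0.5||Phi x - y||^2 + lam ||x||_1."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1/L with L the gradient Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * Phi.T @ (Phi @ x - y), step * lam)
    return x

rng = np.random.default_rng(1)
m, n = 60, 100
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
x_true[[5, 37, 62, 90]] = [1.0, -1.5, 2.0, 1.2]  # 4-sparse signal
y = Phi @ x_true
x_hat = ista(Phi, y, lam=0.01)
```

The per-iteration cost is dominated by the two matrix-vector products, which is exactly the data-parallel work that maps well onto the GPU and many-core platforms benchmarked in the paper.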
Plio-Pleistocene climate sensitivity evaluated using high-resolution CO2 records
NASA Astrophysics Data System (ADS)
Martínez-Botí, M. A.; Foster, G. L.; Chalk, T. B.; Rohling, E. J.; Sexton, P. F.; Lunt, D. J.; Pancost, R. D.; Badger, M. P. S.; Schmidt, D. N.
2015-02-01
Theory and climate modelling suggest that the sensitivity of Earth's climate to changes in radiative forcing could depend on the background climate. However, palaeoclimate data have thus far been insufficient to provide a conclusive test of this prediction. Here we present atmospheric carbon dioxide (CO2) reconstructions based on multi-site boron-isotope records from the late Pliocene epoch (3.3 to 2.3 million years ago). We find that Earth's climate sensitivity to CO2-based radiative forcing (Earth system sensitivity) was half as strong during the warm Pliocene as during the cold late Pleistocene epoch (0.8 to 0.01 million years ago). We attribute this difference to the radiative impacts of continental ice-volume changes (the ice-albedo feedback) during the late Pleistocene, because equilibrium climate sensitivity is identical for the two intervals when we account for such impacts using sea-level reconstructions. We conclude that, on a global scale, no unexpected climate feedbacks operated during the warm Pliocene, and that predictions of equilibrium climate sensitivity (excluding long-term ice-albedo feedbacks) for our Pliocene-like future (with CO2 levels up to maximum Pliocene levels of 450 parts per million) are well described by the currently accepted range of an increase of 1.5 K to 4.5 K per doubling of CO2.
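Under the standard logarithmic CO2-forcing approximation (an assumption of this sketch, not stated in the abstract), the quoted sensitivity range converts directly into a warming estimate for the 450 ppm Pliocene maximum:

```python
import math

def warming(sensitivity_per_doubling, c_ppm, c0_ppm=280.0):
    """Equilibrium warming for a CO2 change, assuming the standard
    logarithmic forcing law: dT = S * ln(C/C0) / ln 2."""
    return sensitivity_per_doubling * math.log(c_ppm / c0_ppm) / math.log(2.0)

# 450 ppm against a 280 ppm preindustrial baseline, across the accepted
# 1.5-4.5 K per doubling range quoted in the abstract (~1.0-3.1 K).
low, high = warming(1.5, 450.0), warming(4.5, 450.0)
```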
Berzak, L; Jones, A D; Kaita, R; Kozub, T; Logan, N; Majeski, R; Menard, J; Zakharov, L
2010-10-01
The lithium tokamak experiment (LTX) is a modest-sized spherical tokamak (R(0)=0.4 m and a=0.26 m) designed to investigate the low-recycling lithium wall operating regime for magnetically confined plasmas. LTX will reach this regime through a lithium-coated shell internal to the vacuum vessel, conformal to the plasma last-closed-flux surface, and heated to 300-400 °C. This structure is highly conductive and not axisymmetric. The three-dimensional nature of the shell causes the eddy currents and magnetic fields to be three-dimensional as well. In order to analyze the plasma equilibrium in the presence of three-dimensional eddy currents, an extensive array of unique magnetic diagnostics has been implemented. Sensors are designed to survive high temperatures and incidental contact with lithium and provide data on toroidal asymmetries as well as full coverage of the poloidal cross-section. The magnetic array has been utilized to determine the effects of nonaxisymmetric eddy currents and to model the start-up phase of LTX. Measurements from the magnetic array, coupled with two-dimensional field component modeling, have allowed a suitable field null and initial plasma current to be produced. For full magnetic reconstructions, a three-dimensional electromagnetic model of the vacuum vessel and shell is under development.
Benchmarking kinetic calculations of resistive wall mode stability
NASA Astrophysics Data System (ADS)
Berkery, J. W.; Liu, Y. Q.; Wang, Z. R.; Sabbagh, S. A.; Logan, N. C.; Park, J.-K.; Manickam, J.; Betti, R.
2014-05-01
Validating the calculations of kinetic resistive wall mode (RWM) stability is important for confidently predicting RWM stable operating regions in ITER and other high performance tokamaks for disruption avoidance. Benchmarking the calculations of the Magnetohydrodynamic Resistive Spectrum—Kinetic (MARS-K) [Y. Liu et al., Phys. Plasmas 15, 112503 (2008)], Modification to Ideal Stability by Kinetic effects (MISK) [B. Hu et al., Phys. Plasmas 12, 057301 (2005)], and Perturbed Equilibrium Nonambipolar Transport (PENT) [N. Logan et al., Phys. Plasmas 20, 122507 (2013)] codes for two Solov'ev analytical equilibria and a projected ITER equilibrium has demonstrated good agreement between the codes. The important particle frequencies, the frequency resonance energy integral in which they are used, the marginally stable eigenfunctions, perturbed Lagrangians, and fluid growth rates are all generally consistent between the codes. The most important kinetic effect at low rotation is the resonance between the mode rotation and the trapped thermal particles' precession drift, and MARS-K, MISK, and PENT show good agreement in this term. The different ways the rational surface contribution was treated historically in the codes is identified as a source of disagreement in the bounce and transit resonance terms at higher plasma rotation. Calculations from all of the codes support the present understanding that RWM stability can be increased by kinetic effects at low rotation through precession drift resonance and at high rotation by bounce and transit resonances, while intermediate rotation can remain susceptible to instability. The applicability of benchmarked kinetic stability calculations to experimental results is demonstrated by the prediction of MISK calculations of near marginal growth rates for experimental marginal stability points from the National Spherical Torus Experiment (NSTX) [M. Ono et al., Nucl. Fusion 40, 557 (2000)].
Nyx: Adaptive mesh, massively-parallel, cosmological simulation code
NASA Astrophysics Data System (ADS)
Almgren, Ann; Beckner, Vince; Friesen, Brian; Lukic, Zarija; Zhang, Weiqun
2017-12-01
The Nyx code solves the equations of compressible hydrodynamics on an adaptive grid hierarchy coupled with an N-body treatment of dark matter. The gas dynamics in Nyx use a finite-volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using a Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in the optically thin limit and assuming ionization equilibrium, the code calculates heating and cooling processes of the primordial-composition gas in an ionizing ultraviolet background radiation field.
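The Cloud-in-Cell scheme mentioned above splits each particle's mass linearly between its nearest cells; a one-dimensional numpy sketch with periodic boundaries (illustrative, not Nyx's implementation):

```python
import numpy as np

def cic_deposit(positions, masses, n_cells, box_size):
    """1D Cloud-in-Cell deposition: split each particle's mass linearly
    between the two nearest cell centres, with periodic boundaries."""
    dx = box_size / n_cells
    rho = np.zeros(n_cells)
    s = positions / dx - 0.5          # position in units of dx, cell-centred
    left = np.floor(s).astype(int)    # index of the cell centre to the left
    frac = s - left                   # fractional distance toward the right cell
    np.add.at(rho, left % n_cells, masses * (1.0 - frac))
    np.add.at(rho, (left + 1) % n_cells, masses * frac)
    return rho

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 1.0, size=1000)
mass = np.full(1000, 1.0 / 1000)      # total mass 1
rho = cic_deposit(pos, mass, n_cells=32, box_size=1.0)
```

`np.add.at` is used instead of fancy-indexed `+=` so that multiple particles landing in the same cell all contribute; by construction the deposit conserves total mass exactly.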
Use of high order, periodic orbits in the PIES code
NASA Astrophysics Data System (ADS)
Monticello, Donald; Reiman, Allan
2010-11-01
We have implemented a version of the PIES code (Princeton Iterative Equilibrium Solver [A. Reiman et al., 2007 Nucl. Fusion 47 572]) that uses high-order periodic orbits to select the surfaces on which straight magnetic field line coordinates will be calculated. The use of high-order periodic orbits has increased the robustness and speed of the PIES code. We now have a more uniform treatment of in-phase and out-of-phase islands. This new version has better convergence properties and works well with a full Newton scheme. We can now shrink islands using a bootstrap-like current, including the m = 1 island in tokamaks.
Reconstruction of Bulk Operators within the Entanglement Wedge in Gauge-Gravity Duality
NASA Astrophysics Data System (ADS)
Dong, Xi; Harlow, Daniel; Wall, Aron C.
2016-07-01
In this Letter we prove a simple theorem in quantum information theory, which implies that bulk operators in the anti-de Sitter/conformal field theory (AdS/CFT) correspondence can be reconstructed as CFT operators in a spatial subregion A , provided that they lie in its entanglement wedge. This is an improvement on existing reconstruction methods, which have at most succeeded in the smaller causal wedge. The proof is a combination of the recent work of Jafferis, Lewkowycz, Maldacena, and Suh on the quantum relative entropy of a CFT subregion with earlier ideas interpreting the correspondence as a quantum error correcting code.
Surface reconstruction, figure-ground modulation, and border-ownership.
Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R
2013-01-01
The Differentiation-Integration for Surface Completion (DISC) model aims to explain the reconstruction of visual surfaces. We find the model a valuable contribution to our understanding of figure-ground organization. We point out that, next to border-ownership, neurons in visual cortex code whether surface elements belong to a figure or the background and that this is influenced by attention. We furthermore suggest that there must be strong links between object recognition and figure-ground assignment in order to resolve the status of interior contours. Incorporation of these factors in neurocomputational models will further improve our understanding of surface reconstruction, figure-ground organization, and border-ownership.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramsdell, J.V. Jr.; Simonen, C.A.; Burk, K.W.
1994-02-01
The purpose of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate radiation doses that individuals may have received from operations at the Hanford Site since 1944. This report deals specifically with the atmospheric transport model, Regional Atmospheric Transport Code for Hanford Emission Tracking (RATCHET). RATCHET is a major rework of the MESOILT2 model used in the first phase of the HEDR Project; only the bookkeeping framework escaped major changes. Changes to the code include (1) significant changes in the representation of atmospheric processes and (2) incorporation of Monte Carlo methods for representing uncertainty in input data, model parameters, and coefficients. To a large extent, the revisions to the model are based on recommendations of a peer working group that met in March 1991. Technical bases for other portions of the atmospheric transport model are addressed in two other documents. This report has three major sections: a description of the model, a user's guide, and a programmer's guide. These sections discuss RATCHET from three different perspectives. The first provides a technical description of the code with emphasis on details such as the representation of the model domain, the data required by the model, and the equations used to make the model calculations. The technical description is followed by a user's guide to the model with emphasis on running the code. The user's guide contains information about the model input and output. The third section is a programmer's guide to the code. It discusses the hardware and software required to run the code. The programmer's guide also discusses program structure and each of the program elements.
Nanostructure control: Nucleation and diffusion studies for predictable ultra thin film morphologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hershberger, Matthew
This thesis covers PhD research on two systems with unique and interesting physics. The first system is lead (Pb) deposited on the silicon (111) surface with the 7x7 reconstruction. Pb and Si are mutually bulk-insoluble, making this system an ideal case for studying metal-semiconductor interactions. Initial Pb deposition causes an amorphous wetting layer to form across the surface. Continued deposition results in Pb(111) island growth. The classic literature has classified this system as the Stranski-Krastanov growth mode, although the system is not near equilibrium conditions. Our research shows a growth mode distinctly different from classical expectations and begins a discussion of reclassifying diffusion and nucleation for systems far away from the well-studied equilibrium cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, H.D.
1991-11-01
Several of the technologies being evaluated for the treatment of waste material involve chemical reactions. Our example is the in situ vitrification (ISV) process, where electrical energy is used to melt soil and waste into a 'glass-like' material that immobilizes and encapsulates any residual waste. During the ISV process, various chemical reactions may occur that produce significant amounts of products which must be contained and treated. The APOLLO program was developed to assist in predicting the composition of the gases that are formed. Although the development of this program was directed toward ISV applications, it should be applicable to other technologies where chemical reactions are of interest. This document presents the mathematical methodology of the APOLLO computer code. APOLLO is a computer code that calculates the products of both equilibrium and kinetic chemical reactions. The current version, written in FORTRAN, is readily adaptable to existing transport programs designed for the analysis of chemically reacting flow systems. Separate subroutines, EQREACT and KIREACT, for equilibrium and kinetic chemistry, respectively, have been developed. A full detailed description of the numerical techniques used, which include both Lagrange multipliers and a third-order integrating scheme, is presented. Sample test problems are presented and the results are in excellent agreement with those reported in the literature.
Equilibrium states of homogeneous sheared compressible turbulence
NASA Astrophysics Data System (ADS)
Riahi, M.; Lili, T.
2011-06-01
Equilibrium states of homogeneous compressible turbulence subjected to rapid shear are studied using rapid distortion theory (RDT). The purpose of this study is to determine numerical solutions of the unsteady linearized equations governing the evolution of double-correlation spectra. In this work, an RDT code developed by the authors solves these equations for compressible homogeneous shear flows. Numerical integration of these equations is carried out using a second-order, simple and accurate scheme. The two Mach numbers relevant to homogeneous shear flow are the turbulent Mach number Mt, given by the root-mean-square turbulent velocity fluctuations divided by the speed of sound, and the gradient Mach number Mg, which is the mean shear rate times the transverse integral scale of the turbulence divided by the speed of sound. Validation of this code is performed by comparing RDT results with the direct numerical simulations (DNS) of [A. Simone, G.N. Coleman, and C. Cambon, J. Fluid Mech. 330, 307 (1997)] and [S. Sarkar, J. Fluid Mech. 282, 163 (1995)] for various values of the initial gradient Mach number Mg0. It was found that RDT is valid for small values of the non-dimensional time St (St < 3.5). It is important to note that RDT is also valid for large values of St (St > 10), in particular for large values of Mg0. This essential feature justifies resorting to RDT to determine equilibrium states in the compressible regime.
NASA Astrophysics Data System (ADS)
Tsujimura, T., Ii; Kubo, S.; Takahashi, H.; Makino, R.; Seki, R.; Yoshimura, Y.; Igami, H.; Shimozuma, T.; Ida, K.; Suzuki, C.; Emoto, M.; Yokoyama, M.; Kobayashi, T.; Moon, C.; Nagaoka, K.; Osakabe, M.; Kobayashi, S.; Ito, S.; Mizuno, Y.; Okada, K.; Ejiri, A.; Mutoh, T.
2015-11-01
The central electron temperature has successfully reached up to 7.5 keV in Large Helical Device (LHD) plasmas with a high central ion temperature of 5 keV and a central electron density of 1.3 × 10^19 m^-3. This result was obtained by heating with a newly installed 154 GHz gyrotron and by optimisation of the injection geometry in electron cyclotron heating (ECH). The optimisation was carried out using the ray-tracing code ‘LHDGauss’, which was upgraded to include rapid post-processing of the three-dimensional (3D) equilibrium mapping obtained from experiments. For ray-tracing calculations, LHDGauss can automatically read the relevant data registered in the LHD database after a discharge, such as the ECH injection settings (e.g. Gaussian beam parameters, target positions, polarisation and ECH power) and Thomson scattering diagnostic data, along with the 3D equilibrium mapping data. The equilibrium-mapped electron density and temperature profiles are then extrapolated into the region outside the last closed flux surface. The mode purity, or the ratio between the ordinary mode and the extraordinary mode, is obtained by solving the 1D full-wave equation along the direction of the rays from the antenna to the absorption target point. Using the virtual magnetic flux surfaces, the effects of the modelled density profiles and the magnetic shear in the peripheral region for a given polarisation are taken into account. Power deposition profiles calculated for each Thomson scattering measurement timing are registered in the LHD database. Adjusting the injection settings toward the desired deposition profile, with feedback provided on a shot-by-shot basis, resulted in an effective experimental procedure.
Siegel, M.D.; Anderholm, S.
1994-01-01
The Culebra Dolomite Member of the Rustler Formation, a thin (10 m) fractured dolomite aquifer, lies approximately 450 m above the repository horizon of the Waste Isolation Pilot Plant (WIPP) in southeastern New Mexico, USA. Salinities of water in the Culebra range roughly from 10,000 to 200,000 mg/L within the WIPP site. A proposed model for the post-Pleistocene hydrochemical evolution of the Culebra tentatively identifies the major sources and sinks for many of the groundwater solutes. Reaction-path simulations with the PHRQPITZ code suggest that the Culebra dolomite is a partial chemical equilibrium system whose composition is controlled by an irreversible process (dissolution of evaporites) and equilibrium with gypsum and calcite. Net geochemical reactions along postulated modern flow paths, calculated with the NETPATH code, include dissolution of halite, carbonate and evaporite salts, and ion exchange. R-mode principal component analysis revealed correlations among the concentrations of Si, Mg, pH, Li, and B that are consistent with several clay-water reactions. The results of the geochemical calculations and mineralogical data are consistent with the following hydrochemical model: (1) solutes are added to the Culebra by dissolution of evaporite minerals; (2) the solubilities of gypsum and calcite increase as the salinity increases, and these minerals dissolve as chemical equilibrium is maintained between them and the groundwater; (3) equilibrium is not maintained between the waters and dolomite, and sufficient Mg is added to the waters by dissolution of accessory carnallite or polyhalite that the degree of dolomite supersaturation increases with ionic strength; (4) clays within the fractures and rock matrix exert some control on the distribution of Li, B, Mg, and Si via sorption, ion exchange, and dissolution. © 1994.
Four-Dimensional Continuum Gyrokinetic Code: Neoclassical Simulation of Fusion Edge Plasmas
NASA Astrophysics Data System (ADS)
Xu, X. Q.
2005-10-01
We are developing a continuum gyrokinetic code, TEMPEST, to simulate edge plasmas. Our code represents velocity space via a grid in equilibrium energy and magnetic moment variables, and configuration space via poloidal magnetic flux and poloidal angle. The geometry is that of a fully diverted tokamak (single or double null) and so includes boundary conditions for both closed magnetic flux surfaces and open field lines. The 4-dimensional code includes kinetic electrons and ions, and electrostatic field-solver options, and simulates neoclassical transport. The present implementation is a Method of Lines approach where spatial finite-differences (higher order upwinding) and implicit time advancement are used. We present results of initial verification and validation studies: transition from collisional to collisionless limits of parallel end-loss in the scrape-off layer, self-consistent electric field, and the effect of the real X-point geometry and edge plasma conditions on the standard neoclassical theory, including a comparison of our 4D code with other kinetic neoclassical codes and experiments.
140 GHz EC waves propagation and absorption for normal/oblique injection on FTU tokamak
NASA Astrophysics Data System (ADS)
Nowak, S.; Airoldi, A.; Bruschi, A.; Buratti, P.; Cirant, S.; Gandini, F.; Granucci, G.; Lazzaro, E.; Panaccione, L.; Ramponi, G.; Simonetto, A.; Sozzi, C.; Tudisco, O.; Zerbini, M.
1999-09-01
Most of the interest in ECRH experiments is linked to the high localization of EC waves absorption in well known portions of the plasma volume. In order to take full advantage of this capability a reliable code has been developed for beam tracing and absorption calculations. The code is particularly important for oblique (poloidal and toroidal) injection, when the absorbing layer is not simply dependent on the position of the EC resonance only. An experimental estimate of the local heating power density is given by the jump in the time derivative of the local electron pressure at the switching ON of the gyrotron power. The evolution of the temperature profile increase (from ECE polychromator) during the nearly adiabatic phase is also considered for ECRH profile reconstruction. An indirect estimate of optical thickness and of the overall absorption coefficient is given by the measure of the residual e.m. power at the tokamak walls. Beam tracing code predictions of the power deposition profile are compared with experimental estimates. The impact of the finite spatial resolution of the temperature diagnostic on profile reconstruction is also discussed.
Sparse Coding for N-Gram Feature Extraction and Training for File Fragment Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Felix; Quach, Tu-Thach; Wheeler, Jason
File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, continuous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods, which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers such as support vector machines (SVMs) over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used to supplement existing hand-engineered features.
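The raw byte n-gram extraction that feeds the learned dictionaries can be sketched as plain n-gram counting over a fragment (a minimal sketch; the paper learns sparse dictionaries over these n-grams rather than using raw counts, and the toy fragment below is hypothetical):

```python
from collections import Counter

def ngram_counts(fragment: bytes, n: int) -> Counter:
    """Count contiguous byte n-grams in a file fragment."""
    return Counter(fragment[i:i + n] for i in range(len(fragment) - n + 1))

# A toy fragment containing a repeated PNG-like signature
frag = b"\x89PNG\r\n\x89PNG"
counts = ngram_counts(frag, 2)
print(counts[b"\x89P"], counts[b"PN"])  # 2 2: repeated 2-grams stand out
```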
Toward enhancing the distributed video coder under a multiview video codec framework
NASA Astrophysics Data System (ADS)
Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua
2016-11-01
The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve the MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high quality SI frame for better DVC reconstructed images. (2) The block transform coefficient properties, i.e., DCs and ACs, were exploited to design the priority rate control for the turbo code, such that the DVC decoding can be carried out with fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity, while presenting better reconstructed video quality. Simulations show that the proposed COMPETE can reduce the time complexity of MVME to 1.29 to 2.56 times smaller, as compared to previous hybrid MVME methods, while the image peak signal to noise ratios (PSNRs) of a decoded video can be improved 0.2 to 3.5 dB, as compared to H.264/AVC intracoding.
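The PSNR gains quoted above are computed in the standard way from the mean squared error between original and decoded frames. A minimal sketch with made-up 8-bit pixel values:

```python
from math import log10

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    return float("inf") if mse == 0 else 10 * log10(peak ** 2 / mse)

# Illustrative pixels: each reconstructed value is off by one gray level (MSE = 1)
print(round(psnr([100, 120, 140], [101, 119, 141]), 1))  # 48.1 dB
```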
Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency.
Zhang, Ying-Ying; Yang, Cai; Zhang, Ping
2017-05-01
In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on the Riemannian manifold. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary on the image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly against the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall, and mean absolute error, and demonstrate the robustness and efficiency of the proposed method.
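The Log-Euclidean machinery underlying these kernels maps each covariance matrix to its matrix logarithm, where ordinary Euclidean operations apply. A minimal sketch of the induced distance (illustrative only; the paper builds kernels from this geometry and codes against learned dictionaries, neither of which is shown):

```python
import numpy as np

def logm_spd(C):
    """Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(C1, C2):
    """d(C1, C2) = ||logm(C1) - logm(C2)||_F, the Log-Euclidean distance on the SPD manifold."""
    return np.linalg.norm(logm_spd(C1) - logm_spd(C2), "fro")

# Two toy 2x2 region covariance matrices (hypothetical values)
A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.eye(2)
d = log_euclidean_dist(A, B)  # logm(I) = 0, so d reduces to ||logm(A)||_F
```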
DynamiX, numerical tool for design of next-generation x-ray telescopes.
Chauvin, Maxime; Roques, Jean-Pierre
2010-07-20
We present a new code aimed at the simulation of grazing-incidence x-ray telescopes subject to deformations and demonstrate its ability with two test cases: the Simbol-X and the International X-ray Observatory (IXO) missions. The code, based on Monte Carlo ray tracing, computes the full photon trajectories up to the detector plane, accounting for the x-ray interactions and for the telescope motion and deformation. The simulation produces images and spectra for any telescope configuration using Wolter I mirrors and semiconductor detectors. This numerical tool allows us to study the telescope performance in terms of angular resolution, effective area, and detector efficiency, accounting for the telescope behavior. We have implemented an image reconstruction method based on the measurement of the detector drifts by an optical sensor metrology. Using an accurate metrology, this method allows us to recover the loss of angular resolution induced by the telescope instability. In the framework of the Simbol-X mission, this code was used to study the impacts of the parameters on the telescope performance. In this paper we present detailed performance analysis of Simbol-X, taking into account the satellite motions and the image reconstruction. To illustrate the versatility of the code, we present an additional performance analysis with a particular configuration of IXO.
Enhancing the performance of the light field microscope using wavefront coding
Cohen, Noy; Yang, Samuel; Andalman, Aaron; Broxton, Michael; Grosenick, Logan; Deisseroth, Karl; Horowitz, Mark; Levoy, Marc
2014-01-01
Light field microscopy has been proposed as a new high-speed volumetric computational imaging method that enables reconstruction of 3-D volumes from captured projections of the 4-D light field. Recently, a detailed physical optics model of the light field microscope has been derived, which led to the development of a deconvolution algorithm that reconstructs 3-D volumes with high spatial resolution. However, the spatial resolution of the reconstructions has been shown to be non-uniform across depth, with some z planes showing high resolution and others, particularly at the center of the imaged volume, showing very low resolution. In this paper, we enhance the performance of the light field microscope using wavefront coding techniques. By including phase masks in the optical path of the microscope we are able to address this non-uniform resolution limitation. We have also found that superior control over the performance of the light field microscope can be achieved by using two phase masks rather than one, placed at the objective's back focal plane and at the microscope's native image plane. We present an extended optical model for our wavefront coded light field microscope and develop a performance metric based on Fisher information, which we use to choose adequate phase mask parameters. We validate our approach using both simulated data and experimental resolution measurements of a USAF 1951 resolution target; and demonstrate the utility for biological applications with in vivo volumetric calcium imaging of larval zebrafish brain. PMID:25322056
A probabilistic Hu-Washizu variational principle
NASA Technical Reports Server (NTRS)
Liu, W. K.; Belytschko, T.; Besterfield, G. H.
1987-01-01
A Probabilistic Hu-Washizu Variational Principle (PHWVP) for the Probabilistic Finite Element Method (PFEM) is presented. This formulation is developed for both linear and nonlinear elasticity. The PHWVP allows incorporation of the probabilistic distributions for the constitutive law, compatibility condition, equilibrium, domain and boundary conditions into the PFEM. Thus, a complete probabilistic analysis can be performed where all aspects of the problem are treated as random variables and/or fields. The Hu-Washizu variational formulation is available in many conventional finite element codes thereby enabling the straightforward inclusion of the probabilistic features into present codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dey, S.; Kumar, S., E-mail: kumars@phys.jdvu.ac.in; Dey, S. K.
2014-08-11
The authors find that for mechanically milled Ni0.5Zn0.5Fe2O4 (∼10 nm), the mechanical-strain-induced enhancement of the anisotropy energy helps to retain stable magnetic order. The reduction of magnetization can be prevented by keeping the cation distribution of nanometric ferrites at its equilibrium ratio. Moreover, the sample can be used for coding, storing, and retrieving binary bits ("0" and "1") through magnetic field changes.
Nested Dissection Interface Reconstruction in Pececillo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jibben, Zechariah Joel; Carlson, Neil N.; Francois, Marianne M.
A nested dissection method for interface reconstruction in a volume tracking framework has been implemented in Pececillo, a mini-app for Truchas, which is the ASC code for casting and additive manufacturing. This method provides a significant improvement over the traditional onion-skin method, which does not appropriately handle T-shaped multimaterial intersections and dynamic contact lines present in additive manufacturing simulations. The resulting implementation lays the groundwork for further research in contact angle estimates and surface tension calculations.
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Pan, Zilan; Liang, Dong; Ma, Xiuhua; Zhang, Dawei
2015-12-01
An optical encryption method based on compressive ghost imaging (CGI) with double random-phase encoding (DRPE), named DRPE-CGI, is proposed. The information is first encrypted by the sender with DRPE, and the DRPE-coded image is then encrypted by a computational ghost-imaging system with a secret key. The key of N random-phase vectors is generated by the sender and shared with the receiver, who is the authorized user. The receiver decrypts the DRPE-coded image with the key, with the aid of CGI and a compressive sensing technique, and then reconstructs the original information by DRPE decoding. The experiments suggest that cryptanalysts cannot obtain any useful information about the original image even if they eavesdrop on 60% of the key at a given time, so the security of DRPE-CGI is higher than that of conventional ghost imaging. Furthermore, this method can reduce the information quantity by 40% compared with ghost imaging while the quality of the reconstructed information remains the same. It can also improve the quality of the reconstructed plaintext information compared with DRPE-GI at the same number of sampling times. This technique can be applied immediately to encryption and data storage, with the advantages of high security, fast transmission, and high quality of reconstructed information.
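The DRPE stage can be sketched with two unit-modulus random phase masks and a Fourier-transform pair, as in the classical 4f arrangement (a minimal sketch on a tiny array; the CGI layer, the N-vector key, and the compressive-sensing recovery are not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))                         # plaintext "image"
phi1 = np.exp(2j * np.pi * rng.random((8, 8)))   # input-plane random phase mask (key)
phi2 = np.exp(2j * np.pi * rng.random((8, 8)))   # Fourier-plane random phase mask (key)

# Encryption: mask, transform, mask again, transform back -> white-noise-like cipher
cipher = np.fft.ifft2(np.fft.fft2(img * phi1) * phi2)

# Decryption with the conjugate keys inverts each step exactly
recovered = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(phi2)) * np.conj(phi1)
err = float(np.max(np.abs(recovered - img)))  # floating-point level residual
```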
Noniterative, unconditionally stable numerical techniques for solving condensational and dissolutional growth equations are given. Growth solutions are compared to Gear-code solutions for three cases when growth is coupled to reversible equilibrium chemistry. In all cases, ...
NASA Astrophysics Data System (ADS)
Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.
2006-01-01
In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product-code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
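Because both the source rates and the channel code rates come from discrete sets, the joint allocation can be sketched as an exhaustive search over operational points (all rates and quality numbers below are made up for illustration; the paper optimizes expected distortion over fading-channel statistics):

```python
# Toy operational points: (source_kbps, channel_code_rate) -> expected quality (dB)
quality = {
    (200, 1/2): 28.1, (200, 2/3): 29.0, (200, 3/4): 27.5,
    (300, 1/2): 29.4, (300, 2/3): 31.2, (300, 3/4): 30.1,
    (400, 1/2): 30.0, (400, 2/3): 32.0, (400, 3/4): 33.4,
}

def best_allocation(budget_kbps):
    """Pick the (source rate, code rate) pair maximizing expected quality,
    subject to the transmitted rate source_rate / code_rate fitting the budget."""
    feasible = {k: v for k, v in quality.items() if k[0] / k[1] <= budget_kbps}
    return max(feasible, key=feasible.get)

print(best_allocation(500))  # (300, 2/3): stronger protection beats a higher source rate
```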
NASA Astrophysics Data System (ADS)
Reyes, A. V.; Wolfe, A. P.; Royer, D. L.; Greenwood, D. R.; Tierney, J. E.; Doria, G.; Gagen, M. H.; Siver, P.; Westgate, J.
2016-12-01
Eocene paleoclimate reconstructions are rarely accompanied by parallel estimates of CO2, complicating assessment of the equilibrium climate responses to CO2. We reconstruct temperature, precipitation, and CO2 from latest middle Eocene (~38 Myr ago) peats in subarctic Canada, preserved in sediments that record infilling of a kimberlite pipe maar crater. Mutual climatic range analyses of pollen, together with oxygen isotope analyses of α-cellulose from unpermineralized wood and inferences from branched glycerol dialkyl glycerol tetraethers (GDGTs), reveal a high-latitude humid-temperate forest ecosystem with mean annual temperatures (MATs) >17 °C warmer than present, mean coldest month temperatures above 0 °C, and mean annual precipitation 4× present. Metasequoia stomatal indices and gas-exchange modeling produce median CO2 concentrations of 634 and 432 ppm, respectively, with a consensus median estimate of 494 ppm. Reconstructed MATs are >6 °C warmer than those produced by Eocene climate models forced at 560 ppm CO2, underscoring the capacity for exceptional polar amplification of warming and hydrological intensification under relatively modest CO2 concentrations, once both fast and slow feedbacks become expressed.
Solving free-plasma-boundary problems with the SIESTA MHD code
NASA Astrophysics Data System (ADS)
Sanchez, R.; Peraza-Rodriguez, H.; Reynolds-Barredo, J. M.; Tribaldos, V.; Geiger, J.; Hirshman, S. P.; Cianciosa, M.
2017-10-01
SIESTA is a recently developed MHD equilibrium code designed to perform fast and accurate calculations of ideal MHD equilibria for 3D magnetic configurations. It is an iterative code that uses the solution obtained by the VMEC code to provide a background coordinate system and an initial guess of the solution. The final solution that SIESTA finds can exhibit magnetic islands and stochastic regions. In its original implementation, SIESTA addressed only fixed-boundary problems. This fixed boundary condition somewhat restricts its possible applications. In this contribution we describe a recent extension of SIESTA that enables it to address free-plasma-boundary situations, opening up the possibility of investigating problems with SIESTA in which the plasma boundary is perturbed either externally or internally. As an illustration, the extended version of SIESTA is applied to a configuration of the W7-X stellarator.
NASA Astrophysics Data System (ADS)
Xie, W.; Li, N.; Wu, J.-D.; Hao, X.-L.
2013-11-01
Disaster damages have negative effects on the economy, whereas reconstruction investments have positive effects. The aim of this study is to model the economic consequences of disasters and recovery, including the positive effects of reconstruction activities. A computable general equilibrium (CGE) model is a promising approach because it can incorporate these two kinds of shocks into a unified framework and thereby avoid the double-counting problem. To factor both shocks into the CGE model, direct loss is set as the amount of capital stock lost on the supply side of the economy; a portion of investments restores the capital stock in each period; an investment-driven dynamic model is formulated from the available reconstruction data, and the rest of the country's saving is set as an endogenous variable. The 2008 Wenchuan Earthquake is selected as a case study to illustrate the model, and three scenarios are constructed: S0 (no disaster occurs), S1 (disaster occurs with reconstruction investment), and S2 (disaster occurs without reconstruction investment). S0 is taken as business as usual, and the differences between S1 and S0 and between S2 and S0 can be interpreted as economic losses including and excluding reconstruction, respectively. The study shows that output under S1 is closer to the real data than that under S2, and that S2 overestimates economic loss at roughly twice the level found under S1. The gap in economic aggregate between S1 and S0 fell to 3% in 2011, a level that would take another four years to reach under S2.
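The scenario accounting above reduces to simple differences between output paths. A minimal sketch with hypothetical output numbers (illustrative only, not the paper's Wenchuan data):

```python
# Illustrative annual output paths (arbitrary units)
s0 = [100, 104, 108, 112]   # S0: no disaster (business as usual)
s1 = [100,  95, 103, 109]   # S1: disaster with reconstruction investment
s2 = [100,  95,  99, 103]   # S2: disaster without reconstruction

# Cumulative losses relative to business as usual
loss_with_recon    = sum(a - b for a, b in zip(s0, s1))  # loss including reconstruction
loss_without_recon = sum(a - b for a, b in zip(s0, s2))  # loss excluding reconstruction
print(loss_with_recon, loss_without_recon)  # 17 27: ignoring reconstruction inflates the loss
```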
SOPHAEROS code development and its application to falcon tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lajtha, G.; Missirlian, M.; Kissane, M.
1996-12-31
One of the key issues in source-term evaluation in nuclear reactor severe accidents is determination of the transport behavior of fission products released from the degrading core. The SOPHAEROS computer code is being developed to predict fission product transport in a mechanistic way in light water reactor circuits. These applications of the SOPHAEROS code to the Falcon experiments, among others not presented here, indicate that the numerical scheme of the code is robust, and no convergence problems are encountered. The calculation is also very fast, taking only about three times real time on a Sun SPARC 5 workstation and running typically ∼10 times faster than an identical calculation with the VICTORIA code. The study demonstrates that the SOPHAEROS 1.3 code is a suitable tool for prediction of the vapor chemistry and fission product transport with a reasonable level of accuracy. Furthermore, the flexibility of the code's material data bank allows improved understanding of fission product transport and deposition in the circuit. Performing sensitivity studies with different chemical species or with different properties (saturation pressure, chemical equilibrium constants) is very straightforward.
NASA Technical Reports Server (NTRS)
Mcbride, Bonnie J.; Reno, Martin A.; Gordon, Sanford
1994-01-01
The NASA Lewis chemical equilibrium program with applications continues to be improved and updated. The latest version is CET93. This code, with smaller arrays, has been compiled for use on an IBM or IBM-compatible personal computer and is called CETPC. This report is intended to be primarily a user's manual for CET93 and CETPC. It does not repeat the more complete documentation of earlier reports on the equilibrium program. Most of the discussion covers input and output files, two new options (ONLY and comments), example problems, and implementation of CETPC.
Ordered phase and non-equilibrium fluctuation in stock market
NASA Astrophysics Data System (ADS)
Maskawa, Jun-ichi
2002-08-01
We analyze the statistics of daily price changes of the stock market in the framework of a statistical physics model for the collective fluctuation of a stock portfolio. In this model the time series of price changes are coded into sequences of up and down spins, and the Hamiltonian of the system is expressed by spin-spin interactions, as in spin glass models of disordered magnetic systems. Through the analysis of the Dow-Jones industrial portfolio of 30 stock issues with this model, we find a non-equilibrium fluctuation mode at a point slightly below the boundary between the ordered and disordered phases. The remaining 29 modes are still in the disordered phase and well described by the Gibbs distribution. The variance of the fluctuation follows the theoretical curve and is peculiarly large in the non-equilibrium mode compared with the other modes, which remain in the disordered phase.
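The coding step described above, and the role of pairwise correlations as spin-spin couplings, can be sketched for two hypothetical return series (illustrative numbers; the paper fits the full 30-issue Hamiltonian, not a single pair):

```python
# Hypothetical daily returns for two stocks
changes_a = [+1.2, -0.4, +0.3, -0.8, +0.5]
changes_b = [+0.9, -0.1, -0.2, -1.0, +0.4]

# Code each daily price change into an up (+1) or down (-1) spin
spin = lambda xs: [1 if x > 0 else -1 for x in xs]
sa, sb = spin(changes_a), spin(changes_b)

# The empirical correlation <s_a s_b> plays the role of the coupling J_ab
j_ab = sum(x * y for x, y in zip(sa, sb)) / len(sa)
print(j_ab)  # 0.6: the two series move together on 4 of 5 days
```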
Parallel equilibrium current effect on existence of reversed shear Alfvén eigenmodes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Hua-sheng, E-mail: huashengxie@gmail.com; Xiao, Yong, E-mail: yxiao@zju.edu.cn
2015-02-15
A new fast global eigenvalue code, in which the terms are segregated according to their physics content, is developed to study Alfvén modes in tokamak plasmas, particularly the reversed shear Alfvén eigenmode (RSAE). Numerical calculations show that the parallel equilibrium current corresponding to the kink term is strongly unfavorable for the existence of the RSAE. An improved criterion for RSAE existence is given both with and without the parallel equilibrium current. In the limits of ideal magnetohydrodynamics (MHD) and zero pressure, the toroidicity effect is the main possible favorable factor for the existence of the RSAE, but it is usually small. This suggests that it is necessary to include additional physics, such as a kinetic term, in the MHD model to overcome the strong unfavorable effect of the parallel current and enable the existence of the RSAE.
Equilibrium radiative heating tables for aerobraking in the Martian atmosphere
NASA Astrophysics Data System (ADS)
Hartung, Lin C.; Sutton, Kenneth; Brauns, Frank
1990-05-01
Studies currently underway for Mars missions often envision the use of aerobraking for orbital capture at Mars. These missions generally involve blunt-nosed vehicles to dissipate the excess energy of the interplanetary transfer. Radiative heating may be of importance in these blunt-body flows because of the highly energetic shock layer around the blunt nose. In addition, the Martian atmosphere contains CO2, whose dissociation products are known to include strong radiators. An inviscid, equilibrium, stagnation point, radiation-coupled flow-field code has been developed for investigating blunt-body atmospheric entry. The method has been compared with ground-based and flight data for air, and reasonable agreement has been found. In the present work, the method was applied to a matrix of conditions in the Martian atmosphere. These conditions encompass most trajectories of interest for Mars exploration spacecraft. The predicted equilibrium radiative heating to the stagnation point of the vehicle is presented.
Equilibrium radiative heating tables for aerobraking in the Martian atmosphere
NASA Technical Reports Server (NTRS)
Hartung, Lin C.; Sutton, Kenneth; Brauns, Frank
1990-01-01
Studies currently underway for Mars missions often envision the use of aerobraking for orbital capture at Mars. These missions generally involve blunt-nosed vehicles to dissipate the excess energy of the interplanetary transfer. Radiative heating may be of importance in these blunt-body flows because of the highly energetic shock layer around the blunt nose. In addition, the Martian atmosphere contains CO2, whose dissociation products are known to include strong radiators. An inviscid, equilibrium, stagnation point, radiation-coupled flow-field code has been developed for investigating blunt-body atmospheric entry. The method has been compared with ground-based and flight data for air, and reasonable agreement has been found. In the present work, the method was applied to a matrix of conditions in the Martian atmosphere. These conditions encompass most trajectories of interest for Mars exploration spacecraft. The predicted equilibrium radiative heating to the stagnation point of the vehicle is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, M.; Ganesh, R.
The dynamics of a cylindrically trapped electron plasma has been investigated using a newly developed 2D electrostatic PIC code that uses unapproximated, mass-included equations of motion for the simulation. Exhaustive simulations, covering the entire range of the Brillouin ratio, were performed for uniformly filled circular profiles in rigid rotor equilibrium. The same profiles were then loaded away from equilibrium with an initial rigid rotation frequency different from that required for radial force balance. Both sets of simulations were performed for an initially cold (zero-temperature) load of the plasma with no spread in either angular or radial velocity. The evolution of the off-equilibrium initial conditions to a steady state involves radial breathing of the profile, whose amplitude and algebraic growth scale with the Brillouin fraction. For higher Brillouin fractions, the growth of the breathing mode is followed by complex dynamics: spontaneous hollow density structures and excitation of poloidal modes, leading to a monotonically falling density profile.
Equilibrium and stability of flow-dominated Plasmas in the Big Red Ball
NASA Astrophysics Data System (ADS)
Siller, Robert; Flanagan, Kenneth; Peterson, Ethan; Milhone, Jason; Mirnov, Vladimir; Forest, Cary
2017-10-01
The equilibrium and linear stability of flow-dominated plasmas are studied numerically using spectral techniques to model MRI and dynamo experiments in the Big Red Ball device. The equilibrium code solves for steady-state magnetic fields and plasma flows subject to boundary conditions in a spherical domain. It has been benchmarked with NIMROD (Non-Ideal MHD with Rotation, Open Discussion). Two different flow scenarios are studied. The first scenario creates a differentially rotating toroidal flow that is peaked at the center, to explore the onset of the magnetorotational instability (MRI) in a spherical geometry. The second scenario creates a counter-rotating von Karman-like flow in the presence of a weak magnetic field, to explore the plasma dynamo instability in the limit of a weak applied field. Both scenarios are numerically modeled as axisymmetric flow to create a steady-state equilibrium solution; the stability and normal modes are studied at the lowest toroidal mode number. The details of the observed flow and the structure of the fastest growing modes will be shown. DoE, NSF.
1985-03-01
Interferometry and computer-assisted tomography (CAT) are used to determine the transonic velocity field of a model rotor. After extracting fringe-order functions, the data are transferred to a CAT code, which then calculates the perturbation velocity in several planes above the blade surface from the holography-CAT method. (Nomenclature fragments recovered from the original layout: R, spanwise coordinate, ft; Ui, transmitted-wave complex amplitude; Ur, reference-wave complex amplitude.)
StePar: an automatic code for stellar parameter determination
NASA Astrophysics Data System (ADS)
Tabernero, H. M.; González Hernández, J. I.; Montes, D.
2013-05-01
We introduce a new automatic code (StePar) for determining stellar atmospheric parameters (T_{eff}, log g, ξ, and [Fe/H]) in an automated way. StePar employs the 2002 version of the MOOG code (Sneden 1973) and a grid of Kurucz ATLAS9 plane-parallel model atmospheres (Kurucz 1993). The atmospheric parameters are obtained from the EWs of 263 Fe I and 36 Fe II lines (from Sousa et al. 2008, A&A, 487, 373), iterating until excitation and ionization equilibrium are fulfilled. StePar uses a Downhill Simplex method that minimizes a quadratic form composed of the excitation and ionization equilibrium conditions. The atmospheric parameters determined by StePar are independent of the initial guess for the problem star, so we employ the canonical solar values as initial input. StePar can only deal with FGK stars from F6 to K4, and it cannot handle fast rotators, veiled spectra, very metal-poor stars, or signal-to-noise ratios below 30. Optionally, StePar can operate with MARCS models (Gustafsson et al. 2008, A&A, 486, 951) instead of Kurucz ATLAS9 models; additionally, Turbospectrum (Alvarez & Plez 1998, A&A, 330, 1109) can replace the MOOG code during the parameter determination. StePar has been used to determine stellar parameters in several studies (Tabernero et al. 2012, A&A, 547, A13; Wisniewski et al. 2012, AJ, 143, 107). In addition, StePar is being used to obtain parameters for FGK stars from the Gaia-ESO Survey.
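The quadratic-form minimization can be sketched with toy residuals: here the excitation-balance slope and the Fe I/Fe II abundance difference are modeled as simple linear functions of (T_eff, log g), which is purely illustrative (MOOG computes them from line EWs), and a brute-force grid search stands in for the Downhill Simplex step:

```python
def residuals(teff, logg):
    """Hypothetical linear stand-ins for the equilibrium diagnostics."""
    slope = (teff - 5777) / 1000.0   # excitation-balance slope vs. T_eff
    delta = (logg - 4.44) / 2.0      # Fe I - Fe II abundance difference vs. log g
    return slope, delta

def quadratic_form(teff, logg):
    """Sum of squared equilibrium residuals, the quantity StePar-style codes minimize."""
    s, d = residuals(teff, logg)
    return s * s + d * d

# Crude grid minimization in place of the Downhill Simplex
best = min(((quadratic_form(t, g), t, g)
            for t in range(5000, 6501)
            for g in [4.0 + 0.01 * i for i in range(101)]),
           key=lambda x: x[0])
print(best[1], round(best[2], 2))  # 5777 4.44: recovers the solar-like minimum
```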
Comparing TCV experimental VDE responses with DINA code simulations
NASA Astrophysics Data System (ADS)
Favez, J.-Y.; Khayrutdinov, R. R.; Lister, J. B.; Lukash, V. E.
2002-02-01
The DINA free-boundary equilibrium simulation code has been implemented for TCV, including the full TCV feedback and diagnostic systems. First results showed good agreement with control coil perturbations and correctly reproduced certain non-linear features in the experimental measurements. The latest DINA code simulations, presented in this paper, exploit discharges with different cross-sectional shapes and different vertical instability growth rates which were subjected to controlled vertical displacement events (VDEs), extending previous work with the DINA code on the DIII-D tokamak. The height of the TCV vessel allows observation of the non-linear evolution of the VDE growth rate as regions of different vertical field decay index are crossed. The vertical movement of the plasma is found to be well modelled. For most experiments, DINA reproduces the S-shape of the vertical displacement in TCV with excellent precision. This behaviour cannot be modelled using linear time-independent models, whose response is dominated by the exponential growth of the unstable pole. Other common equilibrium parameters, namely the plasma current Ip, the elongation κ, the triangularity δ, the safety factor q, the ratio βp between the averaged plasma kinetic pressure and the pressure of the poloidal magnetic field at the edge of the plasma, and the internal self-inductance li, also show acceptable agreement. The evolution of the growth rate γ is estimated and compared with the evolution of the closed-loop growth rate calculated with the RZIP linear model, confirming the origin of the observed behaviour.
NASA Astrophysics Data System (ADS)
Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng
2017-05-01
Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates dictionary sparse coding (DSC) into a total-variation minimization algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic data, Monte Carlo generated data, and real patient data have been conducted, and the results are very promising.
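The sparse-coding subproblem inside such a joint scheme can be sketched with plain ISTA (proximal gradient) against a dictionary of kinetic basis functions. The exponential atoms, rates, and noise level below are invented for illustration; the TV term and the ADMM splitting of the full algorithm are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 40)                  # scan times, minutes
rates = np.array([0.02, 0.1, 0.5, 2.0])         # kinetic basis rates (1/min)
D = np.exp(-np.outer(t, rates))                 # dictionary of basis TACs
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms

coef_true = np.array([0.0, 2.0, 0.0, 1.0])      # sparse ground truth
y = D @ coef_true + 0.001 * rng.standard_normal(t.size)

# ISTA: proximal-gradient iterations for min 0.5||Dx - y||^2 + lam||x||_1
lam = 0.001
L = np.linalg.norm(D, 2) ** 2                   # Lipschitz constant of the gradient
x = np.zeros(rates.size)
for _ in range(5000):
    z = x - D.T @ (D @ x - y) / L               # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
```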
NASA Astrophysics Data System (ADS)
Tian, Lei; Waller, Laura
2017-05-01
Microscope lenses can have either a large field of view (FOV) or high resolution, not both. Computational microscopy based on illumination coding circumvents this limit by fusing images from different illumination angles using nonlinear optimization algorithms. The result is a Gigapixel-scale image having both wide FOV and high resolution. We demonstrate an experimentally robust reconstruction algorithm based on a second-order quasi-Newton method, combined with a novel phase initialization scheme. To further extend the Gigapixel imaging capability to 3D, we develop a reconstruction method to process the 4D light field measurements from sequential illumination scanning. The algorithm is based on a 'multislice' forward model that incorporates both 3D phase and diffraction effects, as well as multiple forward scatterings. To solve the inverse problem, an iterative update procedure that combines both phase retrieval and 'error back-propagation' is developed. To avoid local minimum solutions, we further develop a novel physical model-based initialization technique that accounts for both the geometric-optic and first-order phase effects. The result is robust reconstruction of Gigapixel 3D phase images having both wide FOV and super resolution in all three dimensions. Experimental results from an LED array microscope are demonstrated.
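The 'multislice' forward model can be sketched with angular-spectrum propagation between thin phase screens. This minimal version ignores absorption and the illumination-angle dependence, and the grid and wavelength values are arbitrary illustration choices.

```python
import numpy as np

def angular_spectrum_kernel(n, dx, dz, wavelength):
    """Free-space transfer function for one propagation step of dz."""
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    return np.exp(1j * kz * dz) * (arg > 0)     # drop evanescent components

def multislice(phase_slices, dx, dz, wavelength, illum):
    """Propagate an illumination field through a stack of thin phase
    screens: multiply by exp(i*phi), then diffract to the next slice."""
    field = illum.astype(complex)
    H = angular_spectrum_kernel(illum.shape[0], dx, dz, wavelength)
    for phi in phase_slices:
        field = field * np.exp(1j * phi)
        field = np.fft.ifft2(np.fft.fft2(field) * H)
    return field

# Sanity check: an empty (zero-phase) object leaves a plane wave a plane wave
out = multislice([np.zeros((32, 32))] * 3, dx=1.0, dz=5.0,
                 wavelength=0.5, illum=np.ones((32, 32)))
```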
lpNet: a linear programming approach to reconstruct signal transduction networks.
Matos, Marta R A; Knapp, Bettina; Kaderali, Lars
2015-10-01
With the widespread availability of high-throughput experimental technologies it has become possible to study hundreds to thousands of cellular factors simultaneously, such as coding or non-coding RNA and protein concentrations. Still, extracting information about the underlying regulatory or signaling interactions from these data remains a difficult challenge. We present a flexible approach towards network inference based on linear programming. Our method reconstructs the interactions of factors from a combination of perturbation/non-perturbation and steady-state/time-series data. We show on both simulated and real data that our methods are able to reconstruct the underlying networks fast and efficiently, thus shedding new light on biological processes and, in particular, on disease mechanisms of action. We have implemented the approach as an R package available through Bioconductor. This R package is freely available under the GNU General Public License (GPL-3) from bioconductor.org (http://bioconductor.org/packages/release/bioc/html/lpNet.html) and is compatible with most operating systems (Windows, Linux, Mac OS) and hardware architectures. Contact: bettina.knapp@helmholtz-muenchen.de. Supplementary data are available at Bioinformatics online.
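The L1/LP formulation typical of such methods can be sketched with `scipy.optimize.linprog` via the standard split w = u - v. The toy network and steady-state perturbation data below are invented, and the real lpNet additionally handles time-series and non-perturbation constraints.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_genes, n_expts = 4, 12
W_true = np.array([[0.0, 0.8, 0.0, 0.0],        # invented toy network
                   [0.0, 0.0, -0.6, 0.0],
                   [0.0, 0.0, 0.0, 0.5],
                   [0.0, 0.0, 0.0, 0.0]])
X = rng.standard_normal((n_expts, n_genes))     # perturbation inputs
Y = X @ W_true                                  # noise-free steady-state responses

# Per target gene: min ||w||_1 subject to X w = y, via the split
# w = u - v with u, v >= 0 (linprog's default bounds are [0, inf)).
W_est = np.zeros_like(W_true)
A_eq = np.hstack([X, -X])
c = np.ones(2 * n_genes)
for j in range(n_genes):
    res = linprog(c, A_eq=A_eq, b_eq=Y[:, j], method="highs")
    W_est[:, j] = res.x[:n_genes] - res.x[n_genes:]
```

With noise-free, overdetermined data the equality constraints pin down the weights exactly; with noisy data the equalities would be relaxed into penalized inequalities, as lpNet does.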
Reconstructing biochemical pathways from time course data.
Srividhya, Jeyaraman; Crampin, Edmund J; McSharry, Patrick E; Schnell, Santiago
2007-03-01
Time series data on biochemical reactions reveal transient behavior, away from chemical equilibrium, and contain information on the dynamic interactions among reacting components. However, this information can be difficult to extract using conventional analysis techniques. We present a new method to infer biochemical pathway mechanisms from time course data using a global nonlinear modeling technique to identify the elementary reaction steps which constitute the pathway. The method involves the generation of a complete dictionary of polynomial basis functions based on the law of mass action. Using these basis functions, there are two approaches to model construction, namely the general-to-specific and the specific-to-general approach. We demonstrate that our new methodology reconstructs the chemical reaction steps and connectivity of the glycolytic pathway of Lactococcus lactis from time course experimental data.
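The dictionary-of-mass-action-basis idea can be sketched end to end on a toy A → B → C pathway: simulate, estimate derivatives, and regress on candidate monomials. The pathway, rate constants, and first-order-only dictionary are invented for illustration.

```python
import numpy as np

# Simulate the toy pathway A -> B -> C with k1 = 2, k2 = 1 (forward Euler)
k1, k2, dt, T = 2.0, 1.0, 0.001, 5000
X = np.zeros((T, 3))
X[0] = [1.0, 0.0, 0.0]
for i in range(T - 1):
    a, b, c = X[i]
    X[i + 1] = X[i] + dt * np.array([-k1 * a, k1 * a - k2 * b, k2 * b])

# Dictionary of candidate mass-action terms (first-order monomials only;
# the full method also includes higher-order products of concentrations)
basis = X                                       # columns [A], [B], [C]
dXdt = np.gradient(X, dt, axis=0)               # numerical derivatives
coef, *_ = np.linalg.lstsq(basis, dXdt, rcond=None)
# coef[i, j] = inferred contribution of basis term i to d[species j]/dt
```

Nonzero entries of `coef` identify which elementary steps are present; here the fit should recover d[A]/dt = -2[A], d[B]/dt = 2[A] - [B], d[C]/dt = [B].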
Huet, C; Lemosquet, A; Clairand, I; Rioual, J B; Franck, D; de Carlan, L; Aubineau-Lanièce, I; Bottollier-Depois, J F
2009-01-01
Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. This dose distribution can be assessed by physical dosimetric reconstruction methods, using experimental or numerical techniques. This article presents the laboratory-developed SESAME (Simulation of External Source Accident with MEdical images) tool, specific to dosimetric reconstruction of radiological accidents through numerical simulations which combine voxel geometry with the MCNP(X) Monte Carlo radiation transport code. The experimental validation of the tool using a photon field and its application to a radiological accident in Chile in December 2005 are also described.
Digital tomosynthesis mammography using a parallel maximum-likelihood reconstruction method
NASA Astrophysics Data System (ADS)
Wu, Tao; Zhang, Juemin; Moore, Richard; Rafferty, Elizabeth; Kopans, Daniel; Meleis, Waleed; Kaeli, David
2004-05-01
A parallel reconstruction method, based on an iterative maximum likelihood (ML) algorithm, is developed to provide fast reconstruction for digital tomosynthesis mammography. Tomosynthesis mammography acquires 11 low-dose projections of a breast by moving an x-ray tube over a 50° angular range. In parallel reconstruction, each projection is divided into multiple segments along the chest-to-nipple direction. Using the 11 projections, segments located at the same distance from the chest wall are combined to compute a partial reconstruction of the total breast volume. The shape of the partial reconstruction forms a thin slab, angled toward the x-ray source at a projection angle of 0°. The reconstruction of the total breast volume is obtained by merging the partial reconstructions. The overlap region between neighboring partial reconstructions and neighboring projection segments is utilized to compensate for the incomplete data at the boundary locations present in the partial reconstructions. A serial execution of the reconstruction is compared to a parallel implementation, using clinical data. The serial code was run on a PC with a single Pentium IV 2.2 GHz CPU. The parallel implementation was developed using MPI and run on a 64-node Linux cluster using 800 MHz Itanium CPUs. The serial reconstruction for a medium-sized breast (5 cm thickness, 11 cm chest-to-nipple distance) takes 115 minutes, while the parallel implementation takes only 3.5 minutes. The reconstruction time for a larger breast using a serial implementation takes 187 minutes, while the parallel implementation takes 6.5 minutes. No significant differences were observed between the reconstructions produced by the serial and parallel implementations.
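The iterative ML update at the heart of such reconstructions is the classic multiplicative ML-EM step. The dense random system matrix below is a toy stand-in for the real cone-beam projector, and the parallel segment-splitting is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy linear system: 11 projection measurements of a 16-voxel object
A = rng.uniform(0.0, 1.0, size=(11, 16))
x_true = rng.uniform(0.5, 2.0, size=16)
y = A @ x_true                                  # noiseless projections

x = np.ones(16)                                 # flat, positive initial estimate
sens = A.T @ np.ones(11)                        # sensitivity image, A^T 1
for _ in range(2000):
    # multiplicative ML-EM update: back-project the measured/modeled ratio
    x = x * (A.T @ (y / (A @ x))) / sens
```

The update preserves positivity by construction and drives the forward projections A x toward the measurements y.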
Reference View Selection in DIBR-Based Multiview Coding.
Maugey, Thomas; Petrazzuoli, Giovanni; Frossard, Pascal; Cagnazzo, Marco; Pesquet-Popescu, Beatrice
2016-04-01
Augmented reality, interactive navigation in 3D scenes, multiview video, and other emerging multimedia applications require large sets of images, hence larger data volumes and increased resources compared with traditional video services. The significant increase in the number of images in multiview systems leads to new challenging problems in data representation and data transmission to provide high quality of experience on resource-constrained environments. In order to reduce the size of the data, different multiview video compression strategies have been proposed recently. Most of them use the concept of reference or key views that are used to estimate other images when there is high correlation in the data set. In such coding schemes, the two following questions become fundamental: 1) how many reference views have to be chosen for keeping a good reconstruction quality under coding cost constraints? And 2) where to place these key views in the multiview data set? As these questions are largely overlooked in the literature, we study the reference view selection problem and propose an algorithm for the optimal selection of reference views in multiview coding systems. Based on a novel metric that measures the similarity between the views, we formulate an optimization problem for the positioning of the reference views, such that both the distortion of the view reconstruction and the coding rate cost are minimized. We solve this new problem with a shortest path algorithm that determines both the optimal number of reference views and their positions in the image set. We experimentally validate our solution in a practical multiview distributed coding system and in the standardized 3D-HEVC multiview coding scheme. We show that considering the 3D scene geometry in the reference view positioning problem brings significant rate-distortion improvements and outperforms the traditional coding strategy that simply selects key frames based on the distance between cameras.
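Optimal reference-view placement can be cast as a shortest path over ordered view indices, as the paper does. The cost model below (quadratic distortion in camera distance, fixed per-reference rate, endpoints forced to be references) is a toy stand-in for the paper's similarity metric.

```python
import numpy as np

# Toy 1-D camera arrangement
n_views = 10
pos = np.linspace(0.0, 9.0, n_views)

ref_rate = 5.0          # assumed cost of coding one reference view
lam = 1.0               # rate-distortion trade-off weight

def segment_distortion(i, j):
    """Distortion of predicting the views strictly between refs i and j."""
    return sum(min(abs(pos[k] - pos[i]), abs(pos[k] - pos[j])) ** 2
               for k in range(i + 1, j))

# Shortest path on the DAG of ordered reference placements
INF = float("inf")
cost = [INF] * n_views
prev = [-1] * n_views
cost[0] = ref_rate                              # view 0 forced as reference
for j in range(1, n_views):
    for i in range(j):
        c = cost[i] + lam * segment_distortion(i, j) + ref_rate
        if c < cost[j]:
            cost[j], prev[j] = c, i

# Backtrack the optimal set of reference views (view n-1 forced as reference)
refs, k = [], n_views - 1
while k != -1:
    refs.append(k)
    k = prev[k]
refs.reverse()
```

The DAG shortest path simultaneously yields the optimal number of references and their positions, mirroring the paper's joint solution.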
NASA Technical Reports Server (NTRS)
Rao, T. R. N.; Seetharaman, G.; Feng, G. L.
1996-01-01
With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability, but imposes demands on the space-to-ground communication link and the ground data management and communication system. Data compression and error control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: lossless techniques, which guarantee full reconstruction of the data, and lossy techniques, which generally give higher data compaction ratios but incur some distortion in the reconstructed data. To satisfy the many science disciplines which NASA supports, lossless data compression becomes a primary focus for the technology development. While transmitting data obtained by any lossless data compression, it is very important to use some error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To transmit the data obtained by the Rice algorithm more efficiently, it is desirable to compute the a posteriori probability (APP) for each decoded bit. A relevant algorithm for this purpose has been proposed which minimizes the bit error probability in decoding linear block and convolutional codes and yields the APP for each decoded bit. However, recent results on iterative decoding of 'turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques.
During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data; (2) a new approach to determine the generalized Hamming weights of the algebraic-geometric codes defined by a large class of curves in high-dimensional spaces; (3) some efficient improved geometric Goppa codes for disk memory systems and high-speed mass memory systems; and (4) a tree-based approach for data compression using dynamic programming.
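The Rice coding mentioned above is built from Golomb-Rice codewords: a unary quotient followed by k binary remainder bits. A minimal encoder/decoder pair (not the extended Rice algorithm itself, which adds adaptive parameter selection) looks like:

```python
def rice_encode(values, k):
    """Golomb-Rice encode non-negative integers with parameter k (M = 2**k):
    quotient v >> k in unary (ones then a zero), remainder in k binary bits."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.extend([1] * q)                   # unary quotient
        bits.append(0)                         # terminator
        bits.extend((r >> i) & 1 for i in range(k - 1, -1, -1))
    return bits

def rice_decode(bits, k, count):
    """Decode `count` integers from a Golomb-Rice bit list."""
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:                    # read the unary quotient
            q += 1
            i += 1
        i += 1                                 # skip the 0 terminator
        r = 0
        for _ in range(k):                     # read k remainder bits
            r = (r << 1) | bits[i]
            i += 1
        out.append((q << k) | r)
    return out
```

Small k suits data with small residuals; the adaptive Rice algorithm picks k per block to minimize the coded length.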
On the use of orientation filters for 3D reconstruction in event-driven stereo vision
Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe
2014-01-01
The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, thereby increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694
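A Gabor filter bank of the kind described can be sketched in a few lines; the kernel size, wavelength, and σ below are arbitrary illustration values, and a real event-driven implementation would apply the filters around each event address rather than to a dense image.

```python
import numpy as np

def gabor_kernel(size, theta, wavelength=4.0, sigma=2.0):
    """Real Gabor kernel tuned to edges of orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
            * np.cos(2.0 * np.pi * xr / wavelength))

# Bank of four orientations; the dominant edge orientation at a pixel is
# the filter with the largest response magnitude there.
thetas = np.deg2rad([0, 45, 90, 135])
bank = [gabor_kernel(9, t) for t in thetas]

img = np.zeros((9, 9))
img[:, 5:] = 1.0                                 # a vertical edge
responses = [abs(np.sum(k * img)) for k in bank]
```

For the vertical edge above, the theta = 0 filter (modulation along x) responds most strongly.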
Kinetic neoclassical calculations of impurity radiation profiles
Stotler, D. P.; Battaglia, D. J.; Hager, R.; ...
2016-12-30
Modifications of the drift-kinetic transport code XGC0 to include the transport, ionization, and recombination of individual charge states, as well as the associated radiation, are described. The code is first applied to a simulation of an NSTX H-mode discharge with a carbon impurity to demonstrate the approach to coronal equilibrium. The effects of neoclassical phenomena on the radiated power profile are examined sequentially through the activation of individual physics modules in the code. Orbit squeezing and the neoclassical inward pinch result in increased radiation for temperatures above a few hundred eV and in changes to the ratios of charge state emissions at a given electron temperature. Analogous simulations with a neon impurity yield qualitatively similar results.
Simulating X-ray bursts with a radiation hydrodynamics code
NASA Astrophysics Data System (ADS)
Seong, Gwangeon; Kwak, Kyujin
2018-04-01
Previous simulations of X-ray bursts (XRBs), for example those performed with MESA (Modules for Experiments in Stellar Astrophysics), could not address the dynamical effects of strong radiation, which are important for explaining the photospheric radius expansion (PRE) phenomena seen in many XRBs. In order to study the effects of strong radiation, we propose to use SNEC (the SuperNova Explosion Code), a 1D Lagrangian open-source code designed to solve hydrodynamics and equilibrium-diffusion radiation transport together. Because SNEC allows its radiation-hydrodynamics modules to be controlled for properly mapped inputs, the radiation-dominated pressure occurring in PRE XRBs can be handled. Here we present simulation models of PRE XRBs obtained by applying SNEC together with MESA.
Liquid Engine Design: Effect of Chamber Dimensions on Specific Impulse
NASA Technical Reports Server (NTRS)
Hoggard, Lindsay; Leahy, Joe
2009-01-01
Which assumption of combustion chemistry, frozen or equilibrium, should be used in predicting liquid rocket engine performance, and can a correlation be developed for this? A literature search using the LaSSe tool, an online repository of historical rocket data and reports, was completed. Test results of NTO/Aerozine-50 and LOX/LH2 subscale and full-scale injector and combustion chamber tests were found and studied for this task. The NASA code Chemical Equilibrium with Applications (CEA) was used to predict engine performance under both chemistry assumptions, defined here as follows. Frozen: the composition remains frozen during expansion through the nozzle. Equilibrium: instantaneous chemical equilibrium is maintained during nozzle expansion. Chamber parameters were varied to understand which dimensions drive chamber C* and Isp. Contraction ratio is the ratio of the chamber cross-sectional area to the nozzle throat area. L is the length of the chamber. Characteristic chamber length, L*, is the length the chamber would have if it were a straight tube with no converging nozzle. Goal: develop a qualitative and quantitative correlation for the performance parameters, specific impulse (Isp) and characteristic velocity (C*), as a function of one or more chamber dimensions: contraction ratio (CR), chamber length (L), and/or characteristic chamber length (L*); and determine whether chamber dimensions can be correlated to frozen or equilibrium chemistry.
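The two geometric quantities defined above reduce to one-line formulas; the sketch below simply encodes those definitions (no performance correlation is implied, and the sample numbers are arbitrary).

```python
import math

def contraction_ratio(d_chamber, d_throat):
    """CR = chamber cross-sectional area / nozzle throat area
    (for circular sections this is the diameter ratio squared)."""
    return (d_chamber / d_throat) ** 2

def characteristic_length(v_chamber, a_throat):
    """L* = chamber volume (including the converging section) / throat area."""
    return v_chamber / a_throat

# Example: a 10 cm chamber with a 5 cm throat, 1000 cm^3 volume, throat
# area pi * 2.5^2 cm^2
cr = contraction_ratio(10.0, 5.0)
l_star = characteristic_length(1000.0, math.pi * 2.5**2)
```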
Li, Yuelin; Schaller, Richard D.; Zhu, Mengze; ...
2016-01-20
In correlated oxides the coupling of quasiparticles to other degrees of freedom, such as spin and lattice, plays critical roles in the emergence of symmetry-breaking quantum ordered states such as high temperature superconductivity. We report a strong lattice coupling of photoinduced quasiparticles in the spin-orbit coupled Mott insulator Sr2IrO4 probed via optical excitation. Combining time-resolved x-ray diffraction and optical spectroscopy techniques, we reconstruct a spatiotemporal map of the diffusion of these quasiparticles. Due to the unique electronic configuration of the quasiparticles, the strong lattice correlation is unexpected, but it extends the similarity between Sr2IrO4 and cuprates to a new dimension of electron-phonon coupling which persists under highly non-equilibrium conditions.
NASA Astrophysics Data System (ADS)
Amerian, Z.; Salem, M. K.; Salar Elahi, A.; Ghoranneviss, M.
2017-03-01
Equilibrium reconstruction consists of identifying, from experimental measurements, a distribution of the plasma current density that satisfies the pressure balance constraint. Numerous methods exist to solve the Grad-Shafranov equation, which describes the equilibrium of a plasma confined by an axisymmetric magnetic field. In this paper, we propose a new numerical solution of the Grad-Shafranov equation (the axisymmetric equation, written in cylindrical coordinates, is solved with the Chebyshev collocation method) for the case in which the source term (current density function) on the right-hand side is linear. The Chebyshev collocation method computes highly accurate numerical solutions of differential equations. We describe a circular cross-section tokamak, present numerical results for the magnetic surfaces of the IR-T1 tokamak, and compare the results with an analytical solution.
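A Chebyshev collocation solve of this type can be sketched on a 1-D model problem with a known solution. The differentiation matrix below is the standard Gauss-Lobatto construction (Trefethen's `cheb`), not the authors' code, and the model equation u'' = f stands in for the Grad-Shafranov operator with a linear source.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix and Gauss-Lobatto nodes on [-1, 1]."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))                 # negative-sum trick for the diagonal
    return D, x

# Model problem: u'' = -pi^2 sin(pi x) on (-1, 1), u(+-1) = 0,
# whose exact solution is u = sin(pi x)
n = 24
D, x = cheb(n)
D2 = D @ D
f = -np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(D2[1:-1, 1:-1], f[1:-1])   # Dirichlet BCs by deletion
```

With n = 24 nodes the spectral solve already matches the exact solution to near machine precision, which is the appeal of collocation for smooth equilibrium problems.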
NASA Astrophysics Data System (ADS)
Giorgino, Toni
2018-07-01
The proper choice of collective variables (CVs) is central to biased-sampling free energy reconstruction methods in molecular dynamics simulations. The PLUMED 2 library, for instance, provides several sophisticated CV choices, implemented in a C++ framework; however, developing new CVs is still time consuming due to the need to provide code for the analytical derivatives of all functions with respect to atomic coordinates. We present two solutions to this problem, namely (a) symbolic differentiation and code generation, and (b) automatic code differentiation, in both cases leveraging open-source libraries (SymPy and Stan Math, respectively). The two approaches are demonstrated and discussed in detail implementing a realistic example CV, the local radius of curvature of a polymer. Users may use the code as a template to streamline the implementation of their own CVs using high-level constructs and automatic gradient computation.
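Approach (a), symbolic differentiation plus code generation, can be sketched with SymPy for a deliberately trivial CV, an interatomic distance (the example in the paper is a local radius of curvature):

```python
import sympy as sp

# Symbolic CV: distance between two atoms
x1, y1, z1, x2, y2, z2 = sp.symbols("x1 y1 z1 x2 y2 z2", real=True)
cv = sp.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2)

# Analytical derivatives with respect to every atomic coordinate,
# which is exactly the boilerplate a PLUMED CV must provide
coords = [x1, y1, z1, x2, y2, z2]
grads = [sp.diff(cv, q) for q in coords]

# Emit C expressions for the value and gradients
c_value = sp.ccode(sp.simplify(cv))
c_grads = [sp.ccode(sp.simplify(g)) for g in grads]
```

The generated C expressions can then be pasted into (or templated around) the C++ CV skeleton, which is the workflow the symbolic route automates.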
Data Needs and Modeling of the Upper Atmosphere
NASA Astrophysics Data System (ADS)
Brunger, M. J.; Campbell, L.
2007-04-01
We present results from our enhanced statistical equilibrium and time-step codes for atmospheric modeling. In particular we use these results to illustrate the role of electron-driven processes in atmospheric phenomena and the sensitivity of the model results to data inputs such as integral cross sections, dissociative recombination rates and chemical reaction rates.
1994-12-01
Army Research Laboratory, ATTN: AMSRL-WT-PA, Aberdeen Proving Ground, MD 21005-5066.
Pressure induced solid-solid reconstructive phase transition in LiGaO2 dominated by elastic strain
NASA Astrophysics Data System (ADS)
Hu, Qiwei; Yan, Xiaozhi; Lei, Li; Wang, Qiming; Feng, Leihao; Qi, Lei; Zhang, Leilei; Peng, Fang; Ohfuji, Hiroaki; He, Duanwei
2018-01-01
Pressure induced solid-solid reconstructive phase transitions for graphite-diamond, and wurtzite-rocksalt in GaN and AlN, occur at significantly higher pressure than expected from equilibrium coexistence, and their transition paths are always inconsistent with each other. These facts indicate that the underlying nucleation and growth mechanisms in solid-solid reconstructive phase transitions are poorly understood. Here, we propose an elastic-strain dominated mechanism in a reconstructive phase transition, β-LiGaO2 to γ-LiGaO2, based on in situ high-pressure angle-dispersive x-ray diffraction and single-crystal Raman scattering. This mechanism suggests that the pressure induced solid-solid reconstructive phase transition is neither purely diffusionless nor purely diffusive, as conventionally assumed, but a combination. Large elastic strains are accumulated, with the coherent nucleation, in the early stage of the transition. The elastic strains along the ⟨100⟩ and ⟨001⟩ directions are too large to be relaxed by the shear stress, so an intermediate structure emerges, reducing the elastic strains and making the transition energetically favorable. At higher pressures, when the elastic strains become small enough to be relaxed, the phase transition to γ-LiGaO2 begins and the coherent nucleation is substituted with a semicoherent one with Li and Ga atoms disordered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moffat, Harry K.; Jove-Colon, Carlos F.
2009-06-01
In this report, we summarize our work on developing a production-level capability for modeling brine thermodynamic properties using the open-source code Cantera. This implementation in Cantera allows for the application of chemical thermodynamics to describe the interactions between a solid and an electrolyte solution at chemical equilibrium. The formulations to evaluate the thermodynamic properties of electrolytes are based on Pitzer's model for molality-based activity coefficients, using a real equation of state (EoS) for water. In addition, the thermodynamic properties of solutes at elevated temperatures and pressures are computed using the revised Helgeson-Kirkham-Flowers (HKF) EoS for ionic and neutral aqueous species. The thermodynamic data parameters for the Pitzer formulation and HKF EoS are from the thermodynamic database compilation developed for the Yucca Mountain Project (YMP) and used with the computer code EQ3/6. We describe the adopted equations and their implementation within Cantera and also provide several validated examples relevant to the calculation of extensive properties of electrolyte solutions.
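Pitzer's molality-based activity-coefficient model can be sketched for a single 1:1 electrolyte. The expression below is the standard Pitzer form of ln γ± with textbook NaCl parameters (β0, β1, Cφ) and Aφ ≈ 0.392 at 25 °C; the temperature and pressure dependence that the Cantera/HKF implementation handles is omitted.

```python
import numpy as np

def ln_gamma_pm(m, beta0, beta1, Cphi, Aphi=0.392, b=1.2, alpha=2.0):
    """Pitzer mean activity coefficient ln(gamma+-) for a 1:1 electrolyte
    at 25 C; m is the molality (= ionic strength I for a 1:1 salt)."""
    I = m
    sI = np.sqrt(I)
    # extended Debye-Hueckel term
    f = -Aphi * (sI / (1.0 + b * sI) + (2.0 / b) * np.log(1.0 + b * sI))
    # second virial coefficient B^gamma with its ionic-strength dependence
    Bg = 2.0 * beta0 + (2.0 * beta1 / (alpha**2 * I)) * (
        1.0 - (1.0 + alpha * sI - alpha**2 * I / 2.0) * np.exp(-alpha * sI))
    Cg = 1.5 * Cphi                              # third virial coefficient
    return f + m * Bg + m**2 * Cg

# NaCl at 25 C (Pitzer parameters beta0 = 0.0765, beta1 = 0.2664, Cphi = 0.00127)
gamma_dilute = np.exp(ln_gamma_pm(0.001, 0.0765, 0.2664, 0.00127))
gamma_1m = np.exp(ln_gamma_pm(1.0, 0.0765, 0.2664, 0.00127))
```

At 1 mol/kg this reproduces the measured NaCl mean activity coefficient (about 0.657) to within a few parts per thousand, which is the kind of validation check the report's examples perform.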
NASA Astrophysics Data System (ADS)
Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui
2016-07-01
Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented, based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises collection of drill core data, karst cave stochastic model generation, SLIDE simulation, and bisection-method optimization. Borehole investigations are performed, and the statistical results show that the length of the karst caves fits a negative exponential distribution model, while the length of carbonatite does not follow any standard distribution. The inverse transform method and acceptance-rejection method are used to reproduce the lengths of the karst caves and carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst caves on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
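The two sampling schemes named above can be sketched as follows. The exponential mean and the triangular stand-in for the carbonatite length density are invented for illustration; the real KCSMG code fits the empirical borehole data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Inverse transform sampling: cave lengths with negative exponential CDF
# F(x) = 1 - exp(-x / mean_length), so x = -mean_length * ln(1 - u)
def sample_cave_lengths(mean_length, n):
    u = rng.uniform(size=n)
    return -mean_length * np.log(1.0 - u)

# Acceptance-rejection sampling for a density with no standard form;
# a toy triangular target on [0, 10] stands in for the empirical histogram
def target_pdf(x):
    return np.where((x >= 0.0) & (x <= 10.0), (10.0 - x) / 50.0, 0.0)

def sample_carbonatite(n):
    out = []
    c = 0.2                      # envelope height >= max of target_pdf
    while len(out) < n:
        x = rng.uniform(0.0, 10.0)               # uniform proposal
        if rng.uniform(0.0, c) < target_pdf(x):  # accept with prob f(x)/c
            out.append(x)
    return np.array(out)

caves = sample_cave_lengths(5.0, 20000)
carb = sample_carbonatite(5000)
```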
Spontaneous Mutation Rate in the Smallest Photosynthetic Eukaryotes
Krasovec, Marc; Eyre-Walker, Adam; Sanchez-Ferandin, Sophie
2017-01-01
Mutation is the ultimate source of genetic variation, and knowledge of mutation rates is fundamental for our understanding of all evolutionary processes. High throughput sequencing of mutation accumulation lines has provided genome-wide spontaneous mutation rates in a dozen model species, but estimates from nonmodel organisms from much of the diversity of life are very limited. Here, we report mutation rates in four haploid marine bacterial-sized photosynthetic eukaryotic algae: Bathycoccus prasinos, Ostreococcus tauri, Ostreococcus mediterraneus, and Micromonas pusilla. The spontaneous mutation rate between species varies from μ = 4.4 × 10^-10 to 9.8 × 10^-10 mutations per nucleotide per generation. Within genomes, there is a two-fold increase of the mutation rate in intergenic regions, consistent with an optimization of mismatch and transcription-coupled DNA repair in coding sequences. Additionally, we show that deviation from the equilibrium GC content increases the mutation rate by ∼2% to ∼12% because of a GC bias in coding sequences. More generally, the difference between the observed and equilibrium GC content of genomes explains some of the inter-specific variation in mutation rates. PMID:28379581
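The equilibrium GC content invoked above follows from balancing the two opposing mutation fluxes; a one-line calculation makes the relation concrete (the rates used here are illustrative, not the paper's estimates):

```python
# At equilibrium the AT->GC and GC->AT fluxes balance:
#   (1 - GC_eq) * u_at_to_gc = GC_eq * u_gc_to_at
# which rearranges to GC_eq = u_at_to_gc / (u_at_to_gc + u_gc_to_at)
def gc_equilibrium(u_at_to_gc, u_gc_to_at):
    return u_at_to_gc / (u_at_to_gc + u_gc_to_at)

# Hypothetical rates: GC->AT mutations twice as frequent as AT->GC
gc_eq = gc_equilibrium(1.0, 2.0)
```

A genome whose observed GC content sits away from this equilibrium value experiences a net directional mutation flux, which is the source of the rate inflation the paper quantifies.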
Modeling of Resistive Wall Modes in Tokamak and Reversed Field Pinch Configurations of KTX
NASA Astrophysics Data System (ADS)
Han, Rui; Zhu, Ping; Bai, Wei; Lan, Tao; Liu, Wandong
2016-10-01
The resistive wall mode (RWM) is believed to be one of the leading causes of macroscopic degradation of plasma confinement in tokamaks and reversed field pinches (RFP). In this study, we evaluate the linear RWM instability of the Keda Torus eXperiment (KTX) in both tokamak and RFP configurations. For the tokamak configuration, the extended MHD code NIMROD is employed to calculate the dependence of the RWM growth rate on the position and conductivity of the vacuum wall for a model tokamak equilibrium of KTX in the large aspect-ratio approximation. For the RFP configuration, the standard dispersion relation for the RWM based on the MHD energy principle has been evaluated for a cylindrical α-Θ model of the KTX plasma equilibrium, in an effort to investigate the effects of a thin wall on the RWM in KTX. Full MHD calculations of the RWM in the RFP configuration of KTX using the NIMROD code are also being developed. Supported by National Magnetic Confinement Fusion Science Program of China Grant Nos. 2014GB124002, 2015GB101004, 2011GB106000, and 2011GB106003.
NASA Astrophysics Data System (ADS)
Prisiazhniuk, D.; Krämer-Flecken, A.; Conway, G. D.; Happel, T.; Lebschy, A.; Manz, P.; Nikolaeva, V.; Stroth, U.; the ASDEX Upgrade Team
2017-02-01
In fusion machines, turbulent eddies are expected to be aligned with the direction of the magnetic field lines and to propagate in the perpendicular direction. Time delay measurements of density fluctuations can be used to calculate the magnetic field pitch angle α and perpendicular velocity v⊥ profiles. The method is applied to poloidal correlation reflectometry installed at ASDEX Upgrade and TEXTOR, which measures density fluctuations from poloidally and toroidally separated antennas. Validation of the method is achieved by comparing the perpendicular velocity (composed of the E×B drift and the phase velocity of turbulence, v⊥ = v_E×B + v_ph) with Doppler reflectometry measurements and with neoclassical v_E×B calculations. An important condition for the application of the method is the presence of turbulence with a sufficiently long decorrelation time. It is shown that at the shear layer the decorrelation time is reduced, limiting the application of the method. The magnetic field pitch angle measured by this method shows the expected dependence on the magnetic field, plasma current and radial position. The profile of the pitch angle reproduces the expected shape and values. However, comparison with the equilibrium reconstruction code CLISTE suggests an additional inclination of turbulent eddies at the pedestal position (2-3°). This additional angle decreases towards the core and at the edge.
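The time-delay estimation underlying the pitch-angle measurement can be sketched with a cross-correlation peak search. The synthetic signals and the 12-sample lag below are invented; in the experiment the delays between poloidally and toroidally separated antennas, combined with the antenna geometry, yield α and v⊥.

```python
import numpy as np

rng = np.random.default_rng(4)

def time_delay(sig_a, sig_b, dt):
    """Delay of sig_b relative to sig_a from the cross-correlation peak
    (positive when sig_b lags sig_a)."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    xc = np.correlate(b, a, mode="full")
    lag = int(np.argmax(xc)) - (len(a) - 1)
    return lag * dt

# Two antennas seeing the same smoothed noise, one delayed by 12 samples
n, true_lag, dt = 4096, 12, 1e-6
base = np.convolve(rng.standard_normal(n + true_lag),
                   np.ones(8) / 8.0, mode="same")   # finite correlation length
sig_a = base[true_lag:true_lag + n]
sig_b = base[:n]                                     # delayed copy of sig_a
delay = time_delay(sig_a, sig_b, dt)
```

The smoothing step matters: as the abstract notes, the turbulence must decorrelate slowly enough that the two antennas see the same structures.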
Viking Afterbody Heating Computations and Comparisons to Flight Data
NASA Technical Reports Server (NTRS)
Edquist, Karl T.; Wright, Michael J.; Allen, Gary A., Jr.
2006-01-01
Computational fluid dynamics predictions of Viking Lander 1 entry vehicle afterbody heating are compared to flight data. The analysis includes a derivation of heat flux from temperature data at two base cover locations, as well as a discussion of available reconstructed entry trajectories. Based on the raw temperature-time history data, convective heat flux is derived to be 0.63-1.10 W/cm2 for the aluminum base cover at the time of thermocouple failure. Peak heat flux at the fiberglass base cover thermocouple is estimated to be 0.54-0.76 W/cm2, occurring 16 seconds after peak stagnation point heat flux. Navier-Stokes computational solutions are obtained with two separate codes using an 8-species Mars gas model in chemical and thermal non-equilibrium. Flowfield solutions using local time-stepping did not result in converged heating at either thermocouple location. A global time-stepping approach improved the computational stability, but steady state heat flux was not reached for either base cover location. Both thermocouple locations lie within a separated flow region of the base cover that is likely unsteady. Heat flux computations averaged over the solution history are generally below the flight data and do not vary smoothly over time for both base cover locations. Possible reasons for the mismatch between flight data and flowfield solutions include underestimated conduction effects and limitations of the computational methods.
Axisymmetric Plume Simulations with NASA's DSMC Analysis Code
NASA Technical Reports Server (NTRS)
Stewart, B. D.; Lumpkin, F. E., III
2012-01-01
A comparison of axisymmetric Direct Simulation Monte Carlo (DSMC) Analysis Code (DAC) results to analytic and Computational Fluid Dynamics (CFD) solutions in the near continuum regime and to 3D DAC solutions in the rarefied regime for expansion plumes into a vacuum is performed to investigate the validity of the newest DAC axisymmetric implementation. This new implementation, based on the standard DSMC axisymmetric approach where the representative molecules are allowed to move in all three dimensions but are rotated back to the plane of symmetry by the end of the move step, has been fully integrated into the 3D-based DAC code and therefore retains all of DAC's features, such as being able to compute flow over complex geometries and to model chemistry. Axisymmetric DAC results for a spherically symmetric isentropic expansion are in very good agreement with a source flow analytic solution in the continuum regime and show departure from equilibrium downstream of the estimated breakdown location. Axisymmetric density contours also compare favorably against CFD results for the R1E thruster while temperature contours depart from equilibrium very rapidly away from the estimated breakdown surface. Finally, axisymmetric and 3D DAC results are in very good agreement over the entire plume region and, as expected, this new axisymmetric implementation shows a significant reduction in computer resources required to achieve accurate simulations for this problem over the 3D simulations.
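The rotate-back trick described above can be sketched in a few lines. This is the textbook axisymmetric DSMC move step, not DAC's implementation:

```python
import numpy as np

def axisym_move(x, r, vx, vr, vth, dt):
    """Advance one particle in 3D, then rotate it back to the symmetry
    plane (schematic of the standard axisymmetric DSMC move step)."""
    x_new = x + vx * dt
    # Before the move the particle sits in the plane at (y, z) = (r, 0)
    y = r + vr * dt
    z = vth * dt
    r_new = np.hypot(y, z)
    # Rotate velocities by the angle that brings (y, z) back to the plane
    cos_a, sin_a = y / r_new, z / r_new
    vr_new = cos_a * vr + sin_a * vth
    vth_new = -sin_a * vr + cos_a * vth
    return x_new, r_new, vr_new, vth_new

x1, r1, vr1, vth1 = axisym_move(0.0, 1.0, 100.0, 50.0, 20.0, 1e-3)
```

The rotation conserves the particle's speed and its angular momentum about the axis (r times v_theta), which is what makes the plane-of-symmetry bookkeeping legitimate.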
NASA Astrophysics Data System (ADS)
Rit, S.; Vila Oliva, M.; Brousmiche, S.; Labarbe, R.; Sarrut, D.; Sharp, G. C.
2014-03-01
We propose the Reconstruction Toolkit (RTK, http://www.openrtk.org), an open-source toolkit for fast cone-beam CT reconstruction, based on the Insight Toolkit (ITK) and using GPU code extracted from Plastimatch. RTK is developed by an open consortium (see affiliations) under the non-contaminating Apache 2.0 license. The quality of the platform is daily checked with regression tests in partnership with Kitware, the company supporting ITK. Several features are already available: Elekta, Varian and IBA inputs, multi-threaded Feldkamp-Davis-Kress reconstruction on CPU and GPU, Parker short scan weighting, multi-threaded CPU and GPU forward projectors, etc. Each feature is either accessible through command line tools or C++ classes that can be included in independent software. A MIDAS community has been opened to share CatPhan datasets of several vendors (Elekta, Varian and IBA). RTK will be used in the upcoming cone-beam CT scanner developed by IBA for proton therapy rooms. Many features are under development: new input format support, iterative reconstruction, hybrid Monte Carlo / deterministic CBCT simulation, etc. RTK has been built to freely share tomographic reconstruction developments between researchers and is open for new contributions.
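The filtering step at the heart of FDK-type reconstruction is a 1D ramp filter applied to each projection row. A parallel-beam simplification (not RTK code, which adds cone-beam weighting and Parker weights on top of this):

```python
import numpy as np

def ramp_filter(projection, spacing=1.0):
    """Ram-Lak (ramp) filtering of one projection row in the Fourier
    domain: the 1D filter inside FBP/FDK, in its simplest form."""
    n = len(projection)
    freqs = np.fft.fftfreq(n, d=spacing)
    return np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)).real

filtered = ramp_filter(np.ones(64))   # the DC component is removed entirely
```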
Natarajan, Logesh Kumar; Wu, Sean F
2012-06-01
This paper presents helpful guidelines and strategies for reconstructing the vibro-acoustic quantities on a highly non-spherical surface by using the Helmholtz equation least squares (HELS). This study highlights that a computationally simple code based on the spherical wave functions can produce an accurate reconstruction of the acoustic pressure and normal surface velocity on planar surfaces. The key is to select the optimal origin of the coordinate system behind the planar surface, choose a target structural wavelength to be reconstructed, set an appropriate stand-off distance and microphone spacing, use a hybrid regularization scheme to determine the optimal number of the expansion functions, etc. The reconstructed vibro-acoustic quantities are validated rigorously via experiments by comparing the reconstructed normal surface velocity spectra and distributions with the benchmark data obtained by scanning a laser vibrometer over the plate surface. Results confirm that following the proposed guidelines and strategies can ensure the accuracy in reconstructing the normal surface velocity up to the target structural wavelength, and produce much more satisfactory results than a straight application of the original HELS formulations. Experiment validations on a baffled, square plate were conducted inside a fully anechoic chamber.
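The HELS idea (expand the field in spherical wave functions, then solve a least-squares problem for the expansion coefficients) can be sketched for the axisymmetric case; this toy uses made-up geometry and omits the paper's regularization and origin-selection strategy:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def hels_basis(k, r, theta, n_max):
    """Outgoing axisymmetric spherical waves h_n(kr) P_n(cos theta)."""
    cols = []
    for n in range(n_max + 1):
        h_n = spherical_jn(n, k * r) + 1j * spherical_yn(n, k * r)
        cols.append(h_n * eval_legendre(n, np.cos(theta)))
    return np.column_stack(cols)

# Synthetic "measurements" generated from known expansion coefficients
rng = np.random.default_rng(1)
k = 2.0                                     # wavenumber, assumed
r = rng.uniform(1.0, 2.0, 40)               # field-point radii
theta = rng.uniform(0.1, np.pi - 0.1, 40)   # polar angles
A = hels_basis(k, r, theta, n_max=3)
c_true = np.array([1.0, 0.5 - 0.2j, 0.0, 0.3])
p_meas = A @ c_true
c_fit, *_ = np.linalg.lstsq(A, p_meas, rcond=None)
```

In the paper the number of expansion functions is itself chosen by a hybrid regularization scheme; here the truncation order is simply fixed.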
Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming
2011-02-01
High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to get reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-unmatched boundaries from the MC simulation, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.
Reconstruction method for fringe projection profilometry based on light beams.
Li, Xuexing; Zhang, Zhijiang; Yang, Chen
2016-12-01
A novel reconstruction method for fringe projection profilometry, based on light beams, is proposed and verified by experiments. Commonly used calibration techniques require either the parameters of a projector calibration or reference planes placed at many known positions. Introducing a projector calibration can reduce the accuracy of the reconstruction result, and setting reference planes at many known positions is a time-consuming process. Therefore, in this paper, a reconstruction method without projector parameters is proposed, in which only two reference planes are introduced. A series of light beams determined by the subpixel point-to-point map on the two reference planes, combined with their reflected light beams determined by the camera model, are used to calculate the 3D coordinates of reconstruction points. Furthermore, the bundle adjustment strategy and the complementary gray-code phase-shifting method are utilized to ensure accuracy and stability. Qualitative and quantitative comparisons as well as experimental tests demonstrate the performance of the proposed approach; the measurement accuracy can reach about 0.0454 mm.
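The geometric core, intersecting each projector light beam (fixed by its two reference-plane points) with the corresponding camera ray, reduces to the closest point between two 3D lines. A sketch with hypothetical variable names:

```python
import numpy as np

def beam_camera_point(beam_p1, beam_p2, cam_origin, cam_dir):
    """Midpoint of the shortest segment between the projector beam
    (through its two reference-plane intersections) and the camera ray."""
    d1 = (beam_p2 - beam_p1) / np.linalg.norm(beam_p2 - beam_p1)
    d2 = cam_dir / np.linalg.norm(cam_dir)
    w = beam_p1 - cam_origin
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b                 # zero only for parallel lines
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return (beam_p1 + t1 * d1 + cam_origin + t2 * d2) / 2.0

# Beam through the surface point (1, 2, 3); camera at the origin looking at it
p = beam_camera_point(np.array([1.0, 2.0, 0.0]), np.array([1.0, 2.0, 6.0]),
                      np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 3.0]))
```

Taking the midpoint of the shortest segment makes the estimate robust to the small skew between the two lines introduced by calibration and phase noise.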
Accelerating image reconstruction in dual-head PET system by GPU and symmetry properties.
Chou, Cheng-Ying; Dong, Yun; Hung, Yukai; Kao, Yu-Jiun; Wang, Weichung; Kao, Chien-Min; Chen, Chin-Tu
2012-01-01
Positron emission tomography (PET) is an important imaging modality in both clinical usage and research studies. We have developed a compact high-sensitivity PET system that consisted of two large-area panel PET detector heads, which produce more than 224 million lines of response and thus impose dramatic computational demands. In this work, we employed a state-of-the-art graphics processing unit (GPU), NVIDIA Tesla C2070, to yield an efficient reconstruction process. Our approaches ingeniously integrate the distinguished features of the symmetry properties of the imaging system and GPU architectures, including block/warp/thread assignments and effective memory usage, to accelerate the computations for ordered subset expectation maximization (OSEM) image reconstruction. The OSEM reconstruction algorithms were implemented employing both CPU-based and GPU-based codes, and their computational performance was quantitatively analyzed and compared. The results showed that the GPU-accelerated scheme can drastically reduce the reconstruction time and thus can largely expand the applicability of the dual-head PET system.
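The OSEM update itself is compact; the paper's contribution is mapping it onto GPU blocks and system symmetries, which a dense toy version like this one ignores entirely:

```python
import numpy as np

def osem(A, y, n_subsets=4, n_iters=50):
    """Ordered-subset EM with a dense system matrix (textbook form)."""
    x = np.ones(A.shape[1])
    for _ in range(n_iters):
        for rows in np.array_split(np.arange(A.shape[0]), n_subsets):
            As = A[rows]
            ratio = y[rows] / np.maximum(As @ x, 1e-12)     # measured / modeled
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, (64, 16))       # toy system matrix
x_true = rng.uniform(0.5, 2.0, 16)        # "activity" distribution
y = A @ x_true                            # noise-free projections
x_hat = osem(A, y, n_subsets=4, n_iters=200)
```

The multiplicative update keeps the image nonnegative automatically, and cycling through subsets gives the well-known speedup over plain MLEM.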
High-Performance 3D Compressive Sensing MRI Reconstruction Using Many-Core Architectures
Kim, Daehyun; Trzasko, Joshua; Smelyanskiy, Mikhail; Haider, Clifton; Dubey, Pradeep; Manduca, Armando
2011-01-01
Compressive sensing (CS) describes how sparse signals can be accurately reconstructed from many fewer samples than required by the Nyquist criterion. Since MRI scan duration is proportional to the number of acquired samples, CS has been gaining significant attention in MRI. However, the computationally intensive nature of CS reconstructions has precluded their use in routine clinical practice. In this work, we investigate how different throughput-oriented architectures can benefit one CS algorithm and what levels of acceleration are feasible on different modern platforms. We demonstrate that a CUDA-based code running on an NVIDIA Tesla C2050 GPU can reconstruct a 256 × 160 × 80 volume from an 8-channel acquisition in 19 seconds, which is in itself a significant improvement over the state of the art. We then show that Intel's Knights Ferry can perform the same 3D MRI reconstruction in only 12 seconds, bringing CS methods even closer to clinical viability. PMID:21922017
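A minimal flavor of such a reconstruction is iterative soft thresholding with k-space data consistency, shown here on an image that is itself sparse. This is a toy stand-in for the paper's algorithm, not the GPU code:

```python
import numpy as np

def cs_recon(kspace, mask, n_iters=200, thresh=0.02):
    """Alternate soft thresholding (sparsity prior) with re-insertion
    of the measured k-space samples (data consistency)."""
    x = np.fft.ifft2(kspace)
    for _ in range(n_iters):
        mag = np.abs(x)
        x = x * np.maximum(1.0 - thresh / np.maximum(mag, 1e-12), 0.0)
        k = np.fft.fft2(x)
        k[mask] = kspace[mask]            # keep the measured samples
        x = np.fft.ifft2(k)
    return x

rng = np.random.default_rng(2)
img = np.zeros((64, 64))
img[rng.integers(0, 64, 20), rng.integers(0, 64, 20)] = 1.0  # sparse "image"
mask = rng.random((64, 64)) < 0.4                            # 40% random sampling
kspace = np.fft.fft2(img) * mask
x_hat = cs_recon(kspace, mask)
x_zero_filled = np.fft.ifft2(kspace)
```

Even this crude scheme beats the zero-filled inverse FFT; the paper's point is that the heavy lifting in realistic 3D versions maps well onto throughput-oriented hardware.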
Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus
2015-01-01
Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 – Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning. PMID:26217710
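The blocking pattern the article tunes is generic; here it is for a dense matrix product (the article blocks sinogram rows and volume slices instead, and relies on AVX rather than NumPy):

```python
import numpy as np

def blocked_matmul(a, b, block=64):
    """Matrix product computed block-by-block so each tile can stay
    resident in cache while it is reused."""
    n, k = a.shape
    _, m = b.shape
    out = np.zeros((n, m))
    for i0 in range(0, n, block):
        for k0 in range(0, k, block):
            for j0 in range(0, m, block):
                out[i0:i0 + block, j0:j0 + block] += (
                    a[i0:i0 + block, k0:k0 + block] @ b[k0:k0 + block, j0:j0 + block]
                )
    return out

rng = np.random.default_rng(4)
a, b = rng.standard_normal((200, 130)), rng.standard_normal((130, 90))
```

The block size is the tunable parameter the article studies: too small wastes loop overhead, too large spills the working set out of cache.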
RTE: A computer code for Rocket Thermal Evaluation
NASA Technical Reports Server (NTRS)
Naraghi, Mohammad H. N.
1995-01-01
The numerical model for a rocket thermal analysis code (RTE) is discussed. RTE is a comprehensive thermal analysis code for thermal analysis of regeneratively cooled rocket engines. The input to the code consists of the composition of fuel/oxidant mixture and flow rates, chamber pressure, coolant temperature and pressure, dimensions of the engine, materials and the number of nodes in different parts of the engine. The code allows for temperature variation in axial, radial and circumferential directions. By implementing an iterative scheme, it provides nodal temperature distribution, rates of heat transfer, hot gas and coolant thermal and transport properties. The fuel/oxidant mixture ratio can be varied along the thrust chamber. This feature allows the user to incorporate a non-equilibrium model or an energy release model for the hot-gas-side. The user has the option of bypassing the hot-gas-side calculations and directly inputting the gas-side fluxes. This feature is used to link RTE to a boundary layer module for the hot-gas-side heat flux calculations.
STELLTRANS: A Transport Analysis Suite for Stellarators
NASA Astrophysics Data System (ADS)
Mittelstaedt, Joseph; Lazerson, Samuel; Pablant, Novimir; Weir, Gavin; W7-X Team
2016-10-01
The stellarator transport code STELLTRANS allows us to better analyze the power balance in W7-X. Although profiles of temperature and density are measured experimentally, geometrical factors are needed in conjunction with these measurements to properly analyze heat flux densities in stellarators. The STELLTRANS code interfaces with VMEC to find an equilibrium flux surface configuration and with TRAVIS to determine the RF heating and current drive in the plasma. Stationary transport equations are then considered which are solved using a boundary value differential equation solver. The equations and quantities considered are averaged over flux surfaces to reduce the system to an essentially one dimensional problem. We have applied this code to data from W7-X and were able to calculate the heat flux coefficients. We will also present extensions of the code to a predictive capability which would utilize DKES to find neoclassical transport coefficients to update the temperature and density profiles.
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1993-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include static system, steady, one-dimensional, inviscid flow, shock initiated reaction, and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reaction, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.
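The "stiff" character comes from widely separated rate scales. A toy two-step mechanism integrated with an implicit (BDF) method, in the spirit of what LSENS does but not using LSENS itself:

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 1.0e4, 1.0                       # fast and slow rate coefficients

def rhs(t, y):
    """A -> B -> C: the fast first step makes the system stiff."""
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0],
                method="BDF", rtol=1e-8, atol=1e-12)
a_end, b_end, c_end = sol.y[:, -1]
```

An explicit integrator would be forced to steps of order 1/k1 for the whole interval; the implicit method takes large steps once the fast transient has died away, which is exactly why LSENS uses one.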
Calculations of Helium Bubble Evolution in the PISCES Experiments with Cluster Dynamics
NASA Astrophysics Data System (ADS)
Blondel, Sophie; Younkin, Timothy; Wirth, Brian; Lasa, Ane; Green, David; Canik, John; Drobny, Jon; Curreli, Davide
2017-10-01
Plasma surface interactions in fusion tokamak reactors involve an inherently multiscale, highly non-equilibrium set of phenomena, for which current models are inadequate to predict the divertor response to and feedback on the plasma. In this presentation, we describe the latest code developments of Xolotl, a spatially-dependent reaction diffusion cluster dynamics code to simulate the divertor surface response to fusion-relevant plasma exposure. Xolotl is part of a code-coupling effort to model both plasma and material simultaneously; the first benchmark for this effort is the series of PISCES linear device experiments. We will discuss the processes leading to surface morphology changes, which further affect erosion, as well as how Xolotl has been updated in order to communicate with other codes. Furthermore, we will show results of the sub-surface evolution of helium bubbles in tungsten as well as the material surface displacement under these conditions.
Pulse Vector-Excitation Speech Encoder
NASA Technical Reports Server (NTRS)
Davidson, Grant; Gersho, Allen
1989-01-01
Proposed pulse vector-excitation speech encoder (PVXC) encodes analog speech signals into digital representation for transmission or storage at rates below 5 kilobits per second. Produces high-quality reconstructed speech with less computation than comparable speech-encoding systems require. Has some characteristics of multipulse linear predictive coding (MPLPC) and of code-excited linear prediction (CELP). System uses mathematical model of vocal tract in conjunction with set of excitation vectors and perceptually-based error criterion to synthesize natural-sounding speech.
Zhang, Ying-Ying; Yang, Cai; Zhang, Ping
2017-08-01
In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on the Riemannian manifold. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary built from image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, the initial result is improved by calculating reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly against the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method.
Plio-Pleistocene climate sensitivity evaluated using high-resolution CO2 records.
Martínez-Botí, M A; Foster, G L; Chalk, T B; Rohling, E J; Sexton, P F; Lunt, D J; Pancost, R D; Badger, M P S; Schmidt, D N
2015-02-05
Theory and climate modelling suggest that the sensitivity of Earth's climate to changes in radiative forcing could depend on the background climate. However, palaeoclimate data have thus far been insufficient to provide a conclusive test of this prediction. Here we present atmospheric carbon dioxide (CO2) reconstructions based on multi-site boron-isotope records from the late Pliocene epoch (3.3 to 2.3 million years ago). We find that Earth's climate sensitivity to CO2-based radiative forcing (Earth system sensitivity) was half as strong during the warm Pliocene as during the cold late Pleistocene epoch (0.8 to 0.01 million years ago). We attribute this difference to the radiative impacts of continental ice-volume changes (the ice-albedo feedback) during the late Pleistocene, because equilibrium climate sensitivity is identical for the two intervals when we account for such impacts using sea-level reconstructions. We conclude that, on a global scale, no unexpected climate feedbacks operated during the warm Pliocene, and that predictions of equilibrium climate sensitivity (excluding long-term ice-albedo feedbacks) for our Pliocene-like future (with CO2 levels up to maximum Pliocene levels of 450 parts per million) are well described by the currently accepted range of an increase of 1.5 K to 4.5 K per doubling of CO2.
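The quoted range can be turned into numbers with the standard simplified CO2 forcing expression; the 5.35 W m^-2 coefficient is a common approximation used here as an assumption, not a value from the paper:

```python
import numpy as np

def warming_at(co2_ppm, co2_preindustrial=280.0, ecs_per_doubling=3.0):
    """Equilibrium warming at a given CO2 level, scaling a per-doubling
    sensitivity by the logarithmic forcing ratio."""
    forcing = 5.35 * np.log(co2_ppm / co2_preindustrial)   # W m^-2
    forcing_2x = 5.35 * np.log(2.0)                        # forcing per doubling
    return ecs_per_doubling * forcing / forcing_2x

dT_450 = warming_at(450.0)   # warming at peak Pliocene CO2, mid-range sensitivity
```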
Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu
2015-07-21
Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy.
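The orthogonality constraint is what makes the dictionary update cheap: for fixed sparse codes S and training patches X, the best orthogonal dictionary D has a closed-form Procrustes solution. A sketch of that single step, not the paper's full alternating algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 16, 500
D_true, _ = np.linalg.qr(rng.standard_normal((n, n)))         # orthogonal dictionary
S = rng.standard_normal((n, m)) * (rng.random((n, m)) < 0.2)  # sparse codes
X = D_true @ S                                                # training patches

# argmin over orthogonal D of ||X - D S||_F is U V^T from the SVD of X S^T
U, _, Vt = np.linalg.svd(X @ S.T)
D_hat = U @ Vt
```

With an orthogonal dictionary the sparse-coding step also collapses: the coefficients are just thresholded values of D^T applied to the patches, no NP-hard pursuit required, which is the source of the reported speedup.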
Reconstructing mantle volatile contents through the veil of degassing
NASA Astrophysics Data System (ADS)
Tucker, J.; Mukhopadhyay, S.; Gonnermann, H. M.
2014-12-01
The abundance of volatile elements in the mantle reveals critical information about the Earth's origin and evolution such as the chemical constituents that built the Earth and material exchange between the mantle and exosphere. However, due to magmatic degassing, volatile element abundances measured in basalts usually do not represent those in undegassed magmas and hence in the mantle source of the basalts. While estimates of average mantle concentrations of some volatile species can be obtained, such as from the 3He flux into the oceans, volatile element variability within the mantle remains poorly constrained. Here, we use CO2-He-Ne-Ar-Xe measurements in basalts and a new degassing model to reconstruct the initial volatile contents of 8 MORBs from the Mid-Atlantic Ridge and Southwest Indian Ridge that span a wide geochemical range from depleted to enriched MORBs. We first show that equilibrium degassing (e.g. Rayleigh degassing) cannot simultaneously fit the measured CO2-He-Ne-Ar-Xe compositions in MORBs and argue that kinetic fractionation between bubbles and melt lowers the dissolved ratios of light to heavy noble gas species in the melt from that expected at equilibrium. We present a degassing model (after Gonnermann and Mukhopadhyay, 2007) that explicitly accounts for diffusive fractionation between melt and bubbles. The model computes the degassed composition based on an initial volatile composition and a diffusive timescale. To reconstruct the undegassed volatile content of a sample, we find the initial composition and degassing timescale which minimize the misfit between predicted and measured degassed compositions. Initial 3He contents calculated for the 8 MORB samples vary by a factor of ~7. We observe a correlation between initial 3He and CO2 contents, indicating relatively constant CO2/3He ratios despite the geochemical diversity and variable gas content in the basalts.
Importantly, the gas-rich popping rock from the North Atlantic, as well as the average mantle ratio computed from the ridge 3He flux and independently estimated CO2 content fall along the same correlation. This observation suggests that undegassed CO2 and noble gas concentrations can be reconstructed in individual samples through measurement of noble gases and CO2 in erupted basalts.
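The equilibrium end-member that the measurements rule out is classical open-system (Rayleigh) degassing. One common way to write it: if a trace gas is lost at alpha times the fractional rate of the CO2 carrier, its dissolved concentration follows C/C0 = f^alpha, with f the remaining carrier fraction. A one-line sketch (the alpha value is illustrative, and the paper's model adds diffusive fractionation on top of this law):

```python
def rayleigh_residual(f_co2, alpha):
    """Dissolved fraction of a trace gas remaining after open-system
    (Rayleigh) degassing, when its fractional loss rate is alpha times
    that of the CO2 carrier: C/C0 = f_co2**alpha."""
    return f_co2 ** alpha

# A gas stripped faster than the carrier (alpha > 1, illustrative value)
residual = rayleigh_residual(0.1, alpha=2.0)   # 10% of carrier left -> 1% of the trace gas
```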
Equilibrium Line Altitude fluctuations at HualcaHualca volcano (southern Peru).
NASA Astrophysics Data System (ADS)
Alcalá, Jesus; Palacios, David; Juan Zamorano, Jose
2015-04-01
Interest in Andean glaciers has increased substantially during recent decades, due to their high sensitivity to climate fluctuations. In this sense, the Equilibrium Line Altitude (ELA) is a reliable indicator of climate variability that has frequently been used to reconstruct palaeoenvironmental conditions at different temporal and spatial scales. However, the number of sites with ELA reconstructions is still insufficient to determine patterns in tropical climate or to estimate atmospheric cooling since the Last Glacial Maximum. The main purpose of this study is to contribute to resolving tropical climate evolution through ELA calculations on HualcaHualca (15° 43' S; 71° 52' W; 6,025 masl), a large andesitic stratovolcano located in the south-western Peruvian Andes approximately 70 km north-west of Arequipa. We applied the Terminus Headwall Altitude Ratio (THAR) method with ratios of 0.2, 0.4, 0.5 and 0.57, together with the Accumulation Area Ratio (AAR) and Accumulation Area Balance Ratio (AABR) methods, in four valleys of HualcaHualca volcano: Huayuray (north side), Pujro Huayjo (southwest side), Mollebaya (east side) and Mucurca (west side). To estimate the ELA depression, we calculated the difference between the ELA in 1955 and its position at the Maximum Glacier Extent (MGE), during Tardiglacial phases, in the Little Ice Age (LIA) and in 2000. Paleotemperature reconstructions were derived from a vertical temperature gradient of 6.5 °C per km, based on the GODDARD global observation system, considered the most appropriate model for the arid Andes. During the MGE, the ELA was located between 5,005 (AABR) and 5,215 (AAR 0.67) masl. But in 1955, the ELA had risen to 5,685 (AABR) - 5,775 (AAR 0.67) masl. The ELA depression between those two phases is 560-680 m, which implies a temperature decrease of 3.5-4.4 °C.
The comparison of the different ELA reconstruction techniques applied in this study suggests that THAR (0.57), AAR (0.67) or AABR are the most consistent procedures for HualcaHualca glaciers, while THAR with ratios of 0.2, 0.4 and 0.5 tends to underestimate the ELA position. Research funded by the Cryocrisis project (CGL2012-35858), Government of Spain.
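For reference, the AAR method reduces to a quantile of the glacier's area-elevation distribution: the ELA is the elevation below which the ablation fraction (1 minus the AAR) of the area lies. A toy version on equal-area elevation samples (the study worked from mapped glacier outlines, not samples like these):

```python
import numpy as np

def ela_aar(elevations, aar=0.67):
    """ELA by the Accumulation Area Ratio method: the elevation
    separating the lowest (1 - aar) fraction of the glacier area
    (ablation) from the highest aar fraction (accumulation)."""
    z = np.sort(np.asarray(elevations, dtype=float))
    idx = int(round((1.0 - aar) * (len(z) - 1)))
    return z[idx]

# Uniform hypsometry between 4000 and 6000 m: AAR = 0.5 puts the ELA mid-range
ela = ela_aar(np.linspace(4000.0, 6000.0, 101), aar=0.5)
```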
Nonequilibrium air radiation (Nequair) program: User's manual
NASA Technical Reports Server (NTRS)
Park, C.
1985-01-01
A supplement to the data relating to the calculation of nonequilibrium radiation in flight regimes of aeroassisted orbital transfer vehicles contains the listings of the computer code NEQAIR (Nonequilibrium Air Radiation), its primary input data, and an explanation of the user-supplied input variables. The user-supplied input variables are the thermodynamic variables of air at a given point, i.e., number densities of various chemical species, translational temperatures of heavy particles and electrons, and vibrational temperature. These thermodynamic variables do not necessarily have to be in thermodynamic equilibrium. The code calculates emission and absorption characteristics of air under these given conditions.