All-Particle Multiscale Computation of Hypersonic Rarefied Flow
NASA Astrophysics Data System (ADS)
Jun, E.; Burt, J. M.; Boyd, I. D.
2011-05-01
This study examines a new hybrid particle scheme as an alternative means of multiscale flow simulation. The hybrid particle scheme employs the direct simulation Monte Carlo (DSMC) method in rarefied flow regions and the low diffusion (LD) particle method in continuum flow regions. The numerical procedures of the low diffusion particle method are implemented within an existing DSMC algorithm. The performance of the LD-DSMC approach is assessed by studying Mach 10 nitrogen flow over a sphere at a global Knudsen number of 0.002. The hybrid scheme results show good overall agreement with results from standard DSMC and CFD computations. Subcell procedures are utilized to improve computational efficiency and reduce sensitivity to DSMC cell size in the hybrid scheme. This makes it possible to perform the LD-DSMC simulation on a much coarser mesh, which leads to a significant reduction in computation time.
Automatic mesh refinement and parallel load balancing for Fokker-Planck-DSMC algorithm
NASA Astrophysics Data System (ADS)
Küchlin, Stephan; Jenny, Patrick
2018-06-01
Recently, a parallel Fokker-Planck-DSMC algorithm for rarefied gas flow simulation in complex domains at all Knudsen numbers was developed by the authors. Fokker-Planck-DSMC (FP-DSMC) is an augmentation of the classical DSMC algorithm that mitigates the computational-cost deficiencies of pure DSMC in the near-continuum regime. At each time step, based on a local Knudsen number criterion, the discrete DSMC collision operator is dynamically switched to the Fokker-Planck operator, which is based on the integration of continuous stochastic processes in time and has a fixed computational cost per particle, rather than per collision. In this contribution, we present an extension of the previous implementation with automatic local mesh refinement and parallel load balancing. In particular, we show how the properties of discrete approximations to space-filling curves enable an efficient implementation. Exemplary numerical studies highlight the capabilities of the new code.
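The per-cell operator switch described in this abstract has a simple generic structure. The sketch below is a minimal illustration, not the authors' implementation: the Cell fields, the stub operators, and the threshold value knSwitch are assumptions made for the example; the paper's actual criterion is a local Knudsen number test evaluated per cell per time step.

```cpp
#include <vector>

// Illustrative cell record; fields are assumptions for this sketch.
struct Cell {
    double meanFreePath;   // estimated from local density and temperature
    double charLength;     // e.g. cell size or a local gradient length
};

// Stubs standing in for the two collision operators.
void dsmcCollisions(Cell&) { /* binary collisions: cost scales with collisions */ }
void fokkerPlanckUpdate(Cell&) { /* continuous stochastic update: cost per particle */ }

// Switch operators cell by cell on a local Knudsen number criterion.
void collideAllCells(std::vector<Cell>& cells, double knSwitch = 0.05) {
    for (Cell& c : cells) {
        const double knLocal = c.meanFreePath / c.charLength;
        if (knLocal > knSwitch)
            dsmcCollisions(c);       // rarefied cell: resolve binary collisions
        else
            fokkerPlanckUpdate(c);   // near-continuum cell: FP operator
    }
}
```

Because the FP branch costs a fixed amount per particle, dense cells no longer dominate the collision step, which is the efficiency argument made above.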
dsmcFoam+: An OpenFOAM based direct simulation Monte Carlo solver
NASA Astrophysics Data System (ADS)
White, C.; Borg, M. K.; Scanlon, T. J.; Longshaw, S. M.; John, B.; Emerson, D. R.; Reese, J. M.
2018-03-01
dsmcFoam+ is a direct simulation Monte Carlo (DSMC) solver for rarefied gas dynamics, implemented within the OpenFOAM software framework, and parallelised with MPI. It is open-source and released under the GNU General Public License in a publicly available software repository that includes detailed documentation and tutorial DSMC gas flow cases. This release of the code includes many features not found in standard dsmcFoam, such as molecular vibrational and electronic energy modes, chemical reactions, and subsonic pressure boundary conditions. Since dsmcFoam+ is designed entirely within OpenFOAM's C++ object-oriented framework, it benefits from a number of key features: the code emphasises extensibility and flexibility so it is aimed first and foremost as a research tool for DSMC, allowing new models and test cases to be developed and tested rapidly. All DSMC cases are as straightforward as setting up any standard OpenFOAM case, as dsmcFoam+ relies upon the standard OpenFOAM dictionary based directory structure. This ensures that useful pre- and post-processing capabilities provided by OpenFOAM remain available even though the fully Lagrangian nature of a DSMC simulation is not typical of most OpenFOAM applications. We show that dsmcFoam+ compares well to other well-known DSMC codes and to analytical solutions in terms of benchmark results.
Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2014-01-01
Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well-established, based on Bird's 1994 algorithms written in Fortran 77, and has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Küchlin, Stephan, E-mail: kuechlin@ifd.mavt.ethz.ch; Jenny, Patrick
2017-01-01
A major challenge for the conventional Direct Simulation Monte Carlo (DSMC) technique lies in the fact that its computational cost becomes prohibitive in the near continuum regime, where the Knudsen number (Kn), characterizing the degree of rarefaction, becomes small. In contrast, the Fokker-Planck (FP) based particle Monte Carlo scheme allows for computationally efficient simulations of rarefied gas flows in the low and intermediate Kn regime. The Fokker-Planck collision operator, instead of performing the binary collisions employed by the DSMC method, integrates continuous stochastic processes for the phase space evolution in time. This allows for time step and grid cell sizes larger than the respective collisional scales required by DSMC. Dynamically switching between the FP and the DSMC collision operators in each computational cell is the basis of the combined FP-DSMC method, which has been proven successful in simulating flows covering the whole Kn range. Until recently, this algorithm had only been applied to two-dimensional test cases. In this contribution, we present the first general purpose implementation of the combined FP-DSMC method. Utilizing both shared- and distributed-memory parallelization, this implementation provides the capability for simulations involving many particles and complex geometries by exploiting state of the art computer cluster technologies.
State resolved vibrational relaxation modeling for strongly nonequilibrium flows
NASA Astrophysics Data System (ADS)
Boyd, Iain D.; Josyula, Eswar
2011-05-01
Vibrational relaxation is an important physical process in hypersonic flows. Activation of the vibrational mode affects the fundamental thermodynamic properties and finite rate relaxation can reduce the degree of dissociation of a gas. Low fidelity models of vibrational activation employ a relaxation time to capture the process at a macroscopic level. High fidelity, state-resolved models have been developed for use in continuum gas dynamics simulations based on computational fluid dynamics (CFD). By comparison, such models are not as common for use with the direct simulation Monte Carlo (DSMC) method. In this study, a high fidelity, state-resolved vibrational relaxation model is developed for the DSMC technique. The model is based on the forced harmonic oscillator approach in which multi-quantum transitions may become dominant at high temperature. Results obtained for integrated rate coefficients from the DSMC model are consistent with the corresponding CFD model. Comparison of relaxation results obtained with the high-fidelity DSMC model shows significantly less excitation of upper vibrational levels in comparison to the standard, lower fidelity DSMC vibrational relaxation model. Application of the new DSMC model to a Mach 7 normal shock wave in carbon monoxide provides better agreement with experimental measurements than the standard DSMC relaxation model.
DREAM: An Efficient Methodology for DSMC Simulation of Unsteady Processes
NASA Astrophysics Data System (ADS)
Cave, H. M.; Jermy, M. C.; Tseng, K. C.; Wu, J. S.
2008-12-01
A technique called the DSMC Rapid Ensemble Averaging Method (DREAM) for reducing the statistical scatter in the output from unsteady DSMC simulations is introduced. During post-processing by DREAM, the DSMC algorithm is re-run multiple times over a short period before the temporal point of interest thus building up a combination of time- and ensemble-averaged sampling data. The particle data is regenerated several mean collision times before the output time using the particle data generated during the original DSMC run. This methodology conserves the original phase space data from the DSMC run and so is suitable for reducing the statistical scatter in highly non-equilibrium flows. In this paper, the DREAM-II method is investigated and verified in detail. Propagating shock waves at high Mach numbers (Mach 8 and 12) are simulated using a parallel DSMC code (PDSC) and then post-processed using DREAM. The ability of DREAM to obtain the correct particle velocity distribution in the shock structure is demonstrated and the reduction of statistical scatter in the output macroscopic properties is measured. DREAM is also used to reduce the statistical scatter in the results from the interaction of a Mach 4 shock with a square cavity and for the interaction of a Mach 12 shock on a wedge in a channel.
Comparison of DAC and MONACO DSMC Codes with Flat Plate Simulation
NASA Technical Reports Server (NTRS)
Padilla, Jose F.
2010-01-01
Various implementations of the direct simulation Monte Carlo (DSMC) method exist in academia, government and industry. By comparing implementations, deficiencies and merits of each can be discovered. This document reports comparisons between DSMC Analysis Code (DAC) and MONACO. DAC is NASA's standard DSMC production code and MONACO is a research DSMC code developed in academia. These codes have various differences; in particular, they employ distinct computational grid definitions. In this study, DAC and MONACO are compared by having each simulate a blunted flat plate wind tunnel test, using an identical volume mesh. Simulation expense and DSMC metrics are compared. In addition, flow results are compared with available laboratory data. Overall, this study revealed that both codes, excluding grid adaptation, performed similarly. For parallel processing, DAC was generally more efficient. As expected, code accuracy was mainly dependent on physical models employed.
Dynamic load balance scheme for the DSMC algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jin; Geng, Xiangren; Jiang, Dingwu
The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been applied to a wide range of rarefied flow problems over the past 40 years. While DSMC is suitable for parallel implementation on powerful multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined to a large extent by the total number of simulator particles upon it. Since most flows are impulsively started from an initial particle distribution that is quite different from the steady state, the total number of simulator particles changes dramatically, and a load balance based upon the initial distribution of particles breaks down as the steady state of the flow is reached. The load imbalance and huge computational cost of DSMC have limited its application to rarefied or simple transitional flows. In this paper, by taking advantage of METIS, a software package for partitioning unstructured graphs, and taking the total number of simulator particles in each cell as the weight information, repartitioning based upon the principle that each processor handles approximately an equal total number of simulator particles has been achieved. The computation pauses several times to renew the particle counts in each processor and repartition the whole domain, so that load balance across the processor array holds for the duration of the computation. Parallel efficiency is thereby improved effectively. The benchmark case of a cylinder submerged in hypersonic flow has been simulated numerically, as has hypersonic flow past a complex wing-body configuration. The results show that, for both cases, the computational time can be reduced by about 50%.
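A hedged sketch of the rebalancing trigger this abstract describes, with per-cell simulator-particle counts as partition weights. The function names, the 20% tolerance, and the repartition() stub (which would wrap a graph-partitioning call such as METIS k-way partitioning) are assumptions for illustration.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Placeholder for the actual repartitioning step, e.g. a METIS k-way
// partition of the cell graph with vertex weights set to particle counts,
// followed by cell migration between ranks.
void repartition(const std::vector<int>& particlesPerCell) {
    (void)particlesPerCell; // graph-partitioner call would go here
}

// Pause and repartition when the heaviest rank drifts too far from the mean.
void maybeRebalance(const std::vector<int>& particlesPerRank,
                    const std::vector<int>& particlesPerCell,
                    double tolerance = 0.2) {
    const double total = std::accumulate(particlesPerRank.begin(),
                                         particlesPerRank.end(), 0.0);
    const double mean = total / particlesPerRank.size();
    const double peak = *std::max_element(particlesPerRank.begin(),
                                          particlesPerRank.end());
    if (peak > (1.0 + tolerance) * mean)
        repartition(particlesPerCell);
}
```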
Collision partner selection schemes in DSMC: From micro/nano flows to hypersonic flows
NASA Astrophysics Data System (ADS)
Roohi, Ehsan; Stefanov, Stefan
2016-10-01
The motivation of this review paper is to present a detailed summary of different collision models developed in the framework of the direct simulation Monte Carlo (DSMC) method. The emphasis is put on a newly developed collision model, i.e., the Simplified Bernoulli trial (SBT), which permits efficient low-memory simulation of rarefied gas flows. The paper starts with a brief review of the governing equations of rarefied gas dynamics, including the Boltzmann and Kac master equations, and reiterates that the linear Kac equation reduces to a non-linear Boltzmann equation under the assumption of molecular chaos. An introduction to the DSMC method is provided, and principles of collision algorithms in the DSMC are discussed. A distinction is made between those collision models that are based on classical kinetic theory (time counter, no time counter (NTC), and nearest neighbor (NN)) and the other class that can be derived mathematically from the Kac master equation (pseudo-Poisson process, ballot box, majorant frequency, null collision, Bernoulli trials scheme and its variants). To provide a deeper insight, the derivation of both classes of collision models, either from the principles of the kinetic theory or from the Kac master equation, is provided with sufficient detail. Some discussion of the importance of subcells in the DSMC collision procedure is also provided, and different types of subcells are presented. The paper then focuses on the simplified version of the Bernoulli trials algorithm (SBT) and presents a detailed summary of the validation of the SBT family of collision schemes (SBT on transient adaptive subcells: SBT-TAS, and intelligent SBT: ISBT) in a broad spectrum of rarefied gas-flow test cases, ranging from low speed, internal micro and nano flows to external hypersonic flow, emphasizing, first, the accuracy of these new collision models and, second, demonstrating that the SBT family of schemes, compared to other conventional and recent collision models, requires a smaller number of particles per cell to obtain sufficiently accurate solutions.
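For orientation, the sketch below shows the classical no-time-counter (NTC) selection that the review contrasts with the SBT family, assuming the common textbook form of the candidate-pair count; variable names are illustrative.

```cpp
#include <random>

// Number of candidate pairs per cell per step in Bird's NTC scheme:
// 0.5 * N * (N - 1) * F_num * (sigma_T * c_r)_max * dt / V_cell,
// where N is the particle count in the cell, F_num the real-to-simulated
// ratio, and sigCrMax a running maximum of sigma_T * c_r.
long ntcCandidatePairs(long nPart, double fNum, double dt,
                       double cellVol, double sigCrMax) {
    return static_cast<long>(0.5 * nPart * (nPart - 1) * fNum
                             * sigCrMax * dt / cellVol);
}

// Each candidate pair is then accepted by acceptance-rejection.
bool acceptPair(double sigCr, double sigCrMax, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng) < sigCr / sigCrMax;
}
```

The quadratic dependence on N is one reason NTC-type schemes want many particles per cell, which relates to the smaller particle-per-cell requirement of the SBT family emphasized above.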
Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2008-01-01
Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems, building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules, since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in cell volume. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that the definition of the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to trace the molecules through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and the axially symmetric nature of the problem.
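A minimal sketch of continuous, linear radial weighting of the kind described above, with an assumed weight function W(r) = 1 + r/rW; the probabilistic handling of the fractional clone count mirrors the duplication the abstract refers to, but all names and the exact form of W are illustrative.

```cpp
#include <random>

// Weight grows linearly with radius: a simulated molecule near the axis
// represents fewer real molecules. rW is an illustrative reference radius.
double radialWeight(double r, double rW) { return 1.0 + r / rW; }

// When a molecule moves radially, the expected number of copies is
// W(r_old)/W(r_new); the fractional part is realized probabilistically.
int copiesAfterMove(double rOld, double rNew, double rW, std::mt19937& rng) {
    const double expected = radialWeight(rOld, rW) / radialWeight(rNew, rW);
    const int whole = static_cast<int>(expected);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return whole + (u(rng) < expected - whole ? 1 : 0);
}
```

A return value of 0 deletes the molecule, 1 keeps it, and 2 or more feeds the duplication buffer mentioned above.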
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000–20,000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
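To make concrete where the calibrated parameters enter, here is the standard VHS total cross section in the temperature-referenced form given by Bird (1994), coded as a sketch; dRef, tRef, and the viscosity exponent omega are the quantities being fitted above, while the VSS model adds a scattering exponent alpha that changes the deflection-angle distribution rather than this total cross section.

```cpp
#include <cmath>

// VHS total cross section as a function of relative speed cr, in the
// standard Bird (1994) form: the effective diameter falls off with cr as a
// power law set by omega. dRef and tRef are the reference diameter and
// temperature; mr is the reduced mass of the colliding pair. std::tgamma
// supplies the Gamma-function normalization.
double vhsCrossSection(double cr, double dRef, double tRef,
                       double omega, double mr) {
    const double kB = 1.380649e-23; // Boltzmann constant, J/K
    const double pi = 3.14159265358979323846;
    const double d2 = dRef * dRef
        * std::pow(2.0 * kB * tRef / (mr * cr * cr), omega - 0.5)
        / std::tgamma(2.5 - omega);
    return pi * d2; // total cross section, m^2
}
```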
An Object-Oriented Serial DSMC Simulation Package
NASA Astrophysics Data System (ADS)
Liu, Hongli; Cai, Chunpei
2011-05-01
A newly developed three-dimensional direct simulation Monte Carlo (DSMC) simulation package, named GRASP ("Generalized Rarefied gAs Simulation Package"), is reported in this paper. This package utilizes the concept of simulation engine, many C++ features and software design patterns. The package has an open architecture which can benefit further development and maintenance of the code. In order to reduce the engineering time for three-dimensional models, a hybrid grid scheme, combined with a flexible data structure compiled by C++ language, are implemented in this package. This scheme utilizes a local data structure based on the computational cell to achieve high performance on workstation processors. This data structure allows the DSMC algorithm to be very efficiently parallelized with domain decomposition and it provides much flexibility in terms of grid types. This package can utilize traditional structured, unstructured or hybrid grids within the framework of a single code to model arbitrarily complex geometries and to simulate rarefied gas flows. Benchmark test cases indicate that this package has satisfactory accuracy for complex rarefied gas flows.
N-S/DSMC hybrid simulation of hypersonic flow over blunt body including wakes
NASA Astrophysics Data System (ADS)
Li, Zhonghua; Li, Zhihui; Li, Haiyan; Yang, Yanguang; Jiang, Xinyu
2014-12-01
A hybrid N-S/DSMC method is presented and applied to solve three-dimensional hypersonic transitional flows by employing the MPC (Modular Particle-Continuum) technique based on the N-S and DSMC methods. A sub-relaxation technique is adopted to handle information transfer between the N-S and DSMC regions. The hypersonic flows over a 70-deg spherically blunted cone at different Knudsen numbers are simulated using the CFD, DSMC, and hybrid N-S/DSMC methods. The present computations are found to be in good agreement with DSMC and experimental results. The present method provides an efficient way to predict hypersonic aerodynamics in the near-continuum transitional flow regime.
Parallel DSMC Solution of Three-Dimensional Flow Over a Finite Flat Plate
NASA Technical Reports Server (NTRS)
Nance, Robert P.; Wilmoth, Richard G.; Moon, Bongki; Hassan, H. A.; Saltz, Joel
1994-01-01
This paper describes a parallel implementation of the direct simulation Monte Carlo (DSMC) method. Runtime library support is used for scheduling and execution of communication between nodes, and domain decomposition is performed dynamically to maintain a good load balance. Performance tests are conducted using the code to evaluate various remapping and remapping-interval policies, and it is shown that a one-dimensional chain-partitioning method works best for the problems considered. The parallel code is then used to simulate the Mach 20 nitrogen flow over a finite-thickness flat plate. It is shown that the parallel algorithm produces results which compare well with experimental data. Moreover, it yields significantly faster execution times than the scalar code, as well as very good load-balance characteristics.
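The one-dimensional chain-partitioning policy found to work best above can be stated compactly: keep cells in a fixed 1-D order and choose contiguous cuts that minimize the heaviest processor's load. The sketch below does this by binary search on the bottleneck load with a greedy feasibility sweep; using particle counts per column as the workload is an assumption consistent with the text, and all names are illustrative.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Can `work` be split into at most `parts` contiguous chains, each of total
// load <= cap? Greedy sweep: open a new chain whenever the cap would burst.
static bool feasible(const std::vector<long>& work, int parts, long cap) {
    int used = 1;
    long load = 0;
    for (long w : work) {
        if (w > cap) return false;
        if (load + w > cap) { ++used; load = 0; }
        load += w;
    }
    return used <= parts;
}

// Smallest achievable bottleneck load, found by binary search on the cap.
long chainPartitionBottleneck(const std::vector<long>& work, int parts) {
    long lo = *std::max_element(work.begin(), work.end());
    long hi = std::accumulate(work.begin(), work.end(), 0L);
    while (lo < hi) {
        long mid = lo + (hi - lo) / 2;
        if (feasible(work, parts, mid)) hi = mid; else lo = mid + 1;
    }
    return lo;
}
```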
NASA Astrophysics Data System (ADS)
Borges Sebastião, Israel; Kulakhmetov, Marat; Alexeenko, Alina
2017-01-01
This work evaluates high-fidelity vibrational-translational (VT) energy relaxation and dissociation models for pure O2 normal shockwave simulations with the direct simulation Monte Carlo (DSMC) method. The O2-O collisions are described using ab initio state-specific relaxation and dissociation models. The Macheret-Fridman (MF) dissociation model is adapted to the DSMC framework by modifying the standard implementation of the total collision energy (TCE) model. The O2-O2 dissociation is modeled with this TCE+MF approach, which is calibrated with O2-O ab initio data and experimental equilibrium dissociation rates. The O2-O2 vibrational relaxation is modeled via the Larsen-Borgnakke model, calibrated to experimental VT rates. All the present results are compared to experimental data and previous calculations available in the literature. It is found that, in general, the ab initio dissociation model is better than the TCE model at matching the shock experiments. Therefore, when available, efficient ab initio models are preferred over phenomenological models. We also show that the proposed TCE+MF formulation can be used to improve the standard TCE model results when ab initio data are limited or unavailable.
Investigation on a coupled CFD/DSMC method for continuum-rarefied flows
NASA Astrophysics Data System (ADS)
Tang, Zhenyu; He, Bijiao; Cai, Guobiao
2012-11-01
The purpose of the present work is to investigate the coupled CFD/DSMC method using the existing CFD and DSMC codes developed by the authors. The interface between the continuum and particle regions is determined by the gradient-length local Knudsen number. A coupling scheme combining both state-based and flux-based coupling methods is proposed in the current study. Overlapping grids are established between the different grid systems of CFD and DSMC codes. A hypersonic flow over a 2D cylinder has been simulated using the present coupled method. Comparison has been made between the results obtained from both methods, which shows that the coupled CFD/DSMC method can achieve the same precision as the pure DSMC method and obtain higher computational efficiency.
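A small sketch of the interface criterion named above, the gradient-length local Knudsen number Kn_GLL = lambda |dQ/dx| / Q, evaluated here for density on a uniform 1-D grid with central differences; the commonly quoted breakdown threshold of about 0.05, and all names, are illustrative assumptions.

```cpp
#include <cmath>
#include <vector>

// Gradient-length local Knudsen number for density rho on a uniform grid.
// Cells where kn[i] exceeds a breakdown threshold (commonly ~0.05) would be
// assigned to the particle (DSMC) region; the rest stay with the CFD solver.
std::vector<double> knGLL(const std::vector<double>& rho,
                          const std::vector<double>& lambda, double dx) {
    std::vector<double> kn(rho.size(), 0.0);
    for (std::size_t i = 1; i + 1 < rho.size(); ++i) {
        const double grad = (rho[i + 1] - rho[i - 1]) / (2.0 * dx);
        kn[i] = lambda[i] * std::abs(grad) / rho[i];
    }
    return kn;
}
```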
DSMC Simulation and Experimental Validation of Shock Interaction in Hypersonic Low Density Flow
2014-01-01
Direct simulation Monte Carlo (DSMC) of shock interaction in hypersonic low density flow is developed. Three molecular collision models, including hard sphere (HS), variable hard sphere (VHS), and variable soft sphere (VSS), are employed in the DSMC study. Simulations of double-cone and Edney's type IV hypersonic shock interactions in low density flow are performed, and comparisons between DSMC and experimental data are conducted. Investigation of the double-cone hypersonic flow shows that all three collision models can predict the trend of the pressure coefficient and the Stanton number. The HS model shows the best agreement between DSMC simulation and experiment among the three collision models. The agreement between DSMC and experiment is also generally good for the HS and VHS models in Edney's type IV shock interaction; however, the VSS model fails to achieve it. Both the double-cone and Edney's type IV shock interaction simulations show that the DSMC errors depend on the Knudsen number and the model employed for intermolecular interaction. As the Knudsen number increases, the DSMC error decreases. The error is smallest for HS compared with the VHS and VSS models. When the Knudsen number is on the order of 10^-4, the DSMC errors for the pressure coefficient, the Stanton number, and the scale of the interaction region are all within 10%.
Restricted Collision List method for faster Direct Simulation Monte-Carlo (DSMC) collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macrossan, Michael N., E-mail: m.macrossan@uq.edu.au
The ‘Restricted Collision List’ (RCL) method for speeding up the calculation of DSMC Variable Soft Sphere collisions, with Borgnakke–Larsen (BL) energy exchange, is presented. The method cuts down considerably on the number of random collision parameters which must be calculated (deflection and azimuthal angles, and the BL energy exchange factors). A relatively short list of these parameters is generated and the parameters required in any cell are selected from this list. The list is regenerated at intervals approximately equal to the smallest mean collision time in the flow, and the chance of any particle re-using the same collision parameters in two successive collisions is negligible. The results using this method are indistinguishable from those obtained with standard DSMC. The CPU time saving depends on how much of a DSMC calculation is devoted to collisions and how much is devoted to other tasks, such as moving particles and calculating particle interactions with flow boundaries. For 1-dimensional calculations of flow in a tube, the new method saves 20% of the CPU time per collision for VSS scattering with no energy exchange. With RCL applied to rotational energy exchange, the CPU saving can be greater; for small values of the rotational collision number, for which most collisions involve some rotational energy exchange, the CPU time may be reduced by 50% or more.
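The mechanism is easy to sketch: draw a short table of collision parameters once, then index into it per collision. The VSS deflection sampling below uses the standard cos(chi) = 2 r^(1/alpha) - 1 form; the list length, the omission of the Borgnakke-Larsen factors, and all names are simplifications of the paper's method, made for illustration.

```cpp
#include <cmath>
#include <random>
#include <vector>

// One entry of pre-drawn collision parameters (the paper's list also holds
// BL energy exchange factors, omitted here).
struct CollisionParams { double cosChi, phi; };

// Build the restricted list once; regenerate it roughly every smallest mean
// collision time so parameter re-use across successive collisions is rare.
std::vector<CollisionParams> buildList(std::size_t n, double alphaVSS,
                                       std::mt19937& rng) {
    const double pi = 3.14159265358979323846;
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::vector<CollisionParams> list(n);
    for (auto& p : list) {
        p.cosChi = 2.0 * std::pow(u(rng), 1.0 / alphaVSS) - 1.0; // VSS deflection
        p.phi = 2.0 * pi * u(rng);                               // azimuthal angle
    }
    return list;
}

// Per collision: pick a random entry instead of drawing fresh deviates.
const CollisionParams& pick(const std::vector<CollisionParams>& list,
                            std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> idx(0, list.size() - 1);
    return list[idx(rng)];
}
```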
Conservative bin-to-bin fractional collisions
NASA Astrophysics Data System (ADS)
Martin, Robert
2016-11-01
Particle methods such as direct simulation Monte Carlo (DSMC) and particle-in-cell (PIC) are commonly used to model rarefied kinetic flows for engineering applications because of their ability to efficiently capture non-equilibrium behavior. The primary drawback of these methods is their poor convergence properties, due to the stochastic nature of the methods, which typically rely heavily on high degrees of non-equilibrium and time averaging to compensate for poor signal to noise ratios. For standard implementations, each computational particle represents many physical particles, which further exacerbates statistical noise problems for flows with large species density variation, such as encountered in flow expansions and chemical reactions. The stochastic weighted particle method (SWPM) introduced by Rjasanow and Wagner overcomes this difficulty by allowing the ratio of real to computational particles to vary on a per particle basis throughout the flow. The DSMC procedure must also be slightly modified to properly sample the Boltzmann collision integral, accounting for the variable particle weights and avoiding the creation of additional particles with negative weight. In this work, the SWPM, with the modifications necessary to incorporate the variable hard sphere (VHS) collision cross-section model commonly used in engineering applications, is first implemented in an existing engineering code, the Thermophysics Universal Research Framework. The results and computational efficiency are compared for a few simple test cases against a standard validated implementation of the DSMC method, using the adapted SWPM/VHS collisions with an octree-based conservative phase space reconstruction. The SWPM is then further extended to combine the collision and phase space reconstruction into a single step, which avoids the need to create additional computational particles only to destroy them again during the particle merge. This is particularly helpful when oversampling the collision integral compared to the standard DSMC method. However, it is found that the more frequent phase space reconstructions can cause added numerical thermalization at low particle-per-cell counts due to the coarseness of the octree used. Nevertheless, the methods are expected to be of much greater utility in transient expansion flows and chemical reactions in the future.
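The weight-handling step that distinguishes SWPM from standard DSMC can be sketched as follows, under the usual convention that when partners carry unequal weights only a matching-weight fragment of the heavier particle collides and the remainder keeps its old velocity; the structure and names are illustrative, and the velocity-update stub stands in for ordinary VHS/VSS collision mechanics.

```cpp
#include <utility>

// Illustrative weighted simulator particle.
struct Particle { double w; double v[3]; };

// Stub standing in for the usual post-collision velocity update.
void collideVelocities(Particle&, Particle&) { /* VHS/VSS mechanics */ }

// Collide two particles of unequal weight; returns the non-colliding
// remainder of the heavier particle (weight zero if w1 == w2).
Particle weightedCollide(Particle& p1, Particle& p2) {
    if (p1.w < p2.w) std::swap(p1, p2);  // make p1 the heavier partner
    Particle rest = p1;
    rest.w = p1.w - p2.w;                // remainder keeps old velocity
    p1.w = p2.w;                         // colliding fragment matches partner
    collideVelocities(p1, p2);           // update fragment and partner
    return rest;
}
```

The extra remainder particle returned here is exactly the population growth that motivates the merged collision/reconstruction step described above.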
NASA Technical Reports Server (NTRS)
Macrossan, M. N.
1995-01-01
The direct simulation Monte Carlo (DSMC) method is the established technique for the simulation of rarefied gas flows. In some flows of engineering interest, such as occur for aero-braking spacecraft in the upper atmosphere, DSMC can become prohibitively expensive in CPU time because some regions of the flow, particularly on the windward side of blunt bodies, become collision dominated. As an alternative to using a hybrid DSMC and continuum gas solver (Euler or Navier-Stokes solver), this work is aimed at making the particle simulation method efficient in the high density regions of the flow. A high density, infinite collision rate limit of DSMC, the Equilibrium Particle Simulation method (EPSM), was proposed some 15 years ago. EPSM is developed here for the flow of a gas consisting of many different species of molecules and is shown to be computationally efficient (compared to DSMC) for high collision rate flows. It thus offers great potential as part of a hybrid DSMC/EPSM code which could handle flows in the transition regime between rarefied and fully continuum flows. As a first step towards this goal, a pure EPSM code is described. The next step of combining DSMC and EPSM is not attempted here but should be straightforward. EPSM and DSMC are applied to Taylor-Couette flow (Kn = 0.02 and 0.0133, S(omega) = 3). Toroidal vortices develop for both methods but some differences are found, as might be expected for the given flow conditions. EPSM appears to be less sensitive to the sequence of random numbers used in the simulation than is DSMC and may also be more dissipative. The question of the origin and the magnitude of the dissipation in EPSM is addressed. It is suggested that this analysis is also relevant to DSMC when the usual accuracy requirements on the cell size and decoupling time step are relaxed in the interests of computational efficiency.
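The EPSM limit is simple to sketch for a single species: rather than computing any collisions, every velocity in a cell is redrawn from a Maxwellian and then shifted and rescaled so the cell's momentum and peculiar kinetic energy are conserved exactly. This is a minimal single-species, unit-mass illustration, not the multi-species method developed in the paper; names are assumptions.

```cpp
#include <array>
#include <cmath>
#include <random>
#include <vector>

// Replace all velocities in a cell by equilibrium samples while exactly
// preserving the cell's mean velocity and peculiar kinetic energy.
void epsmRelaxCell(std::vector<std::array<double, 3>>& vel, std::mt19937& rng) {
    const std::size_t n = vel.size();
    if (n < 2) return;
    // Cell moments before resampling (unit mass).
    std::array<double, 3> mean{0, 0, 0};
    double e = 0.0;
    for (const auto& v : vel)
        for (int k = 0; k < 3; ++k) mean[k] += v[k] / n;
    for (const auto& v : vel)
        for (int k = 0; k < 3; ++k) { const double c = v[k] - mean[k]; e += c * c; }
    // Draw fresh Gaussian samples, then enforce the same moments exactly.
    std::normal_distribution<double> g(0.0, 1.0);
    std::array<double, 3> newMean{0, 0, 0};
    for (auto& v : vel)
        for (int k = 0; k < 3; ++k) { v[k] = g(rng); newMean[k] += v[k] / n; }
    double eNew = 0.0;
    for (const auto& v : vel)
        for (int k = 0; k < 3; ++k) { const double c = v[k] - newMean[k]; eNew += c * c; }
    const double scale = std::sqrt(e / eNew);
    for (auto& v : vel)
        for (int k = 0; k < 3; ++k) v[k] = mean[k] + scale * (v[k] - newMean[k]);
}
```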
DSMC Shock Simulation of Saturn Entry Probe Conditions
NASA Technical Reports Server (NTRS)
Higdon, Kyle J.; Cruden, Brett A.; Brandis, Aaron M.; Liechty, Derek S.; Goldstein, David B.; Varghese, Philip L.
2016-01-01
This work describes the direct simulation Monte Carlo (DSMC) investigation of Saturn entry probe scenarios and the influence of non-equilibrium phenomena on Saturn entry conditions. The DSMC simulations coincide with rarefied hypersonic shock tube experiments of a hydrogen-helium mixture performed in the Electric Arc Shock Tube (EAST) at the NASA Ames Research Center. The DSMC simulations are post-processed through the NEQAIR line-by-line radiation code to compare directly to the experimental results. Improved collision cross-sections, inelastic collision parameters, and reaction rates are determined for a high temperature DSMC simulation of a 7-species H2-He mixture and an electronic excitation model is implemented in the DSMC code. Simulation results for 27.8 and 27.4 km/s shock waves are obtained at 0.2 and 0.1 Torr, respectively, and compared to measured spectra in the VUV, UV, visible, and IR ranges. These results confirm the persistence of non-equilibrium for several centimeters behind the shock and the diffusion of atomic hydrogen upstream of the shock wave. Although the magnitude of the radiance did not match experiments and an ionization inductance period was not observed in the simulations, the discrepancies indicated where improvements are needed in the DSMC and NEQAIR models.
DSMC Evaluation of the Navier-Stokes Shear Viscosity of a Granular Fluid
2005-07-13
transport coefficients of the HCS have been measured from DSMC by using the associated Green – Kubo formulas [8]. In the case of a system heated by the action...DSMC evaluation of the Navier–Stokes shear viscosity of a granular fluid José María Montanero∗, Andrés Santos† and Vicente Garzó† ∗Departamento de...proposed to measure the Navier–Stokes shear viscosity in a granular fluid described by the Enskog equation. The method is implemented in DSMC
Pauley, Tim; Gargaro, Judith; Chenard, Glen; Cavanagh, Helen; McKay, Sandra M
2016-01-01
This study evaluated paraprofessional-led diabetes self-management coaching (DSMC) among 94 clients with type 2 diabetes recruited from a Community Care Access Centre in Ontario, Canada. Subjects were randomized to standard care or standard care plus coaching. Measures included the Diabetes Self-Efficacy Scale (DSES), Insulin Management Diabetes Self-Efficacy Scale (IMDSES), and Hospital Anxiety and Depression Scale (HADS). Both groups showed improvement in DSES (6.6 ± 1.5 vs. 7.2 ± 1.5, p < .001) and IMDSES (113.5 ± 20.6 vs. 125.7 ± 22.3, p < .001); there were no between-groups differences. There were no between-groups differences in anxiety or depression scores, or in anxiety or depression categories, at baseline, postintervention, or follow-up (p > .05 for all). While all subjects demonstrated significant improvements in self-efficacy measures, there is no evidence to support paraprofessional-led DSMC as an intervention which conveys additional benefits over standard care.
Vectorization of a particle code used in the simulation of rarefied hypersonic flow
NASA Technical Reports Server (NTRS)
Baganoff, D.
1990-01-01
A limitation of the direct simulation Monte Carlo (DSMC) method is that it does not allow efficient use of the vector architectures that predominate in current supercomputers. Consequently, the problems that can be handled are limited to those of one- and two-dimensional flows. This work focuses on a reformulation of the DSMC method with the objective of designing a procedure that is optimized for the vector architectures found on machines such as the Cray-2. In addition, it focuses on finding a better balance between algorithmic complexity and the total number of particles employed in a simulation so that the overall performance of a particle simulation scheme can be greatly improved. Simulations of the flow about a 3D blunt body are performed with 10^7 particles and 4 × 10^5 mesh cells. Good statistics are obtained with time averaging over 800 time steps using 4.5 h of Cray-2 single-processor CPU time.
DSMC simulations of shock tube experiments for the dissociation rate of nitrogen
NASA Astrophysics Data System (ADS)
Bird, G. A.
2012-11-01
The DSMC method has been used to simulate the flows associated with several experiments that led to predictions of the dissociation rate of nitrogen. One involved optical interferometry to determine the density behind a strong shock wave, and the other involved measurement of the shock tube end-wall pressure after the reflection of a similar shock wave. DSMC calculations for the un-reflected shock wave were made with the older TCE model, which converts rate coefficients to reaction cross-sections; with the newer Q-K model, which predicts the rates; and with a set of reaction cross-sections for nitrogen dissociation from QCT calculations. A comparison of the resulting density profiles with the measured profile provides a test of the validity of the DSMC chemistry models. The DSMC reaction rates were sampled directly in the DSMC calculation, both far downstream where the flow is in equilibrium and in the non-equilibrium region immediately behind the shock. This permits a critical evaluation of the data reduction procedures that were employed to deduce the dissociation rate from the measured quantities.
NASA Astrophysics Data System (ADS)
Mahieux, Arnaud; Goldstein, David B.; Varghese, Philip; Trafton, Laurence M.
2017-10-01
The vapor and particulate plumes arising from the southern polar regions of Enceladus are a key signature of what lies below the surface. Multiple Cassini instruments (INMS, CDA, CAPS, MAG, UVIS, VIMS, ISS) measured the gas-particle plume over the warm Tiger Stripe region, and there have been several close flybys. Numerous observations also exist of the near-vent regions in the visible and the IR. The most likely source for these extensive geysers is a subsurface liquid reservoir of somewhat saline water and other volatiles boiling off through crevasse-like conduits into the vacuum of space. In this work, we use a DSMC code to simulate the plume as it exits a vent, considering axisymmetric conditions, in a vertical domain extending up to 10 km. Above 10 km altitude, the flow is collisionless and is well modeled in a separate free molecular code. We perform a DSMC parametric and sensitivity study of the following vent parameters: vent diameter, outgassed flow density, water gas/water ice mass flow ratio, gas and ice speed, and ice grain diameter. We build parametric expressions for the plume characteristics at the 10 km upper boundary (number density, temperature, velocity) that will be used in a Bayesian inversion algorithm to constrain source conditions from fits to plume observations by various instruments on board the Cassini spacecraft, and to assess the sensitivity of the inferred sources to each vent parameter.
Aspects of GPU performance in algorithms with random memory access
NASA Astrophysics Data System (ADS)
Kashkovsky, Alexander V.; Shershnev, Anton A.; Vashchenkov, Pavel V.
2017-10-01
The numerical code for solving the Boltzmann equation on a hybrid computational cluster using the Direct Simulation Monte Carlo (DSMC) method showed that, on Tesla K40 accelerators, computational performance drops dramatically as the percentage of occupied GPU memory increases. Testing revealed that memory access time increases tens of times after a certain critical percentage of memory is occupied. Moreover, this appears to be a problem common to all NVIDIA GPUs, arising from their architecture. A few modifications of the numerical algorithm were suggested to overcome this problem. One of them, based on splitting the memory into "virtual" blocks, resulted in a 2.5 times speed-up.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Womble, David E.
A unified collision operator is demonstrated for both radiation transport and PIC-DSMC. A side-by-side comparison between the DSMC method and the radiation transport method was conducted for photon attenuation in the atmosphere over 2 kilometers in physical distance, with a reduction of photon density of six orders of magnitude. Both DSMC and traditional radiation transport agreed with theory to two digits. This indicates that PIC-DSMC operators can be unified with the radiation transport collision operators into a single code base and that physics kernels can remain unique to the actual collision pairs. This simulation example provides an initial validation of the unified collision theory approach that will later be implemented in EMPIRE.
A DSMC Study of Low Pressure Argon Discharge
NASA Astrophysics Data System (ADS)
Hash, David; Meyyappan, M.
1997-10-01
Work toward a self-consistent plasma simulation using the DSMC method for examination of the flowfields of low-pressure, high-density plasma reactors is presented. Presently, DSMC simulations for these applications involve either treating the electrons as a fluid or imposing experimentally determined values for the electron number density profile. In either approach, the electrons themselves are not physically simulated. Self-consistent plasma DSMC simulations have been conducted for aerospace applications, but at a severe computational cost due in part to the scalar architectures on which the codes were employed. The present work attempts to conduct such simulations at a more reasonable cost using a plasma version of the object-oriented parallel Cornell DSMC code, MONACO, on an IBM SP-2. Due to the availability of experimental data, the GEC reference cell is chosen for preliminary investigations. An argon discharge is examined, affording a simple chemistry set with eight gas-phase reactions and five species: Ar, Ar^+, Ar^*, Ar_2, and e, where Ar^* is a metastable state.
FDDO and DSMC analyses of rarefied gas flow through 2D nozzles
NASA Technical Reports Server (NTRS)
Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren; Penko, Paul F.
1992-01-01
Two different approaches, the finite-difference method coupled with the discrete-ordinate method (FDDO), and the direct-simulation Monte Carlo (DSMC) method, are used in the analysis of the flow of a rarefied gas expanding through a two-dimensional nozzle and into a surrounding low-density environment. In the FDDO analysis, by employing the discrete-ordinate method, the Boltzmann equation simplified by a model collision integral is transformed to a set of partial differential equations which are continuous in physical space but are point functions in molecular velocity space. The set of partial differential equations are solved by means of a finite-difference approximation. In the DSMC analysis, the variable hard sphere model is used as a molecular model and the no time counter method is employed as a collision sampling technique. The results of both the FDDO and the DSMC methods show good agreement. The FDDO method requires less computational effort than the DSMC method by factors of 10 to 40 in CPU time, depending on the degree of rarefaction.
Nonequilibrium hypersonic flows simulations with asymptotic-preserving Monte Carlo methods
NASA Astrophysics Data System (ADS)
Ren, Wei; Liu, Hong; Jin, Shi
2014-12-01
In rarefied gas dynamics, the DSMC method is one of the most popular numerical tools. It performs satisfactorily in simulating hypersonic flows surrounding re-entry vehicles and micro-/nano-flows. However, the computational cost is expensive, especially as Kn → 0. Even for flows in the near-continuum regime, pure DSMC simulations require substantial computational effort in most cases. Although several DSMC/NS hybrid methods have been proposed to deal with this, those methods still suffer from the boundary treatment, which may cause nonphysical solutions. Filbet and Jin [1] proposed a framework of new numerical methods for the Boltzmann equation, called asymptotic-preserving (AP) schemes, whose computational costs remain affordable as Kn → 0. Recently, Ren et al. [2] realized the AP schemes with Monte Carlo methods (AP-DSMC), which have better performance than counterpart methods. In this paper, AP-DSMC is applied to simulating nonequilibrium hypersonic flows. Several numerical results are computed and analyzed to study the efficiency and capability of capturing complicated flow characteristics.
DSMC Studies of the Richtmyer-Meshkov Instability
NASA Astrophysics Data System (ADS)
Gallis, M. A.; Koehler, T. P.; Torczynski, J. R.
2014-11-01
A new exascale-capable Direct Simulation Monte Carlo (DSMC) code, SPARTA, developed to be highly efficient on massively parallel computers, has extended the applicability of DSMC to challenging, transient three-dimensional problems in the continuum regime. Because DSMC inherently accounts for compressibility, viscosity, and diffusivity, it has the potential to improve the understanding of the mechanisms responsible for hydrodynamic instabilities. Here, the Richtmyer-Meshkov instability at the interface between two gases was studied parametrically using SPARTA. Simulations performed on Sequoia, an IBM Blue Gene/Q supercomputer at Lawrence Livermore National Laboratory, are used to investigate various Atwood numbers (0.33-0.94) and Mach numbers (1.2-12.0) for two-dimensional and three-dimensional perturbations. Comparisons with theoretical predictions demonstrate that DSMC accurately predicts the early-time growth of the instability. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Implementation of a vibrationally linked chemical reaction model for DSMC
NASA Technical Reports Server (NTRS)
Carlson, A. B.; Bird, Graeme A.
1994-01-01
A new procedure closely linking dissociation and exchange reactions in air to the vibrational levels of the diatomic molecules has been implemented in both one- and two-dimensional versions of Direct Simulation Monte Carlo (DSMC) programs. The previous modeling of chemical reactions with DSMC was based on the continuum reaction rates for the various possible reactions. The new method is more closely related to the actual physics of dissociation and is more appropriate to the particle nature of DSMC. Two cases are presented: the relaxation to equilibrium of undissociated air initially at 10,000 K, and the axisymmetric calculation of shuttle forebody heating during reentry at 92.35 km and 7500 m/s. Although reaction rates are not used in determining the dissociations or exchange reactions, the new method produces rates which agree astonishingly well with the published rates derived from experiment. The results for gas properties and surface properties also agree well with the results produced by earlier DSMC models, equilibrium air calculations, and experiment.
Navier-Stokes Dynamics by a Discrete Boltzmann Model
NASA Technical Reports Server (NTRS)
Rubinstein, Robert
2010-01-01
This work investigates the possibility of particle-based algorithms for the Navier-Stokes equations and higher order continuum approximations of the Boltzmann equation; such algorithms would generalize the well-known Pullin scheme for the Euler equations. One such method is proposed in the context of a discrete velocity model of the Boltzmann equation. Preliminary results on shock structure are consistent with the expectation that the shock should be much broader than the near discontinuity predicted by the Pullin scheme, yet narrower than the prediction of the Boltzmann equation. We discuss the extension of this essentially deterministic method to a stochastic particle method that, like DSMC, samples the distribution function rather than resolving it completely.
Investigation of the DSMC Approach for Ion/neutral Species in Modeling Low Pressure Plasma Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng Hao; Li, Z.; Levin, D.
2011-05-20
Low pressure plasma reactors are important tools for ionized metal physical vapor deposition (IMPVD), a semiconductor plasma processing technology that is increasingly being applied to deposit Cu seed layers on semiconductor surfaces of trenches and vias with high aspect ratios (e.g., >5:1). A large fraction of ionized atoms produced by the IMPVD process leads to an anisotropic deposition flux towards the substrate, a feature which is critical for attaining a void-free and uniform fill. Modeling such devices is challenging due to their high plasma density and reactive environment but low gas pressure. A modular code developed by the Computational Optical and Discharge Physics Group, the Hybrid Plasma Equipment Model (HPEM), has been successfully applied to the numerical investigation of IMPVD by modeling a hollow cathode magnetron (HCM) device. However, as the development of semiconductor devices progresses towards the lower pressure regime (e.g., <5 mTorr), the breakdown of the continuum assumption limits the application of the fluid model in HPEM and suggests the incorporation of a kinetic method, such as the direct simulation Monte Carlo (DSMC), in the plasma simulation. The DSMC method, which solves the Boltzmann transport equation, has been successfully applied in modeling micro-fluidic flows in MEMS devices with low Reynolds numbers, a feature shared with the HCM. Models of the basic physical and chemical processes for ion/neutral species in plasmas have been developed and implemented in DSMC, including ion particle motion due to the Lorentz force, electron impact reactions, charge exchange reactions, and charge recombination at the surface. The heating of neutrals due to collisions with ions and the heating of ions due to the electrostatic field will be shown to be captured by the DSMC simulations. In this work, DSMC calculations were coupled with the modules from HPEM so that the plasma can be solved self-consistently. Differences in the Ar results, the dominant species in the reactor, produced by the DSMC-HPEM coupled simulation will be shown in comparison with the original HPEM results. The effects of the DSMC calculations for ion/neutral species on the HPEM plasma simulation will be further analyzed.
Axisymmetric Plume Simulations with NASA's DSMC Analysis Code
NASA Technical Reports Server (NTRS)
Stewart, B. D.; Lumpkin, F. E., III
2012-01-01
A comparison of axisymmetric Direct Simulation Monte Carlo (DSMC) Analysis Code (DAC) results to analytic and Computational Fluid Dynamics (CFD) solutions in the near continuum regime, and to 3D DAC solutions in the rarefied regime, for expansion plumes into a vacuum is performed to investigate the validity of the newest DAC axisymmetric implementation. This new implementation, based on the standard DSMC axisymmetric approach where the representative molecules are allowed to move in all three dimensions but are rotated back to the plane of symmetry by the end of the move step, has been fully integrated into the 3D-based DAC code and therefore retains all of DAC's features, such as being able to compute flow over complex geometries and to model chemistry. Axisymmetric DAC results for a spherically symmetric isentropic expansion are in very good agreement with a source flow analytic solution in the continuum regime and show departure from equilibrium downstream of the estimated breakdown location. Axisymmetric density contours also compare favorably against CFD results for the R1E thruster, while temperature contours depart from equilibrium very rapidly away from the estimated breakdown surface. Finally, axisymmetric and 3D DAC results are in very good agreement over the entire plume region and, as expected, this new axisymmetric implementation shows a significant reduction in computer resources required to achieve accurate simulations for this problem over the 3D simulations.
The direct simulation of acoustics on Earth, Mars, and Titan.
Hanford, Amanda D; Long, Lyle N
2009-02-01
With the recent success of the Huygens lander on Titan, a moon of Saturn, there has been renewed interest in further exploring the acoustic environments of the other planets in the solar system. The direct simulation Monte Carlo (DSMC) method is used here for modeling sound propagation in the atmospheres of Earth, Mars, and Titan at a variety of altitudes above the surface. DSMC is a particle method that describes gas dynamics through direct physical modeling of particle motions and collisions. The validity of DSMC for the entire range of Knudsen numbers (Kn), where Kn is defined as the mean free path divided by the wavelength, allows for the exploration of sound propagation in planetary environments for all values of Kn. DSMC results at a variety of altitudes on Earth, Mars, and Titan including the details of nonlinearity, absorption, dispersion, and molecular relaxation in gas mixtures are given for a wide range of Kn showing agreement with various continuum theories at low Kn and deviation from continuum theory at high Kn. Despite large computation time and memory requirements, DSMC is the method best suited to study high altitude effects or where continuum theory is not valid.
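As a quick numerical illustration of the Kn definition used here (mean free path over wavelength), the sketch below evaluates the hard-sphere mean free path lambda = kT / (sqrt(2) pi d^2 p) at rough Earth sea-level conditions; the molecular diameter and the 1 kHz wavelength are assumed textbook values, not the paper's inputs.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double kB = 1.380649e-23;        // Boltzmann constant, J/K
    const double pi = 3.14159265358979323846;
    const double d = 4.0e-10;              // effective molecular diameter, m (assumed)
    const double T = 288.0;                // K, near Earth sea level
    const double p = 101325.0;             // Pa
    // Hard-sphere mean free path.
    const double lambda = kB * T / (std::sqrt(2.0) * pi * d * d * p);
    const double wavelength = 0.34;        // ~1 kHz sound in air, m
    std::printf("lambda = %.3g m, Kn = %.3g\n", lambda, lambda / wavelength);
    return 0;
}
```

At altitude, p drops by orders of magnitude, so lambda and hence Kn grow accordingly, which is why continuum acoustic theory breaks down high in an atmosphere.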
Lattice Boltzmann accelerated direct simulation Monte Carlo for dilute gas flow simulations.
Di Staso, G; Clercx, H J H; Succi, S; Toschi, F
2016-11-13
Hybrid particle-continuum computational frameworks permit the simulation of gas flows by locally adjusting the resolution to the degree of non-equilibrium displayed by the flow in different regions of space and time. In this work, we present a new scheme that couples the direct simulation Monte Carlo (DSMC) with the lattice Boltzmann (LB) method in the limit of isothermal flows. The former handles strong non-equilibrium effects, as they typically occur in the vicinity of solid boundaries, whereas the latter is in charge of the bulk flow, where non-equilibrium can be dealt with perturbatively, i.e. according to Navier-Stokes hydrodynamics. The proposed concurrent multiscale method is applied to the dilute gas Couette flow, showing major computational gains when compared with the full DSMC scenarios. In addition, it is shown that the coupling with LB in the bulk flow can speed up the DSMC treatment of the Knudsen layer with respect to the full DSMC case. In other words, LB acts as a DSMC accelerator.
Simulation of unsteady flows by the DSMC macroscopic chemistry method
NASA Astrophysics Data System (ADS)
Goldsworthy, Mark; Macrossan, Michael; Abdel-jawad, Madhat
2009-03-01
In the Direct Simulation Monte-Carlo (DSMC) method, a combination of statistical and deterministic procedures applied to a finite number of 'simulator' particles is used to model rarefied gas-kinetic processes. In the macroscopic chemistry method (MCM) for DSMC, chemical reactions are decoupled from the specific particle pairs selected for collisions. Information from all of the particles within a cell, not just those selected for collisions, is used to determine a reaction rate coefficient for that cell. Unlike collision-based methods, MCM can be used with any viscosity or non-reacting collision models and any non-reacting energy exchange models. It can be used to implement any reaction rate formulation, whether from experimental or theoretical studies. MCM has previously been validated for steady flow DSMC simulations. Here we show how MCM can be used to model chemical kinetics in DSMC simulations of unsteady flow. Results are compared with a collision-based chemistry procedure for two binary reactions in a 1-D unsteady shock-expansion tube simulation. Close agreement is demonstrated between the two methods for instantaneous, ensemble-averaged profiles of temperature, density and species mole fractions, as well as for the accumulated number of net reactions per cell.
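A hedged sketch of the per-cell reaction count at the heart of a macroscopic chemistry approach, for a binary reaction A + B -> products: evaluate an Arrhenius rate coefficient at the cell temperature formed from all particles, then convert the macroscopic rate law into an expected number of reaction events for the step. Symbol names and the probabilistic rounding are illustrative, not the paper's exact formulation.

```cpp
#include <cmath>
#include <random>

// Modified Arrhenius rate coefficient k(T) = A * T^eta * exp(-Ea / (kB * T)),
// evaluated at the cell temperature formed from all particles in the cell.
double arrhenius(double T, double A, double eta, double Ea, double kB) {
    return A * std::pow(T, eta) * std::exp(-Ea / (kB * T));
}

// Expected reaction events in one cell over dt for A + B -> products:
// kf * nA * nB * V * dt, divided by fNum to convert real events to
// simulator-particle events; the fractional part is rounded stochastically.
long reactionEvents(double kf, double nA, double nB, double vol,
                    double dt, double fNum, std::mt19937& rng) {
    const double expected = kf * nA * nB * vol * dt / fNum;
    const long whole = static_cast<long>(expected);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return whole + (u(rng) < expected - whole ? 1 : 0);
}
```

Because kf can come from any rate formulation, this decoupling is what lets MCM accept experimental or theoretical rates directly, as noted above.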
NASA Astrophysics Data System (ADS)
Ivanov, M.; Zeitoun, D.; Vuillon, J.; Gimelshein, S.; Markelov, G.
1996-05-01
The transition of planar shock-wave reflection over straight wedges in steady flows, from regular to Mach reflection and back, was studied numerically using the DSMC method to solve the Boltzmann equation and a finite-difference method with an FCT algorithm to solve the Euler equations. It is shown that the transition from regular to Mach reflection takes place in accordance with the detachment criterion, while the opposite transition occurs at smaller angles. A hysteresis effect was thus observed as the shock-wave angle was increased and decreased.
DSMC simulations of the Shuttle Plume Impingement Flight EXperiment (SPIFEX)
NASA Technical Reports Server (NTRS)
Stewart, Benedicte; Lumpkin, Forrest
2017-01-01
During orbital maneuvers and proximity operations, a spacecraft fires its thrusters, inducing plume impingement loads, heating, and contamination on itself and on any other nearby spacecraft. These thruster firings are generally modeled using a combination of Computational Fluid Dynamics (CFD) and DSMC simulations. The Shuttle Plume Impingement Flight EXperiment (SPIFEX) produced data that can be compared to a high-fidelity simulation. Because of the size of the Shuttle thrusters, this problem was too resource-intensive to be solved with DSMC when the experiment flew in 1994.
DSMC simulation of the interaction between rarefied free jets
NASA Technical Reports Server (NTRS)
Dagum, Leonardo; Zhu, S. H. K.
1993-01-01
This paper presents a direct simulation Monte Carlo (DSMC) calculation of two interacting free jets exhausting into vacuum. The computed flow field is compared against available experimental data and shows excellent agreement everywhere except in the very near field (less than one orifice diameter downstream of the jet exhaust plane). The lack of agreement in this region is attributed to having assumed an inviscid boundary condition for the orifice lip. The results serve both to validate the DSMC code for a very complex, three-dimensional non-equilibrium flow field, and to provide some insight into the complicated nature of this flow.
A 3-D Coupled CFD-DSMC Solution Method With Application to the Mars Sample Return Orbiter
NASA Technical Reports Server (NTRS)
Glass, Christopher E.; Gnoffo, Peter A.
2000-01-01
A method to obtain coupled Computational Fluid Dynamics-Direct Simulation Monte Carlo (CFD-DSMC), 3-D flow field solutions for highly blunt bodies at low incidence is presented and applied to one concept of the Mars Sample Return Orbiter vehicle as a demonstration of the technique. CFD is used to solve the high-density blunt forebody flow defining an inflow boundary condition for a DSMC solution of the afterbody wake flow. By combining the two techniques in flow regions where most applicable, the entire mixed flow field is modeled in an appropriate manner.
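The hand-off this abstract describes, a CFD forebody solution feeding a DSMC wake calculation, hinges on generating DSMC particles at the interface from the CFD state. Below is a minimal, hedged sketch that samples one inflow particle from a drifting Maxwellian; proper flux weighting across the boundary face, which a production coupling needs, is omitted, and all values are illustrative.

    import math, random

    K_B = 1.380649e-23  # Boltzmann constant [J/K]

    def sample_inflow_particle(u_bulk, T, mass):
        # Velocity sample from a drifting Maxwellian at the CFD/DSMC interface.
        sigma = math.sqrt(K_B * T / mass)  # per-component thermal speed scale
        return [random.gauss(u_bulk[k], sigma) for k in range(3)]

    u_bulk = [3000.0, 0.0, 0.0]  # bulk velocity from the CFD solution [m/s]
    T = 1500.0                   # temperature from the CFD solution [K]
    mass = 4.65e-26              # N2 molecular mass [kg]
    print(sample_inflow_particle(u_bulk, T, mass))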
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shevyrin, Alexander A.; Vashchenkov, Pavel V.; Bondar, Yevgeniy A.
An ionized flow around the RAM C-II vehicle in the range of altitudes from 73 to 81 km is studied by the Direct Simulation Monte Carlo (DSMC) method with three models of chemical reactions. It is demonstrated that vibrational favoring in dissociation reactions of neutral molecules significantly affects the predicted plasma density in the shock layer, and that good agreement between experiments and DSMC computations can be achieved in terms of the plasma density as a function of flight altitude.
NASA Technical Reports Server (NTRS)
Prisbell, Andrew; Marichalar, J.; Lumpkin, F.; LeBeau, G.
2010-01-01
Plume impingement effects on the Orion Crew Service Module (CSM) were analyzed for various dual Reaction Control System (RCS) engine firings and various configurations of the solar arrays. The study was performed using a decoupled computational fluid dynamics (CFD) and Direct Simulation Monte Carlo (DSMC) approach. This approach included a single jet plume solution for the R1E RCS engine computed with the General Aerodynamic Simulation Program (GASP) CFD code. The CFD solution was used to create an inflow surface for the DSMC solution based on the Bird continuum breakdown parameter. The DSMC solution was then used to model the dual RCS plume impingement effects on the entire CSM geometry with deployed solar arrays. However, because the continuum breakdown parameter of 0.5 could not be achieved due to geometrical constraints, and because high resolution in the plume shock interaction region is desired, a focused DSMC simulation modeling only the plumes and the shock interaction region was performed. This high-resolution intermediate solution was then used as the inflow to the larger DSMC solution to obtain plume impingement heating, forces, and moments on the CSM and the solar arrays for the 21 cases analyzed. The results of these simulations were used to populate the Orion CSM Aerothermal Database.
Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Wada, Takao
2014-07-01
A particle motion considering thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for particle sizes of about 1 μm, are treated in this paper. The difficulty in thermophoresis simulation is the computation time, which is proportional to the collision frequency; the time step interval becomes very small when the motion of a large particle is considered. Thermophoretic forces calculated by the DSMC method have been reported previously, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model is considered that computes the collision between a particle and multiple molecules in a single collision event. The momentum transfer to the particle is computed with a collision weight factor, which represents the number of molecules colliding with the particle in one collision event. This weight factor permits a much larger time step interval: for a particle size of 1 μm, it is about a million times longer than the conventional DSMC time step, so the computation time is reduced by about a factor of one million. We simulate graphite particle motion under thermophoretic force using DSMC-Neutrals (Particle-PLUS neutral module), commercial software adopting the DSMC method, with the collision weight factor described above. The particle is a sphere of 1 μm diameter, and particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results; note that Gallis' analytical result in the continuum limit coincides with Waldmann's result.
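A minimal sketch of the collision-weight idea this abstract describes: one simulated collision event stands in for W real molecule impacts, so the momentum kick to the dust particle is scaled by W and a much larger time step becomes usable. The numbers below are illustrative, not the paper's.

    def particle_force(delta_p_single, weight_factor, dt):
        # Average force on the particle when one simulated collision event
        # represents `weight_factor` real molecule impacts within dt.
        return [weight_factor * dp / dt for dp in delta_p_single]

    delta_p = [1e-22, 0.0, 0.0]  # momentum transfer of one molecule impact [kg m/s]
    W = 1e6                      # molecules represented per collision event
    dt = 1e-4                    # enlarged time step [s]
    print(particle_force(delta_p, W, dt))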
Application of a Modular Particle-Continuum Method to Partially Rarefied, Hypersonic Flow
NASA Astrophysics Data System (ADS)
Deschenes, Timothy R.; Boyd, Iain D.
2011-05-01
The Modular Particle-Continuum (MPC) method is used to simulate partially-rarefied, hypersonic flow over a sting-mounted planetary probe configuration. This hybrid method uses computational fluid dynamics (CFD) to solve the Navier-Stokes equations in regions that are continuum, while using direct simulation Monte Carlo (DSMC) in portions of the flow that are rarefied. The MPC method uses state-based coupling to pass information between the two flow solvers and decouples the time steps and mesh densities required by each solver. It is parallelized for distributed memory systems using dynamic domain decomposition, and internal energy modes can be consistently modeled to be out of equilibrium with the translational mode in both solvers. The MPC results are compared to both full DSMC and CFD predictions and available experimental measurements. By using DSMC only in regions where the flow is in nonequilibrium, the MPC method is able to reproduce full DSMC results down to the level of velocity and rotational energy probability density functions while requiring a fraction of the computational time.
Pressure measurements in a low-density nozzle plume for code verification
NASA Technical Reports Server (NTRS)
Penko, Paul F.; Boyd, Iain D.; Meissner, Dana L.; Dewitt, Kenneth J.
1991-01-01
Measurements of Pitot pressure were made in the exit plane and plume of a low-density nitrogen nozzle flow. Two numerical computer codes were used to analyze the flow: one based on continuum theory using the explicit MacCormack method, and the other on kinetic theory using the method of direct-simulation Monte Carlo (DSMC). The continuum analysis was carried to the nozzle exit plane and the results were compared to the measurements. The DSMC analysis was extended into the plume of the nozzle flow and the results were compared with measurements at the exit plane and at axial stations 12, 24 and 36 mm into the near-field plume. Two experimental apparatuses, which differed in design, were used and gave slightly different profiles of pressure measurements. The DSMC method compared well with the measurements from each apparatus at all axial stations and provided a more accurate prediction of the flow than the continuum method, verifying the validity of DSMC for such calculations.
State-to-state models of vibrational relaxation in Direct Simulation Monte Carlo (DSMC)
NASA Astrophysics Data System (ADS)
Oblapenko, G. P.; Kashkovsky, A. V.; Bondar, Ye A.
2017-02-01
In the present work, the application of state-to-state models of vibrational energy exchanges to the Direct Simulation Monte Carlo (DSMC) method is considered. A state-to-state model for VT transitions of vibrational energy in nitrogen and oxygen, based on the application of the inverse Laplace transform to results of quasiclassical trajectory calculations (QCT) of vibrational energy transitions, along with the Forced Harmonic Oscillator (FHO) state-to-state model, is implemented in a DSMC code and applied to flows around blunt bodies. Comparisons are made with the widely used Larsen-Borgnakke model, and the influence of multi-quantum VT transitions is assessed.
Analysis of Effectiveness of Phoenix Entry Reaction Control System
NASA Technical Reports Server (NTRS)
Dyakonov, Artem A.; Glass, Christopher E.; Desai, Prasun N.; VanNorman, John W.
2008-01-01
Interaction between the external flowfield and the reaction control system (RCS) thruster plumes of the Phoenix capsule during entry has been investigated. The analysis covered rarefied, transitional, hypersonic and supersonic flight regimes. Performance of pitch, yaw and roll control authority channels was evaluated, with specific emphasis on the yaw channel due to its low nominal yaw control authority. Because Phoenix had already been constructed and its RCS could not be modified before flight, an assessment of RCS efficacy along the trajectory was needed to determine possible issues and to make necessary software changes. Effectiveness of the system at various regimes was evaluated using a hybrid DSMC-CFD technique, based on DSMC Analysis Code (DAC) code and General Aerodynamic Simulation Program (GASP), the LAURA (Langley Aerothermal Upwind Relaxation Algorithm) code, and the FUN3D (Fully Unstructured 3D) code. Results of the analysis at hypersonic and supersonic conditions suggest a significant aero-RCS interference which reduced the efficacy of the thrusters and could likely produce control reversal. Very little aero-RCS interference was predicted in rarefied and transitional regimes. A recommendation was made to the project to widen controller system deadbands to minimize (if not eliminate) the use of RCS thrusters through hypersonic and supersonic flight regimes, where their performance would be uncertain.
NASA Astrophysics Data System (ADS)
Chen, Syuan-Yi; Gong, Sheng-Sian
2017-09-01
This study aims to develop an adaptive high-precision control system for controlling the speed of a vane-type air motor (VAM) pneumatic servo system. In practice, the rotor speed of a VAM depends on the input mass air flow, which can be controlled by the effective orifice area (EOA) of an electronic throttle valve (ETV). As the control variable of a second-order pneumatic system is the integral of the EOA, an observation-based adaptive dynamic sliding-mode control (ADSMC) system is proposed to derive the differential of the control variable, namely, the EOA control signal. In the ADSMC system, a proportional-integral-derivative fuzzy neural network (PIDFNN) observer is used to achieve an ideal dynamic sliding-mode control (DSMC), and a supervisor compensator is designed to eliminate the approximation error. As a result, the ADSMC incorporates the robustness of a DSMC and the online learning ability of a PIDFNN. To ensure the convergence of the tracking error, a Lyapunov-based analytical method is employed to obtain the adaptive algorithms required to tune the control parameters of the online ADSMC system. Finally, our experimental results demonstrate the precision and robustness of the ADSMC system for highly nonlinear and time-varying VAM pneumatic servo systems.
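For context on the control scheme in this abstract (which uses 'DSMC' for dynamic sliding-mode control, not the Monte Carlo method), a generic, hedged sliding-mode step is sketched below: a sliding surface on the speed-tracking error drives a switching law. The PIDFNN observer and adaptive tuning of the paper are not reproduced; the gains and the boundary-layer saturation are illustrative assumptions.

    def dsmc_control_step(error, error_dot, lam=5.0, K=2.0, boundary=0.05):
        # Dynamic sliding-mode step: return the rate of the control signal
        # (here, the differential of the EOA command) for one sample.
        s = error_dot + lam * error              # sliding surface
        sat = max(-1.0, min(1.0, s / boundary))  # saturation tames chattering
        return -K * sat

    print(dsmc_control_step(error=0.1, error_dot=-0.2))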
Evaluation of new collision-pair selection models in DSMC
NASA Astrophysics Data System (ADS)
Akhlaghi, Hassan; Roohi, Ehsan
2017-10-01
The current paper investigates new collision-pair selection procedures in the direct simulation Monte Carlo (DSMC) method. Collision-partner selection based on random selection from among nearest-neighbor particles, and deterministic selection of the nearest neighbor, have already been introduced as schemes that provide accurate results in a wide range of problems. In the current research, new collision-pair selections based on the time spacing and the direction of the relative movement of particles are introduced and evaluated. Comparisons between the new and existing algorithms are made for appropriate test cases, including fluctuations in a homogeneous gas, 2D equilibrium flow, and the Fourier flow problem. Distribution functions for the number of particles and collisions in a cell, velocity components, and collisional parameters (collision separation, time spacing, relative velocity, and the angle between relative movements of particles) are investigated and compared with existing analytical relations for each model. The capability of each model in the prediction of the heat flux in the Fourier problem at different cell numbers, numbers of particles, and time steps is examined. For new and existing collision-pair selection schemes, the effects of an alternative formula for the number of collision-pair selections and of avoiding repetitive collisions are investigated via the prediction of the Fourier heat flux. The simulation results demonstrate the advantages and weaknesses of each model in different test cases.
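A minimal sketch of one scheme discussed above: deterministic nearest-neighbor pairing inside a cell, with the previous collision partner excluded to avoid repetitive collisions. The data layout is an assumption for illustration.

    def nearest_neighbor_pair(positions, i, last_partner=None):
        # Return the index of the nearest neighbour of particle i in the cell,
        # skipping the particle's previous partner to avoid repeat collisions.
        best_j, best_d2 = None, float("inf")
        for j, p in enumerate(positions):
            if j == i or j == last_partner:
                continue
            d2 = sum((p[k] - positions[i][k]) ** 2 for k in range(3))
            if d2 < best_d2:
                best_j, best_d2 = j, d2
        return best_j

    positions = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.1, 0.1, 0.0], [2.0, 2.0, 0.0]]
    print(nearest_neighbor_pair(positions, 0))  # -> 2, the closest particle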
Mankodi, T K; Bhandarkar, U V; Puranik, B P
2017-08-28
A new ab initio based chemical model for a Direct Simulation Monte Carlo (DSMC) study suitable for simulating rarefied flows with a high degree of non-equilibrium is presented. To this end, Collision Induced Dissociation (CID) cross sections for N2 + N2 → N2 + 2N are calculated and published using a global complete active space self-consistent field / complete active space second-order perturbation theory N4 potential energy surface and a quasi-classical trajectory algorithm for high-energy collisions (up to 30 eV). CID cross sections are calculated for only a selected set of ro-vibrational combinations of the two nitrogen molecules, and a fitting scheme based on spectroscopic weights is presented to interpolate the CID cross section for all possible ro-vibrational combinations. The new chemical model is validated by calculating equilibrium reaction rate coefficients that compare well with existing shock-tube and computational results. High-enthalpy hypersonic nitrogen flows around a cylinder in the transition flow regime are simulated using DSMC to compare the predictions of the current ab initio based chemical model with the prevailing phenomenological model (the total collision energy model). The differences in the predictions are discussed.
NASA Astrophysics Data System (ADS)
Goldsworthy, M. J.
2012-10-01
One of the most useful tools for modelling rarefied hypersonic flows is the Direct Simulation Monte Carlo (DSMC) method. Simulator particle movement and collision calculations are combined with statistical procedures to model thermal non-equilibrium flow-fields described by the Boltzmann equation. The Macroscopic Chemistry Method for DSMC simulations was developed to simplify the inclusion of complex thermal non-equilibrium chemistry. The macroscopic approach uses statistical information, calculated during the DSMC solution process, in the modelling procedures. Here it is shown how the inclusion of macroscopic information in models of chemical kinetics, electronic excitation, ionization, and radiation can enhance the capabilities of DSMC to model flow-fields where a range of physical processes occur. The approach is applied to the modelling of a 6.4 km/s nitrogen shock wave, and results are compared with those from existing shock-tube experiments and continuum calculations. Reasonable agreement between the methods is obtained. The quality of the comparison is highly dependent on the set of vibrational relaxation and chemical kinetic parameters employed.
Effects of continuum breakdown on hypersonic aerothermodynamics for reacting flow
NASA Astrophysics Data System (ADS)
Holman, Timothy D.; Boyd, Iain D.
2011-02-01
This study investigates the effects of continuum breakdown on the surface aerothermodynamic properties (pressure, stress, and heat transfer rate) of a sphere in a Mach 25 flow of reacting air in regimes varying from continuum to a rarefied gas. Results are generated using both continuum [computational fluid dynamics (CFD)] and particle [direct simulation Monte Carlo (DSMC)] approaches. The DSMC method utilizes a chemistry model that calculates the backward rates from an equilibrium constant. A preferential dissociation model is modified in the CFD method to better compare with the vibrationally favored dissociation model that is utilized in the DSMC method. Tests of these models are performed to confirm their validity and to compare the chemistry models in both numerical methods. This study examines the effect of reacting air flow on continuum breakdown and the surface properties of the sphere. As the global Knudsen number increases, the amount of continuum breakdown in the flow and on the surface increases. This increase in continuum breakdown significantly affects the surface properties, causing an increase in the differences between CFD and DSMC. Explanations are provided for the trends observed.
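One widely used breakdown indicator in studies like the one above is the gradient-length-local Knudsen number, Kn_GLL = (lambda/Q)|dQ/dx|, evaluated for quantities such as density or temperature. The sketch below is illustrative only and is not taken from this paper.

    def kn_gll(lam, q, dq_dx):
        # Gradient-length-local Knudsen number for a flow quantity q:
        # mean free path divided by the local gradient length scale q/|dq/dx|.
        return lam * abs(dq_dx) / q

    # Density varying over ~1 mm with a 0.1 mm mean free path:
    print(kn_gll(lam=1e-4, q=1e-3, dq_dx=1.0))  # 0.1 -> continuum is suspect here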
NASA Technical Reports Server (NTRS)
Borner, A.; Swaminathan-Gopalan, K.; Stephani, Kelly; Poovathingal, S.; Murray, V. J.; Minton, T. K.; Panerai, F.; Mansour, N. N.
2017-01-01
A collaborative effort between the University of Illinois at Urbana-Champaign (UIUC), NASA Ames Research Center (ARC) and Montana State University (MSU) succeeded in developing a new finite-rate carbon oxidation model from molecular beam scattering experiments on vitreous carbon (VC). We now aim to use the direct simulation Monte Carlo (DSMC) code SPARTA to apply the model to each fiber of the porous fibrous Thermal Protection Systems (TPS) material FiberForm (FF). The detailed micro-structure of FF was obtained from X-ray micro-tomography and then used in DSMC. Both experiments and simulations show that the CO/O product ratio increased at all temperatures from VC to FF. We postulate this is due to the larger number of collisions an O atom encounters inside the porous FF material compared to the flat surface of VC. For the simulations, we focused in particular on the lowest and highest temperatures studied experimentally, 1023 K and 1823 K, and found good agreement between the finite-rate DSMC simulations and the experiments.
Oxygen transport properties estimation by DSMC-CT simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruno, Domenico; Frezzotti, Aldo; Ghiroldi, Gian Pietro
Coupling DSMC simulations with classical trajectory calculations is emerging as a powerful tool to improve the predictive capabilities of computational rarefied gas dynamics. The considerable increase in computational effort noted in early applications of the method (Koura, 1997) can be compensated by running simulations on massively parallel computers. In particular, GPU acceleration has been found quite effective in reducing the computing time of DSMC-CT simulations (Ferrigni, 2012; Norman et al., 2013). The aim of the present work is to study rarefied Oxygen flows by modeling binary collisions through an accurate potential energy surface, obtained from molecular beam scattering (Aquilanti et al., 1999). The accuracy of the method is assessed by calculating molecular Oxygen shear viscosity and heat conductivity using three different DSMC-CT simulation methods. In the first, transport properties are obtained from DSMC-CT simulations of the spontaneous fluctuations of an equilibrium state (Bruno et al., Phys. Fluids, 23, 093104, 2011). In the second, the collision trajectory calculation is incorporated into a Monte Carlo integration procedure to evaluate Taxman's expressions for the transport properties of polyatomic gases (Taxman, 1959). In the third, non-equilibrium zero- and one-dimensional rarefied gas dynamics simulations are adopted, and the transport properties are computed from the non-equilibrium fluxes of momentum and energy. The three methods provide close values of the transport properties, with estimated statistical errors not exceeding 3%. The experimental values are slightly underestimated, the percentage deviation being, again, a few percent.
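A hedged sketch of the first method listed above: a Green-Kubo estimate of shear viscosity from the autocorrelation of the off-diagonal stress recorded in an equilibrium simulation, eta = V/(k_B T) * integral of <P_xy(0) P_xy(t)> dt. The stress series below is synthetic noise, purely for illustration.

    import random

    K_B = 1.380649e-23  # Boltzmann constant [J/K]

    def green_kubo_viscosity(stress_xy, volume, T, dt):
        # Trapezoidal integral of the stress autocorrelation function.
        n = len(stress_xy)
        acf = []
        for lag in range(n // 2):
            acf.append(sum(stress_xy[t] * stress_xy[t + lag]
                           for t in range(n - lag)) / (n - lag))
        integral = dt * (0.5 * acf[0] + sum(acf[1:-1]) + 0.5 * acf[-1])
        return volume / (K_B * T) * integral

    stress = [random.gauss(0.0, 1e-3) for _ in range(2000)]  # synthetic P_xy samples
    print(green_kubo_viscosity(stress, volume=1e-18, T=300.0, dt=1e-12))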
Particle kinetic simulation of high altitude hypervelocity flight
NASA Technical Reports Server (NTRS)
Haas, Brian L.
1993-01-01
In this grant period, the focus has been on enhancement and application of the direct simulation Monte Carlo (DSMC) particle method for computing hypersonic flows of re-entry vehicles. Enhancement efforts dealt with modeling gas-gas interactions for thermal non-equilibrium relaxation processes and gas-surface interactions for prediction of vehicle surface temperatures. Both are important for application to problems of engineering interest. The code was employed in a parametric study to improve future applications, and in simulations of aeropass maneuvers in support of the Magellan mission. Detailed comparisons between continuum models for internal energy relaxation and DSMC models reveal that several discrepancies exist. These include definitions of relaxation parameters and the methodologies for implementing them in DSMC codes. These issues were clarified and all differences were rectified in a paper (Appendix A) submitted to Physics of Fluids A, featuring several key figures in the DSMC community as co-authors and B. Haas as first author. This material will be presented at the Fluid Dynamics meeting of the American Physical Society on November 21, 1993. The aerodynamics of space vehicles in highly rarefied flows are very sensitive to the vehicle surface temperatures. Rather than require prescribed temperature estimates for spacecraft, as is typically done in DSMC methods, a new technique was developed which couples the dynamic surface heat transfer characteristics into the DSMC flow simulation code to compute surface temperatures directly. This model, when applied to thin planar bodies such as solar panels, was described in AIAA Paper No. 93-2765 (Appendix B) and was presented at the Thermophysics Conference in July 1993. The paper has been submitted to the Journal of Thermophysics and Heat Transfer. Application of the DSMC method to problems of practical interest requires a trade-off between solution accuracy and computational expense and limitations. A parametric study was performed and reported in AIAA Paper No. 93-2806 (Appendix C) which assessed the accuracy penalties associated with simulations of varying grid resolution and flow domain size. The paper was also presented at the Thermophysics Conference and will be submitted to the journal shortly. Finally, the DSMC code was employed to assess the pitch, yaw, and roll aerodynamics of the Magellan spacecraft during entry into the Venus atmosphere at off-design attitudes. This work was in support of the Magellan aerobraking maneuver of May 25-Aug. 3, 1993. Furthermore, analysis of the roll characteristics of the configuration with canted solar panels was performed in support of the proposed 'Windmill' experiment. Results were reported in AIAA Paper No. 93-3676 (Appendix D), presented at the Atmospheric Flight Mechanics Conference in August 1993, and were submitted to the Journal of Spacecraft and Rockets.
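The surface-temperature coupling mentioned above replaces prescribed wall temperatures with a computed energy balance. As a hedged, much-simplified illustration (not the model of the paper), the sketch below returns the radiative-equilibrium wall temperature for a given incident heat flux.

    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W/m^2/K^4]

    def radiative_equilibrium_temperature(q_in, emissivity=0.8):
        # Wall temperature at which re-radiated flux balances the incident
        # aerodynamic heating q_in [W/m^2].
        return (q_in / (emissivity * SIGMA)) ** 0.25

    print(radiative_equilibrium_temperature(5000.0))  # about 576 K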
Direct simulation Monte Carlo investigation of the Richtmyer-Meshkov instability
Gallis, Michail A.; Koehler, Timothy P.; Torczynski, John R.; ...
2015-08-14
The Rayleigh-Taylor instability (RTI) is investigated using the Direct Simulation Monte Carlo (DSMC) method of molecular gas dynamics. Here, fully resolved two-dimensional DSMC RTI simulations are performed to quantify the growth of flat and single-mode perturbed interfaces between two atmospheric-pressure monatomic gases as a function of the Atwood number and the gravitational acceleration. The DSMC simulations reproduce all qualitative features of the RTI and are in reasonable quantitative agreement with existing theoretical and empirical models in the linear, nonlinear, and self-similar regimes. At late times, the instability is seen to exhibit a self-similar behavior, in agreement with experimental observations. For the conditions simulated, diffusion can influence the initial instability growth significantly.
Predicting Flows of Rarefied Gases
NASA Technical Reports Server (NTRS)
LeBeau, Gerald J.; Wilmoth, Richard G.
2005-01-01
DSMC Analysis Code (DAC) is a flexible, highly automated, easy-to-use computer program for predicting flows of rarefied gases -- especially flows of upper-atmospheric, propulsion, and vented gases impinging on spacecraft surfaces. DAC implements the direct simulation Monte Carlo (DSMC) method, which is widely recognized as standard for simulating flows at densities so low that the continuum-based equations of computational fluid dynamics are invalid. DAC enables users to model complex surface shapes and boundary conditions quickly and easily. The discretization of a flow field into computational grids is automated, thereby relieving the user of a traditionally time-consuming task while ensuring (1) appropriate refinement of grids throughout the computational domain, (2) determination of optimal settings for temporal discretization and other simulation parameters, and (3) satisfaction of the fundamental constraints of the method. In so doing, DAC ensures an accurate and efficient simulation. In addition, DAC can utilize parallel processing to reduce computation time. The domain decomposition needed for parallel processing is completely automated, and the software employs a dynamic load-balancing mechanism to ensure optimal parallel efficiency throughout the simulation.
Shock-Wave/Boundary-Layer Interactions in Hypersonic Low Density Flows
NASA Technical Reports Server (NTRS)
Moss, James N.; Olejniczak, Joseph
2004-01-01
Results of numerical simulations of Mach 10 air flow over a hollow cylinder-flare and a double-cone are presented where viscous effects are significant. The flow phenomena include shock-shock and shock-boundary-layer interactions with accompanying flow separation, recirculation, and reattachment. The purpose of this study is to promote an understanding of the fundamental gas dynamics resulting from such complex interactions and to clarify the requirements for meaningful simulations of such flows when using the direct simulation Monte Carlo (DSMC) method. Particular emphasis is placed on the sensitivity of computed results to grid resolution. Comparisons of the DSMC results for the hollow cylinder-flare (30 deg.) configuration are made with the results of experimental measurements conducted in the ONERA R5Ch wind tunnel for heating, pressure, and the extent of separation. Agreement between computations and measurements for the various quantities is good except for pressure. For the same flow conditions, the double-cone geometry (25 deg.-65 deg.) produces much stronger interactions, and these interactions are investigated numerically using both DSMC and Navier-Stokes codes. For the double-cone computations, a two orders of magnitude variation in free-stream density (with Reynolds numbers from 247 to 24,719) is investigated using both computational methods. For this range of flow conditions, the computational results are in qualitative agreement for the extent of separation, with the DSMC method always predicting a smaller separation region. Results from the Navier-Stokes calculations suggest that the flow for the highest density double-cone case may be unsteady; however, the DSMC solution does not show evidence of unsteadiness.
Particle/Continuum Hybrid Simulation in a Parallel Computing Environment
NASA Technical Reports Server (NTRS)
Baganoff, Donald
1996-01-01
The objective of this study was to modify an existing parallel particle code based on the direct simulation Monte Carlo (DSMC) method to include a Navier-Stokes (NS) calculation so that a hybrid solution could be developed. In carrying out this work, it was determined that the following five issues had to be addressed before extensive program development of a three dimensional capability was pursued: (1) find a set of one-sided kinetic fluxes that are fully compatible with the DSMC method, (2) develop a finite volume scheme to make use of these one-sided kinetic fluxes, (3) make use of the one-sided kinetic fluxes together with DSMC type boundary conditions at a material surface so that velocity slip and temperature slip arise naturally for near-continuum conditions, (4) find a suitable sampling scheme so that the values of the one-sided fluxes predicted by the NS solution at an interface between the two domains can be converted into the correct distribution of particles to be introduced into the DSMC domain, (5) carry out a suitable number of tests to confirm that the developed concepts are valid, individually and in concert for a hybrid scheme.
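Item (1) in the list above concerns one-sided kinetic fluxes. A standard kinetic-theory building block for such couplings is the half-space number flux of a drifting Maxwellian crossing a plane; the sketch below implements that textbook expression (in the form found, e.g., in Bird's treatment) with illustrative inputs.

    import math

    K_B = 1.380649e-23  # Boltzmann constant [J/K]

    def one_sided_number_flux(n, u_n, T, mass):
        # Molecules per unit area per unit time crossing a plane in the +normal
        # direction for a Maxwellian with number density n, normal drift u_n,
        # and temperature T.
        c_m = math.sqrt(2.0 * K_B * T / mass)  # most probable thermal speed
        s = u_n / c_m                          # normal speed ratio
        return n * c_m / (2.0 * math.sqrt(math.pi)) * (
            math.exp(-s * s) + math.sqrt(math.pi) * s * (1.0 + math.erf(s)))

    # N2 at n = 1e20 m^-3 and T = 300 K, drifting at 100 m/s into the domain:
    print(one_sided_number_flux(1e20, 100.0, 300.0, 4.65e-26))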
Molecular-Level Simulations of the Turbulent Taylor-Green Flow
NASA Astrophysics Data System (ADS)
Gallis, M. A.; Bitter, N. P.; Koehler, T. P.; Plimpton, S. J.; Torczynski, J. R.; Papadakis, G.
2017-11-01
The Direct Simulation Monte Carlo (DSMC) method, a statistical, molecular-level technique that provides accurate solutions to the Boltzmann equation, is applied to the turbulent Taylor-Green vortex flow. The goal of this work is to investigate whether DSMC can accurately simulate energy decay in a turbulent flow. If so, then simulating turbulent flows at the molecular level can provide new insights because the energy decay can be examined in detail from molecular to macroscopic length scales, thereby directly linking molecular relaxation processes to macroscopic transport processes. The DSMC simulations are performed on half a million cores of Sequoia, the 17 Pflop platform at Lawrence Livermore National Laboratory, and the kinetic-energy dissipation rate and the energy spectrum are computed directly from the molecular velocities. The DSMC simulations are found to reproduce the Kolmogorov -5/3 law and to agree with corresponding Navier-Stokes simulations obtained using a spectral method. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
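For context, the macroscopic initial condition behind the simulations described above is the classical Taylor-Green vortex; in a DSMC setting it would be imposed as a cell-wise mean drift on the molecular velocities. The sketch below only evaluates the classical field itself.

    import math

    def taylor_green_velocity(x, y, z, V0=1.0, L=1.0):
        # Classical 3-D Taylor-Green initial velocity field.
        u =  V0 * math.sin(x / L) * math.cos(y / L) * math.cos(z / L)
        v = -V0 * math.cos(x / L) * math.sin(y / L) * math.cos(z / L)
        w = 0.0
        return u, v, w

    print(taylor_green_velocity(0.5, 1.0, 0.25))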
Direct simulation Monte Carlo investigation of the Rayleigh-Taylor instability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallis, M. A.; Koehler, T. P.; Torczynski, J. R.; ...
2016-08-31
In this paper, the Rayleigh-Taylor instability (RTI) is investigated using the direct simulation Monte Carlo (DSMC) method of molecular gas dynamics. Here, fully resolved two-dimensional DSMC RTI simulations are performed to quantify the growth of flat and single-mode perturbed interfaces between two atmospheric-pressure monatomic gases as a function of the Atwood number and the gravitational acceleration. The DSMC simulations reproduce many qualitative features of the growth of the mixing layer and are in reasonable quantitative agreement with theoretical and empirical models in the linear, nonlinear, and self-similar regimes. In some of the simulations at late times, the instability enters the self-similar regime, in agreement with experimental observations. Finally, for the conditions simulated, diffusion can influence the initial instability growth significantly.
DSMC Simulations of Disturbance Torque to ISS During Airlock Depressurization
NASA Technical Reports Server (NTRS)
Lumpkin, F. E., III; Stewart, B. S.
2015-01-01
The primary attitude control system on the International Space Station (ISS) is part of the United States On-orbit Segment (USOS) and uses Control Moment Gyroscopes (CMG). The secondary system is part of the Russian On-orbit Segment (RSOS) and uses a combination of gyroscopes and thrusters. Historically, events with significant disturbances, such as the airlock depressurizations associated with extra-vehicular activity (EVA), have been performed using the RSOS attitude control system. This avoids excessive propulsive "de-saturations" of the CMGs. However, transfer of attitude control is labor intensive and requires significant propellant. Predictions of the disturbance torque on the ISS during depressurization of the Pirs airlock on the RSOS, computed with NASA's DSMC Analysis Code (DAC), will be presented [1]. These predictions were performed to assess the feasibility of using USOS control during these events. The ISS Pirs airlock is vented using a device known as a "T-vent", as shown in the inset in figure 1. By orienting two equal streams of gas in opposite directions, this device is intended to have no propulsive effect. However, disturbance forces and torques on the ISS do occur due to plume impingement. The disturbance torque resulting from the Pirs depressurization during EVAs is estimated using a loosely coupled CFD/DSMC technique [2]. CFD is used to simulate the flow field in the nozzle and the near-field plume. DSMC is used to simulate the remaining flow field, using the CFD results to create an inflow boundary for the DSMC simulation. Due to the highly continuum nature of the flow field near the T-vent, two loosely coupled DSMC domains are employed. An 88.2 cubic meter inner domain contains the Pirs airlock and the T-vent. Inner domain results are used to create an inflow boundary for an outer domain containing the remaining portions of the ISS. Several orientations of the ISS solar arrays and radiators have been investigated to find cases that result in minimal disturbance torque. Figure 1 shows surface pressure contours on the ISS and a plane of number density contours for a particular case.
NAS Experiences of Porting CM Fortran Codes to HPF on IBM SP2 and SGI Power Challenge
NASA Technical Reports Server (NTRS)
Saini, Subhash
1995-01-01
Current Connection Machine (CM) Fortran codes developed for the CM-2 and the CM-5 represent an important class of parallel applications. Several users have employed CM Fortran codes in production mode on the CM-2 and the CM-5 for the last five to six years, constituting a heavy investment in terms of cost and time. With Thinking Machines Corporation's decision to withdraw from the hardware business and with the decommissioning of many CM-2 and CM-5 machines, the best way to protect the substantial investment in CM Fortran codes is to port the codes to High Performance Fortran (HPF) on highly parallel systems. HPF is very similar to CM Fortran and thus represents a natural transition. Conversion issues involved in porting CM Fortran codes on the CM-5 to HPF are presented. In particular, the differences between data distribution directives and the CM Fortran Utility Routines Library, as well as the equivalent functionality in the HPF Library, are discussed. Several CM Fortran codes (the Cannon algorithm for matrix-matrix multiplication, a linear solver for Ax=b, 1-D convolution for 2-D datasets, a Laplace's equation solver, and a Direct Simulation Monte Carlo (DSMC) code) have been ported to Subset HPF on the IBM SP2 and the SGI Power Challenge. Speedup ratios versus number of processors for the linear solver and the DSMC code are presented.
Direct Simulation Monte Carlo Simulations of Low Pressure Semiconductor Plasma Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gochberg, L. A.; Ozawa, T.; Deng, H.
2008-12-31
The two widely used plasma deposition tools for semiconductor processing are Ionized Metal Physical Vapor Deposition (IMPVD) of metals using either planar or hollow cathode magnetrons (HCM), and inductively-coupled plasma (ICP) deposition of dielectrics in High Density Plasma Chemical Vapor Deposition (HDP-CVD) reactors. In these systems, the injected neutral gas flows are generally in the transonic to supersonic flow regime. The Hybrid Plasma Equipment Model (HPEM) has been developed and is strategically and beneficially applied to the design of these tools and their processes. For the most part, the model uses continuum-based techniques, and thus, as pressures decrease below 10 mTorr, the continuum approaches in the model become questionable. Modifications have been previously made to the HPEM to significantly improve its accuracy in this pressure regime. In particular, the Ion Monte Carlo Simulation (IMCS) was added, wherein a Monte Carlo simulation is used to obtain ion and neutral velocity distributions in much the same way as in direct simulation Monte Carlo (DSMC). As a further refinement, this work presents the first steps towards the adaptation of full DSMC calculations to replace part of the flow module within the HPEM. Six species (Ar, Cu, Ar*, Cu*, Ar+, and Cu+) are modeled in DSMC. To couple SMILE as a module to the HPEM, source functions for species, momentum and energy from plasma sources will be provided by the HPEM. The DSMC module will then compute a quasi-converged flow field that will provide neutral and ion species densities, momenta and temperatures. In this work, the HPEM results for a hollow cathode magnetron (HCM) IMPVD process using the Boltzmann distribution are compared with DSMC results using portions of those HPEM computations as an initial condition.
Development of a Detailed Surface Chemistry Framework in DSMC
NASA Technical Reports Server (NTRS)
Swaminathan-Gopalan, K.; Borner, A.; Stephani, K. A.
2017-01-01
Many of the current direct simulation Monte Carlo (DSMC) codes still employ only simple surface catalysis models. These include only basic mechanisms such as dissociation, recombination, and exchange reactions, without any provision for adsorption and finite-rate kinetics. Incorporating finite-rate chemistry at the surface is increasingly becoming a necessity for various applications such as high-speed re-entry flows over thermal protection systems (TPS), micro-electro-mechanical systems (MEMS), surface catalysis, etc. In recent years, relatively few works have examined finite-rate surface reaction modeling using the DSMC method. In this work, a generalized finite-rate surface chemistry framework incorporating a comprehensive list of reaction mechanisms is developed and implemented in the DSMC solver SPARTA. The mechanisms include adsorption, desorption, Langmuir-Hinshelwood (LH), Eley-Rideal (ER), Collision Induced (CI), condensation, sublimation, etc. The approach is to stochastically model the various competing reactions occurring on a set of active sites. Both gas-surface (e.g., ER, CI) and pure-surface (e.g., LH, desorption) reaction mechanisms are incorporated. The reaction mechanisms can also be catalytic or surface-altering, based on the participation of bulk-phase species (e.g., bulk carbon atoms). Marschall and MacLean developed a general formulation in which multiple phases and surface sites are used, and we adopt a similar convention in the current work. Microscopic parameters of reaction probabilities (for gas-surface reactions) and frequencies (for pure-surface reactions) that are required for DSMC are computed from the surface properties and macroscopic parameters such as rate constants, sticking coefficients, etc. The energy and angular distributions of the products are decided based on the reaction type and input parameters. Thus, the user has the capability to model various surface reactions via user-specified reaction rate constants, surface properties and parameters.
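A minimal, hedged sketch of the stochastic selection step described above: when a gas particle strikes an active site, one outcome is drawn from user-supplied event probabilities, with the remainder treated as simple reflection. The event names and numbers are illustrative assumptions, not SPARTA's actual interface.

    import random

    def select_surface_event(probabilities):
        # probabilities: dict mapping event name -> probability (sum <= 1).
        # Any leftover probability is treated as plain reflection.
        r = random.random()
        cumulative = 0.0
        for event, p in probabilities.items():
            cumulative += p
            if r < cumulative:
                return event
        return "reflect"

    events = {"adsorb": 0.20, "eley_rideal": 0.05, "collision_induced": 0.01}
    print(select_surface_event(events))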
Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp
The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.
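As a hedged illustration of what the Uehling-Uhlenbeck collision term adds relative to classical DSMC: candidate collisions acquire quantum statistical factors (1 + theta*f) for the post-collision states, with theta = +1 for a Bose gas (enhancement) and -1 for a Fermi gas (blocking). The occupation values below are purely illustrative.

    def uu_acceptance(p_classical, f_post1, f_post2, theta=+1):
        # Collision acceptance probability with Uehling-Uhlenbeck factors
        # applied to the two post-collision occupation numbers.
        return p_classical * (1.0 + theta * f_post1) * (1.0 + theta * f_post2)

    # Bose enhancement: already-occupied final states make the event more likely.
    print(uu_acceptance(0.3, f_post1=0.5, f_post2=0.2, theta=+1))  # 0.54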
Particle kinetic simulation of high altitude hypervelocity flight
NASA Technical Reports Server (NTRS)
Boyd, Iain; Haas, Brian L.
1994-01-01
Rarefied flows about hypersonic vehicles entering the upper atmosphere or through nozzles expanding into a near vacuum may only be simulated accurately with a direct simulation Monte Carlo (DSMC) method. Under this grant, researchers enhanced the models employed in the DSMC method and performed simulations in support of existing NASA projects or missions. DSMC models were developed and validated for simulating rotational, vibrational, and chemical relaxation in high-temperature flows, including effects of quantized anharmonic oscillators and temperature-dependent relaxation rates. State-of-the-art advancements were made in simulating coupled vibration-dissociation recombination for post-shock flows. Models were also developed to compute vehicle surface temperatures directly in the code rather than requiring isothermal estimates. These codes were instrumental in simulating aerobraking of NASA's Magellan spacecraft during orbital maneuvers to assess heat transfer and aerodynamic properties of the delicate satellite. NASA also depended upon simulations of entry of the Galileo probe into the atmosphere of Jupiter to provide drag and flow field information essential for accurate interpretation of an onboard experiment. Finally, the codes have been used extensively to simulate expanding nozzle flows in low-power thrusters in support of propulsion activities at NASA-Lewis. Detailed comparisons between continuum calculations and DSMC results helped to quantify the limitations of continuum CFD codes in rarefied applications.
New chemical-DSMC method in numerical simulation of axisymmetric rarefied reactive flow
NASA Astrophysics Data System (ADS)
Zakeri, Ramin; Kamali Moghadam, Ramin; Mani, Mahmoud
2017-04-01
The modified quantum kinetic (MQK) chemical reaction model introduced by Zakeri et al. is developed for applicable cases in axisymmetric reactive rarefied gas flows using the direct simulation Monte Carlo (DSMC) method. Although the MQK chemical model introduces some modifications to the quantum kinetic (QK) method, it also employs the general soft sphere collision model and the Stockmayer potential function to properly select the collision pairs in the DSMC algorithm and to capture both the attractive and repulsive intermolecular forces in rarefied gas flows. For assessment of the presented model in the simulation of more complex and applicable reacting flows, air dissociation is first studied in a single cell for equilibrium and non-equilibrium conditions. The MQK results agree well with the analytical and experimental data, and they accurately predict the characteristics of the rarefied flowfield with chemical reactions. To investigate the accuracy of the MQK chemical model in the simulation of axisymmetric flow, air dissociation is also assessed in an axial hypersonic flow around two geometries: a sphere as a benchmark case and a blunt body (STS-2) as an applicable test case. The computed results, including the translational, rotational, and vibrational temperatures, the species concentrations along the stagnation line, and the heat flux and pressure coefficient on the surface, are compared with those of other chemical methods such as the QK and total collision energy (TCE) models and with available analytical and experimental data. Generally, the MQK chemical model properly simulates the chemical reactions and predicts flowfield characteristics more accurately than the typical QK model. Although in some cases the results of the MQK approach match those of the TCE method, the main point is that the MQK does not need any experimental data or the unrealistic assumption of a specular boundary condition as used in the TCE method. Another advantage of the MQK model is a significant reduction in computational cost relative to the QK chemical model for the same accuracy, because it applies a more appropriate collision model and consequently reduces the number of particle collisions.
DSMC Simulations of Hypersonic Flows and Comparison With Experiments
NASA Technical Reports Server (NTRS)
Moss, James N.; Bird, Graeme A.; Markelov, Gennady N.
2004-01-01
This paper presents computational results obtained with the direct simulation Monte Carlo (DSMC) method for several biconic test cases in which shock interactions and flow separation-reattachment are key features of the flow. Recent ground-based experiments have been performed for several biconic configurations, and surface heating-rate and pressure measurements have been proposed for code validation studies. The present focus is to expand on the current validation activities for a relatively new DSMC code called DS2V that Bird (second author) has developed. Comparisons with experiments and other computations help clarify the agreement currently being achieved between computations and experiments and identify the range of measurement variability of the proposed validation data when benchmarked against the current computations. For the test cases with significant vibrational nonequilibrium, the effect of the vibrational energy surface accommodation on heating and other quantities is demonstrated.
Hypersonic simulations using open-source CFD and DSMC solvers
NASA Astrophysics Data System (ADS)
Casseau, V.; Scanlon, T. J.; John, B.; Emerson, D. R.; Brown, R. E.
2016-11-01
Hypersonic hybrid hydrodynamic-molecular gas flow solvers are required to satisfy the two essential requirements of any high-speed reacting code, these being physical accuracy and computational efficiency. The James Weir Fluids Laboratory at the University of Strathclyde is currently developing an open-source hybrid code which will eventually reconcile the direct simulation Monte-Carlo method, making use of the OpenFOAM application called dsmcFoam, and the newly coded open-source two-temperature computational fluid dynamics solver named hy2Foam. In conjunction with employing the CVDV chemistry-vibration model in hy2Foam, novel use is made of the QK rates in a CFD solver. In this paper, further testing is performed, in particular with the CFD solver, to ensure its efficacy before considering more advanced test cases. The hy2Foam and dsmcFoam codes have shown to compare reasonably well, thus providing a useful basis for other codes to compare against.
Comparisons of the Maxwell and CLL Gas/Surface Interaction Models Using DSMC
NASA Technical Reports Server (NTRS)
Hedahl, Marc O.
1995-01-01
Two contrasting models of gas-surface interactions are studied using the Direct Simulation Monte Carlo (DSMC) method. The DSMC calculations examine differences between the Maxwell and Cercignani-Lampis-Lord (CLL) models in predicted aerodynamic forces and heat transfer for flat-plate configurations at freestream conditions corresponding to a 140 km orbit around Venus. The size of the flat plate is that of one of the solar panels on the Magellan spacecraft, and the freestream conditions are among those experienced during aerobraking maneuvers. Results are presented for both a single flat plate and a two-plate configuration as a function of angle of attack and gas-surface accommodation coefficients. The two-plate system is not representative of the Magellan geometry, but is studied to explore possible experiments that might be used to differentiate between the two gas-surface interaction models.
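A minimal sketch of the simpler of the two models compared above, the Maxwell model: a molecule reflects specularly with probability (1 - alpha) and diffusely (fully accommodated to the wall temperature) with probability alpha. The wall orientation and input values are illustrative assumptions.

    import math, random

    K_B = 1.380649e-23  # Boltzmann constant [J/K]

    def maxwell_reflect(v, alpha, T_wall, mass):
        # Maxwell model for a wall in the z = 0 plane with outward normal +z:
        # specular reflection with probability (1 - alpha), fully accommodated
        # diffuse re-emission at T_wall with probability alpha.
        if random.random() > alpha:
            return [v[0], v[1], -v[2]]      # specular: flip normal component
        sigma = math.sqrt(K_B * T_wall / mass)
        vt1 = random.gauss(0.0, sigma)      # tangential components
        vt2 = random.gauss(0.0, sigma)
        # Normal component from the flux-weighted (Rayleigh) distribution:
        vn = sigma * math.sqrt(-2.0 * math.log(1.0 - random.random()))
        return [vt1, vt2, vn]

    print(maxwell_reflect([500.0, 0.0, -300.0], alpha=0.9, T_wall=300.0, mass=4.65e-26))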
NASA Technical Reports Server (NTRS)
Liechty, Derek S.; Burt, Jonathan M.
2016-01-01
There are many flow fields that span a wide range of length scales, where regions of both rarefied and continuum flow exist and neither direct simulation Monte Carlo (DSMC) nor computational fluid dynamics (CFD) provides the appropriate solution everywhere. Recently, a new viscous collision limited (VCL) DSMC technique was proposed to incorporate effects of physical diffusion into collision limiter calculations to make the low Knudsen number regime, normally limited to CFD, more tractable for an all-particle technique. The original work was derived for a single-species gas. The current work extends the VCL-DSMC technique to gases with multiple species. Similar derivations were performed to equate numerical and physical transport coefficients, but a more rigorous treatment of the mixture viscosity is applied. The original work also gave consideration to internal-energy non-equilibrium, and this is extended in the current work to chemical non-equilibrium.
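For context on what a mixture-viscosity treatment involves, the sketch below implements Wilke's semi-empirical mixing rule, a standard baseline; the paper's own treatment is more rigorous, and the input values here are illustrative.

    import math

    def wilke_viscosity(x, mu, M):
        # x: mole fractions, mu: species viscosities [Pa s], M: molar masses [kg/mol].
        n = len(x)
        mu_mix = 0.0
        for i in range(n):
            denom = 0.0
            for j in range(n):
                phi = (1.0 + math.sqrt(mu[i] / mu[j]) * (M[j] / M[i]) ** 0.25) ** 2 \
                      / math.sqrt(8.0 * (1.0 + M[i] / M[j]))
                denom += x[j] * phi
            mu_mix += x[i] * mu[i] / denom
        return mu_mix

    # Crude N2/O2 mixture near room temperature:
    print(wilke_viscosity([0.79, 0.21], [1.78e-5, 2.07e-5], [0.028, 0.032]))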
NASA Technical Reports Server (NTRS)
Campbell, David; Wysong, Ingrid; Kaplan, Carolyn; Mott, David; Wadsworth, Dean; VanGilder, Douglas
2000-01-01
An AFRL/NRL team has recently been selected to develop a scalable, parallel, reacting, multidimensional (SUPREM) Direct Simulation Monte Carlo (DSMC) code for the DoD user community under the High Performance Computing Modernization Office (HPCMO) Common High Performance Computing Software Support Initiative (CHSSI). This paper will introduce the JANNAF Exhaust Plume community to this three-year development effort and present the overall goals, schedule, and current status of this new code.
Predictive Modeling in Plasma Reactor and Process Design
NASA Technical Reports Server (NTRS)
Hash, D. B.; Bose, D.; Govindan, T. R.; Meyyappan, M.; Arnold, James O. (Technical Monitor)
1997-01-01
Research continues toward the improvement and increased understanding of high-density plasma tools. Such reactor systems are lauded for their independent control of ion flux and energy, enabling high etch rates with low ion damage, and for their improved ion velocity anisotropy resulting from thin collisionless sheaths and low neutral pressures. Still, with the transition to 300 mm processing, achieving etch uniformity and high etch rates concurrently may be a formidable task for such large diameter wafers, for which computational modeling can play an important role in successful reactor and process design. The inductively coupled plasma (ICP) reactor is the focus of the present investigation. The present work attempts to understand the fundamental physical phenomena of such systems through computational modeling. Simulations will be presented using both computational fluid dynamics (CFD) techniques and the direct simulation Monte Carlo (DSMC) method for argon and chlorine discharges. ICP reactors generally operate at pressures on the order of 1 to 10 mTorr. At such low pressures, rarefaction can be significant to the degree that the constitutive relations used in typical CFD techniques become invalid and a particle simulation must be employed. This work will assess the extent to which CFD can be applied and evaluate the degree to which accuracy is lost in prediction of the phenomenon of interest, i.e., etch rate. If the CFD approach is found reasonably accurate when benchmarked against DSMC and experimental results, it has the potential to serve as a design tool due to its rapid turnaround relative to DSMC. The continuum CFD simulation solves the governing equations for plasma flow using a finite difference technique with an implicit Gauss-Seidel line relaxation method for time marching toward a converged solution. The equation set consists of mass conservation for each species, separate energy equations for the electrons and heavy species, and momentum equations for the gas. The sheath is modeled by imposing the Bohm velocity on the ions near the walls. The DSMC method simulates each constituent of the gas as a separate species, which would be analogous in CFD to employing separate species mass, momentum, and energy equations. All particles, including electrons, are moved and allowed to collide with one another, with the stipulation that the electrons remain tied to the ions, consistent with the concept of ambipolar diffusion. The velocities of the electrons are allowed to be modified during collisions and are not confined to a Maxwellian distribution. These benefits come at a price in terms of computational time and memory. The DSMC and CFD are made as consistent as possible by using similar chemistry and power deposition models. Although the comparison of CFD and DSMC is interesting, the main goal of this work is the increased understanding of high-density plasma flowfields that can then direct improvements in both techniques. This work is unique in the level of the physical models employed in both the DSMC and CFD for high-density plasma reactor applications. For example, the electrons are simulated in the present DSMC work, which has not been done before for low-temperature plasma processing problems. In the CFD approach, for the first time, the charged particle transport (discharge physics) has been self-consistently coupled to the gas flow and heat transfer.
Comparison of DSMC and CFD Solutions of Fire II Including Radiative Heating
NASA Technical Reports Server (NTRS)
Liechty, Derek S.; Johnston, Christopher O.; Lewis, Mark J.
2011-01-01
The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high-mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. These flows may also contain significant radiative heating. To prepare for these missions, NASA is developing the capability to simulate rarefied, ionized flows and to then calculate the resulting radiative heating to the vehicle's surface. In this study, the DSMC codes DAC and DS2V are used to obtain charge-neutral ionization solutions. NASA's direct simulation Monte Carlo code DAC is currently being updated to include the ability to simulate charge-neutral ionized flows, to take advantage of the recently introduced Quantum-Kinetic chemistry model, and to include electronic energy levels as an additional internal energy mode. The Fire II flight test is used in this study to assess these new capabilities. The 1634-second data point was chosen so that comparisons could be made with computational fluid dynamics (CFD) solutions. The Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid. It is shown that there can be considerable variability in the vibrational temperature inferred from DSMC solutions and that, given how radiative heating is computed, the electronic temperature is much better suited for radiative calculations. To include the radiative portion of heating, the flow-field solutions are post-processed by the non-equilibrium radiation code HARA. Acceptable agreement between CFD and DSMC flow-field solutions is demonstrated, and the progress of the updates to DAC, along with an appropriate radiative heating solution, is discussed. In addition, future plans to generate higher-fidelity radiative heat transfer solutions are discussed.
DSMC Simulations of Hypersonic Flows With Shock Interactions and Validation With Experiments
NASA Technical Reports Server (NTRS)
Moss, James N.; Bird, Graeme A.
2004-01-01
The capabilities of a relatively new direct simulation Monte Carlo (DSMC) code are examined for the problem of hypersonic laminar shock/shock and shock/boundary layer interactions, where boundary layer separation is an important feature of the flow. Flow about two model configurations is considered, where both configurations (a biconic and a hollow cylinder-flare) have recent published experimental measurements. The computations are made by using the DS2V code of Bird, a general two-dimensional/axisymmetric time accurate code that incorporates many of the advances in DSMC over the past decade. The current focus is on flows produced in ground-based facilities at Mach 12 and 16 test conditions with nitrogen as the test gas and the test models at zero incidence. Results presented highlight the sensitivity of the calculations to grid resolution, sensitivity to physical modeling parameters, and comparison with experimental measurements. Information is provided concerning the flow structure and surface results for the extent of separation, heating, pressure, and skin friction.
A continuum analysis of chemical nonequilibrium under hypersonic low-density flight conditions
NASA Technical Reports Server (NTRS)
Gupta, R. N.
1986-01-01
Results of employing the continuum Navier-Stokes equations under low-density flight conditions are presented. These results are obtained with chemical nonequilibrium and multicomponent surface-slip boundary conditions. The conditions analyzed are those encountered by the nose region of the Space Shuttle Orbiter during reentry. A detailed comparison of the Navier-Stokes (NS) results is made with viscous shock-layer (VSL) and direct simulation Monte Carlo (DSMC) predictions. With the inclusion of the new surface-slip boundary conditions in the NS calculations, the surface heat transfer and other flowfield quantities adjacent to the surface agree favorably with the DSMC calculations from 75 km to 115 km in altitude. This suggests a much wider practical range for the applicability of Navier-Stokes solutions than previously thought. This is appealing because the continuum (NS and VSL) methods are commonly used to solve fluid flow problems and are less demanding in terms of computer resources than the noncontinuum (DSMC) methods.
Error estimation for CFD aeroheating prediction under rarefied flow condition
NASA Astrophysics Data System (ADS)
Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian
2014-12-01
Both the direct simulation Monte Carlo (DSMC) and computational fluid dynamics (CFD) methods have become widely used for aerodynamic prediction as reentry vehicles experience different flow regimes during flight. The implementation of slip boundary conditions in the traditional CFD method under the Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into the transitional regime, with the benefit that much less computational cost is demanded compared to DSMC simulation. Correspondingly, an increasing error arises in aeroheating calculations as the flow becomes more rarefied. To estimate the relative error of heat flux when applying this method to a rarefied flow in the transitional regime, a theoretical derivation is conducted and a dimensionless parameter ε is proposed by approximately analyzing the ratio of the second-order term to the first-order term in the heat flux expression of the Burnett equations. A DSMC simulation of hypersonic flow over a cylinder in the transitional regime is performed to test the performance of the parameter ε, compared with two other parameters, Knρ and Ma·Knρ.
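The comparison parameter Knρ is commonly defined as a gradient-length-local Knudsen number based on density. A minimal sketch of that breakdown indicator, assuming 1-D sampled profiles; the function name and the synthetic shock-like profile are illustrative only:

```python
import numpy as np

def gradient_length_knudsen(rho: np.ndarray, x: np.ndarray, mfp: np.ndarray) -> np.ndarray:
    """Gradient-length-local Knudsen number Kn_rho = mfp * |d(rho)/dx| / rho.

    rho : density samples along a line through the flow field
    x   : corresponding coordinates
    mfp : local mean free path at each sample
    """
    drho_dx = np.gradient(rho, x)
    return mfp * np.abs(drho_dx) / rho

# Example on a synthetic shock-like density profile:
x = np.linspace(0.0, 1.0, 200)
rho = 1.0 + np.tanh((x - 0.5) / 0.02)   # steep gradient near x = 0.5
mfp = np.full_like(x, 1e-3)
print(gradient_length_knudsen(rho, x, mfp).max())
```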
Measurement and analysis of a small nozzle plume in vacuum
NASA Technical Reports Server (NTRS)
Penko, P. F.; Boyd, I. D.; Meissner, D. L.; Dewitt, K. J.
1993-01-01
Pitot pressures and flow angles are measured in the plume of a nozzle flowing nitrogen and exhausting to a vacuum. Total pressures are measured with Pitot tubes sized for specific regions of the plume, and flow angles are measured with a conical probe. The measurement area for total pressure extends 480 mm (16 exit diameters) downstream of the nozzle exit plane and radially to 60 mm (1.9 exit diameters) off the plume axis. The measurement area for flow angle extends to 160 mm (5 exit diameters) downstream and radially to 60 mm. The measurements are compared to results from a numerical simulation of the flow that is based on kinetic theory and uses the direct simulation Monte Carlo (DSMC) method. Comparisons of computed results from the DSMC method with measurements of flow angle display good agreement in the far-field of the plume and improve with increasing distance from the exit plane. Pitot pressures computed from the DSMC method are in reasonably good agreement with experimental results over the entire measurement area.
Comparison of a 3-D CFD-DSMC Solution Methodology With a Wind Tunnel Experiment
NASA Technical Reports Server (NTRS)
Glass, Christopher E.; Horvath, Thomas J.
2002-01-01
A solution method for problems that contain both continuum and rarefied flow regions is presented. The methodology is applied to flow about the 3-D Mars Sample Return Orbiter (MSRO) that has a highly compressed forebody flow, a shear layer where the flow separates from a forebody lip, and a low density wake. Because blunt body flow fields contain such disparate regions, employing a single numerical technique to solve the entire 3-D flow field is often impractical, or the technique does not apply. Direct simulation Monte Carlo (DSMC) could be employed to solve the entire flow field; however, the technique requires inordinate computational resources for continuum and near-continuum regions, and is best suited for the wake region. Computational fluid dynamics (CFD) will solve the high-density forebody flow, but continuum assumptions do not apply in the rarefied wake region. The CFD-DSMC approach presented herein may be a suitable way to obtain a higher fidelity solution.
Development of the ARISTOTLE webware for cloud-based rarefied gas flow modeling
NASA Astrophysics Data System (ADS)
Deschenes, Timothy R.; Grot, Jonathan; Cline, Jason A.
2016-11-01
Rarefied gas dynamics are important for a wide variety of applications. An improvement in the ability of general users to predict these gas flows will enable optimization of current processes and discovery of future ones. Despite this potential, most rarefied simulation software is designed by and for experts in the community. This has resulted in low adoption of the methods outside of the immediate rarefied gas dynamics community. This paper outlines an ongoing effort to create a rarefied gas dynamics simulation tool that can be used by a general audience. The tool leverages a direct simulation Monte Carlo (DSMC) library that is available to the entire community and a web-based simulation process that enables all users to take advantage of high-performance computing capabilities. First, the DSMC library and simulation architecture are described. Then the DSMC library is used to predict a number of representative transient gas flows that are applicable to the rarefied gas dynamics community. The paper closes with a summary and future directions.
DSMC simulations of shock interactions about sharp double cones
NASA Astrophysics Data System (ADS)
Moss, James N.
2001-08-01
This paper presents the results of a numerical study of shock interactions resulting from Mach 10 flow about sharp double cones. Computations are made by using the direct simulation Monte Carlo (DSMC) method of Bird. The sensitivity and characteristics of the interactions are examined by varying flow conditions, model size, and configuration. The range of conditions investigated includes those for which experiments have been or will be performed in the ONERA R5Ch low-density wind tunnel and the Calspan-University of Buffalo Research Center (CUBRC) Large Energy National Shock (LENS) tunnel.
Microscale Modeling of Porous Thermal Protection System Materials
NASA Astrophysics Data System (ADS)
Stern, Eric C.
Ablative thermal protection system (TPS) materials play a vital role in the design of entry vehicles. Most simulation tools for ablative TPS in use today take a macroscopic approach to modeling, which involves heavy empiricism. Recent work has suggested improving the fidelity of the simulations by taking a multi-scale approach to the physics of ablation. In this work, a new approach for modeling ablative TPS at the microscale is proposed, and its feasibility and utility are assessed. This approach uses the direct simulation Monte Carlo (DSMC) method to simulate the gas flow through the microstructure, as well as the gas-surface interaction. Application of the DSMC method to this problem allows the gas-phase dynamics, which are often rarefied, to be modeled to a high degree of fidelity. Furthermore, this method allows sophisticated gas-surface interaction models to be implemented. In order to test this approach for realistic materials, a method for generating artificial microstructures that emulate those found in spacecraft TPS is developed. Additionally, a novel approach for allowing the surface to move under the influence of chemical reactions at the surface is developed and shown to be efficient and robust for performing coupled simulations of the oxidation of carbon fibers. The microscale modeling approach is first applied to simulating the steady flow of gas through the porous medium. Predictions of Darcy permeability for an idealized microstructure agree with empirical correlations from the literature, as well as with predictions from computational fluid dynamics (CFD) when the continuum assumption is valid. Expected departures are observed for conditions at which the continuum assumption no longer holds. Comparisons of simulations using a fabricated microstructure with experimental data for a real spacecraft TPS material show good agreement when similar microstructural parameters are used to build the geometry. The approach is then applied to investigating the ablation of porous materials through oxidation. A simple gas-surface interaction model is described, and an approach for coupling the surface reconstruction algorithm to the DSMC method is outlined. Simulations of single carbon fibers at representative conditions suggest this approach is feasible for simulating the ablation of porous TPS materials at scale. Additionally, the effect of various simulation parameters on in-depth morphology is investigated for random fibrous microstructures.
Effects of Chemistry on Blunt-Body Wake Structure
NASA Technical Reports Server (NTRS)
Dogra, Virendra K.; Moss, James N.; Wilmoth, Richard G.; Taylor, Jeff C.; Hassan, H. A.
1995-01-01
Results of a numerical study are presented for hypersonic low-density flow about a 70-deg blunt cone using direct simulation Monte Carlo (DSMC) and Navier-Stokes calculations. Particular emphasis is given to the effects of chemistry on the near-wake structure and on the surface quantities and the comparison of the DSMC results with the Navier-Stokes calculations. The flow conditions simulated are those experienced by a space vehicle at an altitude of 85 km and a velocity of 7 km/s during Earth entry. A steady vortex forms in the near wake for these freestream conditions for both chemically reactive and nonreactive air gas models. The size (axial length) of the vortex for the reactive air calculations is 25% larger than that of the nonreactive air calculations. The forebody surface quantities are less sensitive to the chemistry than the base surface quantities. The presence of the afterbody has no effect on the forebody flow structure or the surface quantities. The comparisons of DSMC and Navier-Stokes calculations show good agreement for the wake structure and the forebody surface quantities.
Second-Order Consensus in Multiagent Systems via Distributed Sliding Mode Control.
Yu, Wenwu; Wang, He; Cheng, Fei; Yu, Xinghuo; Wen, Guanghui
2016-11-22
In this paper, a new decoupled distributed sliding-mode control (DSMC) is proposed for second-order consensus in multiagent systems, which solves a long-standing open problem in sliding-mode control (SMC) design for coupled networked systems. A distributed full-order sliding-mode surface is designed based on homogeneity with dilation for reaching second-order consensus in multiagent systems, under which the sliding-mode states are decoupled. Then, SMC is applied to drive the decoupled sliding-mode states to their origin, the sliding-mode surface, in finite time. The states of the agents first reach the designed sliding-mode surface in finite time and then move to the second-order consensus state along the surface, also in finite time. The DSMC designed in this paper eliminates the influence of singularity problems and weakens the influence of chattering, both of which remain difficult issues in SMC systems. In addition, the DSMC provides a general decoupling framework for designing SMC in networked multiagent systems. Simulations are presented to verify the theoretical results.
Experimental validation of a direct simulation by Monte Carlo molecular gas flow model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shufflebotham, P.K.; Bartel, T.J.; Berney, B.
1995-07-01
The Sandia direct simulation Monte Carlo (DSMC) molecular/transition gas flow simulation code has significant potential as a computer-aided design tool for the design of vacuum systems in low-pressure plasma processing equipment. The purpose of this work was to verify the accuracy of this code through direct comparison to experiment. To test the DSMC model, a fully instrumented, axisymmetric vacuum test cell was constructed, and spatially resolved pressure measurements were made in N2 at flows from 50 to 500 sccm. In a "blind" test, the DSMC code was used to model the experimental conditions directly, and the results were compared to the measurements. It was found that the model predicted all the experimental findings to a high degree of accuracy. Only one modeling issue was uncovered: the axisymmetric model showed localized low-pressure spots along the axis next to surfaces. Although this artifact did not significantly alter the accuracy of the results, it did add noise to the axial data.
Low-Density Nozzle Flow by the Direct Simulation Monte Carlo and Continuum Methods
NASA Technical Reports Server (NTRS)
Chung, Chang-Hong; Kim, Sku C.; Stubbs, Robert M.; Dewitt, Kenneth J.
1994-01-01
Two different approaches, the direct simulation Monte Carlo (DSMC) method, which is based on molecular gasdynamics, and a finite-volume approximation of the Navier-Stokes equations, which is based on continuum gasdynamics, are employed in the analysis of a low-density gas flow in a small converging-diverging nozzle. The fluid experiences various kinds of flow regimes including continuum, slip, transition, and free-molecular. Results from the two numerical methods are compared with Rothe's experimental data, in which density and rotational temperature variations along the centerline and at various locations inside a low-density nozzle were measured by the electron-beam fluorescence technique. The continuum approach showed good agreement with the experimental data as far as density is concerned. The results from the DSMC method showed good agreement with the experimental data in both density and rotational temperature. It is also shown that the simulation parameters, such as the gas/surface interaction model, the energy exchange model between rotational and translational modes, and the viscosity-temperature exponent, have substantial effects on the results of the DSMC method.
Direct simulation Monte Carlo method for gas flows in micro-channels with bends with added curvature
NASA Astrophysics Data System (ADS)
Tisovský, Tomáš; Vít, Tomáš
Gas flows in micro-channels are simulated using dsmcFoam, an open-source Direct Simulation Monte Carlo (DSMC) code for general application to rarefied gas flows, written within the framework of the open-source C++ toolbox OpenFOAM. The aim of this paper is to investigate the flow in a micro-channel bend with added curvature. Results are compared with flows in a channel without added curvature and in an equivalent straight channel. The effects of micro-channel bends were already thoroughly investigated by White et al.; the geometry proposed by White is also used here for reference.
Direct simulation of high-vorticity gas flows
NASA Technical Reports Server (NTRS)
Bird, G. A.
1987-01-01
The computational limitations associated with the molecular dynamics (MD) method and the direct simulation Monte Carlo (DSMC) method are reviewed in the context of the computation of dilute gas flows with high vorticity. It is concluded that the MD method is generally limited to the dense gas case in which the molecular diameter is one-tenth or more of the mean free path. It is shown that the cell size in DSMC calculations should be small in comparison with the mean free path, and that this may be facilitated by a new subcell procedure for the selection of collision partners.
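The subcell procedure mentioned in this abstract is easy to state concretely: each DSMC cell is subdivided so that collision partners are drawn from the same subcell, keeping pair separations well below the local mean free path. A minimal 2-D sketch under those assumptions; the function interface and the requested pair count are illustrative, not Bird's implementation:

```python
import random
from collections import defaultdict

def select_collision_pairs(positions, cell_origin, cell_size, n_sub, n_pairs):
    """Pick collision pairs from within subcells of one DSMC cell.

    positions : list of (x, y) particle positions inside the cell
    n_sub     : subcells per direction (subcell size should be small
                relative to the local mean free path)
    n_pairs   : number of pairs requested by the collision-rate calculation
    """
    # Bin particle indices by subcell.
    subcells = defaultdict(list)
    dx = cell_size / n_sub
    for i, (x, y) in enumerate(positions):
        ix = min(int((x - cell_origin[0]) / dx), n_sub - 1)
        iy = min(int((y - cell_origin[1]) / dx), n_sub - 1)
        subcells[(ix, iy)].append(i)

    # Draw pairs only from subcells holding at least two particles, so that
    # collision partners are near neighbors in physical space.
    pairs = []
    occupied = [s for s in subcells.values() if len(s) >= 2]
    while occupied and len(pairs) < n_pairs:
        members = random.choice(occupied)
        pairs.append(tuple(random.sample(members, 2)))
    return pairs
```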
Simulations of Ground and Space-Based Oxygen Atom Experiments
NASA Technical Reports Server (NTRS)
Finchum, A. (Technical Monitor); Cline, J. A.; Minton, T. K.; Braunstein, M.
2003-01-01
A low-earth orbit (LEO) materials erosion scenario and the ground-based experiment designed to simulate it are compared using the direct-simulation Monte Carlo (DSMC) method. The DSMC model provides a detailed description of the interactions between the hyperthermal gas flow and a normally oriented flat plate for each case. We find that while the general characteristics of the LEO exposure are represented in the ground-based experiment, multi-collision effects can potentially alter the impact energy and directionality of the impinging molecules in the ground-based experiment. Multi-collision phenomena also affect downstream flux measurements.
Direct Simulation of Reentry Flows with Ionization
NASA Technical Reports Server (NTRS)
Carlson, Ann B.; Hassan, H. A.
1989-01-01
The Direct Simulation Monte Carlo (DSMC) method is applied in this paper to the study of rarefied, hypersonic, reentry flows. The assumptions and simplifications involved with the treatment of ionization, free electrons and the electric field are investigated. A new method is presented for the calculation of the electric field and handling of charged particles with DSMC. In addition, a two-step model for electron impact ionization is implemented. The flow field representing a 10 km/sec shock at an altitude of 65 km is calculated. The effects of the new modeling techniques on the calculation results are presented and discussed.
Search strategy in a complex and dynamic environment (the Indian Ocean case)
NASA Astrophysics Data System (ADS)
Loire, Sophie; Arbabi, Hassan; Clary, Patrick; Ivic, Stefan; Crnjaric-Zic, Nelida; Macesic, Senka; Crnkovic, Bojan; Mezic, Igor; UCSB Team; Rijeka Team
2014-11-01
The disappearance of Malaysia Airlines Flight 370 (MH370) in the early morning hours of 8 March 2014 exposed a disconcerting lack of efficient methods for identifying where and how to look for missing objects in a complex and dynamic environment. The search area for plane debris is a remote part of the Indian Ocean, and lawnmower-type searches have been unsuccessful so far. Lagrangian kinematics of mesoscale features are visible in hypergraph maps of the Indian Ocean surface currents. Without precise knowledge of the crash site, these maps give an estimate of the time evolution of any initial distribution of plane debris and permit the design of a search strategy. The Dynamic Spectral Multiscale Coverage search algorithm is modified to search a spatial distribution of targets that evolves with time following the dynamics of ocean surface currents. Trajectories are generated for multiple search agents such that their spatial coverage converges to the target distribution. Central to this DSMC algorithm is a metric for ergodicity.
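In spectral multiscale coverage algorithms of this family, the ergodicity metric is typically a Sobolev-weighted sum of squared differences between Fourier coefficients of the agents' coverage and of the target distribution. A minimal sketch under that assumption, using a cosine basis on the unit square; the basis normalization, weight exponent, and names are common choices for illustration, not taken from this abstract:

```python
import numpy as np

def ergodicity_metric(agent_points, target, n_modes=10, s=1.5):
    """Spectral distance between agent coverage and a target distribution.

    agent_points : (N, 2) array of visited locations on the unit square
    target       : (M, M) array, target density sampled on a uniform grid
    Sums Sobolev-weighted squared differences of cosine-basis coefficients.
    """
    M = target.shape[0]
    grid = (np.arange(M) + 0.5) / M
    gx, gy = np.meshgrid(grid, grid, indexing="ij")
    target = target / target.sum()

    phi = 0.0
    for kx in range(n_modes):
        for ky in range(n_modes):
            basis = np.cos(np.pi * kx * gx) * np.cos(np.pi * ky * gy)
            mu_k = (target * basis).sum()  # target coefficient
            c_k = np.mean(np.cos(np.pi * kx * agent_points[:, 0]) *
                          np.cos(np.pi * ky * agent_points[:, 1]))  # coverage
            weight = (1.0 + kx**2 + ky**2) ** (-s)
            phi += weight * (c_k - mu_k) ** 2
    return phi
```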
NASA Astrophysics Data System (ADS)
Hansen, K. C.; Fougere, N.; Bieler, A. M.; Altwegg, K.; Combi, M. R.; Gombosi, T. I.; Huang, Z.; Rubin, M.; Tenishev, V.; Toth, G.; Tzou, C. Y.
2015-12-01
We have previously published results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model and its characterization of the neutral coma of comet 67P/Churyumov-Gerasimenko through detailed comparison with data collected by the ROSINA/COPS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/COmet Pressure Sensor) instrument aboard the Rosetta spacecraft [Bieler, 2015]. Results from these DSMC models have been used to create an empirical model of the near-comet coma (<200 km) of comet 67P. The empirical model characterizes the neutral coma in a comet-centered, sun-fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time, and declination. The model is a significant improvement over simpler empirical models, such as the Haser model. While the DSMC results are a more accurate representation of the coma at any given time, the advantage of a mean-state empirical model is its ease and speed of use. One use of such an empirical model is in the calculation of a total cometary coma production rate from the ROSINA/COPS data. The COPS data are in situ measurements of gas density and velocity along the Rosetta spacecraft track. Converting the measured neutral density into a production rate requires knowledge of the neutral gas distribution in the coma. Our empirical model provides this information and therefore allows us to correct for the spacecraft location to calculate a production rate as a function of heliocentric distance. We will present the full empirical model as well as the calculated neutral production rate for the period of August 2014 - August 2015 (perihelion).
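The structure of such an empirical model can be illustrated with a toy version: a spherically expanding 1/r² outflow whose density is modulated in local time and declination. Everything in this sketch, the function name, parameter values, and especially the angular modulation, is illustrative; the published model fits its dependencies to the DSMC solutions:

```python
import numpy as np

def coma_number_density(Q, v_gas_kms, r_km, local_time_h, declination_deg,
                        day_night_ratio=5.0):
    """Toy near-comet coma model: 1/r^2 outflow with angular modulation.

    Q          : global production rate, molecules/s
    v_gas_kms  : radial outflow speed, km/s
    Returns number density in cm^-3.
    """
    v_cm = v_gas_kms * 1.0e5
    r_cm = r_km * 1.0e5
    base = Q / (4.0 * np.pi * v_cm * r_cm**2)  # Haser-like spherical term
    # Illustrative day/night asymmetry: outgassing peaks at local noon
    # near the sub-solar latitude and falls toward the night side.
    day_factor = 0.5 * (1.0 + np.cos(2.0 * np.pi * (local_time_h - 12.0) / 24.0))
    lat_factor = np.clip(np.cos(np.radians(declination_deg)), 0.0, 1.0)
    modulation = 1.0 + (day_night_ratio - 1.0) * day_factor * lat_factor
    return base * modulation
```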
Transient Macroscopic Chemistry in the DSMC Method
NASA Astrophysics Data System (ADS)
Goldsworthy, M. J.; Macrossan, M. N.; Abdel-Jawad, M.
2008-12-01
In the Direct Simulation Monte Carlo method, a combination of statistical and deterministic procedures applied to a finite number of 'simulator' particles is used to model rarefied gas-kinetic processes. Traditionally, chemical reactions are modelled using information from specific colliding particle pairs. In the Macroscopic Chemistry Method (MCM), the reactions are decoupled from the specific particle pairs selected for collisions: information from all of the particles within a cell is used to determine a reaction rate coefficient for that cell. MCM has previously been applied to steady-flow DSMC simulations. Here we show how MCM can be used to model chemical kinetics in DSMC simulations of unsteady flow. Results are compared with a collision-based chemistry procedure for two binary reactions in a 1-D unsteady shock-expansion tube simulation and during the unsteady development of 2-D flow through a cavity. For the shock tube simulation, close agreement is demonstrated between the two methods for instantaneous, ensemble-averaged profiles of temperature and species mole fractions. For the cavity flow, a high degree of thermal non-equilibrium is present and non-equilibrium reaction rate correction factors are employed in MCM. Very close agreement is demonstrated for ensemble-averaged mole fraction contours predicted by the particle and macroscopic methods at three different flow times. A comparison of the accumulated number of net reactions per cell shows that both methods compute identical numbers of reaction events. For the 2-D flow, MCM required similar CPU and memory resources to the particle chemistry method. The Macroscopic Chemistry Method is applicable to any general DSMC code using any viscosity or non-reacting collision models and any non-reacting energy exchange models. MCM can be used to implement any reaction rate formulation, whether from experimental or theoretical studies.
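The cell-level decoupling lends itself to a compact sketch: evaluate an Arrhenius rate coefficient from cell-averaged properties, then convert the continuum reaction rate into an expected number of simulator events for the time step. A minimal sketch of that bookkeeping for a binary reaction A + B -> products, with placeholder rate constants that are not values from the paper:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def expected_reaction_events(nd_A, nd_B, T_cell, volume, dt, w_particle,
                             A=1.0e-16, n_exp=0.0, Ea=1.0e-19):
    """Macroscopic-chemistry-style event count for one cell and one step.

    nd_A, nd_B : number densities of the reactants (1/m^3)
    T_cell     : cell-averaged temperature used in k(T)
    w_particle : real molecules represented by one simulator particle
    """
    # Cell-averaged Arrhenius rate coefficient k(T) = A * T^n * exp(-Ea/kT).
    k_T = A * T_cell**n_exp * np.exp(-Ea / (K_B * T_cell))   # m^3/s
    rate = k_T * nd_A * nd_B * volume                        # real reactions/s
    n_events = rate * dt / w_particle                        # simulator events
    # Handle the fractional part statistically so the mean rate is exact.
    return int(n_events) + (np.random.random() < (n_events % 1.0))
```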
Plume Impingement to the Lunar Surface: A Challenging Problem for DSMC
NASA Technical Reports Server (NTRS)
Lumpkin, Forrest; Marichalar, Jermiah; Piplica, Anthony
2007-01-01
The President's Vision for Space Exploration calls for the return of human exploration of the Moon. The plans are ambitious and call for the creation of a lunar outpost. Lunar Landers will therefore be required to land near predeployed hardware, and the dust storm created by the Lunar Lander's plume impingement on the lunar surface presents a hazard. Knowledge of the number density, size distribution, and velocity of the grains in the dust cloud entrained into the flow is needed to develop mitigation strategies. An initial step toward acquiring such knowledge is simulating the associated plume impingement flow field. The following paper presents results from a loosely coupled continuum flow solver/Direct Simulation Monte Carlo (DSMC) technique for simulating the plume impingement of the Apollo Lunar Module on the lunar surface. These cases were chosen for initial study to allow for comparison with available Apollo video. The relatively high engine thrust and the desire to simulate interesting cases near touchdown result in flow that is nearly entirely continuum. The DSMC region of the flow field was simulated using NASA's DSMC Analysis Code (DAC) and must begin upstream of the impingement shock for the loosely coupled technique to succeed. It was therefore impossible to achieve mean-free-path resolution with a reasonable number of molecules (say, 100 million), as is shown. In order to mitigate accuracy and performance issues when using such large cells, advanced techniques such as collision limiting and nearest-neighbor collisions were employed. The final paper will assess the benefits and shortcomings of such techniques. In addition, the effects of plume orientation, plume altitude, and lunar topography, such as craters, on the flow field, the surface pressure distribution, and the surface shear stress distribution are presented.
A generalized form of the Bernoulli Trial collision scheme in DSMC: Derivation and evaluation
NASA Astrophysics Data System (ADS)
Roohi, Ehsan; Stefanov, Stefan; Shoja-Sani, Ahmad; Ejraei, Hossein
2018-02-01
The impetus of this research is to present a generalized Bernoulli Trial collision scheme in the context of the direct simulation Monte Carlo (DSMC) method. Previously, a succession of collision schemes mathematically based on the Kac stochastic model has been put forward. These include the Bernoulli Trial (BT), Ballot Box (BB), Simplified Bernoulli Trial (SBT), and Intelligent Simplified Bernoulli Trial (ISBT) schemes. The number of considered pairs for a possible collision in the above-mentioned schemes varies between N^(l)(N^(l) - 1)/2 in BT, 1 in BB, and (N^(l) - 1) in SBT or ISBT, where N^(l) is the instantaneous number of particles in the l-th cell. Here, we derive a generalized form of the Bernoulli Trial collision scheme (GBT) where the number of selected pairs is any desired value smaller than (N^(l) - 1), i.e., Nsel < (N^(l) - 1), keeping the collision frequency and the accuracy of the solution the same as in the original SBT and BT models. We derive two distinct formulas for the GBT scheme, where both formulas recover the BB and SBT limits if Nsel is set to 1 and (N^(l) - 1), respectively, and provide accurate solutions for a wide set of test cases. The present generalization further improves the computational efficiency of the BT-based collision models compared to the standard no-time-counter (NTC) and nearest-neighbor (NN) collision models.
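For orientation, the SBT limit that the GBT scheme generalizes can be sketched as follows: each particle (in shuffled order) is tested against one randomly chosen later partner, with the pair probability scaled by the number of remaining potential partners so that roughly N - 1 trials reproduce the cell collision rate. The probability form below is sketched after the SBT literature; see the paper for the exact GBT expressions, and all names here are ours:

```python
import random

def sbt_collision_step(velocities, sigma_cr, w, dt, cell_volume):
    """Simplified-Bernoulli-Trial-style collision selection in one cell.

    velocities : list of particle velocities in the cell
    sigma_cr   : function returning total cross section times relative
                 speed for a candidate pair
    w          : real molecules per simulator particle
    """
    N = len(velocities)
    order = list(range(N))
    random.shuffle(order)
    collided = []
    for k in range(N - 1):
        i = order[k]
        j = order[random.randint(k + 1, N - 1)]   # one partner among the rest
        p = (N - k - 1) * w * sigma_cr(velocities[i], velocities[j]) * dt / cell_volume
        if random.random() < min(p, 1.0):
            collided.append((i, j))
    return collided
```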
Coma dust scattering concepts applied to the Rosetta mission
NASA Astrophysics Data System (ADS)
Fink, Uwe; Rinaldi, Giovanna
2015-09-01
This paper describes basic concepts, as well as providing a framework, for the interpretation of the light scattered by the dust in a cometary coma as observed by instruments on a spacecraft such as Rosetta. It is shown that the expected optical depths are small enough that single scattering can be applied. Each of the quantities that contribute to the scattered intensity is discussed in detail. Using optical constants of the likely coma dust constituents, olivine, pyroxene, and carbon, the scattering properties of the dust are calculated. For the resulting observable scattering intensities, several particle size distributions are considered: a simple power law, power laws with a small-particle cutoff, and log-normal distributions with various parameters. Within the context of a simple outflow model, the standard definition of Afρ for a circular observing aperture is expanded to an equivalent Afρ for an annulus and for a specific line-of-sight observation. The resulting equivalence between the observed intensity and Afρ is used to predict observable intensities for 67P/Churyumov-Gerasimenko at the spacecraft encounter near 3.3 AU and near perihelion at 1.3 AU. This is done by normalizing particle production rates of various size distributions to agree with observed ground-based Afρ values. Various geometries for the column densities in a cometary coma are considered. The calculations for a simple outflow model are compared with more elaborate direct simulation Monte Carlo (DSMC) models to define the limits of applicability of the simpler analytical approach. Thus our analytical approach can be applied to the majority of the Rosetta coma observations, particularly beyond several nuclear radii where the dust is no longer in a collisional environment, without recourse to computationally intensive DSMC calculations for specific cases. In addition to a spherically symmetric 1-dimensional approach, we investigate column densities for the 2-dimensional DSMC model on the day and night sides of the comet. Our calculations are also applied to estimates of the dust particle densities and flux, which are useful for the in-situ experiments on Rosetta.
NASA Astrophysics Data System (ADS)
Argha, Ahmadreza; Li, Li; W. Su, Steven
2017-04-01
This paper develops a novel stabilising sliding-mode control for systems involving uncertainties as well as measurement data packet dropouts. In contrast to the existing literature, which designs the switching function using unavailable system states, a novel linear sliding function is constructed by employing only the available communicated system states for systems subject to measurement packet losses. This also makes it possible to build a novel switching component for discrete-time sliding-mode control (DSMC) using only available system states. Finally, using a numerical example, we evaluate the performance of the designed DSMC for networked systems.
NASA Technical Reports Server (NTRS)
Gupta, R. N.; Simmonds, A. L.
1986-01-01
Solutions of the Navier-Stokes equations with chemical nonequilibrium and multicomponent surface slip are presented along the stagnation streamline under low-density hypersonic flight conditions. The conditions analyzed are those encountered by the nose region of the Space Shuttle Orbiter during reentry. A detailed comparison of the Navier-Stokes (NS) results is made with viscous shock-layer (VSL) and Direct Simulation Monte Carlo (DSMC) predictions. With the inclusion of surface-slip boundary conditions in the NS calculations, the surface heat transfer and other flow field quantities adjacent to the surface agree favorably with the DSMC calculations from 75 km to 115 km in altitude. Therefore, the practical range of applicability of Navier-Stokes solutions is much wider than previously thought. This is appealing because the continuum (NS and VSL) methods are commonly used to solve fluid flow problems and are less demanding in terms of computer resources than the noncontinuum (DSMC) methods. The NS solutions agree well with the VSL results for altitudes below 92 km. An assessment is made of the frozen-flow approximation employed in the VSL calculations.
Plume flowfield analysis of the shuttle primary Reaction Control System (RCS) rocket engine
NASA Technical Reports Server (NTRS)
Hueser, J. E.; Brock, F. J.
1990-01-01
A solution was generated for the physical properties of the Shuttle RCS 4000 N (900 lb) rocket engine exhaust plume flowfield. The modeled exhaust gas consists of the five most abundant molecular species: H2, N2, H2O, CO, and CO2. The solution is for a bare RCS engine firing into a vacuum; the only additional hardware surface in the flowfield is a cylinder (representing the engine mount) which coincides with the nozzle lip outer corner at X = 0, extends to the flowfield outer boundary at X = -137 m, and is coaxial with the negative symmetry axis. Continuum gas dynamic methods and the Direct Simulation Monte Carlo (DSMC) method were combined in an iterative procedure to produce a self-consistent solution. Continuum methods were used in the RCS nozzle and in the plume as far as the P = 0.03 breakdown contour; the DSMC method was used downstream of this continuum flow boundary. The DSMC flowfield extends beyond 100 m from the nozzle exit, and thus the solution includes the far-field flow properties; substantial information is also developed on lip flow dynamics, and results are therefore presented for the flow properties in the vicinity of the nozzle lip.
DSMC simulations of Mach 20 nitrogen flows about a 70 degree blunted cone and its wake
NASA Technical Reports Server (NTRS)
Moss, James N.; Dogra, Virendra K.; Wilmoth, Richard G.
1993-01-01
Numerical results obtained with the direct simulation Monte Carlo (DSMC) method are presented for Mach 20 nitrogen flow about a 70-deg blunted cone. The flow conditions simulated are those that can be obtained in existing low-density hypersonic wind tunnels. Three sets of flow conditions are simulated with freestream Knudsen numbers ranging from 0.03 to 0.001. The focus is to characterize the wake flow under rarefied conditions. This is accomplished by calculating the influence of rarefaction on wake structure along with the impact that an afterbody has on flow features. This data report presents extensive information concerning flowfield features and surface quantities.
Investigation of Thermal Stress Convection in Nonisothermal Gases under Microgravity Conditions
NASA Technical Reports Server (NTRS)
Mackowski, Daniel W.
1999-01-01
The project has sought to ascertain the veracity of the Burnett relations, as applied to slow moving, highly nonisothermal gases, by comparison of convection and stress predictions with those generated by the DSMC method. The Burnett equations were found to provide reasonable descriptions of the pressure distribution and normal stress in stationary gases with a 1-D temperature gradient. Continuum/Burnett predictions of thermal stress convection in 2-D heated enclosures, however, are not quantitatively supported by DSMC results. For such situations, it appears that thermal creep flows, generated at the boundaries of the enclosure, will be significantly larger than the flows resulting from thermal stress in the gas.
Hypersonic Shock Interactions About a 25 deg/65 deg Sharp Double Cone
NASA Technical Reports Server (NTRS)
Moss, James N.; LeBeau, Gerald J.; Glass, Christopher E.
2002-01-01
This paper presents the results of a numerical study of shock interactions resulting from Mach 10 air flow about a sharp double cone. Computations are made with the direct simulation Monte Carlo (DSMC) method by using two different codes: the G2 code of Bird and the DAC (DSMC Analysis Code) code of LeBeau. The flow conditions are the pretest nominal free-stream conditions specified for the ONERA R5Ch low-density wind tunnel. The focus is on the sensitivity of the interactions to grid resolution while providing information concerning the flow structure and surface results for the extent of separation, heating, pressure, and skin friction.
The solution of a model problem of the atmospheric entry of a small meteoroid
NASA Astrophysics Data System (ADS)
Zalogin, G. N.; Kusov, A. L.
2016-03-01
Direct simulation Monte Carlo (DSMC) modeling is used to solve the problem of the entry of a small meteoroid into the Earth's atmosphere. The main aspects of the physical theory of meteors, such as mass loss (ablation) and the effects of aerodynamic and thermal shielding, are considered based on the numerical solution of the model problem of the atmospheric entry of an iron meteoroid. DSMC makes it possible to obtain insight into the structure of the disturbed area around the meteoroid (coma) and trace its evolution depending on entry velocity and altitude (Knudsen number) in a transitional flow regime where calculation methods used for free-molecular and continuum regimes are inapplicable.
DSMC Simulations of Apollo Capsule Aerodynamics for Hypersonic Rarefied Conditions
NASA Technical Reports Server (NTRS)
Moss, James N.; Glass, Christopher E.; Greene, Francis A.
2006-01-01
Direct simulation Monte Carlo (DSMC) simulations are performed for the Apollo capsule in the hypersonic, low-density transitional flow regime. The focus is on flow conditions similar to those experienced by the Apollo Command Module during the high-altitude portion of its reentry. Results for aerodynamic forces and moments are presented that demonstrate their sensitivity to rarefaction, that is, from free-molecular to continuum conditions. Aerodynamic data are also presented that show their sensitivity to a range of reentry velocities, encompassing conditions that include reentry from low Earth orbit, lunar return, and Mars return. The rarefied results are anchored in the continuum regime with data from Navier-Stokes simulations.
Molecular-level simulations of turbulence and its decay
Gallis, M. A.; Bitter, N. P.; Koehler, T. P.; ...
2017-02-08
Here, we provide the first demonstration that molecular-level methods based on gas kinetic theory and molecular chaos can simulate turbulence and its decay. The direct simulation Monte Carlo (DSMC) method, a molecular-level technique for simulating gas flows that resolves phenomena from molecular to hydrodynamic (continuum) length scales, is applied to simulate the Taylor-Green vortex flow. The DSMC simulations reproduce the Kolmogorov -5/3 law and agree well with the turbulent kinetic energy and energy dissipation rate obtained from direct numerical simulation of the Navier-Stokes equations using a spectral method. This agreement provides strong evidence that molecular-level methods for gases can be used to investigate turbulent flows quantitatively.
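Setting up such a simulation starts from seeding particles with the Taylor-Green mean field plus Maxwellian thermal motion. A minimal sketch, assuming the standard Taylor-Green initial velocity field (the paper's exact initialization and parameter values are not reproduced here):

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def init_taylor_green(n_particles, L, U0, T0, mass, rng=None):
    """Seed DSMC particles with a Taylor-Green vortex plus thermal motion.

    Mean field (a standard Taylor-Green choice):
        u =  U0 sin(kx) cos(ky) cos(kz)
        v = -U0 cos(kx) sin(ky) cos(kz)
        w =  0,   with k = 2*pi/L,
    superposed with Maxwellian thermal velocities at temperature T0.
    """
    rng = rng or np.random.default_rng()
    pos = rng.uniform(0.0, L, size=(n_particles, 3))
    k = 2.0 * np.pi / L
    x, y, z = pos[:, 0], pos[:, 1], pos[:, 2]
    mean = np.empty_like(pos)
    mean[:, 0] = U0 * np.sin(k * x) * np.cos(k * y) * np.cos(k * z)
    mean[:, 1] = -U0 * np.cos(k * x) * np.sin(k * y) * np.cos(k * z)
    mean[:, 2] = 0.0
    thermal = rng.normal(0.0, np.sqrt(K_B * T0 / mass), size=(n_particles, 3))
    return pos, mean + thermal
```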
Numerical Simulations Of High-Altitude Aerothermodynamics Of A Prospective Spacecraft Model
NASA Astrophysics Data System (ADS)
Vashchenkov, P. V.; Kaskovsky, A. V.; Krylov, A. N.; Ivanov, M. S.
2011-05-01
The paper describes the computations of aerothermodynamic characteristics of a promising spacecraft (Prospective Piloted Transport System) along its descent trajectory at altitudes from 120 to 60 km. The computations are performed by the DSMC method with the use of the SMILE software system and by an engineering technique (the local bridging method) with the use of the RuSat software system. The influence of real-gas effects (excitation of rotational and vibrational energy modes and chemical reactions) on the aerothermodynamic characteristics of the vehicle is studied. A comparison of results obtained by the approximate engineering method and the DSMC method allows the accuracy of prediction of aerodynamic characteristics by the local bridging method to be estimated.
Comparisons of the Maxwell and CLL gas/surface interaction models using DSMC
NASA Technical Reports Server (NTRS)
Hedahl, Marc O.; Wilmoth, Richard G.
1995-01-01
The behavior of two different models of gas-surface interactions is studied using the Direct Simulation Monte Carlo (DSMC) method. The DSMC calculations examine differences in predictions of aerodynamic forces and heat transfer between the Maxwell and the Cercignani-Lampis-Lord (CLL) models for flat plate configurations at freestream conditions corresponding to a 140 km orbit around Venus. The size of the flat plate represents one of the solar panels on the Magellan spacecraft, and the freestream conditions correspond to those experienced during aerobraking maneuvers. Results are presented for both a single flat plate and a two-plate configuration as a function of angle of attack and gas-surface accommodation coefficients. The two-plate system is not representative of the Magellan geometry but is studied to explore possible experiments that might be used to differentiate between the two gas-surface interaction models. The Maxwell and CLL models produce qualitatively similar results for the aerodynamic forces and heat transfer on a single flat plate. However, the flow fields produced with the two models are qualitatively different for both the single-plate and two-plate calculations. These differences in the flowfield lead to predictions of the angle of attack for maximum heat transfer in a two-plate configuration that are distinctly different for the two gas-surface interaction models.
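The Maxwell model referenced above is simple enough to sketch directly: with probability equal to the accommodation coefficient, the molecule reflects diffusely from a wall-temperature Maxwellian, and otherwise specularly. A minimal sketch, with the interface names ours (the CLL model, which accommodates normal and tangential components separately, is more involved and is not reproduced here):

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def maxwell_reflect(v_in, normal, T_wall, mass, alpha, rng=None):
    """Maxwell gas-surface model: diffuse with probability alpha, else specular.

    v_in   : incident velocity (3,); normal : inward unit wall normal (3,)
    alpha  : accommodation coefficient in [0, 1]
    """
    rng = rng or np.random.default_rng()
    v_in = np.asarray(v_in, dtype=float)
    normal = np.asarray(normal, dtype=float)
    if rng.random() >= alpha:
        # Specular: reverse the normal velocity component.
        return v_in - 2.0 * np.dot(v_in, normal) * normal
    # Diffuse: sample from a wall-temperature Maxwellian half-space.
    s = np.sqrt(K_B * T_wall / mass)
    t1 = np.cross(normal, [1.0, 0.0, 0.0])
    if np.linalg.norm(t1) < 1e-12:
        t1 = np.cross(normal, [0.0, 1.0, 0.0])
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(normal, t1)
    vn = s * np.sqrt(-2.0 * np.log(rng.random()))   # flux-biased normal speed
    return vn * normal + rng.normal(0.0, s) * t1 + rng.normal(0.0, s) * t2
```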
Consistent post-reaction vibrational energy redistribution in DSMC simulations using TCE model
NASA Astrophysics Data System (ADS)
Borges Sebastião, Israel; Alexeenko, Alina
2016-10-01
The direct simulation Monte Carlo (DSMC) method has been widely applied to study shockwaves, hypersonic reentry flows, and other nonequilibrium flow phenomena. Although there is currently active research on high-fidelity models based on ab initio data, the total collision energy (TCE) and Larsen-Borgnakke (LB) models remain the most often used chemistry and relaxation models in DSMC simulations, respectively. The conventional implementation of the discrete LB model, however, may not satisfy detailed balance when recombination and exchange reactions play an important role in the flow energy balance. This issue can become even more critical in reacting mixtures involving polyatomic molecules, such as in combustion. In this work, this important shortcoming is addressed and an empirical approach to consistently specify the post-reaction vibrational states close to thermochemical equilibrium conditions is proposed within the TCE framework. Following Bird's quantum-kinetic (QK) methodology for populating post-reaction states, the new TCE-based approach involves two main steps. The state-specific TCE reaction probabilities for a forward reaction are first pre-computed from equilibrium 0-D simulations. These probabilities are then employed to populate the post-reaction vibrational states of the corresponding reverse reaction. The new approach is illustrated by application to exchange and recombination reactions relevant to H2-O2 combustion processes.
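For context, the conventional discrete LB step that the proposed post-reaction redistribution builds on can be sketched compactly: a candidate vibrational level is drawn uniformly from the energetically allowed range and accepted with Bird's standard probability. The simple-harmonic-oscillator form is assumed here, and the function name is ours:

```python
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def sample_vib_level_LB(E_coll, theta_v, omega):
    """Discrete Larsen-Borgnakke sampling of a post-collision vibrational level.

    E_coll  : total collision energy available (J)
    theta_v : characteristic vibrational temperature (K)
    omega   : VHS viscosity-temperature exponent
    """
    i_max = int(E_coll / (K_B * theta_v))        # highest allowed level
    while True:
        i = random.randint(0, i_max)             # uniform candidate level
        prob = (1.0 - i * K_B * theta_v / E_coll) ** (1.5 - omega)
        if random.random() < prob:               # acceptance-rejection step
            return i
```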
Study of Plume Impingement Effects in the Lunar Lander Environment
NASA Technical Reports Server (NTRS)
Marichalar, Jeremiah; Prisbell, A.; Lumpkin, F.; LeBeau, G.
2010-01-01
Plume impingement effects from the descent and ascent engine firings of the Lunar Lander were analyzed in support of the Lunar Architecture Team under the Constellation Program. The descent stage analysis was performed to obtain shear and pressure forces on the lunar surface as well as velocity and density profiles in the flow field in an effort to understand lunar soil erosion and ejected soil impact damage which was analyzed as part of a separate study. A CFD/DSMC decoupled methodology was used with the Bird continuum breakdown parameter to distinguish the continuum flow from the rarefied flow. The ascent stage analysis was performed to ascertain the forces and moments acting on the Lunar Lander Ascent Module due to the firing of the main engine on take-off. The Reacting and Multiphase Program (RAMP) method of characteristics (MOC) code was used to model the continuum region of the nozzle plume, and the Direct Simulation Monte Carlo (DSMC) Analysis Code (DAC) was used to model the impingement results in the rarefied region. The ascent module (AM) was analyzed for various pitch and yaw rotations and for various heights in relation to the descent module (DM). For the ascent stage analysis, the plume inflow boundary was located near the nozzle exit plane in a region where the flow number density was large enough to make the DSMC solution computationally expensive. Therefore, a scaling coefficient was used to make the DSMC solution more computationally manageable. An analysis of the effectiveness of this scaling technique was performed by investigating various scaling parameters for a single height and rotation of the AM. Because the inflow boundary was near the nozzle exit plane, another analysis was performed investigating three different inflow contours to determine the effects of the flow expansion around the nozzle lip on the final plume impingement results.
Extension of a hybrid particle-continuum method for a mixture of chemical species
NASA Astrophysics Data System (ADS)
Verhoff, Ashley M.; Boyd, Iain D.
2012-11-01
Due to the physical accuracy and numerical efficiency achieved by analyzing transitional, hypersonic flow fields with hybrid particle-continuum methods, this paper describes a Modular Particle-Continuum (MPC) method and its extension to include multiple chemical species. Considerations that are specific to a hybrid approach for simulating gas mixtures are addressed, including a discussion of the Chapman-Enskog velocity distribution function (VDF) for near-equilibrium flows, and consistent viscosity models for the individual CFD and DSMC modules of the MPC method. Representative results for a hypersonic blunt-body flow are then presented, where the flow field properties, surface properties, and computational performance are compared for simulations employing full CFD, full DSMC, and the MPC method.
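One of the consistency requirements mentioned above, matching viscosity models between the CFD and DSMC modules, is concrete: DSMC's variable-hard-sphere (VHS) collision model implies a power-law viscosity, so the CFD module should use the same law rather than, say, Sutherland's. A minimal sketch; the N2 reference values are typical textbook numbers used only as an example:

```python
def vhs_viscosity(T, mu_ref, T_ref, omega):
    """VHS power-law viscosity, mu(T) = mu_ref * (T / T_ref)**omega.

    Using this law in the CFD module keeps transport properties consistent
    with the DSMC collision model.
    """
    return mu_ref * (T / T_ref) ** omega

# Example: N2 with a reference viscosity of about 1.656e-5 Pa*s at 273 K
# and omega = 0.74 (typical VHS parameters).
print(vhs_viscosity(1000.0, 1.656e-5, 273.0, 0.74))
```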
Poly-Gaussian model of randomly rough surface in rarefied gas flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aksenova, Olga A.; Khalidov, Iskander A.
2014-12-09
Surface roughness is simulated by a model of a non-Gaussian random process. Our results for the scattering of rarefied gas atoms from a rough surface, using a modified approach to the DSMC calculation of rarefied gas flow near a rough surface, are developed and generalized by applying the poly-Gaussian model, which represents the probability density as a mixture of Gaussian densities. The transformation of the scattering function due to the roughness is characterized by the roughness operator. Simulating the rough surface of the walls by a poly-Gaussian random field expressed as an integrated Wiener process, we derive a representation of the roughness operator that can be applied in numerical DSMC methods as well as in analytical investigations.
Uniform rovibrational collisional N2 bin model for DSMC, with application to atmospheric entry flows
NASA Astrophysics Data System (ADS)
Torres, E.; Bondar, Ye. A.; Magin, T. E.
2016-11-01
A state-to-state model for internal energy exchange and molecular dissociation allows for high-fidelity DSMC simulations. Elementary reaction cross sections for the N2(v, J) + N system were previously extracted from a quantum-chemical database originally compiled at NASA Ames Research Center. Due to the high computational cost of simulating the full range of inelastic collision processes (approx. 23 million reactions), a coarse-grain model, called the Uniform RoVibrational Collisional (URVC) bin model, can be used instead. This makes it possible to reduce the original 9390 rovibrational levels of N2 to 10 energy bins. In the present work, this reduced model is used to simulate a 2-D flow configuration, which more closely reproduces the conditions of high-speed entry into Earth's atmosphere. For this purpose, the URVC bin model had to be adapted for integration into the Rarefied Gas Dynamics Analysis System (RGDAS), a separate high-performance DSMC code capable of handling complex geometries and parallel computations. RGDAS was developed at the Institute of Theoretical and Applied Mechanics in Novosibirsk, Russia, for use by the European Space Agency (ESA) and shares many features with the well-known SMILE code developed by the same group. We show that the reduced mechanism developed previously can be implemented in RGDAS, and the results exhibit nonequilibrium effects consistent with those observed in previous 1-D simulations.
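The coarse-graining step itself reduces to simple bookkeeping: partition the level energies into equal-width bins and accumulate each bin's degeneracy and a representative energy. A sketch of that step only, with names ours; the URVC model additionally assumes a uniform population within each bin, and the bin-to-bin cross sections must still be reconstructed from the state-specific database:

```python
import numpy as np

def uniform_energy_bins(level_energies, level_degeneracies, n_bins):
    """Group rovibrational levels into uniform-width energy bins.

    A bin's degeneracy is the sum over its member levels; its energy is
    taken here as the degeneracy-weighted average (one common choice).
    """
    e = np.asarray(level_energies, dtype=float)
    g = np.asarray(level_degeneracies, dtype=float)
    edges = np.linspace(e.min(), e.max() * (1 + 1e-12), n_bins + 1)
    which = np.digitize(e, edges) - 1
    bin_g = np.array([g[which == b].sum() for b in range(n_bins)])
    bin_e = np.array([np.average(e[which == b], weights=g[which == b])
                      if bin_g[b] > 0 else 0.5 * (edges[b] + edges[b + 1])
                      for b in range(n_bins)])
    return bin_e, bin_g
```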
Analysis of vibrational-translational energy transfer using the direct simulation Monte Carlo method
NASA Technical Reports Server (NTRS)
Boyd, Iain D.
1991-01-01
A new model is proposed for energy transfer between the vibrational and translational modes for use in the direct simulation Monte Carlo (DSMC) method. The model modifies the Landau-Teller theory for a harmonic oscillator, and the transition rate is related to an experimental correlation for the vibrational relaxation time. Assessment of the model is made with respect to three different computations: relaxation in a heat bath, a one-dimensional shock wave, and hypersonic flow over a two-dimensional wedge. These studies verify that the model achieves detailed balance, and excellent agreement with experimental data is obtained in the shock wave calculation. The wedge flow computation reveals that the usual phenomenological method for simulating vibrational nonequilibrium in the DSMC technique predicts much higher vibrational temperatures in the wake region.
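The experimental correlation referred to here is commonly the Millikan-White fit for the vibrational relaxation time; tying the DSMC transition rate to it means choosing relaxation probabilities so the simulated relaxation reproduces this τ. A minimal sketch of the correlation, with the standard published constants; the N2 example values are illustrative:

```python
import math

def millikan_white_tau(T, p_atm, mu_amu, theta_v):
    """Millikan-White vibrational relaxation time (seconds).

    tau * p = exp[A (T^(-1/3) - 0.015 mu^(1/4)) - 18.42]  (atm*s),
    with A = 1.16e-3 * mu^(1/2) * theta_v^(4/3), where mu is the reduced
    mass of the collision pair in amu and theta_v the characteristic
    vibrational temperature in K.
    """
    A = 1.16e-3 * math.sqrt(mu_amu) * theta_v ** (4.0 / 3.0)
    return math.exp(A * (T ** (-1.0 / 3.0) - 0.015 * mu_amu ** 0.25) - 18.42) / p_atm

# Example: N2-N2 (mu = 14 amu, theta_v = 3371 K) at 5000 K and 1 atm,
# which gives a relaxation time of a few microseconds.
print(millikan_white_tau(5000.0, 1.0, 14.0, 3371.0))
```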
DSMC Computations for Regions of Shock/Shock and Shock/Boundary Layer Interaction
NASA Technical Reports Server (NTRS)
Moss, James N.
2001-01-01
This paper presents the results of a numerical study of hypersonic interacting flows at flow conditions that include those for which experiments have been conducted in the Calspan-University of Buffalo Research Center (CUBRC) Large Energy National Shock (LENS) tunnel and the ONERA R5Ch low-density wind tunnel. The computations are made with the direct simulation Monte Carlo (DSMC) method of Bird. The focus is on Mach 9.3 to 11.4 flows about flared axisymmetric configurations, both hollow cylinder flares and double cones. The results presented highlight the sensitivity of the calculations to grid resolution, provide results concerning the conditions for incipient separation, and provide information concerning the flow structure and surface results for the extent of separation, heating, pressure, and skin friction.
Numerical Simulation of Transitional, Hypersonic Flows using a Hybrid Particle-Continuum Method
NASA Astrophysics Data System (ADS)
Verhoff, Ashley Marie
Analysis of hypersonic flows requires consideration of multiscale phenomena due to the range of flight regimes encountered, from rarefied conditions in the upper atmosphere to fully continuum flow at low altitudes. At transitional Knudsen numbers there are likely to be localized regions of strong thermodynamic nonequilibrium effects that invalidate the continuum assumptions of the Navier-Stokes equations. Accurate simulation of these regions, which include shock waves, boundary and shear layers, and low-density wakes, requires a kinetic theory-based approach where no prior assumptions are made regarding the molecular distribution function. Because of the nature of these types of flows, there is much to be gained in terms of both numerical efficiency and physical accuracy by developing hybrid particle-continuum simulation approaches. The focus of the present research effort is the continued development of the Modular Particle-Continuum (MPC) method, where the Navier-Stokes equations are solved numerically using computational fluid dynamics (CFD) techniques in regions of the flow field where continuum assumptions are valid, and the direct simulation Monte Carlo (DSMC) method is used where strong thermodynamic nonequilibrium effects are present. Numerical solutions of transitional, hypersonic flows are thus obtained with increased physical accuracy relative to CFD alone, and improved numerical efficiency is achieved in comparison to DSMC alone because this more computationally expensive method is restricted to those regions of the flow field where it is necessary to maintain physical accuracy. In this dissertation, a comprehensive assessment of the physical accuracy of the MPC method is performed, leading to the implementation of a non-vacuum supersonic outflow boundary condition in particle domains, and more consistent initialization of DSMC simulator particles along hybrid interfaces. The relative errors between MPC and full DSMC results are greatly reduced as a direct result of these improvements. Next, a new parameter for detecting rotational nonequilibrium effects is proposed and shown to offer advantages over other continuum breakdown parameters, achieving further accuracy gains. Lastly, the capabilities of the MPC method are extended to accommodate multiple chemical species in rotational nonequilibrium, each of which is allowed to equilibrate independently, enabling application of the MPC method to more realistic atmospheric flows.
NASA Astrophysics Data System (ADS)
Gallis, M. A.; Torczynski, J. R.
2011-03-01
The ellipsoidal-statistical Bhatnagar-Gross-Krook (ES-BGK) kinetic model is investigated for steady gas-phase transport of heat, tangential momentum, and mass between parallel walls (i.e., Fourier, Couette, and Fickian flows). This investigation extends the original study of Cercignani and Tironi, who first applied the ES-BGK model to heat transport (i.e., Fourier flow) shortly after this model was proposed by Holway. The ES-BGK model is implemented in a molecular-gas-dynamics code so that results from this model can be compared directly to results from the full Boltzmann collision term, as computed by the same code with the direct simulation Monte Carlo (DSMC) algorithm of Bird. A gas of monatomic molecules is considered. These molecules collide in a pairwise fashion according to either the Maxwell or the hard-sphere interaction and reflect from the walls according to the Cercignani-Lampis-Lord model with unity accommodation coefficients. Simulations are performed at pressures from near-free-molecular to near-continuum. Unlike the BGK model, the ES-BGK model produces heat-flux and shear-stress values that both agree closely with the DSMC values at all pressures. However, for both interactions, the ES-BGK model produces molecular-velocity-distribution functions that are only qualitatively similar to those determined for the Maxwell interaction from Chapman-Enskog theory for small wall temperature differences and moment-hierarchy theory for large wall temperature differences. Moreover, the ES-BGK model does not produce accurate values of the mass self-diffusion coefficient for either interaction. Nevertheless, given its reasonable accuracy for heat and tangential-momentum transport, its sound theoretical foundation (it obeys the H-theorem), and its available extension to polyatomic molecules, the ES-BGK model may be a useful method for simulating certain classes of single-species noncontinuum gas flows, as Cercignani suggested.
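To make the ES-BGK relaxation idea concrete: in a particle implementation, each molecule is relaxed with probability dt/τ toward an anisotropic Gaussian whose covariance blends the temperature with the local pressure tensor. The following is a minimal Python sketch, not the authors' molecular-gas-dynamics code; the argon gas constant, the per-particle relaxation rule, and the omission of an exact momentum/energy correction are simplifying assumptions:

    import numpy as np

    def esbgk_relax_step(v, theta, T, tau, dt, R=208.1, Pr=2.0/3.0):
        # v:     (N, 3) particle velocities in one cell
        # theta: (3, 3) pressure tensor over density, P_ij / rho  [m^2/s^2]
        # R:     specific gas constant (208.1 J/kg/K assumed, i.e. argon)
        # ES tensor: blend of isotropic R*T with the actual stress state;
        # Pr = 2/3 recovers the correct monatomic Prandtl number.
        lam = (R * T / Pr) * np.eye(3) + (1.0 - 1.0 / Pr) * theta
        u = v.mean(axis=0)                          # cell bulk velocity
        pick = np.random.rand(len(v)) < dt / tau    # molecules relaxing this step
        n = int(pick.sum())
        if n > 0:
            # resample relaxed molecules from the anisotropic Gaussian G(lam);
            # production codes also correct the sample to conserve momentum/energy
            v[pick] = u + np.random.multivariate_normal(np.zeros(3), lam, n)
        return v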
Comparison between phenomenological and ab-initio reaction and relaxation models in DSMC
NASA Astrophysics Data System (ADS)
Sebastião, Israel B.; Kulakhmetov, Marat; Alexeenko, Alina
2016-11-01
New state-specific vibrational-translational energy exchange and dissociation models, based on ab-initio data, are implemented in the direct simulation Monte Carlo (DSMC) method and compared to the established Larsen-Borgnakke (LB) and total collision energy (TCE) phenomenological models. For consistency, both the LB and TCE models are calibrated with QCT-calculated O2+O data. The model comparison test cases include 0-D thermochemical relaxation under adiabatic conditions and 1-D normal shockwave calculations. The results show that both the ME-QCT-VT and LB models can reproduce vibrational relaxation accurately, but the TCE model is unable to reproduce nonequilibrium rates even when it is calibrated to accurate equilibrium rates. The new reaction model does capture QCT-calculated nonequilibrium rates. For all investigated cases, we discuss the prediction differences based on the new model features.
Review of blunt body wake flows at hypersonic low density conditions
NASA Technical Reports Server (NTRS)
Moss, J. N.; Price, J. M.
1996-01-01
Recent results of experimental and computational studies concerning hypersonic flows about blunted cones including their near wake are reviewed. Attention is focused on conditions where rarefaction effects are present, particularly in the wake. The experiments have been performed for a common model configuration (70 deg spherically-blunted cone) in five hypersonic facilities that encompass a significant range of rarefaction and nonequilibrium effects. Computational studies using direct simulation Monte Carlo (DSMC) and Navier-Stokes solvers have been applied to selected experiments performed in each of the facilities. In addition, computations have been made for typical flight conditions in both Earth and Mars atmospheres, hence more energetic flows than produced in the ground-based tests. Also, comparisons of DSMC calculations and forebody measurements made for the Japanese Orbital Reentry Experiment (OREX) vehicle (a 50 deg spherically-blunted cone) are presented to bridge the spectrum of ground to flight conditions.
Radiation Modeling with Direct Simulation Monte Carlo
NASA Technical Reports Server (NTRS)
Carlson, Ann B.; Hassan, H. A.
1991-01-01
Improvements in the modeling of radiation in low density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward a more detailed modeling of the physics and more reliance on available rate data. The new method is compared with the current modeling technique and both techniques are compared with available experimental data. The differences in the results are evaluated. The test case is based on experimental measurements from the AVCO-Everett Research Laboratory electric arc-driven shock tube of a normal shock wave in air at 10 km/s and 0.1 Torr. The new method agrees with the available data as well as the results from the earlier scheme and is more easily extrapolated to different flow conditions.
A Fokker-Planck based kinetic model for diatomic rarefied gas flows
NASA Astrophysics Data System (ADS)
Gorji, M. Hossein; Jenny, Patrick
2013-06-01
A Fokker-Planck based kinetic model is presented here, which also accounts for the internal energy modes characteristic of diatomic gas molecules. The model is based on a Fokker-Planck approximation of the Boltzmann equation for monatomic molecules, with phenomenological principles employed for the derivation. It is shown that the model honors the equipartition theorem in equilibrium and fulfills the Landau-Teller relaxation equations for internal degrees of freedom. The objective behind this approximate kinetic model is accuracy at reasonably low computational cost. This can be achieved because the resulting stochastic differential equations are continuous in time; therefore, no collisions between the simulated particles have to be calculated. Moreover, owing to the devised energy-conserving time-integration scheme, it is not required to resolve the collisional scales, i.e., the mean collision time and the mean free path of molecules. This, of course, gives rise to much more efficient simulations with respect to other particle methods, especially conventional direct simulation Monte Carlo (DSMC), for small and moderate Knudsen numbers. To examine the new approach, the computational cost of the model was first compared with that of DSMC, where a significant speed-up could be obtained for small Knudsen numbers. Second, the structure of a high Mach number shock (in nitrogen) was studied, and the good performance of the model for such out-of-equilibrium conditions was demonstrated. Finally, a hypersonic flow of nitrogen over a wedge was studied, where good agreement with DSMC (with a level-to-level transition model) is shown for vibrational and translational temperatures.
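The key computational point above, continuous-in-time stochastic particle paths instead of pairwise collisions, can be illustrated with the exact update of a linear-drift Langevin (Ornstein-Uhlenbeck) velocity model. The published Fokker-Planck model uses a more elaborate drift to recover the correct Prandtl number, so this is only a simplified sketch, with the nitrogen gas constant assumed:

    import numpy as np

    def fp_velocity_step(v, u, T, tau, dt, R=296.8):
        # Exact integration of dv = -(v - u)/tau dt + sqrt(2 R T / tau) dW
        # over dt; exactness holds for any dt, so the collisional scales
        # (mean collision time, mean free path) need not be resolved.
        a = np.exp(-dt / tau)
        sigma = np.sqrt(R * T * (1.0 - a * a))  # per-component std. deviation
        return u + a * (v - u) + sigma * np.random.standard_normal(v.shape)

Because no collision pairs are formed, the cost per time step is strictly linear in the particle count, which is the source of the speed-up at small Knudsen numbers noted above.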
DSMC Simulations of High Mach Number Taylor-Couette Flow
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev
2017-11-01
The main focus of this work is to characterise the Taylor-Couette flow of an ideal gas between two coaxial cylinders at Mach number Ma = Uw/√(kB Tw/m) in the range 0.01
NASA Astrophysics Data System (ADS)
Christou, Chariton; Kokou Dadzie, S.; Thomas, Nicolas; Hartogh, Paul; Jorda, Laurent; Kührt, Ekkehard; Whitby, James; Wright, Ian; Zarnecki, John
2017-04-01
While ESA's Rosetta mission has formally been completed, the data analysis and interpretation continue. Here, we address the physics of the gas flow at the surface of the comet. Understanding the sublimation of ice at the surface of the nucleus provides the initial boundary condition for studying the inner coma. The gas flow at the surface of comet 67P/Churyumov-Gerasimenko can be in the rarefied regime, and a non-Maxwellian velocity distribution may be present. In these cases, continuum methods such as the Navier-Stokes-Fourier (NSF) equations are rarely applicable. Discrete particle methods such as the direct simulation Monte Carlo (DSMC) method are usually adopted. DSMC is currently the dominant numerical method for studying rarefied gas flows and has been widely used to study cometary outflow in past years [1,2]. In the present study, we investigate numerically the gas transport near the surface of the nucleus using DSMC. We focus on the outgassing from the near-surface boundary layer into the vacuum (~20 cm above the nucleus surface). Simulations are performed using the open source code dsmcFoam on an unstructured grid. Until now, artificially generated random porous media formed by packed spheres have been used to represent the comet surface boundary layer structure [3]. In the present work, we instead used micro-computerized-tomography (micro-CT) scanned images to provide geologically realistic 3D representations of the boundary layer porous structure. The images are from Earth basins. The resolution is relatively high, in the range of a few μm. Simulations from different rock samples with high porosity (comparable to that expected at 67P) are compared. Gas properties near the surface boundary layer are presented and characterized. We have identified effects of the various porous structure properties on the gas flow fields. Temperature, density, and velocity profiles have also been analyzed. [1] J.-F. Crifo, G. Loukianov, A. Rodionov and V. Zakharov, Icarus 176 (1), 192-219 (2005). [2] Y. Liao, C. Su, R. Marschall, J. Wu, M. Rubin, I. Lai, W. Ip, H. Keller, J. Knollenberg and E. Kührt, Earth, Moon, and Planets 117 (1), 41-64 (2016). [3] Y. V. Skorov, R. Van Lieshout, J. Blum and H. U. Keller, Icarus 212 (2), 867-876 (2011).
Key issues of ultraviolet radiation of OH at high altitudes
NASA Astrophysics Data System (ADS)
Zhang, Yuhuai; Wan, Tian; Jiang, Jianzheng; Fan, Jing
2014-12-01
Ultraviolet (UV) emission radiated by hydroxyl (OH) is one of the fundamental elements in predicting the radiation signature of high-altitude, high-speed vehicles. In this work, the OH A²Σ⁺ → X²Π ultraviolet emission band behind the bow shock is computed under the experimental conditions of the second bow-shock ultraviolet flight (BSUV-2). Four related key issues are discussed, namely, the source of the hydrogen element in the high-altitude atmosphere, the formation mechanism of the OH species, an efficient computational algorithm for trace species in rarefied flows, and accurate calculation of the OH emission spectra. First, by analyzing the typical atmospheric model, the vertical distributions of the number densities of the different hydrogen-bearing species are given. According to the dominating hydrogen-bearing species, the atmosphere is divided into three zones, and the formation mechanism of the OH species is analyzed in each zone. The direct simulation Monte Carlo (DSMC) method and the Navier-Stokes equations are employed to compute the number densities of the different OH electronically and vibrationally excited states. Unlike previous work, the trace species separation (TSS) algorithm is applied twice in order to accurately calculate the densities of the OH species and its excited states. Using a non-equilibrium radiation model, the OH ultraviolet emission spectra and intensity at different altitudes are computed, and good agreement is obtained with the flight-measured data.
Axisymmetric Implementation for 3D-Based DSMC Codes
NASA Technical Reports Server (NTRS)
Stewart, Benedicte; Lumpkin, F. E.; LeBeau, G. J.
2011-01-01
The primary objective in developing NASA's DSMC Analysis Code (DAC) was to provide a high-fidelity modeling tool for 3D rarefied flows such as vacuum plume impingement and hypersonic re-entry flows [1]. The initial implementation has been expanded over time to offer other capabilities, including a novel axisymmetric implementation. Because of the inherently 3D nature of DAC, this axisymmetric implementation uses a 3D Cartesian domain and 3D surfaces. Molecules are moved in all three dimensions, but their movements are limited by physical walls to a small wedge centered on the plane of symmetry (Figure 1). Unfortunately, far from the axis of symmetry, the cell size in the direction perpendicular to the plane of symmetry (the Z-direction) may become large compared to the flow mean free path. This frequently results in inaccuracies in these regions of the domain. A new axisymmetric implementation is presented which aims to solve this issue by using Bird's approach for the molecular movement while preserving the 3D nature of the DAC software [2]. First, the computational domain is similar to that previously used, such that a wedge must still be used to define the inflow surface and solid walls within the domain. As before, molecules are created inside the inflow wedge triangles, but they are now rotated back to the symmetry plane. During the move step, molecules are moved in 3D, but instead of interacting with the wedge walls, the molecules are rotated back to the plane of symmetry at the end of the move step. This new implementation was tested for multiple flows over axisymmetric shapes, including a sphere, a cone, a double cone, and a hollow cylinder. Comparisons to previous DSMC solutions and experiments, when available, are made.
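The rotate-back step described above can be sketched as follows; this is a schematic of Bird's approach as summarized in the abstract, not DAC source code. After a molecule moves in 3D, its position is rotated about the symmetry (x) axis back onto the z = 0 plane, and its velocity vector is rotated by the same angle:

    import numpy as np

    def rotate_to_symmetry_plane(pos, vel):
        # pos, vel: length-3 arrays, with x along the axis of symmetry
        x, y, z = pos
        r = np.hypot(y, z)          # radial distance from the symmetry axis
        if r == 0.0:
            return pos, vel         # already on the axis; nothing to rotate
        c, s = y / r, z / r         # cosine/sine of the rotation angle
        vx, vy, vz = vel
        # rotate so the position lands at (x, r, 0); velocity follows suit
        return (np.array([x, r, 0.0]),
                np.array([vx, c * vy + s * vz, -s * vy + c * vz]))

Because every molecule ends each move step on the symmetry plane, the wedge walls no longer have to deflect particles, which removes the Z-direction cell-size sensitivity noted above.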
Particle Methods for Simulating Atomic Radiation in Hypersonic Reentry Flows
NASA Astrophysics Data System (ADS)
Ozawa, T.; Wang, A.; Levin, D. A.; Modest, M.
2008-12-01
With its high reentry speed, the Stardust vehicle generates a strong shock region ahead of its blunt body with a temperature above 60,000 K. These extreme Mach number flows are sufficiently energetic to initiate gas ionization processes and thermal and chemical ablation processes. The nonequilibrium gaseous radiation from the shock layer is so strong that it affects the flowfield macroparameter distributions. In this work, we present the first loosely coupled direct simulation Monte Carlo (DSMC) simulations with the particle-based photon Monte Carlo (p-PMC) method to simulate high-Mach-number reentry flows in the near-continuum flow regime. To efficiently capture the highly nonequilibrium effects, emission and absorption cross-section databases were generated using the Nonequilibrium Air Radiation (NEQAIR) code, and atomic nitrogen and oxygen radiative transport was calculated by the p-PMC method. The radiation energy change calculated by the p-PMC method has been coupled into the DSMC calculations, and the atomic radiation was found to modify the flow field and the heat flux at the wall.
Simulation of thermal transpiration flow using a high-order moment method
NASA Astrophysics Data System (ADS)
Sheng, Qiang; Tang, Gui-Hua; Gu, Xiao-Jun; Emerson, David R.; Zhang, Yong-Hao
2014-04-01
Nonequilibrium thermal transpiration flow is numerically analyzed by an extended thermodynamic approach, a high-order moment method. The captured velocity profiles of temperature-driven flow in a parallel microchannel and in a micro-chamber are compared with available kinetic data or direct simulation Monte Carlo (DSMC) results. The advantages of the high-order moment method are shown to be a combination of higher accuracy than the Navier-Stokes-Fourier (NSF) equations and lower computational cost than the DSMC method. In addition, the high-order moment method is employed to simulate thermal transpiration flow in the complex geometries of two types of Knudsen pumps. One is based on micromachined channels, where the effect of different wall temperature distributions on thermal transpiration flow is studied. The other relies on porous structures, where the variation of flow rate with changing porosity or pore surface area ratio is investigated. These simulations can help to optimize the design of a real Knudsen pump.
DSMC Simulation of Separated Flows About Flared Bodies at Hypersonic Conditions
NASA Technical Reports Server (NTRS)
Moss, James N.
2000-01-01
This paper describes the results of a numerical study of interacting hypersonic flows at conditions that can be produced in ground-based test facilities. The computations are made with the direct simulation Monte Carlo (DSMC) method of Bird. The focus is on Mach 10 flows about flared axisymmetric configurations, both hollow cylinder flares and double cones. The flow conditions are those for which experiments have been or will be performed in the ONERA R5Ch low-density wind tunnel and the Calspan-University of Buffalo Research Center (CUBRC) Large Energy National Shock (LENS) tunnel. The range of flow conditions, model configurations, and model sizes provides a significant range of shock/shock and shock/boundary layer interactions at low Reynolds number conditions. Results presented will highlight the sensitivity of the calculations to grid resolution, contrast the differences in flow structure for hypersonic cold flows and those of more energetic but still low enthalpy flows, and compare the present results with experimental measurements for surface heating, pressure, and extent of separation.
Numerical simulation of rarefied gas flow through a slit
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Jeng, Duen-Ren; De Witt, Kenneth J.; Chung, Chan-Hong
1990-01-01
Two different approaches, the finite-difference method coupled with the discrete-ordinate method (FDDO), and the direct-simulation Monte Carlo (DSMC) method, are used in the analysis of the flow of a rarefied gas from one reservoir to another through a two-dimensional slit. The cases considered are for hard vacuum downstream pressure, finite pressure ratios, and isobaric pressure with thermal diffusion, which are not well established in spite of the simplicity of the flow field. In the FDDO analysis, by employing the discrete-ordinate method, the Boltzmann equation simplified by a model collision integral is transformed to a set of partial differential equations which are continuous in physical space but are point functions in molecular velocity space. The set of partial differential equations are solved by means of a finite-difference approximation. In the DSMC analysis, three kinds of collision sampling techniques, the time counter (TC) method, the null collision (NC) method, and the no time counter (NTC) method, are used.
Program Manager - A Bimonthly Magazine of DSMC, Volume 27, Number 2.
1998-04-01
NASA Astrophysics Data System (ADS)
Li, Zhi-Hui; Peng, Ao-Ping; Zhang, Han-Xin; Yang, Jaw-Yen
2015-04-01
This article reviews rarefied gas flow computations based on nonlinear model Boltzmann equations using deterministic high-order gas-kinetic unified algorithms (GKUA) in phase space. The nonlinear Boltzmann model equations considered include the BGK model, the Shakhov model, the Ellipsoidal Statistical model and the Morse model. Several high-order gas-kinetic unified algorithms, which combine the discrete velocity ordinate method in velocity space and the compact high-order finite-difference schemes in physical space, are developed. The parallel strategies implemented with the accompanying algorithms are of equal importance. Accurate computations of rarefied gas flow problems using various kinetic models over wide ranges of Mach numbers 1.2-20 and Knudsen numbers 0.0001-5 are reported. The effects of different high resolution schemes on the flow resolution under the same discrete velocity ordinate method are studied. A conservative discrete velocity ordinate method to ensure the kinetic compatibility condition is also implemented. The present algorithms are tested for the one-dimensional unsteady shock-tube problems with various Knudsen numbers, the steady normal shock wave structures for different Mach numbers, the two-dimensional flows past a circular cylinder and a NACA 0012 airfoil to verify the present methodology and to simulate gas transport phenomena covering various flow regimes. Illustrations of large scale parallel computations of three-dimensional hypersonic rarefied flows over the reusable sphere-cone satellite and the re-entry spacecraft using almost the largest computer systems available in China are also reported. The present computed results are compared with the theoretical prediction from gas dynamics, related DSMC results, slip N-S solutions and experimental data, and good agreement can be found. The numerical experience indicates that although the direct model Boltzmann equation solver in phase space can be computationally expensive, nevertheless, the present GKUAs for kinetic model Boltzmann equations in conjunction with current available high-performance parallel computer power can provide a vital engineering tool for analyzing rarefied gas flows covering the whole range of flow regimes in aerospace engineering applications.
Numerical Modeling of Thermal Edge Flow
NASA Astrophysics Data System (ADS)
Ibrayeva, Aizhan
A gas flow can be induced between two interdigitated arrays of thin vanes when one of the arrays is uniformly heated or cooled. Sharply curved isotherms near the vane edges lead to a momentum imbalance among incident particles, which creates a Knudsen force on the vanes and a thermal edge flow in the gas. The flow is observed in a rarefied gas, when the mean free path of the molecules is comparable to the characteristic length scale of the system. To understand the physical mechanism of the flow and the Knudsen force, the configuration was numerically investigated under different degrees of gas rarefaction and different temperature gradients using the direct simulation Monte Carlo (DSMC) method. The simulations show that the force is highest when the Knudsen number is around 0.5 and becomes negligible in the free-molecular and continuum regimes. The DSMC results are analyzed from a theoretical point of view and compared to experimental data, and the simulations are validated against the RKDG method. The effect of various geometric parameters on the performance of the actuator was investigated, and suggestions were made for an improved design of the device.
AEROELASTIC SIMULATION TOOL FOR INFLATABLE BALLUTE AEROCAPTURE
NASA Technical Reports Server (NTRS)
Liever, P. A.; Sheta, E. F.; Habchi, S. D.
2006-01-01
A multidisciplinary analysis tool is under development for predicting the impact of aeroelastic effects on the functionality of inflatable ballute aeroassist vehicles in both the continuum and rarefied flow regimes. High-fidelity modules for continuum and rarefied aerodynamics, structural dynamics, heat transfer, and computational grid deformation are coupled in an integrated multi-physics, multi-disciplinary computing environment. This flexible and extensible approach allows the integration of state-of-the-art, stand-alone NASA and industry leading continuum and rarefied flow solvers and structural analysis codes into a computing environment in which the modules can run concurrently with synchronized data transfer. Coupled fluid-structure continuum flow demonstrations were conducted on a clamped ballute configuration. The feasibility of implementing a DSMC flow solver in the simulation framework was demonstrated, and loosely coupled rarefied flow aeroelastic demonstrations were performed. A NASA and industry technology survey identified CFD, DSMC and structural analysis codes capable of modeling non-linear shape and material response of thin-film inflated aeroshells. The simulation technology will find direct and immediate applications with NASA and industry in ongoing aerocapture technology development programs.
Effect of plasma distribution on propulsion performance in electrodeless plasma thrusters
NASA Astrophysics Data System (ADS)
Takao, Yoshinori; Takase, Kazuki; Takahashi, Kazunori
2016-09-01
A helicon plasma thruster consisting of a helicon plasma source and a magnetic nozzle is one of the candidates for long-lifetime thrusters because no electrodes are employed to generate or accelerate plasma. A recent experiment, however, detected non-negligible axial momentum lost to the lateral wall boundary, which degrades thruster performance when the source is operated with highly ionized gases. To investigate this mechanism, we have conducted two-dimensional axisymmetric particle-in-cell (PIC) simulations with the neutral distribution obtained by the direct simulation Monte Carlo (DSMC) method. The numerical results indicate that axially asymmetric profiles of the plasma density and potential are obtained when strong depletion of neutrals occurs at the downstream end of the source. This asymmetric potential profile accelerates ions toward the lateral wall, producing a non-negligible net axial force in the direction opposite to the thrust. Hence, reducing this asymmetry, by increasing the neutral density downstream and/or by confining the plasma with an external magnetic field, should improve the propulsion performance. These effects are also analyzed by PIC/DSMC simulations.
Evaluation of nonequilibrium boundary conditions for hypersonic rarefied gas flows
NASA Astrophysics Data System (ADS)
Le, N. T. P.; Greenshields, Ch. J.; Reese, J. M.
2012-01-01
A new Computational Fluid Dynamics (CFD) solver for high-speed viscous flows in the OpenFOAM code is validated against published experimental data and Direct Simulation Monte Carlo (DSMC) results. The laminar flat plate and circular cylinder cases are studied for Mach numbers, Ma, ranging from 6 to 12.7, and with argon and nitrogen as working gases. Simulation results for the laminar flat plate cases show that the combination of accommodation coefficient values σu = 0.7 and σT = 1.0 in the Maxwell/Smoluchowski conditions, and the coefficient values A1 = 1.5 and A2 = 1.0 in the second-order velocity slip condition, give best agreement with experimental data of surface pressure. The values σu = 0.7 and σT = 1.0 also give good agreement with DSMC data of surface pressure at the stagnation point in the circular cylinder case at Kn = 0.25. The Langmuir surface adsorption condition is also tested for the laminar flat plate case, but initial results were not as good as the Maxwell/Smoluchowski boundary conditions.
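The boundary conditions compared above have simple closed forms. A minimal sketch of the Maxwell slip, Smoluchowski jump, and second-order slip expressions follows, with the coefficient values quoted in the abstract as example inputs; note that sign conventions for the second-order term vary in the literature, so one common convention is assumed here:

    def maxwell_slip(sigma_u, lam, dudn):
        # first-order Maxwell slip velocity; dudn is the wall-normal
        # gradient of tangential velocity, lam the mean free path
        return (2.0 - sigma_u) / sigma_u * lam * dudn

    def smoluchowski_jump(sigma_T, gamma, Pr, lam, dTdn):
        # Smoluchowski temperature jump, T_gas - T_wall at the surface
        return ((2.0 - sigma_T) / sigma_T
                * 2.0 * gamma / (gamma + 1.0) * lam / Pr * dTdn)

    def second_order_slip(A1, A2, lam, dudn, d2udn2):
        # second-order velocity slip with coefficients A1, A2
        return A1 * lam * dudn - A2 * lam ** 2 * d2udn2

    # Example with the values found to work best in the paper:
    # sigma_u = 0.7, sigma_T = 1.0, A1 = 1.5, A2 = 1.0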
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2013-01-01
The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. Recently introduced molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties are extended in the current work to include electronic energy level transitions and reactions involving charged particles. These extensions are shown to agree favorably with reported transition and reaction rates from the literature for near-equilibrium conditions. Also, the extensions are applied to the second flight of the Project FIRE flight experiment at 1634 seconds with a Knudsen number of 0.001 at an altitude of 76.4 km. In order to accomplish this, NASA's direct simulation Monte Carlo code DAC was rewritten to include the ability to simulate charge-neutral ionized flows, take advantage of the recently introduced chemistry model, and include the extensions presented in this work. The 1634 second data point was chosen for comparisons to be made in order to include a CFD solution. The Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid because, although near-transitional, the flow is still considered to be continuum. It is shown that the inclusion of electronic energy levels in the DSMC simulation is necessary for flows of this nature and is required for comparison to the CFD solution. The flow field solutions are also post-processed by the nonequilibrium radiation code HARA to compute the radiative portion of the heating, which is then compared to the total heating measured in flight.
NASA Astrophysics Data System (ADS)
Hansen, Kenneth; Altwegg, Kathrin; Berthelier, Jean-Jacques; Bieler, Andre; Calmonte, Ursina; Combi, Michael; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, Tamas; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Lena; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu
2016-04-01
We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near-comet coma (<400 km) of comet 67P/Churyumov-Gerasimenko for the pre-equinox portion of its orbit. In this work we extend the empirical model to the post-equinox, post-perihelion time period. In addition, we extend the coma model to significantly larger distances from the comet (~100,000-1,000,000 km). The empirical model characterizes the neutral coma in a comet-centered, sun-fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time, and declination. Furthermore, we have generalized the model beyond application to 67P by replacing the heliocentric distance parameterizations and mapping them to production rates. Using this method, the model becomes significantly more general and can be applied to any comet. The model is a significant improvement over simpler empirical models, such as the Haser model. For 67P, the DSMC results are, of course, a more accurate representation of the coma at any given time, but the advantage of a mean-state empirical model is the ease and speed of use. One application of the empirical model is to de-trend the spacecraft motion from the ROSINA COPS and DFMS data (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Comet Pressure Sensor, Double Focusing Mass Spectrometer). The ROSINA instrument measures the neutral coma density at a single point, and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on the single point measurement. In this presentation we will present the coma production rate as a function of heliocentric distance both pre- and post-equinox and perihelion.
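The de-trending step described above amounts to scaling a modeled production rate by the ratio of measured to modeled density at the spacecraft position. A hypothetical sketch follows; the empirical model's actual functional form and coefficients are not given in the abstract, so the coma_model signature here is an assumption for illustration:

    def detrended_production_rate(n_measured, r, local_time, declination,
                                  r_helio, coma_model):
        # coma_model is assumed to return the modeled local number density
        # and the production rate it was fitted for, at the given comet-fixed
        # coordinates and heliocentric distance (hypothetical signature)
        n_model, q_model = coma_model(r, local_time, declination, r_helio)
        # a single-point density measurement rescales the modeled rate
        return q_model * n_measured / n_model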
Unified gas-kinetic scheme with multigrid convergence for rarefied flow study
NASA Astrophysics Data System (ADS)
Zhu, Yajun; Zhong, Chengwen; Xu, Kun
2017-09-01
The unified gas kinetic scheme (UGKS) is based on direct modeling of gas dynamics on the mesh size and time step scales. With the modeling of particle transport and collision in a time-dependent flux function in a finite volume framework, the UGKS can connect the flow physics smoothly from kinetic particle transport to hydrodynamic wave propagation. In comparison with the direct simulation Monte Carlo (DSMC) method, the equation-based UGKS can employ implicit techniques in the updates of macroscopic conservative variables and microscopic distribution functions. The implicit UGKS significantly increases the convergence speed for steady flow computations, especially in the highly rarefied and near-continuum regimes. In order to further improve the computational efficiency, for the first time, a geometric multigrid technique is introduced into the implicit UGKS, where the prediction step for the equilibrium state and the evolution step for the distribution function are both treated with multigrid acceleration. More specifically, a full approximate nonlinear system is employed in the prediction step for fast evaluation of the equilibrium state, and a correction linear equation is solved in the evolution step for the update of the gas distribution function. As a result, the convergence speed has been greatly improved in all flow regimes, from rarefied to continuum. The multigrid implicit UGKS (MIUGKS) is used in non-equilibrium flow studies, including microflows, such as lid-driven cavity flow and flow passing through a finite-length flat plate, and high-speed flows, such as supersonic flow over a square cylinder. The MIUGKS shows a 5-9 times efficiency increase over the previous implicit scheme. For low-speed microflow, the efficiency of MIUGKS is several orders of magnitude higher than that of the DSMC. Even for the hypersonic flow at Mach number 5 and Knudsen number 0.1, the MIUGKS is still more than 100 times faster than the DSMC method in obtaining a convergent steady state solution.
DSMC Modeling of Flows with Recombination Reactions
2017-06-23
Thermal Nonequilibrium in Hypersonic Separated Flow
2014-12-22
NASA Astrophysics Data System (ADS)
Akhlaghi, H.; Roohi, E.; Myong, R. S.
2012-11-01
Micro/nano geometries with specified wall heat flux are widely encountered in electronic cooling and micro-/nanofluidic sensors. We introduce a new technique to impose a desired (positive or negative) wall heat flux boundary condition in DSMC simulations. The technique is based on iterative adjustment of the wall temperature magnitude. It is found that the proposed iterative technique performs well numerically and can impose both positive and negative wall heat flux rates accurately. Using the present technique, rarefied gas flow through micro-/nanochannels under specified wall heat flux conditions is simulated, and unique behaviors are observed for channels with cooling walls. For example, contrary to the heating process, it is observed that cooling of micro-/nanochannel walls results in only small variations in the density field. Upstream thermal creep effects in the cooling process decrease the velocity slip despite the increase of the Knudsen number along the channel. Similarly, the cooling process decreases the curvature of the pressure distribution below the linear incompressible distribution. Our results indicate that flow cooling increases the mass flow rate through the channel, and vice versa.
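The iterative idea can be sketched as a fixed-point loop on the wall temperature: sample the wall heat flux over a DSMC averaging interval, then nudge the wall temperature up or down until the sampled flux matches the target. The proportional update below is a plausible sketch, not the authors' exact scheme:

    def update_wall_temperature(T_wall, q_sampled, q_target, relax=0.2):
        # proportional correction; relax < 1 damps oscillations between
        # DSMC sampling intervals (sign convention: q > 0 heats the gas)
        error = (q_target - q_sampled) / max(abs(q_target), 1e-30)
        return T_wall * (1.0 + relax * error)

    # loop: run DSMC for a sampling interval -> measure q_sampled ->
    # T_wall = update_wall_temperature(...) -> repeat until |error| is small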
Study of cluster behavior in the riser of CFB by the DSMC method
NASA Astrophysics Data System (ADS)
Liu, H. P.; Liu, D. Y.; Liu, H.
2010-03-01
The flow behavior of clusters in the riser of a two-dimensional (2D) circulating fluidized bed was numerically studied based on the Euler-Lagrangian approach. Gas turbulence was modeled by means of Large Eddy Simulation (LES). Particle collisions were modeled by means of the direct simulation Monte Carlo (DSMC) method. The hydrodynamic characteristics of the clusters are obtained using a cluster identification method proposed by Sharma et al. (2000). The descending clusters near the wall region and the up- and down-flowing clusters in the core were studied separately due to their different flow behaviors. The effects of superficial gas velocity on the cluster behavior were analyzed. Simulated results showed that near-wall clusters flow downward with a descent velocity of about -45 cm/s. The occurrence frequency of up-flowing clusters is higher than that of down-flowing clusters in the core of the riser. With increasing superficial gas velocity, the solid concentration and occurrence frequency of clusters decrease, while the cluster axial velocity increases. Simulated results were in agreement with experimental data. The stochastic method used in the present paper is feasible for predicting cluster flow behavior in CFBs.
Rarefied flow past a flat plate at incidence
NASA Technical Reports Server (NTRS)
Dogra, Virendra K.; Moss, James N.; Price, Joseph M.
1988-01-01
Results of a numerical study using the direct simulation Monte Carlo (DSMC) method are presented for the transitional flow about a flat plate at 40 deg incidence. The plate has zero thickness and a length of 1.0 m. The flow conditions simulated are those experienced by the Shuttle Orbiter during reentry at 7.5 km/s. The range of freestream conditions is such that the freestream Knudsen number values are between 0.02 and 8.4, i.e., conditions that encompass most of the transitional flow regime. The DSMC simulations show that transitional effects are evident when compared with free molecule results for all cases considered. The calculated results demonstrate clearly the necessity of having a means of identifying the effects of transitional flow when making aerodynamic flight measurements as are currently being made with the Space Shuttle Orbiter vehicles. Previous flight data analyses have relied exclusively on adjustments in the gas-surface interaction models without accounting for the transitional effect, which can be comparable in magnitude. The present calculations show that the transitional effect at 175 km would increase the Space Shuttle Orbiter lift-drag ratio by 90 percent over the free molecule value.
NASA Astrophysics Data System (ADS)
Yang, Guang; Weigand, Bernhard
2018-04-01
The pressure-driven gas transport characteristics through a porous medium consisting of arrays of discrete elements are investigated by using the direct simulation Monte Carlo (DSMC) method. Different porous structures are considered, accounting for both two- and three-dimensional arrangements of basic microscale and nanoscale elements. The pore-scale flow patterns in the porous medium are obtained, and the Knudsen diffusion in the pores is studied in detail for the slip and transition flow regimes. A new effective pore size of the porous medium is defined, which is a function of the porosity, the tortuosity, the contraction factor, and the intrinsic permeability of the porous medium. It is found that the Klinkenberg effect in different porous structures can be fully described by the Knudsen number characterized by the effective pore size. The accuracy of some widely used Klinkenberg correlations is evaluated using the present DSMC results. It is also found that the available correlations for apparent permeability, most of which are derived from simple pipe or channel flows, can still be applicable to more complex porous media flows when the effective pore size defined in this study is used.
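As an illustration of the quantities involved: the Knudsen number is formed from the hard-sphere mean free path and the effective pore size, and a Klinkenberg-type correlation then gives the apparent permeability. The Beskok-Karniadakis form below is one widely used correlation of the kind evaluated in the paper; the molecular diameter default (roughly nitrogen) and the constant rarefaction coefficient are assumptions for illustration:

    import math

    KB = 1.380649e-23  # Boltzmann constant [J/K]

    def knudsen_effective(T, p, d_eff, d_mol=3.7e-10):
        # hard-sphere mean free path divided by the effective pore size
        lam = KB * T / (math.sqrt(2.0) * math.pi * d_mol ** 2 * p)
        return lam / d_eff

    def apparent_permeability_bk(k_intrinsic, kn, alpha=1.2, b=-1.0):
        # Beskok-Karniadakis correlation: a rarefaction factor times a
        # slip factor; alpha is treated as a constant here for simplicity
        return k_intrinsic * (1.0 + alpha * kn) * (1.0 + 4.0 * kn / (1.0 - b * kn))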
Collisional spreading of Enceladus’ neutral cloud
NASA Astrophysics Data System (ADS)
Cassidy, T. A.; Johnson, R. E.
2010-10-01
We describe a direct simulation Monte Carlo (DSMC) model of Enceladus' neutral cloud and compare its results to observations of OH and O orbiting Saturn. The OH and O are observed far from Enceladus (at 3.95 R_S), as far out as 25 R_S for O. Previous DSMC models attributed this breadth primarily to ion/neutral scattering (including charge exchange) and molecular dissociation. However, the newly reported O observations and a reinterpretation of the OH observations (Melin, H., Shemansky, D.E., Liu, X. [2009] Planet. Space Sci., 57, 1743-1753) showed that the cloud is broader than previously thought. We conclude that the addition of neutral/neutral scattering (Farmer, A.J. [2009] Icarus, 202, 280-286), which was underestimated by previous models, brings the model results in line with the new observations. Neutral/neutral collisions happen primarily in the densest part of the cloud, near Enceladus' orbit, but contribute to the spreading by pumping up orbital eccentricity. Based on the cloud model presented here, Enceladus may be the ultimate source of oxygen for the upper atmospheres of Titan and Saturn. We also predict that large quantities of OH, O and H2O bombard Saturn's icy satellites.
NASA Astrophysics Data System (ADS)
Hansen, Kenneth C.; Altwegg, Kathrin; Bieler, Andre; Berthelier, Jean-Jacques; Calmonte, Ursina; Combi, Michael R.; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, T. I.; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Léna; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu; ROSINA Team
2016-10-01
We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near-comet water (H2O) coma of comet 67P/Churyumov-Gerasimenko. In this work we create additional empirical models for the coma distributions of CO2 and CO. The AMPS simulations are based on ROSINA DFMS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Double Focusing Mass Spectrometer) data taken over the entire timespan of the Rosetta mission. The empirical model is created using AMPS DSMC results extracted from simulations at a range of radial distances, rotation phases, and heliocentric distances. The simulation results are then averaged over a comet rotation and fitted to an empirical model distribution. Model coefficients are then fitted to piecewise-linear functions of heliocentric distance. The final product is an empirical model of the coma distribution which is a function of heliocentric distance, radial distance, and sun-fixed longitude and latitude angles. The model clearly mimics the behavior of water, shifting production from North to South across the inbound equinox, while the CO2 production is always in the South. The empirical model can be used to de-trend the spacecraft motion from the ROSINA COPS and DFMS data. The ROSINA instrument measures the neutral coma density at a single point, and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on single point measurements. In this presentation we will present the coma production rates as a function of heliocentric distance for the entire Rosetta mission. This work was supported by contracts JPL#1266313 and JPL#1266314 from the US Rosetta Project and NASA grant NNX14AG84G from the Planetary Atmospheres Program.
DSMC Simulations of Blunt Body Flows for Mars Entries: Mars Pathfinder and Mars Microprobe Capsules
NASA Technical Reports Server (NTRS)
Moss, James N.; Wilmoth, Richard G.; Price, Joseph M.
1997-01-01
The hypersonic transitional flow aerodynamics of the Mars Pathfinder and Mars Microprobe capsules are simulated with the direct simulation Monte Carlo method. Calculations of axial-force, normal-force, and static pitching-moment coefficients were obtained over an angle-of-attack range comparable to actual flight requirements. Comparisons are made with modified Newtonian and free-molecular-flow calculations. Aerothermal results were also obtained for zero-incidence entry conditions.
Comparison of Hall Thruster Plume Expansion Model with Experimental Data
2006-05-23
The model studied is a hybrid particle-in-cell (PIC) model that tracks particles along an unstructured tetrahedral mesh. Comparisons are made with measurements of the ion current density profile, ion energy distributions, and ion species fraction distributions obtained with a nude Faraday probe, among other instruments.
Comparison of DSMC Reaction Models with QCT Reaction Rates for Nitrogen
2016-07-17
Comparison with measurements is the final goal; the validation here addresses model verification and parameter adjustment. Four chemistry models are examined, including total collision energy (TCE), quantum kinetic (QK), and vibration-dissociation-favoring models.
State-specific catalytic recombination boundary condition for DSMC methods in aerospace applications
NASA Astrophysics Data System (ADS)
Bariselli, F.; Torres, E.; Magin, T. E.
2016-11-01
Accurate characterization of the hypersonic flow around a vehicle during its atmospheric entry is important for a precise quantification of heat flux margins. In some cases, exothermic reactions promoted by the catalytic properties of the surface material can significantly contribute to the overall heat flux. In this work, the effect of catalytic recombination of atomic nitrogen is examined within the framework of a state-specific DSMC implementation. State-to-state reaction cross sections are derived from a detailed quantum-chemical database for the N2(v, J) + N system. A coarse-grain model is used to reduce the number of internal states and state-specific reactions to a manageable level. The catalytic boundary condition is based on a phenomenological approach, and the state-specific surface recombination probabilities can be imposed by the user. This can represent an important aspect in modelling catalysis, since experiments and molecular dynamics suggest that only part of the chemical energy is absorbed by the wall, with the formed molecules leaving the surface in an excited state. The implementation is verified in a simplified geometrical configuration by comparing the numerical results with an analytical solution developed for a 1D diffusion problem in a binary mixture. Then, the effect of catalysis in a hypersonic flow along the stagnation line of a blunt body is studied.
DSMC simulations of vapor transport toward development of the lithium vapor box divertor concept
NASA Astrophysics Data System (ADS)
Jagoe, Christopher; Schwartz, Jacob; Goldston, Robert
2016-10-01
The lithium vapor box divertor concept attempts to achieve volumetric dissipation of the high heat efflux from a fusion power system. The vapor extracts the heat of the incoming plasma by ionization and radiation, while remaining localized in the vapor box due to differential pumping based on rapid condensation. Preliminary calculations with lithium vapor at densities appropriate for an NSTX-U-scale machine give Knudsen numbers between 0.01 and 1, outside the range of both continuum fluid dynamics and collisionless Monte Carlo. The direct simulation Monte Carlo (DSMC) method, however, can simulate rarefied gas flows in this regime. Using the solver contained in the OpenFOAM package, pressure-driven flows of water vapor will be analyzed. The use of water vapor in the relevant range of Knudsen number allows for a flexible similarity experiment to verify the reliability of the code before moving to tests with lithium. The simulation geometry consists of chains of boxes on a temperature gradient, connected by slots with widths that are a representative fraction of the dimensions of the box. We expect choked flow, sonic shocks, and order-of-magnitude pressure and density drops from box to box, but this expectation will be tested first in simulation and then in experiment. This work is supported by the Princeton Environmental Institute.
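For orientation, the quoted Knudsen-number range follows directly from the hard-sphere mean free path and a box dimension. The sketch below uses an assumed effective molecular diameter and shows the regime bands that motivate DSMC here:

    import math

    KB = 1.380649e-23  # Boltzmann constant [J/K]

    def mean_free_path(T, p, d=3.0e-10):
        # hard-sphere estimate; d is an assumed effective diameter [m]
        return KB * T / (math.sqrt(2.0) * math.pi * d * d * p)

    def regime(kn):
        # rough classification used when choosing a solver
        if kn < 0.01: return "continuum (NSF valid)"
        if kn < 0.1:  return "slip"
        if kn < 10.0: return "transition (DSMC appropriate)"
        return "free molecular"

    # Kn = mean_free_path(T, p) / L for a box of characteristic size L;
    # values of 0.01-1, as quoted above, land in the slip/transition bands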
Hypersonic separated flows about "tick" configurations with sensitivity to model design
NASA Astrophysics Data System (ADS)
Moss, J. N.; O'Byrne, S.; Gai, S. L.
2014-12-01
This paper presents computational results obtained by applying the direct simulation Monte Carlo (DSMC) method to hypersonic nonequilibrium flow about "tick-shaped" model configurations. These test models produce a complex flow in which the nonequilibrium and rarefied aspects of the flow are initially enhanced as the flow passes over an expansion surface; the flow then encounters a compression surface that can induce flow separation. The resulting flow is such that meaningful numerical simulations must be able to account for a significant range of rarefaction effects, hence the application of the DSMC method in the current study, as the flow spans several regimes, including transitional, slip, and continuum. The current focus is to examine the sensitivity of both the model surface response (heating, friction, and pressure) and the flowfield structure to assumptions regarding surface boundary conditions and, more extensively, the impact of model design as influenced by the leading edge configuration as well as the geometrical features of the expansion and compression surfaces. Numerical results indicate a strong sensitivity to both the extent of leading edge sharpness and the magnitude of the leading edge bevel angle. Also, the length of the expansion surface for a fixed compression surface has a significant impact on the extent of separated flow.
A diffusive information preservation method for small Knudsen number flows
NASA Astrophysics Data System (ADS)
Fei, Fei; Fan, Jing
2013-06-01
The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Like the DSMC method, however, that approach suffers from statistical noise. To solve this problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and to obtain the flow velocity and temperature by sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ~ 10^-3-10^-4 have been investigated. It is shown that the IP calculations are not only accurate but also efficient, because they make it possible to use a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.
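The D-IP idea, tracking particle motion from the diffusive standpoint while sampling smooth preserved quantities, can be caricatured as follows. This is a schematic under stated assumptions, not the authors' formulation; the diffusion coefficient is left as an input:

    import numpy as np

    def dip_move(x, u_ip, D, dt):
        # advance positions by the preserved (IP) macroscopic velocity plus
        # a Brownian displacement; dt may greatly exceed the mean collision
        # time, which is the source of the claimed efficiency
        return (x + u_ip * dt
                + np.sqrt(2.0 * D * dt) * np.random.standard_normal(x.shape))

    def sample_cell_velocity(u_ip_in_cell):
        # macroscopic velocity from cell-averaging the smooth IP quantities,
        # which scatter far less than the thermal velocities themselves
        return u_ip_in_cell.mean(axis=0)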
NASA Astrophysics Data System (ADS)
Takase, Kazuki; Takahashi, Kazunori; Takao, Yoshinori
2018-02-01
The effects of neutral distribution and an external magnetic field on plasma distribution and thruster performance are numerically investigated using a particle-in-cell simulation with Monte Carlo collisions (PIC-MCC) and the direct simulation Monte Carlo (DSMC) method. The modeled thruster consists of a quartz tube 1 cm in diameter and 3 cm in length, where a double-turn rf loop antenna is wound at the center of the tube and a solenoid is placed between the loop antenna and the downstream tube exit. A xenon propellant is introduced from both the upstream and downstream sides of the thruster, and the flow rates are varied while maintaining the total gas flow rate of 30 μg/s. The PIC-MCC calculations have been conducted using the neutral distribution obtained from the DSMC calculations, which were applied with different strengths of the magnetic field. The numerical results show that both the downstream gas injection and the external magnetic field with a maximum strength near the thruster exit lead to a shift of the plasma density peak from the upstream to the downstream side. Consequently, a larger total thrust is obtained when increasing the downstream gas injection and the magnetic field strength, which qualitatively agrees with a previous experiment using a helicon plasma source.
Numerical investigation of rarefaction effects in the vicinity of a sharp leading edge
NASA Astrophysics Data System (ADS)
Pan, Shaowu; Gao, Zhenxun; Lee, Chunhian
2014-12-01
This paper presents a study of rarefaction effects on hypersonic flow over a sharp leading edge. Both a continuum approach and a kinetic method are employed: a widely used commercial computational fluid dynamics Navier-Stokes-Fourier (CFD-NSF) solver, Fluent, together with a direct simulation Monte Carlo (DSMC) code developed by the authors, is used to simulate the transition regime with Knudsen numbers ranging from 0.005 to 0.2. It is found that Fluent predicts the wall fluxes for hypersonic argon flow over the sharp leading edge at the lowest Knudsen number considered here (Kn = 0.005), while for the other cases it also agrees well with DSMC except near the sharp leading edge. Among the wall fluxes, the pressure coefficient is found to be the most sensitive to rarefaction and the heat transfer coefficient the least. A continuum-breakdown parameter based on translational nonequilibrium, with a cut-off value of 0.34, is proposed. The structure of the entropy and velocity profiles in the boundary layer is analyzed. It is also found that the ratio of the heat transfer coefficient to the skin friction coefficient remains uniform along the surface for the four cases considered.
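The abstract above does not spell out its translational-nonequilibrium breakdown parameter (cut-off 0.34). As a stand-in, the sketch below implements the widely used gradient-length-local Knudsen number Kn_GLL = lambda |dQ/dx| / |Q| with its customary 0.05 threshold; the grid, temperature profile, and mean free path are illustrative assumptions.

```python
import numpy as np

def kn_gll(x, q, mean_free_path):
    """Gradient-length-local Knudsen number lambda*|dq/dx|/|q| on a 1D grid."""
    dqdx = np.gradient(q, x)
    return mean_free_path * np.abs(dqdx) / np.maximum(np.abs(q), 1e-300)

x = np.linspace(0.0, 1.0, 200)
T = 300.0 + 900.0 / (1.0 + np.exp(-(x - 0.5) / 0.01))  # smoothed shock-like jump
flag = kn_gll(x, T, mean_free_path=5e-3) > 0.05        # customary 0.05 cutoff
print(f"{flag.sum()} of {x.size} cells flagged for kinetic (DSMC) treatment")
```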
TiOx deposited by magnetron sputtering: a joint modelling and experimental study
NASA Astrophysics Data System (ADS)
Tonneau, R.; Moskovkin, P.; Pflug, A.; Lucas, S.
2018-05-01
This paper presents a 3D multiscale simulation approach to model magnetron reactive sputter deposition of TiOx (x ⩽ 2) at various O2 inlets and its validation against experimental results. The simulation first involves the transport of sputtered material in a vacuum chamber by means of a three-dimensional direct simulation Monte Carlo (DSMC) technique. Second, the film growth at different positions on a 3D substrate is simulated using a kinetic Monte Carlo (kMC) method. When simulating the transport of species in the chamber, wall chemistry reactions are taken into account in order to get the proper content of the reactive species in the volume. Angular and energy distributions of particles are extracted from DSMC and used for film growth modelling by kMC. Along with the simulation, experimental deposition of TiOx coatings on silicon samples placed at different positions on a curved sample holder was performed. The experimental results are in agreement with the simulated ones. For a given coater, the plasma phase hysteresis behaviour, film composition and film morphology are predicted. The methodology can be applied to any coater and any film. This paves the way to the elaboration of a virtual coater allowing a user to predict the composition and morphology of films deposited in silico.
Lattice Boltzmann simulation of nonequilibrium effects in oscillatory gas flow.
Tang, G H; Gu, X J; Barber, R W; Emerson, D R; Zhang, Y H
2008-08-01
Accurate evaluation of damping in laterally oscillating microstructures is challenging due to the complex flow behavior. In addition, device fabrication techniques and surface properties will have an important effect on the flow characteristics. Although kinetic approaches such as the direct simulation Monte Carlo (DSMC) method and directly solving the Boltzmann equation can address these challenges, they are beyond the reach of current computer technology for large scale simulation. As the continuum Navier-Stokes equations become invalid for nonequilibrium flows, we take advantage of the computationally efficient lattice Boltzmann method to investigate nonequilibrium oscillating flows. We have analyzed the effects of the Stokes number, Knudsen number, and tangential momentum accommodation coefficient for oscillating Couette flow and Stokes' second problem. Our results are in excellent agreement with DSMC data for Knudsen numbers up to Kn=O(1) and show good agreement for Knudsen numbers as large as 2.5. In addition to increasing the Stokes number, we demonstrate that increasing the Knudsen number or decreasing the accommodation coefficient can also expedite the breakdown of symmetry for oscillating Couette flow. This results in an earlier transition from quasisteady to unsteady flow. Our paper also highlights the deviation in velocity slip between Stokes' second problem and the confined Couette case.
A study of internal energy relaxation in shocks using molecular dynamics based models
NASA Astrophysics Data System (ADS)
Li, Zheng; Parsons, Neal; Levin, Deborah A.
2015-10-01
Recent potential energy surfaces (PESs) for the N2 + N and N2 + N2 systems are used in molecular dynamics (MD) to simulate rates of vibrational and rotational relaxation for conditions that occur in hypersonic flows. For both chemical systems, it is found that the rotational relaxation number increases with the translational temperature and decreases as the rotational temperature approaches the translational temperature. The vibrational relaxation number is observed to decrease with translational temperature and to approach the rotational relaxation number in the high temperature region. The rotational and vibrational relaxation numbers are generally larger in the N2 + N2 system. The MD/quasi-classical trajectory (MD/QCT) method with these PESs is also used to calculate the V-T transition cross sections, the collision cross section, and the dissociation cross section for each collision pair. DSMC results for hypersonic flow over a blunt body, using the total collision cross section from MD/QCT simulations, Larsen-Borgnakke relaxation with the new relaxation numbers, and the N2 dissociation rate from MD/QCT, show a profile with a decreased translational temperature and a rotational temperature close to the vibrational temperature. The results demonstrate that many of the physical models employed in DSMC should be revised as fundamental potential energy surfaces suitable for high temperature conditions become available.
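For context, the Larsen-Borgnakke procedure mentioned above decides, once per collision, whether internal energy is redistributed, with the relaxation number Z setting the probability. A minimal sketch for a diatomic with two rotational degrees of freedom under the VHS model follows; the collision energies and parameter values are illustrative assumptions, not this paper's data.

```python
import numpy as np

def lb_rotational_relaxation(E_trans, E_rot, Z_rot, omega, rng):
    """With probability 1/Z_rot, redistribute collision energy.

    For 2 internal DOF the post-collision rotational fraction f has
    pdf p(f) = (5/2 - omega)*(1 - f)**(3/2 - omega), sampled by inversion.
    """
    if rng.random() < 1.0 / Z_rot:
        E_coll = E_trans + E_rot
        f = 1.0 - rng.random() ** (1.0 / (2.5 - omega))
        E_rot = f * E_coll
        E_trans = E_coll - E_rot
    return E_trans, E_rot

rng = np.random.default_rng(0)
Et, Er = 5.0e-20, 1.0e-20          # illustrative collision energies [J]
for _ in range(10):
    Et, Er = lb_rotational_relaxation(Et, Er, Z_rot=5.0, omega=0.74, rng=rng)
print(Et, Er)
```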
Tutorial for Thermophysics Universal Research Framework
2017-07-30
DS1V are compared in Section 3.4.5. 3.4.2 Description of the Example Problem In a fluid, disturbance information is communicated within a medium at the...Universal Research Framework development (TURF-DEV) package on a case-by-case basis. Brief descriptions of the operations are provided in Tables 4.1 and...of additional experimental (E) and research (R) operations included in TURF-DEV. Module Operation Description DSMC SPDistDirectDSMCCellMergeOp (R
Study of the Transition Flow Regime using Monte Carlo Methods
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1999-01-01
This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.
Comparison of Hall Thruster Plume Expansion Model with Experimental Data (Preprint)
2006-07-01
Cartesian mesh. AQUILA, the focus of this study, is a hybrid PIC model that tracks particles along an unstructured tetrahedral mesh. COLISEUM is capable...measurements of the ion current density profile, ion energy distributions, and ion species fraction distributions using a nude Faraday probe...Spacecraft and Rockets, Vol.37 No.1. 6 Oh, D. and Hastings, D., “Three Dimensional PIC -DSMC Simulations of Hall Thruster Plumes and Analysis for
NASA Technical Reports Server (NTRS)
Goldstein, David B.; Varghese, Philip L.
1997-01-01
We proposed to create a single computational code incorporating methods that can model both rarefied and continuum flow to enable the efficient simulation of flow about spacecraft and high altitude hypersonic aerospace vehicles. The code was to use a single grid structure that permits a smooth transition between the continuum and rarefied portions of the flow. Developing an appropriate computational boundary between the two regions represented a major challenge. The primary approach chosen involves coupling a four-speed Lattice Boltzmann model for the continuum flow with the DSMC method in the rarefied regime. We also explored the possibility of using a standard finite difference Navier-Stokes solver for the continuum flow. With the resulting code we will ultimately investigate three-dimensional plume impingement effects, a subject of critical importance to NASA and related to the work of Drs. Forrest Lumpkin, Steve Fitzgerald and Jay Le Beau at Johnson Space Center. Below is a brief background on the project and a summary of the results as of the end of the grant.
Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods
NASA Astrophysics Data System (ADS)
Sohn, Ilyoup
During extreme-Mach number reentry into Earth's atmosphere, spacecraft experience hypersonic non-equilibrium flow conditions that dissociate molecules and ionize atoms. Such conditions occur behind a shock wave, leading to high temperatures, which have an adverse effect on the thermal protection system and radar communications. Since the electronic energy levels of gaseous species are strongly excited at high Mach number conditions, the radiative contribution to the total heat load can be significant. In addition, the radiative heat source within the shock layer may affect the internal energy distribution of dissociated and weakly ionized gas species and the number density of ablative species released from the surface of vehicles. Due to the radiation, the total heat load to the heat shield surface of the vehicle may be altered beyond mission tolerances. Therefore, in the design process of spacecraft the effect of radiation must be considered, and radiation analyses coupled with flow solvers have to be implemented to improve reliability during the vehicle design stage. As the first stage of radiation analyses coupled with gas dynamics, efficient databasing schemes for emission and absorption coefficients were developed to model radiation from hypersonic, non-equilibrium flows. For bound-bound transitions, spectral information including the line-center wavelength and assembled parameters for efficient calculations of emission and absorption coefficients are stored for typical air plasma species. Since the flow is non-equilibrium, a rate equation approach including both collisional and radiatively induced transitions was used to calculate the electronic state populations, assuming quasi-steady-state (QSS). The Voigt line shape function was assumed for modeling the line broadening effect. The accuracy and efficiency of the databasing scheme were examined by comparing results of the databasing scheme with those of NEQAIR for the Stardust flowfield. An accuracy of approximately 1% was achieved, with a computational speed roughly three times that of the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield - radiation interactions, the direct simulation Monte Carlo (DSMC) and the photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield - radiation coupling from transitional to peak heating freestream conditions. Non-catalytic and fully catalytic surface conditions were modeled, and good agreement of the stagnation-point convective heating between the DSMC and continuum fluid dynamics (CFD) calculations was achieved under the fully catalytic assumption. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC - FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that except for the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both heavy particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC - FV-PMC coupled simulations are compared with previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport.
The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must therefore be coupled simultaneously to account for non-local effects. The QSS model is presented to predict the electronic state populations of radiating gas species taking into account non-local radiation. The definition of the escape factor, which depends on the incoming radiative intensity from all directions, is presented. The effect of the escape factor on the distribution of electronic state populations of the atomic N and O radiating species is examined in a highly non-equilibrium flow condition using the DSMC and PMC methods, and the corresponding change of the radiative heat flux due to the non-local radiation is also investigated.
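The line-broadening model assumed in the abstract above is the Voigt profile, the convolution of Gaussian (Doppler) and Lorentzian (collisional) broadening. It is commonly evaluated through the Faddeeva function, as in the sketch below; the width parameters are illustrative, not values from the study.

```python
import numpy as np
from scipy.special import wofz

def voigt(nu, nu0, sigma, gamma):
    """Normalized Voigt profile at frequency nu (same units as sigma/gamma)."""
    z = ((nu - nu0) + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

nu = np.linspace(-5.0, 5.0, 2001)
phi = voigt(nu, 0.0, sigma=0.5, gamma=0.3)
print(phi.sum() * (nu[1] - nu[0]))  # ~1.0: the profile integrates to unity
```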
Novel search algorithms for a mid-infrared spectral library of cotton contaminants.
Loudermilk, J Brian; Himmelsbach, David S; Barton, Franklin E; de Haseth, James A
2008-06-01
During harvest, a variety of plant based contaminants are collected along with cotton lint. The USDA previously created a mid-infrared, attenuated total reflection (ATR), Fourier transform infrared (FT-IR) spectral library of cotton contaminants for contaminant identification as the contaminants have negative impacts on yarn quality. This library has shown impressive identification rates for extremely similar cellulose based contaminants in cases where the library was representative of the samples searched. When spectra of contaminant samples from crops grown in different geographic locations, seasons, and conditions and measured with a different spectrometer and accessories were searched, identification rates for standard search algorithms decreased significantly. Six standard algorithms were examined: dot product, correlation, sum of absolute values of differences, sum of the square root of the absolute values of differences, sum of absolute values of differences of derivatives, and sum of squared differences of derivatives. Four categories of contaminants derived from cotton plants were considered: leaf, stem, seed coat, and hull. Experiments revealed that the performance of the standard search algorithms depended upon the category of sample being searched and that different algorithms provided complementary information about sample identity. These results indicated that choosing a single standard algorithm to search the library was not possible. Three voting scheme algorithms based on result frequency, result rank, category frequency, or a combination of these factors for the results returned by the standard algorithms were developed and tested for their capability to overcome the unpredictability of the standard algorithms' performances. The group voting scheme search was based on the number of spectra from each category of samples represented in the library returned in the top ten results of the standard algorithms. This group algorithm was able to identify correctly as many test spectra as the best standard algorithm without relying on human choice to select a standard algorithm to perform the searches.
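The voting idea described above pools the top-ranked library categories from several standard similarity metrics and lets the majority decide. The sketch below shows one hypothetical realization with three of the standard metrics and synthetic spectra; the data layout and top-10 pooling rule are assumptions for illustration.

```python
import numpy as np
from collections import Counter

def dot_product(q, lib):   return lib @ q
def correlation(q, lib):   return np.array([np.corrcoef(q, s)[0, 1] for s in lib])
def neg_abs_diff(q, lib):  return -np.abs(lib - q).sum(axis=1)

def vote_category(query, library, categories, metrics, top_k=10):
    votes = Counter()
    for metric in metrics:
        order = np.argsort(metric(query, library))[::-1]  # best match first
        votes.update(categories[i] for i in order[:top_k])
    return votes.most_common(1)[0][0]

rng = np.random.default_rng(1)
library = rng.random((40, 300))                      # 40 synthetic library spectra
categories = ["leaf", "stem", "seed coat", "hull"] * 10
query = library[3] + 0.05 * rng.random(300)          # noisy copy of a 'hull' spectrum
print(vote_category(query, library, categories,
                    [dot_product, correlation, neg_abs_diff]))
```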
NASA Astrophysics Data System (ADS)
Fougere, N.; Combi, M. R.; Tenishev, V.; Bieler, A. M.; Migliorini, A.; Bockelée-Morvan, D.; Toth, G.; Huang, Z.; Gombosi, T. I.; Hansen, K. C.; Capaccioni, F.; Filacchione, G.; Piccioni, G.; Debout, V.; Erard, S.; Leyrat, C.; Fink, U.; Rubin, M.; Altwegg, K.; Tzou, C. Y.; Le Roy, L.; Calmonte, U.; Berthelier, J. J.; Rème, H.; Hässig, M.; Fuselier, S. A.; Fiethe, B.; De Keyser, J.
2015-12-01
As it orbits around comet 67P/Churyumov-Gerasimenko (CG), the Rosetta spacecraft acquires more information about its main target. The numerous observations made at various geometries and at different times enable good spatial and temporal coverage of the evolution of CG's cometary coma. However, the question of the link between the coma measurements and the nucleus activity remains relatively open, notably due to gas expansion and strong kinetic effects in the comet's rarefied atmosphere. In this work, we use coma observations made by the ROSINA-DFMS instrument to constrain the activity at the surface of the nucleus. The distribution of the H2O and CO2 outgassing is described with the use of spherical harmonics. The coordinates in the orthogonal system represented by the spherical harmonics are computed using a least-squares method, minimizing the sum of squared residuals between an analytical coma model and the DFMS data. Then, the previously deduced activity distributions are used in a Direct Simulation Monte Carlo (DSMC) model to compute a full description of the H2O and CO2 coma of comet CG from the nucleus' surface up to several hundreds of kilometers. The DSMC outputs are used to create synthetic images, which can be directly compared with VIRTIS measurements. The good agreement between the VIRTIS observations and the DSMC model, itself constrained with ROSINA data, provides a compelling juxtaposition of the measurements from these two instruments. Acknowledgements Work at UofM was supported by contracts JPL#1266313, JPL#1266314 and NASA grant NNX09AB59G. Work at UoB was funded by the State of Bern, the Swiss National Science Foundation and by the ESA PRODEX Program. Work at Southwest Research Institute was supported by subcontract #1496541 from the JPL. Work at BIRA-IASB was supported by the Belgian Science Policy Office via PRODEX/ROSINA PEA 90020. The authors would like to thank ASI, CNES, DLR, NASA for supporting this research. VIRTIS was built by a consortium formed by Italy, France and Germany, under the scientific responsibility of the IAPS of INAF, which also guides the scientific operations. The consortium also includes the LESIA of the Observatoire de Paris, and the Institut für Planetenforschung of DLR. The authors wish to thank the RSGS and the RMOC for their continuous support.
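The fitting step described above is an ordinary linear least-squares problem once the outgassing map is expanded in spherical harmonics. The sketch below sets it up with synthetic data using scipy.special.sph_harm (renamed sph_harm_y in newer SciPy); the degree cutoff, sample geometry, and flux model are assumptions for illustration.

```python
import numpy as np
from scipy.special import sph_harm

def design_matrix(theta, phi, lmax):
    """Real-valued spherical-harmonic basis evaluated at (polar theta, azimuth phi)."""
    cols = []
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(abs(m), l, phi, theta)  # scipy order: (m, n, azimuth, polar)
            cols.append(Y.real if m >= 0 else Y.imag)
    return np.column_stack(cols)

rng = np.random.default_rng(2)
theta = np.arccos(rng.uniform(-1, 1, 500))       # polar angle of each datum
phi = rng.uniform(0, 2 * np.pi, 500)             # azimuth of each datum
flux = 1.0 + 0.8 * np.cos(theta) + 0.05 * rng.standard_normal(500)
A = design_matrix(theta, phi, lmax=3)
coeffs, *_ = np.linalg.lstsq(A, flux, rcond=None)
print(coeffs[:4])  # dominant l=0 and (l=1, m=0) terms recover the dipole pattern
```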
Subsurface Gas Flow and Ice Grain Acceleration within Enceladus and Europa Fissures: 2D DSMC Models
NASA Astrophysics Data System (ADS)
Tucker, O. J.; Combi, M. R.; Tenishev, V.
2014-12-01
The ejection of material from geysers is a ubiquitous occurrence on outer solar system bodies. Water vapor plumes have been observed emanating from the southern hemispheres of Enceladus and Europa (Hansen et al. 2011, Roth et al. 2014), and N2 plumes carrying ice and dark particles on Triton (Soderblom et al. 2009). The gas and ice grain distributions in the Enceladus plume depend on the subsurface gas properties and the geometry of the fissures, e.g., (Schmidt et al. 2008, Ingersoll et al. 2010). Of course the fissures can have complex geometries due to tidal stresses, melting, freezing, etc., but directly sampled and inferred gas and grain properties for the plume (source rate, bulk velocity, terminal grain velocity) can be used to constrain characteristic dimensions of vent width and depth. We used a 2-dimensional Direct Simulation Monte Carlo (DSMC) technique to model venting from both axi-symmetric canyons with widths ~2 km and narrow jets with widths ~15-40 m. For all of the vent geometries considered, the water vapor source rates (10^27-10^28 s^-1) and bulk gas velocities (~330-670 m/s) obtained at the surface were consistent with values inferred from fits to the plume density data (10^26-10^28 s^-1, 250-1000 m/s), respectively. However, when using the resulting DSMC gas distribution for the canyon geometries to integrate the trajectories of ice grains, we found it insufficient to accelerate submicron ice grains to Enceladus' escape speed. On the other hand, the gas distributions in the jet-like vents accelerated grains > 10 μm significantly above Enceladus' escape speed. It has been suggested that micron-sized grains are ejected from the vents with speeds comparable to the Enceladus escape speed. Here we report on these results, including comparisons to results obtained from 1D models, and discuss the implications of our plume model results. We also show preliminary results for similar considerations applied to Europa. References: Hansen, 2011. Geophys. Res. Lett. 38, L11202; Ingersoll, 2010. Icarus 206, 594-607; Schmidt, 2008. Nature 451, 685-688; Soderblom, 2009. Science 250, 412-415; Roth, 2014. Science, http://dx.doi.org/10.1126/science.1247051.
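The grain-acceleration step described above amounts to integrating a sphere through the gas solution under free-molecular drag. The sketch below does this for a prescribed 1D vent profile with C_D ~ 2; the vent depth, gas density and velocity profiles, and initial grain speed are illustrative assumptions, not the paper's DSMC fields, and the crude explicit march is stabilized by clamping to the local gas speed.

```python
import numpy as np

RHO_ICE, CD, V_ESC = 920.0, 2.0, 239.0   # ice density, FM drag coeff, Enceladus escape speed

def grain_exit_speed(radius, z, rho_gas, u_gas, v0=50.0):
    """March a grain up a 1D vent; the clamp keeps drag from overshooting the gas speed."""
    m = RHO_ICE * (4.0 / 3.0) * np.pi * radius**3
    A = np.pi * radius**2
    v = v0
    for i in range(len(z) - 1):
        dt = (z[i + 1] - z[i]) / max(v, 1.0)
        rel = u_gas[i] - v
        a = 0.5 * CD * rho_gas[i] * A * abs(rel) * rel / m
        v_new = v + a * dt
        v = min(v_new, u_gas[i]) if rel > 0 else max(v_new, u_gas[i])
    return v

z = np.linspace(0.0, 1000.0, 2000)        # vent depth coordinate [m]
rho = 1e-5 * np.ones_like(z)              # gas density [kg/m^3] (illustrative)
u = 300.0 + 0.3 * z                       # gas speed rising to ~600 m/s at the throat
for r in (1e-7, 1e-6, 1e-5):
    v = grain_exit_speed(r, z, rho, u)
    print(f"r = {r:.0e} m -> exit speed {v:6.1f} m/s "
          f"({'escapes' if v > V_ESC else 'falls back'})")
```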
NASA Astrophysics Data System (ADS)
Peng, Ao-Ping; Li, Zhi-Hui; Wu, Jun-Lin; Jiang, Xin-Yu
2016-12-01
Based on previous research on the Gas-Kinetic Unified Algorithm (GKUA) for flows ranging from highly rarefied free-molecule, through transitional, to continuum regimes, a new implicit scheme of the cell-centered finite volume method is presented for directly solving the unified Boltzmann model equation covering various flow regimes. In view of the difficulty of generating a high-quality single-block grid system for complex irregular bodies, a multi-block docking grid generation method is designed on the basis of data transmission between blocks, and the data structure is constructed for processing arbitrary connection relations between blocks with high efficiency and reliability. As a result, the gas-kinetic unified algorithm with the implicit scheme and multi-block docking grid is established for the first time and used to solve reentry flow problems around multiple bodies covering all flow regimes, with Knudsen numbers ranging from 10 down to 3.7×10^-6. The implicit and explicit schemes are applied to computing and analyzing the supersonic flows in near-continuum and continuum regimes around a circular cylinder and are carefully compared with each other. It is shown that the present algorithm and modelling possess much higher computational efficiency and faster convergence. Flow problems involving two and three side-by-side cylinders are simulated from highly rarefied to near-continuum flow regimes, and the computed results are found to be in good agreement with related DSMC simulations and theoretical analysis, which verifies the accuracy and reliability of the present method. It is observed that as the spacing between the bodies becomes smaller, the throat blockage between the cylinders grows, the flow field around each single body becomes more obviously asymmetric, and the normal force coefficient becomes larger. In the near-continuum transitional flow regime typical of near-space flight conditions, once the spacing of the multi-body configuration increases to six times the diameter of a single body, the interference effects between the bodies become negligible. The computing practice confirms that the present method is feasible for computing the aerodynamics and revealing the flow mechanisms around complex multi-body vehicles covering all flow regimes, from the gas-kinetic point of view of solving the unified Boltzmann model velocity distribution function equation.
Demonstration of Hybrid DSMC-CFD Capability for Nonequilibrium Reacting Flow
2018-02-09
Lens-XX facility. This flow was chosen since a recent blind-code validation exercise revealed differences in CFD predictions and experimental data... experimental data that could be due to rarefied flow effects. The CFD solutions (using the US3D code) were run with no-slip boundary conditions and with...excellent agreement with that predicted by CFD. This implies that the difference between CFD predictions and experimental data is not due to rarefied
1992-10-01
sealed bidding and competitive proposals. governed by the same regulations and laws The sealed bidding procedure requires ade- that govern procurement ...Summary xiv NDI ACQUISITION: An Alternative to "Business as Usual" to successful, effective government procure - posal Cover Sheet). Moreover, the...became policy when the OPlP ;,;sued the first opment costs. These benefits may be offset by in a series of memoranda governing procure - performance
Integrated Logistics Guide. Second Edition
1994-06-14
FORMER FACULTY DEPARTMENT CHAIRMAN MR. JOHN RIFFEE MR. JOEL MANARY CDR DALE IMMEL, USN COL SHAROLYN HAYES, USA LT COL RICHARD EZZELL , USAF DSMC LOGISTICS...Compliance with the requirement by program management should depict of DoDI 5000.2, Part 7A, to establish an ILS the most essential support program mile ...system level fac- tors and the performance of readiness simu- 3.4 SUMMARY lations. e Initial LSA activities prior to Mile - 3.5 REFERENCES stone 0 and
2011-10-01
specific modules as needed. The term “startup” is inclusive of any point in a DoD acquisition program. As noted above, methodology for conducting...Acquisition Sustainment =Decision Point =Milestone Review =Decision Point if PDR is not conducted before Milestone B ProgramA B Initiation) C IOC FOC...start a new program 2.2 Background Conclusions flowing from these observations led the Office of the Secretary of Defense, the De - fense Acquisition
Heat transfer in nonequilibrium boundary layer flow over a partly catalytic wall
NASA Astrophysics Data System (ADS)
Wang, Zhi-Hui
2016-11-01
Surface catalysis has a strong influence on the aeroheating performance of hypersonic vehicles. For the reentry flow problem of a traditional blunt vehicle, it is reasonable to assume a frozen boundary layer surrounding the vehicle's nose, and the catalytic heating can be decoupled from the heat conduction. However, for a hypersonic cruise vehicle flying in medium-density near space, the boundary layer flow around its sharp leading edge is likely to be nonequilibrium rather than frozen due to rarefied gas effects. As a result, there is a competition between heat conduction and catalytic heating. In this paper, theoretical modeling and the direct simulation Monte Carlo (DSMC) method are employed to study the corresponding rarefied nonequilibrium flow and heat transfer phenomena near the leading edge of near-space hypersonic vehicles. It is found that even at the same degree of rarefaction, the nonequilibrium degree of the flow and the corresponding heat transfer performance of a sharp leading edge can differ from those of a large blunt nose. A generalized model is preliminarily proposed to describe and evaluate the competition between the homogeneous recombination of atoms inside the nonequilibrium boundary layer and the heterogeneous recombination of atoms on the catalytic wall surface. The introduced nonequilibrium criterion and the analytical formula are validated and calibrated against the DSMC results, and the physical mechanism is discussed.
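For a rough sense of scale, the heterogeneous (wall) contribution discussed above is often estimated from kinetic theory: the atom flux to the wall is n*cbar/4, a recombination coefficient gamma gives the fraction that recombines, and each recombined atom deposits about half the dissociation energy. The sketch below computes this estimate; all input values are illustrative assumptions, not the paper's cases.

```python
import numpy as np

kB = 1.380649e-23

def catalytic_heat_flux(n_atom, T, m_atom, gamma, E_diss):
    cbar = np.sqrt(8.0 * kB * T / (np.pi * m_atom))  # mean thermal speed [m/s]
    flux = 0.25 * n_atom * cbar                      # kinetic wall flux [1/m^2/s]
    return gamma * flux * 0.5 * E_diss               # heating rate [W/m^2]

# Atomic oxygen near a sharp leading edge (illustrative numbers):
q = catalytic_heat_flux(n_atom=1e21, T=1500.0, m_atom=2.66e-26,
                        gamma=0.05, E_diss=8.2e-19)
print(f"catalytic heating ~ {q / 1e4:.2f} W/cm^2")
```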
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Tong, E-mail: tongzhu2@illinois.edu; Levin, Deborah A., E-mail: deblevin@illinois.edu; Li, Zheng, E-mail: zul107@psu.edu
2016-08-14
A high fidelity internal energy relaxation model for N2-N suitable for use in direct simulation Monte Carlo (DSMC) modeling of chemically reacting flows is proposed. A novel two-dimensional binning approach with variable bin energy resolutions in the rotational and vibrational modes is developed for treating the internal mode of N2. Both bin-to-bin and state-specific relaxation cross sections are obtained using the molecular dynamics/quasi-classical trajectory (MD/QCT) method with two potential energy surfaces as well as the state-specific database of Jaffe et al. The MD/QCT simulations of inelastic energy exchange between N2 and N show that there is a strong forward-preferential scattering behavior at high collision velocities. The 99 bin model is used in homogeneous DSMC relaxation simulations and is found to be able to recover the state-specific master equation results of Panesi et al. when the Jaffe state-specific cross sections are used. Rotational relaxation energy profiles and relaxation times obtained using the ReaxFF and Jaffe potential energy surfaces (PESs) are in general agreement, but there are larger differences between the vibrational relaxation times. These differences become smaller as the translational temperature increases because the difference in the PES energy barrier becomes less important.
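The two-dimensional binning idea above can be illustrated by mapping a synthetic rigid-rotor/harmonic-oscillator level list onto a coarse grid of rotational and vibrational energy bins. The 11 x 9 grid below happens to match the 99-bin count, but the level model and bin edges are assumptions, not the Jaffe et al. state list or the paper's variable-resolution layout.

```python
import numpy as np

kB = 1.380649e-23
theta_r, theta_v = 2.88, 3392.0             # N2 characteristic temperatures [K]

j = np.arange(0, 200)                        # rotational quantum numbers
v = np.arange(0, 40)                         # vibrational quantum numbers
J, V = np.meshgrid(j, v, indexing="ij")
E_rot = kB * theta_r * J * (J + 1)           # rigid-rotor energies [J]
E_vib = kB * theta_v * V                     # harmonic-oscillator energies [J]

rot_edges = np.linspace(0.0, E_rot.max(), 12)    # 11 rotational bins
vib_edges = np.linspace(0.0, E_vib.max(), 10)    # 9 vibrational bins
rot_bin = np.clip(np.digitize(E_rot, rot_edges) - 1, 0, 10)
vib_bin = np.clip(np.digitize(E_vib, vib_edges) - 1, 0, 8)
bin_id = rot_bin * 9 + vib_bin               # flattened index into 11 x 9 = 99 bins
print(f"{j.size * v.size} states mapped to {np.unique(bin_id).size} occupied bins")
```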
Parsons, Neal; Levin, Deborah A; van Duin, Adri C T; Zhu, Tong
2014-12-21
The Direct Simulation Monte Carlo (DSMC) method typically used for simulating hypersonic Earth re-entry flows requires accurate total collision cross sections and reaction probabilities. However, total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, so their reliability is unknown for the high temperatures observed in hypersonic flows. Existing DSMC reaction models accurately reproduce experimental equilibrium reaction rates, but the applicability of these rates to the strong thermal nonequilibrium observed in hypersonic shocks is unknown. For hypersonic flows, these modeling issues are particularly relevant for nitrogen, the dominant species of air. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method is used to accurately compute collision and reaction cross sections for the N2(1Σg+)-N2(1Σg+) collision pair for conditions expected in hypersonic shocks, using a new potential energy surface developed using a ReaxFF fit to recent advanced ab initio calculations. The MD/QCT-computed reaction probabilities were found to exhibit better physical behavior and predict less dissociation than the baseline total collision energy reaction model for strong nonequilibrium conditions expected in a shock. The MD/QCT reaction model compared well with computed equilibrium reaction rates and shock-tube data. In addition, the MD/QCT-computed total cross sections were found to agree well with established variable hard sphere total cross sections.
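The variable hard sphere (VHS) baseline mentioned above ties the total cross section to the relative speed through the viscosity exponent omega. A minimal sketch follows, using commonly tabulated N2 VHS parameters (Bird's d_ref = 4.17 Å at 273 K, omega = 0.74) as reference assumptions rather than values from this paper.

```python
import numpy as np

def vhs_cross_section(c_r, d_ref=4.17e-10, T_ref=273.0, omega=0.74,
                      m_r=2.326e-26, kB=1.380649e-23):
    """VHS total cross section [m^2] at relative speed c_r [m/s].

    sigma_T = sigma_ref * (c_ref / c_r)**(2*omega - 1), with the reference
    speed scale c_ref = sqrt(2*kB*T_ref/m_r) and reduced mass m_r (N2-N2).
    """
    sigma_ref = np.pi * d_ref**2
    c_ref_sq = 2.0 * kB * T_ref / m_r
    return sigma_ref * (c_ref_sq / c_r**2) ** (omega - 0.5)

for c in (500.0, 2000.0, 8000.0):
    print(f"c_r = {c:6.0f} m/s  ->  sigma_T = {vhs_cross_section(c):.3e} m^2")
```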
2017-01-01
A space propulsion system is important for the normal mission operations of a spacecraft, adjusting its attitude and performing maneuvers. Generally, mono- and bipropellant thrusters have mainly been used as low-thrust liquid rocket engines. But as the plume gas expelled from these small thrusters diffuses freely in vacuum in all directions, unwanted effects due to plume impingement on the spacecraft surfaces can dramatically degrade the function and performance of a spacecraft. Thus, the aim of the present study is to investigate and compare quantitatively the major differences in plume gas impingement effects between small mono- and bipropellant thrusters using computational fluid dynamics (CFD). For efficiency of the numerical calculations, the whole calculation domain is divided into two different flow regimes depending on the flow characteristics, and the Navier-Stokes equations and a parallelized Direct Simulation Monte Carlo (DSMC) method are adopted for each flow regime. From the present analysis, the thermal and mass influences of the plume gas impingement on the spacecraft were analyzed for the mono- and bipropellant thrusters. As a result, it is concluded that a careful understanding of the plume impingement effects depending on the chemical characteristics of the different propellants is necessary for the efficient design of a spacecraft. PMID:28636625
NASA Astrophysics Data System (ADS)
Gicquel, Adeline; Vincent, Jean-Baptiste; Sierks, Holger; Rose, Martin; Agarwal, Jessica; Deller, Jakob; Guettler, Carsten; Hoefner, Sebastian; Hofmann, Marc; Hu, Xuanyu; Kovacs, Gabor; Oklay Vincent, Nilda; Shi, Xian; Tubiana, Cecilia; Barbieri, Cesare; Lamy, Phylippe; Rodrigo, Rafael; Koschny, Detlef; Rickman, Hans; OSIRIS Team
2016-10-01
Images of the nucleus and the coma (gas and dust) of comet 67P/Churyumov-Gerasimenko have been acquired by the OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) camera system since March 2014, using both the wide angle camera (WAC) and the narrow angle camera (NAC). We are using the NAC camera to study the bright outburst observed on July 29th, 2015 in the southern hemisphere. The NAC covers wavelengths from 250 to 1000 nm using a combination of 12 filters. Its high spatial resolution is needed to localize the source point of the outburst on the surface of the nucleus. At the time of the observations, the heliocentric distance was 1.25 AU and the distance between the spacecraft and the comet was 126 km. We aim to understand the physics leading to such outgassing: is the jet associated with the outburst controlled by the micro-topography, or by suddenly exposed ice? We are using the Direct Simulation Monte Carlo (DSMC) method to study the gas flow close to the nucleus. The goal of the DSMC code is to reproduce the opening angle of the jet and to constrain the outgassing ratio between the outburst source and the local region. The results of this model will be compared to the images obtained with the NAC camera.
NASA Technical Reports Server (NTRS)
Holden, Michael S.; Harvey, John K.; Boyd, Iain D.; George, Jyothish; Horvath, Thomas J.
1997-01-01
This paper summarizes the results of a series of experimental studies in the LENS shock tunnel and computations with DSMC and Navier-Stokes codes which have been made to examine the aerothermal and flowfield characteristics of the flow over a sting-supported planetary probe configuration in hypervelocity air and nitrogen flows. The experimental program was conducted in the LENS hypervelocity shock tunnel at total enthalpies of 5 and 10 MJ/kg for a range of reservoir pressure conditions from 70 to 500 bars. Heat transfer and pressure measurements were made on the front and rear face of the probe and along the supporting sting. High-speed and single-shot schlieren photography were also employed to examine the flow over the model and the time to establish the flow in the base recirculation region. Predictions of the flowfield characteristics and the distributions of heat transfer and pressure were made with DSMC codes for rarefied flow conditions and with Navier-Stokes solvers for the higher pressure conditions, where the flows were assumed to be laminar. Analysis of the time history records from the heat transfer and pressure instrumentation on the face of the probe and in the base region indicated that the base flow was fully established in under 4 milliseconds from flow initiation, or between 35 and 50 flow lengths based on base height. The measurements made in three different tunnel entries with two models of identical geometry but with different instrumentation packages, one prepared by NASA Langley and the second prepared by CUBRC, demonstrated good agreement between heat transfer measurements made with two different types of thin film and coaxial gage instrumentation. The measurements of heat transfer and pressure on the front face of the probe were in good agreement with theoretical predictions from both the DSMC and Navier-Stokes codes. For the measurements made in low density flows, computations with the DSMC code were found to compare well with the pressure and heat transfer measurements on the sting, although the computed heat transfer rates in the recirculation region did not exhibit the same characteristics as the measurements. For the 10 MJ/kg and 500 bar reservoir match point condition, the measurements of heat transfer along the sting from the first group of studies were in agreement with the Navier-Stokes solutions for laminar conditions. A similar set of measurements made in later tests, where the model was moved to a slightly different position in the test section, indicated that the boundary layer in the reattachment compression region was close to transition or transitional, where small changes in the test environment can result in larger-than-laminar heating rates. The maximum heating coefficients on the sting observed in the present studies were a small fraction of similar measurements obtained at nominally the same conditions in the HEG shock tunnel, where it is possible for transition to occur in the base flow, and in the low enthalpy studies conducted in the NASA Langley high Reynolds number Mach 10 tunnel, where the base flow was shown to be turbulent. While the hybrid Navier-Stokes/DSMC calculations by Gochberg et al. (Reference 1) suggested that employing the Navier-Stokes calculations for the entire flowfield could be seriously in error in the base region for the 10 MJ/kg, 500 bar test case, similar calculations performed by Cornell, presented here, do not.
Liang, Tengfei; Li, Qi; Ye, Wenjing
2013-07-01
A systematic study of the performance of two empirical gas-wall interaction models, the Maxwell model and the Cercignani-Lampis (CL) model, in the entire Knudsen range is conducted. The models are evaluated by examining the accuracy of key macroscopic quantities such as temperature, density, and pressure in three benchmark thermal problems, namely the Fourier thermal problem, the Knudsen force problem, and the thermal transpiration problem. The reference solutions are obtained from a validated hybrid DSMC-MD algorithm developed in-house. It has been found that while both models predict temperature and density reasonably well in the Fourier thermal problem, the pressure profile obtained from the Maxwell model exhibits a trend that opposes that of the reference solution. As a consequence, the Maxwell model is unable to predict the orientation change of the Knudsen force acting on a cold cylinder embedded in a hot cylindrical enclosure at a certain Knudsen number. In the simulation of the thermal transpiration coefficient, although both models overestimate the coefficient, the coefficient obtained from the CL model is the closest to the reference solution; the Maxwell model performs the worst. The cause of the overestimated coefficient is investigated, and its link to the overly constrained correlation between the tangential momentum accommodation coefficient and the tangential energy accommodation coefficient inherent in the models is pointed out. Directions for further improvement of the models are suggested.
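The Maxwell model evaluated above is simple to state: with probability 1 - alpha the molecule reflects specularly, otherwise it is re-emitted from a Maxwellian at the wall temperature. A minimal sampling sketch follows (wall normal along +z); the gas, temperatures, and accommodation coefficient are illustrative assumptions.

```python
import numpy as np

def maxwell_reflect(v, T_wall, m, alpha, rng, kB=1.380649e-23):
    """Reflect one molecule off a wall with normal +z under the Maxwell model."""
    if rng.random() >= alpha:                      # specular branch
        return v * np.array([1.0, 1.0, -1.0])
    s = np.sqrt(kB * T_wall / m)                   # diffuse branch
    vx, vy = rng.normal(0.0, s, size=2)            # tangential: Maxwellian
    vz = s * np.sqrt(-2.0 * np.log(rng.random()))  # normal: flux (Rayleigh) sampling
    return np.array([vx, vy, vz])

rng = np.random.default_rng(3)
m_n2 = 4.65e-26
v_in = np.array([300.0, 0.0, -400.0])              # incident, moving into the wall
samples = np.array([maxwell_reflect(v_in, 300.0, m_n2, 0.8, rng)
                    for _ in range(10_000)])
print(samples.mean(axis=0))  # tangential momentum only partly retained
```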
Establishing a Department of Defense Program Management Body of Knowledge
1991-09-01
systems included, "...thousands of jet fighters, bombers and transport aircraft; one hundred new combat and support vessels; and thousands of tanks and...cannon-carrying troop transports and strategic and tactical missiles" (12:9). Such systems were designed to achieve goals and performance levels never...to L. A a 20-week Program Mnageme-.nt .ur., ’ DSMc b-,o : taking command of a mra or pLog-im. A Major De ?-n.5 Acquisition (Category I) Program in the
2009-03-27
ones like the Lennard - Jones potential with established parameters for each gas (e.g. N2 and 02), and for inelastic collisions DSMC method employs...solution of the collision integral. Lennard - Jones potential with two free parameters is used to obtain the elastic cross-section of the gas molecules...and the so called "combinatory relations" are used to obtain parameters of Lennard - Jones potential for an interaction of molecule A with molecule B
1995-02-01
ANo11C ,ing Eio Collie J. Johnson Art Director Greg Caruth K Typography nod Design Paula Croisetiere > Jeanne Elmore es~ Protrm Mlanager (ISSN 0199...and is especially helpful in two cisions. "The message here is to all of small-purchase categories - under us - from program directors, to pro...Facilitation Center riers in meetings due to emotions , rank and personality; The facility uses GROUPWARE wil enabe the * parallel processing, as all partici
Molecular gas dynamics applied to low-thrust propulsion
NASA Astrophysics Data System (ADS)
Zelesnik, Donna; Penko, Paul F.; Boyd, Iain D.
1993-11-01
The Direct Simulation Monte Carlo method is currently being applied to study flowfields of small thrusters, including both the internal nozzle and the external plume flow. The DSMC method is employed because of its inherent ability to capture nonequilibrium effects and proper boundary physics in low-density flow that are not readily obtained by continuum methods. Accurate prediction of both the internal and external nozzle flow is important in determining plume expansion which, in turn, bears directly on impingement and contamination effects.
2008-07-02
In order to cover a range of molecular species, argon , nitrogen, and methane were used as test gases. The polarizability to mass ratio of these gases...Japan, 21-25 July 2008. 14. ABSTRACT The Direct Simulation Monte Carlo (DSMC) method was used to investigate the interaction between argon ...reducing the maximum temperature. The optimal intervening time was found to be 0.7, 1.0 and 0.25 ns for argon , nitrogen, and methane at one atmosphere
Molecular gas dynamics applied to low-thrust propulsion
NASA Technical Reports Server (NTRS)
Zelesnik, Donna; Penko, Paul F.; Boyd, Iain D.
1993-01-01
The Direct Simulation Monte Carlo method is currently being applied to study flowfields of small thrusters, including both the internal nozzle and the external plume flow. The DSMC method is employed because of its inherent ability to capture nonequilibrium effects and proper boundary physics in low-density flow that are not readily obtained by continuum methods. Accurate prediction of both the internal and external nozzle flow is important in determining plume expansion which, in turn, bears directly on impingement and contamination effects.
1990-09-01
decrease in average consumer prices , to think of Europe 1992 as a starting date or a point of departure for what some have called the largest...overall The European Community’s four consumer prices . executive institutions-- Commission, Parliament, Council of Ministers and Court In 1985, the...of the draft, but also for may want to skim Chapter One and go to the extra effort he put forth to ensure that Chapter Two’s discussion on parallel
The Role and Nature of Anti-Tamper Techniques in U.S. Defense Acquisition
1999-01-01
sales to an ally, accidental loss, or capture during a conflict by an enemy. Because U.S. military hardware and software have a high technical content...that provides a qualitative edge, protection of this technological superiority is a high priority. Program managers can mitigate such risks with a...dealing with technical and military topics. He is a graduate of DSMC’s APMC 97-3 and the USAF Test Pilot School . He has an M.S. degree in aerospace
Modeling shock waves in an ideal gas: combining the Burnett approximation and Holian's conjecture.
He, Yi-Guang; Tang, Xiu-Zhang; Pu, Yi-Kang
2008-07-01
We model a shock wave in an ideal gas by combining the Burnett approximation and Holian's conjecture. We use the temperature in the direction of shock propagation rather than the average temperature in the Burnett transport coefficients. The shock wave profiles and shock thickness are compared with other theories. The results are found to agree better with the nonequilibrium molecular dynamics (NEMD) and direct simulation Monte Carlo (DSMC) data than the Burnett equations and the modified Navier-Stokes theory.
Jha, Abhinav K.; Kupinski, Matthew A.; Rodríguez, Jeffrey J.; Stephen, Renu M.; Stopeck, Alison T.
2012-01-01
In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both accuracy and precision. We also propose consistency checks for this evaluation technique. PMID:22713231
DSMC simulation of two-phase plume flow with UV radiation
NASA Astrophysics Data System (ADS)
Li, Jie; Liu, Ying; Wang, Ning; Jin, Ling
2014-12-01
A rarefied gas-particle two-phase plume, in which the particles may be liquid or solid, flows from the solid propellant rocket of a hypersonic vehicle flying at high altitude. The aluminum oxide particulates not only affect the rarefied gas flow properties but also make a great difference to the plume radiation signature, so predicting the radiation of the rarefied gas-particle two-phase plume flow is very important for space-based detection of hypersonic vehicles. Accordingly, this project aims to study the rarefied gas-particle two-phase flow and its ultraviolet (UV) radiation characteristics. Considering two-way interphase coupling of momentum and energy, the direct simulation Monte Carlo (DSMC) method is developed for particle phase change and the particle flow, including particulate collision, coalescence, and separation, and a Monte Carlo ray trace model is implemented for the particulate UV radiation. A program for the numerical simulation of the gas-particle two-phase flow and radiation, in which the gas flow nonequilibrium is strong, is implemented as well. The ultraviolet radiation characteristics of the particle phase are studied based on the flow field calculation coupled with the radiation calculation, and the radiation model for different particle sizes is analyzed, focusing on the effects of particle emission, absorption, and scattering, as well as the searchlight emission of the nozzle. A new approach may be proposed to describe the rarefied gas-particle two-phase plume flow and radiation transfer characteristics in this project.
DSMC simulations of leading edge flat-plate boundary layer flows at high Mach number
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev
2017-04-01
The flow over a 2D leading-edge flat plate is studied at Mach number Ma = (Uinf / √(kB Tinf / m)) in the range
DSMC simulations of leading edge flat-plate boundary layer flows at high Mach number
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev
2016-11-01
The flow over a 2D leading-edge flat plate is studied at Mach number Ma = (Uinf / √(kB Tinf / m)) in the range
DSMC simulations of leading edge flat-plate boundary layer flows at high Mach number
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev
2017-01-01
The flow over a 2D leading-edge flat plate is studied at Mach number Ma = (Uinf / √(kB Tinf / m)) in the range
DSMC simulations of leading edge flat-plate boundary layer flows at high Mach number
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev
2016-10-01
The flow over a 2D leading-edge flat plate is studied at Mach number Ma = (Uinf / √(kB Tinf / m)) in the range
DSMC simulations of leading edge flat-plate boundary layer flows at high Mach number
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev
The flow over a 2D leading-edge flat plate is studied at Mach number Ma = (Uinf / √(kB Tinf / m)) in the range
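The truncated abstracts above all define the Mach number with the thermal-speed scale √(kB Tinf / m) rather than the acoustic speed √(gamma kB Tinf / m). The quick check below evaluates that definition numerically; the freestream values are illustrative assumptions.

```python
import numpy as np

kB = 1.380649e-23
m_n2 = 4.65e-26                      # N2 molecular mass [kg]
U_inf, T_inf = 2000.0, 300.0         # freestream speed [m/s] and temperature [K]
Ma = U_inf / np.sqrt(kB * T_inf / m_n2)
print(f"Ma = {Ma:.1f}")              # ~6.7 for these values
```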
NASA Astrophysics Data System (ADS)
Dickson, S.; Gausa, M. A.; Robertson, S. H.; Sternovsky, Z.
2012-12-01
We demonstrate that a channel electron multiplier (CEM) can be operated on a sounding rocket in the pulse-counting mode from 120 km to 75 km altitude without the cryogenic evacuation used in the past. Evacuation of the CEM is provided only by aerodynamic flow around the rocket. This demonstration is motivated by the need for additional flights of mass spectrometers to clarify the fate of metallic compounds and ions ablated from micrometeorites and their possible role in the nucleation of noctilucent clouds. The CEMs were flown as guest instruments on the two sounding rockets of the CHAMPS (CHarge And mass of Meteoritic smoke ParticleS) rocket campaign which were launched into the mesosphere in October 2011 from Andøya Rocket Range, Norway. Modeling of the aerodynamic flow around the payload with Direct Simulation Monte-Carlo (DSMC) code showed that the pressure is reduced below ambient in the void beneath an aft-facing surface. An enclosure containing the CEM was placed above an aft-facing deck and a valve was opened on the downleg to expose the CEM to the aerodynamically evacuated region below. The CEM operated successfully from apogee down to ~75 km. A Pirani gauge confirmed pressures reduced to as low as 20% of ambient with the extent of reduction dependent upon altitude and velocity. Additional DSMC simulations indicate that there are alternate payload designs with improved aerodynamic pumping for forward mounted instruments such as mass spectrometers.
A paradigm for modeling and computation of gas dynamics
NASA Astrophysics Data System (ADS)
Xu, Kun; Liu, Chang
2017-02-01
In the continuum flow regime, the Navier-Stokes (NS) equations are usually used for the description of gas dynamics. On the other hand, the Boltzmann equation is applied for rarefied flow. These two equations are based on distinguishable modeling scales for flow physics. Fortunately, due to the scale separation, i.e., the hydrodynamic and kinetic ones, both the Navier-Stokes equations and the Boltzmann equation are applicable in their respective domains. However, real science and engineering applications may not have such a distinctive scale separation. For example, around a hypersonic flying vehicle, the flow physics in different regions may correspond to different regimes, where the local Knudsen number can change by several orders of magnitude. With such a variation of flow physics, theoretically a continuous governing equation spanning from the kinetic Boltzmann modeling to the hydrodynamic Navier-Stokes dynamics should be used for an efficient description. However, due to the difficulty of directly modeling flow physics at scales between the kinetic and hydrodynamic ones, there is basically no reliable theory or valid governing equation to cover the whole transition regime, short of always resolving the flow physics down to the mean free path scale, as in direct Boltzmann solvers and the Direct Simulation Monte Carlo (DSMC) method. In fact, the exact scale at which the NS equations remain valid is an unresolved problem, especially for small Reynolds number cases. Computational fluid dynamics (CFD) is usually based on the numerical solution of partial differential equations (PDEs), and it targets the recovery of the exact solution of the PDEs as the mesh size and time step converge to zero. This methodology can hardly be applied to solve multiple-scale problems efficiently because there is no complete PDE for flow physics through a continuous variation of scales. For non-equilibrium flow study, direct modeling methods, such as DSMC, particle in cell, and smoothed particle hydrodynamics, play a dominant role by incorporating the flow physics directly into the algorithm construction. It is fully legitimate to combine modeling and computation without going through the process of constructing PDEs. In other words, CFD research is not only about obtaining numerical solutions of governing equations but about modeling flow dynamics as well. This methodology leads to the unified gas-kinetic scheme (UGKS) for flow simulation in all flow regimes. Based on UGKS, the boundary of validity of the Navier-Stokes equations can be quantitatively evaluated. The combination of modeling and computation provides a paradigm for the description of multiscale transport processes.
Mental Computation or Standard Algorithm? Children's Strategy Choices on Multi-Digit Subtractions
ERIC Educational Resources Information Center
Torbeyns, Joke; Verschaffel, Lieven
2016-01-01
This study analyzed children's use of mental computation strategies and the standard algorithm on multi-digit subtractions. Fifty-eight Flemish 4th graders of varying mathematical achievement level were individually offered subtractions that either stimulated the use of mental computation strategies or the standard algorithm in one choice and two…
2014-11-21
cover in the region where gas expands all the way round the nozzle exit in the vacuum of space. This geome- try is investigated using hybrid NS/DSMC with...Final 3. DATES COVERED (From - To) 19 May 2014 – 18 Oct 2014 4. TITLE AND SUBTITLE Report on Rarefied Gas Dynamics Research Status 5a...Air Force about the current status of research in rarefied gas dynamics and related fields, primarily via the 29th International Symposium on Rarefied
Rarefaction and Non-equilibrium Effects in Hypersonic Flows about Leading Edges of Small Bluntness
NASA Astrophysics Data System (ADS)
Ivanov, Mikhail; Khotyanovsky, Dmitry; Kudryavtsev, Alexey; Shershnev, Anton; Bondar, Yevgeniy; Yonemura, Shigeru
2011-05-01
A hypersonic flow about a cylindrically blunted thick plate at a zero angle of attack is numerically studied with the kinetic (DSMC) and continuum (Navier-Stokes equations) approaches. The Navier-Stokes equations with velocity slip and temperature jump boundary conditions correctly predict the flow fields and surface parameters for values of the Knudsen number (based on the radius of leading edge curvature) smaller than 0.1. The results of computations demonstrate significant effects of the entropy layer on the boundary layer characteristics.
Nonequilibrium diffusive gas dynamics: Poiseuille microflow
NASA Astrophysics Data System (ADS)
Abramov, Rafail V.; Otto, Jasmine T.
2018-05-01
We test the recently developed hierarchy of diffusive moment closures for gas dynamics together with the near-wall viscosity scaling on the Poiseuille flow of argon and nitrogen in a one micrometer wide channel, and compare it against the corresponding Direct Simulation Monte Carlo computations. We find that the diffusive regularized Grad equations with viscosity scaling provide the most accurate approximation to the benchmark DSMC results. At the same time, the conventional Navier-Stokes equations without the near-wall viscosity scaling are found to be the least accurate among the tested closures.
The Program Manager’s Support System (PMSS). An Executive Overview and System Description,
1987-01-01
process. The PMSS tool will, when completed, support the program management process in all stages of program nanagement; that is, birth of the...module, developed as a template on LOTUS 1-2-3, is an application of the Constructive Cost Model (COCOMO) developed by B. Boehm. The DSMC SWCE module, a...developed for a specific program office but can be modified for use by others. It is a "template" system designed to operate on a Zenith Z-150 using Lotus 1
Program Manager: Journal of the Defense Systems Management College, Vol. XXI, No. 3
1992-05-01
May-June 1992 issue of Program Manager, the journal of the Defense Systems Management College (DSMC). The recoverable fragments of this scanned record mention manager-to-player interactions and the coaching styles used in outside organizations.
DSMC modeling of flows with recombination reactions
NASA Astrophysics Data System (ADS)
Gimelshein, Sergey; Wysong, Ingrid
2017-06-01
An empirical microscopic recombination model is developed for the direct simulation Monte Carlo method that complements the extended weak vibrational bias model of dissociation. The model maintains the correct equilibrium reaction constant in a wide range of temperatures by using the collision theory to enforce the number of recombination events. It also strictly follows the detailed balance requirement for equilibrium gas. The model and its implementation are verified with oxygen and nitrogen heat bath relaxation and compared with available experimental data on atomic oxygen recombination in argon and molecular nitrogen.
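The detailed-balance constraint mentioned above fixes the recombination rate once the dissociation rate and the equilibrium constant are known. A minimal sketch follows, with purely hypothetical Arrhenius and equilibrium-constant fits standing in for the paper's model.

```python
import numpy as np

def recombination_rate(kf, Keq):
    """Detailed balance: reverse (recombination) rate coefficient
    k_r = k_f / K_eq, which preserves the equilibrium constant exactly."""
    return kf / Keq

def dissociation_rate(T, A=2.0e-8, n=-1.5, Ea_over_k=59500.0):
    # Hypothetical Arrhenius fit, not the paper's values.
    return A * T**n * np.exp(-Ea_over_k / T)

def equilibrium_constant(T, B=1.2e25, theta_d=59500.0):
    # Hypothetical fit form, not the paper's values.
    return B * T**-0.5 * np.exp(-theta_d / T)

T = 5000.0
print(recombination_rate(dissociation_rate(T), equilibrium_constant(T)))
```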
Widdifield, Jessica; Bombardier, Claire; Bernatsky, Sasha; Paterson, J Michael; Green, Diane; Young, Jacqueline; Ivers, Noah; Butt, Debra A; Jaakkimainen, R Liisa; Thorne, J Carter; Tu, Karen
2014-06-23
We have previously validated administrative data algorithms to identify patients with rheumatoid arthritis (RA) using rheumatology clinic records as the reference standard. Here we reassessed the accuracy of the algorithms using primary care records as the reference standard. We performed a retrospective chart abstraction study using a random sample of 7500 adult patients under the care of 83 family physicians contributing to the Electronic Medical Record Administrative data Linked Database (EMRALD) in Ontario, Canada. Using physician-reported diagnoses as the reference standard, we computed and compared the sensitivity, specificity, and predictive values for over 100 administrative data algorithms for RA case ascertainment. We identified 69 patients with RA for a lifetime RA prevalence of 0.9%. All algorithms had excellent specificity (>97%). However, sensitivity varied (75-90%) among physician billing algorithms. Despite the low prevalence of RA, most algorithms had adequate positive predictive value (PPV; 51-83%). The algorithm of "[1 hospitalization RA diagnosis code] or [3 physician RA diagnosis codes with ≥1 by a specialist over 2 years]" had a sensitivity of 78% (95% CI 69-88), specificity of 100% (95% CI 100-100), PPV of 78% (95% CI 69-88) and NPV of 100% (95% CI 100-100). Administrative data algorithms for detecting RA patients achieved a high degree of accuracy amongst the general population. However, results varied slightly from our previous report, which can be attributed to differences in the reference standards with respect to disease prevalence, spectrum of disease, and type of comparator group.
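The accuracy measures reported above follow directly from the 2x2 confusion matrix of algorithm output against the reference standard. A short sketch with hypothetical counts:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and predictive values for a
    case-ascertainment algorithm versus a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts for one algorithm over 7500 abstracted charts:
print(diagnostic_metrics(tp=54, fp=15, fn=15, tn=7416))
```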
ERIC Educational Resources Information Center
Raveh, Ira; Koichu, Boris; Peled, Irit; Zaslavsky, Orit
2016-01-01
In this article we present an integrative framework of knowledge for teaching the standard algorithms of the four basic arithmetic operations. The framework is based on a mathematical analysis of the algorithms, a connectionist perspective on teaching mathematics and an analogy with previous frameworks of knowledge for teaching arithmetic…
Adaptive image coding based on cubic-spline interpolation
NASA Astrophysics Data System (ADS)
Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien
2014-09-01
It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. At high bit rates, however, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The algorithm adaptively selects between CSI-based modified JPEG and standard JPEG under a given target bit rate, utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates while maintaining the same performance at high bit rates.
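The paper selects the coding branch analytically via ρ-domain analysis; the sketch below makes the same decision by brute force, measuring which pipeline yields the higher PSNR at a given JPEG quality. Bicubic resampling stands in for the paper's cubic-spline interpolation, and Pillow's JPEG codec stands in for the modified encoder, so this is a rough illustration only.

```python
import io
import numpy as np
from PIL import Image

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10.0 * np.log10(255.0**2 / mse)

def jpeg_roundtrip(img, quality, downsample=False):
    """JPEG encode/decode, optionally downsampling before coding and
    upsampling after decoding (the sampling-based scheme)."""
    w, h = img.size
    src = img.resize((w // 2, h // 2), Image.BICUBIC) if downsample else img
    buf = io.BytesIO()
    src.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    out = Image.open(buf)
    if downsample:
        out = out.resize((w, h), Image.BICUBIC)
    return np.asarray(out)

def choose_coder(img, quality):
    """Pick whichever pipeline reproduces the reference image better."""
    ref = np.asarray(img)
    direct = psnr(ref, jpeg_roundtrip(img, quality))
    sampled = psnr(ref, jpeg_roundtrip(img, quality, downsample=True))
    return "sampling-based" if sampled > direct else "standard JPEG"
```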
An information geometric approach to least squares minimization
NASA Astrophysics Data System (ADS)
Transtrum, Mark; Machta, Benjamin; Sethna, James
2009-03-01
Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
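For orientation, one damped Gauss-Newton (Levenberg-Marquardt) step can be written in a few lines; the geodesic-motion improvement the authors suggest adds a second-order correction on top of this and is not shown here.

```python
import numpy as np

def lm_step(r, J, x, lam):
    """One Levenberg-Marquardt update: solve
    (J^T J + lam * diag(J^T J)) dx = -J^T r
    for the residual vector r and Jacobian J at the current parameters x.
    lam interpolates between Gauss-Newton (lam -> 0) and scaled gradient
    descent (lam large)."""
    JTJ = J.T @ J
    A = JTJ + lam * np.diag(np.diag(JTJ))
    return x + np.linalg.solve(A, -J.T @ r)
```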
Improvement of Frequency Locking Algorithm for Atomic Frequency Standards
NASA Astrophysics Data System (ADS)
Park, Young-Ho; Kang, Hoonsoo; Heyong Lee, Soo; Eon Park, Sang; Lee, Jong Koo; Lee, Ho Seong; Kwon, Taeg Yong
2010-09-01
The authors describe a novel frequency-locking algorithm for atomic frequency standards. The new algorithm for locking the microwave frequency to the Ramsey resonance is compared with the old one employed in cesium atomic beam frequency standards such as NIST-7 and KRISS-1. Numerical simulations of the algorithm's performance show that the new method has a noise-filtering performance superior to the old one by a factor of 1.2 for flicker signal noise and 1.4 for random-walk signal noise. The new algorithm can readily be used to enhance the frequency stability of a digital servo employing slow square-wave frequency modulation.
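The abstract does not give the servo equations, but the general shape of a slow square-wave frequency-modulation lock is easy to sketch: probe both sides of the Ramsey fringe, form an error from the imbalance, and steer the synthesizer. The gain structure below is an assumption, not the paper's algorithm.

```python
def lock_cycle(measure, f_center, f_dev, kp, ki, integ):
    """One cycle of a square-wave FM lock. `measure(f)` returns the atomic
    signal at probe frequency f; the error vanishes when f_center sits at
    the center of the Ramsey fringe."""
    error = measure(f_center + f_dev) - measure(f_center - f_dev)
    integ += error                        # integral term removes static offset
    f_center += kp * error + ki * integ   # proportional-integral correction
    return f_center, integ
```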
Analysis of high-speed rotating flow inside gas centrifuge casing
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev, , Dr.
2017-10-01
The generalized analytical model for the radial boundary layer inside the gas centrifuge casing, in which the inner cylinder rotates at a constant angular velocity Ω_i while the outer one is stationary, is formulated to study the secondary gas flow field due to wall thermal forcing, inflow/outflow of light gas along the boundaries, and the combination of these two external forcings. The analytical model includes a sixth-order differential equation for the radial boundary layer at the cylindrical curved surface in terms of a master potential (χ), derived from the equations of motion in an axisymmetric (r-z) plane. The linearization approximation is used, where the equations of motion are truncated at linear order in the velocity and pressure disturbances to the base flow, which is a solid-body rotation. Additional approximations in the analytical model include constant temperature in the base state (isothermal compressible Couette flow), high aspect ratio (length large compared to the annular gap), and high Reynolds number, but there is no limitation on the Mach number. The discrete eigenvalues and eigenfunctions of the linear operators (sixth-order in the radial direction for the generalized analytical equation) are obtained, and the solutions for the secondary flow are determined in terms of these eigenvalues and eigenfunctions. These solutions are compared with direct simulation Monte Carlo (DSMC) simulations, and excellent agreement (with a difference of less than 15%) is found between the predictions of the analytical model and the DSMC simulations, provided the boundary conditions in the analytical model are accurately specified.
Full System Model of Magnetron Sputter Chamber - Proof-of-Principle Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walton, C; Gilmer, G; Zepeda-Ruiz, L
2007-05-04
The lack of detailed knowledge of internal process conditions remains a key challenge in magnetron sputtering, both for chamber design and for process development. Fundamental information such as the pressure and temperature distribution of the sputter gas, and the energies and arrival angles of the sputtered atoms and other energetic species, is often missing, or is only estimated from general formulas. However, open-source or low-cost tools are available for modeling most steps of the sputter process, which can give more accurate and complete data than textbook estimates, using only desktop computations. To get a better understanding of magnetron sputtering, we have collected existing models for the 5 major process steps: the input and distribution of the neutral background gas using Direct Simulation Monte Carlo (DSMC), dynamics of the plasma using Particle In Cell-Monte Carlo Collision (PIC-MCC), impact of ions on the target using molecular dynamics (MD), transport of sputtered atoms to the substrate using DSMC, and growth of the film using hybrid Kinetic Monte Carlo (KMC) and MD methods. Models have been tested against experimental measurements. For example, gas rarefaction as observed by Rossnagel and others has been reproduced, and it is associated with a local pressure increase of ~50% which may strongly influence film properties such as stress. Results on energies and arrival angles of sputtered atoms and reflected gas neutrals are applied to the Kinetic Monte Carlo simulation of film growth. Model results and applications to growth of dense Cu and Be films are presented.
Thin film deposition using rarefied gas jet
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev, , Dr.
2017-01-01
The rarefied gas jet of aluminium is studied at Mach number Ma = U_j/√(k_B T_j/m) in the range 0.01
Aerodynamic characterization of the jet of an arc wind tunnel
NASA Astrophysics Data System (ADS)
Zuppardi, Gennaro; Esposito, Antonio
2016-11-01
It is well known that, due to a very aggressive environment and a rather high rarefaction level of the arc wind tunnel jet, the measurement of fluid-dynamic parameters is difficult. For this reason, the aerodynamic characterization of the jet also relies on computer codes simulating the operation of the tunnel. The present authors have already used such a computing procedure successfully for tests in the arc wind tunnel (SPES) in Naples, Italy. In the present work an improved procedure is proposed. Like the former, the present procedure relies on two codes working in tandem: 1) a one-dimensional code simulating the inviscid and thermally non-conducting flow field in the torch, in the mixing chamber, and in the nozzle up to the position, along the nozzle axis, of the continuum breakdown; 2) a Direct Simulation Monte Carlo (DSMC) code simulating the flow field in the remaining part of the nozzle. In the present procedure, the DSMC computation covers both the nozzle and the test chamber. An interesting problem considered in this paper by means of the present procedure is the simulation of the flow field around a Pitot tube and of the related measurement of the stagnation pressure. The measured stagnation pressure, under rarefied conditions, may be as much as four times the theoretical value, so a substantial correction has to be applied to the measured pressure. In this paper a correction factor for the stagnation pressure measured in SPES is proposed. The analysis relies on twelve tests made in SPES.
DSMC simulations of leading edge flat-plate boundary layer flows at high Mach number
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev
2016-09-01
The flow over a 2D leading-edge flat plate is studied at Mach number Ma = U_inf/√(k_B T_inf/m) in the range
Algorithm Estimates Microwave Water-Vapor Delay
NASA Technical Reports Server (NTRS)
Robinson, Steven E.
1989-01-01
Accuracy equals or exceeds conventional linear algorithms. "Profile" algorithm improved algorithm using water-vapor-radiometer data to produce estimates of microwave delays caused by water vapor in troposphere. Does not require site-specific and weather-dependent empirical parameters other than standard meteorological data, latitude, and altitude for use in conjunction with published standard atmospheric data. Basic premise of profile algorithm, wet-path delay approximated closely by solution to simplified version of nonlinear delay problem and generated numerically from each radiometer observation and simultaneous meteorological data.
Asian dust aerosol: Optical effect on satellite ocean color signal and a scheme of its correction
NASA Astrophysics Data System (ADS)
Fukushima, H.; Toratani, M.
1997-07-01
The paper first exhibits the influence of the Asian dust aerosol (KOSA) on a coastal zone color scanner (CZCS) image which records erroneously low or negative satellite-derived water-leaving radiance especially in a shorter wavelength region. This suggests the presence of spectrally dependent absorption which was disregarded in the past atmospheric correction algorithms. On the basis of the analysis of the scene, a semiempirical optical model of the Asian dust aerosol that relates aerosol single scattering albedo (ωA) to the spectral ratio of aerosol optical thickness between 550 nm and 670 nm is developed. Then, as a modification to a standard CZCS atmospheric correction algorithm (NASA standard algorithm), a scheme which estimates pixel-wise aerosol optical thickness, and in turn ωA, is proposed. The assumption of constant normalized water-leaving radiance at 550 nm is adopted together with a model of aerosol scattering phase function. The scheme is combined to the standard algorithm, performing atmospheric correction just the same as the standard version with a fixed Angstrom coefficient except in the case where the presence of Asian dust aerosol is detected by the lowered satellite-derived Angstrom exponent. Some of the model parameter values are determined so that the scheme does not produce any spatial discontinuity with the standard scheme. The algorithm was tested against the Japanese Asian dust CZCS scene with parameter values of the spectral dependency of ωA, first statistically determined and second optimized for selected pixels. Analysis suggests that the parameter values depend on the assumed Angstrom coefficient for standard algorithm, at the same time defining the spatial extent of the area to apply the Asian dust scheme. The algorithm was also tested for a Saharan dust scene, showing the relevance of the scheme but with different parameter setting. Finally, the algorithm was applied to a data set of 25 CZCS scenes to produce a monthly composite of pigment concentration for April 1981. Through these analyses, the modified algorithm is considered robust in the sense that it operates most compatibly with the standard algorithm yet performs adaptively in response to the magnitude of the dust effect.
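The dust trigger in this scheme rests on the Angstrom exponent computed from the 550 and 670 nm aerosol optical thicknesses. A minimal sketch follows; the detection threshold is an assumption, not the paper's value.

```python
import numpy as np

def angstrom_exponent(tau_550, tau_670):
    """Angstrom exponent alpha from tau(lambda) ~ lambda**(-alpha), using
    aerosol optical thickness at 550 nm and 670 nm."""
    return np.log(tau_550 / tau_670) / np.log(670.0 / 550.0)

def asian_dust_detected(tau_550, tau_670, threshold=0.2):  # assumed threshold
    """A lowered Angstrom exponent flags the absorbing dust aerosol."""
    return angstrom_exponent(tau_550, tau_670) < threshold
```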
A spectral, quasi-cylindrical and dispersion-free Particle-In-Cell algorithm
Lehe, Remi; Kirchen, Manuel; Andriyash, Igor A.; ...
2016-02-17
We propose a spectral Particle-In-Cell (PIC) algorithm that is based on the combination of a Hankel transform and a Fourier transform. For physical problems that have close-to-cylindrical symmetry, this algorithm can be much faster than full 3D PIC algorithms. In addition, unlike standard finite-difference PIC codes, the proposed algorithm is free of spurious numerical dispersion in vacuum. The algorithm is benchmarked in several situations of interest for laser-plasma interactions. These benchmarks show that it avoids a number of numerical artifacts that would otherwise affect the physics in a standard PIC algorithm, including the zero-order numerical Cherenkov effect.
49 CFR 236.1033 - Communications and security requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... shall: (1) Use an algorithm approved by the National Institute of Standards (NIST) or a similarly...; or (ii) When the key algorithm reaches its lifespan as defined by the standards body responsible for approval of the algorithm. (c) The cleartext form of the cryptographic keys shall be protected from...
NASA Astrophysics Data System (ADS)
Basri, M.; Mawengkang, H.; Zamzami, E. M.
2018-03-01
Limitations of storage resources are one motivation to switch to cloud storage, and the confidentiality and security of data stored in the cloud are very important. One way to maintain that confidentiality and security is through cryptographic techniques. The Data Encryption Standard (DES) is a block cipher used as a standard symmetric encryption algorithm. DES produces eight cipher blocks that are combined into one ciphertext, but this ciphertext is weak against brute-force attacks. Therefore, the eight final cipher blocks are converted into eight random images using the Least Significant Bit (LSB) algorithm, which embeds the output of the DES cipher so that it can later be merged back into one.
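A minimal sketch of the described pipeline, using the pycryptodome DES implementation and a NumPy LSB embed. ECB mode and zero padding are simplifying assumptions, and the paper's image generation and merging steps are reduced here to hiding the ciphertext bits in a single cover array.

```python
import numpy as np
from Crypto.Cipher import DES   # pycryptodome

def encrypt_and_hide(plaintext: bytes, key: bytes, cover: np.ndarray) -> np.ndarray:
    """DES-encrypt the plaintext, then hide the ciphertext in the least
    significant bits of a uint8 cover image."""
    cipher = DES.new(key, DES.MODE_ECB)            # key must be 8 bytes
    pad = (8 - len(plaintext) % 8) % 8             # zero-pad to block size
    ct = cipher.encrypt(plaintext + b"\x00" * pad)
    bits = np.unpackbits(np.frombuffer(ct, dtype=np.uint8))
    flat = cover.flatten()                         # flatten() copies the cover
    assert bits.size <= flat.size, "cover image too small"
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)
```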
NASA Astrophysics Data System (ADS)
Lai, Ian-Lin; Su, Cheng-Chin; Ip, Wing-Huen; Wei, Chen-En; Wu, Jong-Shinn; Lo, Ming-Chung; Liao, Ying; Thomas, Nicolas
2016-03-01
With a combination of Direct Simulation Monte Carlo (DSMC) calculation and test-particle computation, the ballistic transport of the hydroxyl radicals and oxygen atoms produced by photodissociation of water molecules in the coma of comet 67P/Churyumov-Gerasimenko is modelled. We discuss the key elements and essential features of such simulations, whose results can be compared with the remote-sensing and in situ measurements of the cometary gas coma from the Rosetta mission at different orbital phases of the comet.
Accuracy Analysis of DSMC Chemistry Models Applied to a Normal Shock Wave
2012-06-20
The dissociation rate coefficient from [4] is assumed to be 2×10−19 m3/s at 5000 K and 7×10−18 m3/s at 10,000 K; the QK prediction using the present VHS collision parameters is 9×10−20 m3/s at 5000 K and 2×10−18 m3/s at 10,000 K. Note that the QK model in the present work was modified for use with AHO energy levels for consistency.
Geographic Gossip: Efficient Averaging for Sensor Networks
NASA Astrophysics Data System (ADS)
Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.
Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
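As a baseline for comparison, standard pairwise gossip is only a few lines; geographic gossip replaces the neighbor choice below with greedy multi-hop routing toward a randomly chosen location, so each exchange averages over a longer path. A minimal sketch:

```python
import random

def standard_gossip(values, neighbors, rounds):
    """Each round, a random node averages its value with a random neighbor;
    all values converge to the global average."""
    x = list(values)
    for _ in range(rounds):
        i = random.randrange(len(x))
        j = random.choice(neighbors[i])
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

# Ring of 8 nodes, a topology where geographic gossip gains a factor of n:
nbrs = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
print(standard_gossip([float(v) for v in range(8)], nbrs, rounds=500))
```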
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemkiewicz, J; Palmiotti, A; Miner, M
2014-06-01
Purpose: Metal in patients creates streak artifacts in CT images. When such images are used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine whether the MAR algorithm altered the HUs of normal tissues when no metal was present, and to evaluate the effect of using the MAR algorithm when metal was present. Also, CT images of patients with internal metal objects reconstructed using the standard and MAR algorithms were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. The MAR reconstruction algorithm also showed significant improvement in maintaining the HUs of non-metallic regions in images of the phantom with metal. HU gamma analysis (2%, 2 mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate, compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation treatment planning accuracy.
Enhancement of Fast Face Detection Algorithm Based on a Cascade of Decision Trees
NASA Astrophysics Data System (ADS)
Khryashchev, V. V.; Lebedev, A. A.; Priorov, A. L.
2017-05-01
A face detection algorithm based on a cascade of ensembles of decision trees (CEDT) is presented. The new approach allows detecting faces in other than the frontal position through the use of multiple classifiers, each trained for a specific range of head rotation angles. The results showed high detection performance for CEDT on images of standard size: the algorithm increases the area under the ROC curve by 13% compared to the standard Viola-Jones face detection algorithm. The final realization of the algorithm consists of five different cascades for frontal and non-frontal faces. The simulation results also show the low computational complexity of the CEDT algorithm in comparison with the standard Viola-Jones approach. This could prove important in the embedded-system and mobile-device industries because it can reduce hardware cost and extend battery life.
Singh, Manav Deep; Jain, Kanika
2017-11-01
To find out whether 30-2 Swedish Interactive Threshold Algorithm (SITA) Fast is comparable to 30-2 SITA Standard as a tool for perimetry among patients with intracranial tumors. This was a prospective cross-sectional study involving 80 patients aged ≥18 years with imaging-proven intracranial tumors and visual acuity better than 20/60. The patients underwent multiple visual field examinations using the two algorithms until consistent and repeatable results were obtained. A total of 140 eyes of 80 patients were analyzed. Almost 60% of patients undergoing perimetry with SITA Standard required two or more sessions to obtain consistent results, whereas the same could be obtained in 81.42% with SITA Fast in the first session itself. Of 140 eyes, 70 had recordable field defects and the rest had no defects as detected by either of the two algorithms. Mean deviation (MD) (P = 0.56), pattern standard deviation (PSD) (P = 0.22), visual field index (P = 0.83), and the number of depressed points at P < 5%, 2%, 1%, and 0.5% on MD and PSD probability plots showed no statistically significant difference between the two algorithms. The Bland-Altman test showed that considerable variability existed between the two algorithms. Perimetry performed with the SITA Standard and SITA Fast algorithms of the Humphrey Field Analyzer gives comparable results among patients with intracranial tumors. Being more time-efficient and having a shorter learning curve, SITA Fast may be recommended as a standard test for perimetry among these patients.
ERIC Educational Resources Information Center
Hus, Vanessa; Lord, Catherine
2014-01-01
The recently published Autism Diagnostic Observation Schedule, 2nd edition (ADOS-2) includes revised diagnostic algorithms and standardized severity scores for modules used to assess younger children. A revised algorithm and severity scores are not yet available for Module 4, used with verbally fluent adults. The current study revises the Module 4…
Algorithms for the explicit computation of Penrose diagrams
NASA Astrophysics Data System (ADS)
Schindler, J. C.; Aguirre, A.
2018-05-01
An algorithm is given for explicitly computing Penrose diagrams for spacetimes of the form . The resulting diagram coordinates are shown to extend the metric continuously and nondegenerately across an arbitrary number of horizons. The method is extended to include piecewise approximations to dynamically evolving spacetimes using a standard hypersurface junction procedure. Examples generated by an implementation of the algorithm are shown for standard and new cases. In the appendix, this algorithm is compared to existing methods.
DSMC Simulations of Irregular Source Geometries for Io's Pele Plume
NASA Astrophysics Data System (ADS)
McDoniel, William; Goldstein, D. B.; Varghese, P. L.; Trafton, L. M.; Buchta, D. A.; Freund, J.; Kieffer, S. W.
2010-10-01
Volcanic plumes on Io represent a complex rarefied flow into a near-vacuum in the presence of gravity. A 3D rarefied gas dynamics method (DSMC) is used to investigate the gas dynamics of such plumes, with a focus on the effects of source geometry on far-field deposition patterns. These deposition patterns, such as the deposition ring's shape and orientation, as well as the presence and shape of ash deposits around the vent, are linked to the shape of the vent from which the plume material arises. We will present three-dimensional simulations for a variety of possible vent geometries for Pele based on observations of the volcano's caldera. One is a curved line source corresponding to a Galileo IR image of a particularly hot region in the volcano's caldera and the other is a large area source corresponding to the entire lava lake at the center of the plume. The curvature of the former is seen to be sufficient to produce the features seen in observations of Pele's deposition pattern, but the particular orientation of the source is found to be such that it cannot match the orientation of these features on Io's surface. The latter corrects the error in orientation while losing some of the structure, suggesting that the actual source may correspond well with part of the shore of the lava lake. In addition, we are collaborating with a group at the University of Illinois at Urbana-Champaign to develop a hybrid method to link the continuum flow beneath Io's surface and very close to the vent to the more rarefied flow in the large volcanic plumes. This work was funded by NASA-PATM grant NNX08AE72G.
In Depth Analysis of AVCOAT TPS Response to a Reentry Flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Titov, E. V.; Kumar, Rakesh; Levin, D. A.
2011-05-20
Modeling of the high altitude portion of reentry vehicle trajectories with DSMC or statistical BGK solvers requires accurate evaluation of the boundary conditions at the ablating TPS surface. Presented in this article is a model which takes into account the complex ablation physics, including the production of pyrolysis gases and chemistry at the TPS surface. Since the ablation process is time dependent, the modeling of the material response to the high energy reentry flow starts with the solution of the rarefied flow over the vehicle and then loosely couples with the material response. The objective of the present work is to carry out conjugate thermal analysis by weakly coupling a flow solver to a material thermal response model. The latter model solves the one dimensional heat conduction equation accounting for the pyrolysis process that takes place in the reaction zone of an ablative thermal protection system (TPS) material. An estimate of the temperature range within which the pyrolysis reaction (decomposition and volatilization) takes place is obtained from Ref. [1]. The pyrolysis reaction results in the formation of char and the release of gases through the porous charred material. These gases remove an additional amount of heat as they pass through the material, thus cooling it (a process known as transpiration cooling). In the present work, we incorporate the transpiration cooling model in the material thermal response code in addition to the pyrolysis model. The flow in the boundary layer and in the vicinity of the TPS material is in the transitional regime. Therefore, we use a previously validated statistical BGK method to model the flow physics in the vicinity of the micro-cracks, since the BGK method allows simulations of flow at pressures higher than can be computed using DSMC.
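The material-response half of the coupling is built around the 1D heat conduction equation noted above. A minimal explicit sketch follows, with the pyrolysis energy sink and transpiration cooling omitted and all material values hypothetical.

```python
import numpy as np

def heat_step(T, dt, dx, alpha):
    """One explicit (FTCS) step of dT/dt = alpha * d2T/dx2 through the TPS
    thickness. Pyrolysis and transpiration-cooling source terms would be
    added to the interior update; they are omitted in this sketch."""
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return Tn   # boundary values (heated surface, back wall) held fixed

T = np.full(101, 300.0)
T[0] = 2000.0                       # hypothetical heated-surface temperature, K
for _ in range(5000):
    # stability requires alpha*dt/dx**2 <= 0.5 for this explicit scheme
    T = heat_step(T, dt=1e-3, dx=1e-3, alpha=1e-7)
```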
Heavy particle transport in sputtering systems
NASA Astrophysics Data System (ADS)
Trieschmann, Jan
2015-09-01
This contribution discusses the theoretical background of heavy particle transport in plasma sputtering systems such as direct current magnetron sputtering (dcMS), high power impulse magnetron sputtering (HiPIMS), and multi-frequency capacitively coupled plasmas (MFCCP). Due to inherently low process pressures below one Pa, only kinetic simulation models are suitable. In this work a model appropriate for describing the transport of film-forming particles sputtered off a target material has been devised within the framework of the OpenFOAM software (specifically dsmcFoam). The three-dimensional model comprises the ejection of sputtered particles into the reactor chamber, their collisional transport through the volume, and their deposition onto the surrounding surfaces (i.e., substrates and walls). An angular-dependent Thompson energy distribution fitted to results from Monte Carlo simulations is assumed initially. Binary collisions are treated via the M1 collision model, a modified variable hard sphere (VHS) model. The dynamics of sputtered and background gas species can be resolved self-consistently following the direct simulation Monte Carlo (DSMC) approach or, whenever possible, simplified with the test particle method (TPM) under the assumption of a constant, non-stationary background at a given temperature. Using the example of an MFCCP research reactor, the transport of sputtered aluminum is specifically discussed. For this particular configuration, and under typical process conditions with argon as the process gas, the transport of aluminum sputtered off a circular target is shown to be governed by a one-dimensional interaction of the imposed and backscattered particle fluxes. The results are analyzed and discussed on the basis of the obtained velocity distribution functions (VDF). This work is supported by the German Research Foundation (DFG) in the frame of the Collaborative Research Centre TRR 87.
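The initial energies of sputtered atoms in such models follow a Thompson distribution, f(E) ∝ E/(E + U_b)^3, sampled below by rejection in its plain, angle-independent form; the work above uses an angular-dependent fit to Monte Carlo data instead, so this is only an illustrative sketch.

```python
import numpy as np

def sample_thompson(U_b, E_max, size, rng=np.random.default_rng()):
    """Rejection sampling of the Thompson energy distribution
    f(E) ~ E / (E + U_b)**3 on [0, E_max], with surface binding energy U_b."""
    f = lambda E: E / (E + U_b) ** 3
    f_max = f(U_b / 2.0)               # the distribution peaks at E = U_b / 2
    out = np.empty(size)
    n = 0
    while n < size:
        E = rng.uniform(0.0, E_max, size - n)
        keep = rng.uniform(0.0, f_max, size - n) < f(E)
        k = int(keep.sum())
        out[n : n + k] = E[keep]
        n += k
    return out

# U_b = 3.36 eV is the commonly quoted surface binding energy of aluminum:
energies = sample_thompson(U_b=3.36, E_max=50.0, size=10000)
```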
Radiolytic Model for Chemical Composition of Europa's Atmosphere and Surface
NASA Technical Reports Server (NTRS)
Cooper, John F.
2004-01-01
The overall objective of the present effort is to produce models for major and selected minor components of Europa's neutral atmosphere in 1-D versus altitude and in 2-D versus altitude and longitude or latitude. A 3-D model versus all three coordinates (altitude, longitude, latitude) will be studied, but development on this is at present limited by the computing facilities available to the investigation team. In this first year we have focused on 1-D modeling with Co-I Valery Shematovich's Direct Simulation Monte Carlo (DSMC) code for water group species (H2O, O2, O, OH) and on 2-D with Co-I Mau Wong's version of a similar code for O2, O, CO, CO2, and Na. Surface source rates of H2O and O2 from sputtering and radiolysis are used in the 1-D model, while observations of CO2 at the Europa surface and of Na detected in a neutral cloud ejected from Europa are used, along with the O2 sputtering rate, to constrain source rates in the 2-D version. With these separate approaches we are investigating a range of processes important to the eventual implementation of a comprehensive 3-D atmospheric model, which could be used to understand present observations and develop science requirements for future observations, e.g., from Earth and in Europa orbit. Within the second year we expect to merge the full water group calculations into the 2-D version of the DSMC code, which can then be extended to 3-D, pending availability of computing resources. Another important goal in the second year is the inclusion of sulfur and its more volatile oxides (SO, SO2).
NASA Astrophysics Data System (ADS)
Finklenburg, S.; Thomas, N.; Su, C. C.; Wu, J.-S.
2014-07-01
The near nucleus coma of Comet 9P/Tempel 1 has been simulated with the 3D Direct Simulation Monte Carlo (DSMC) code PDSC++ (Su, C.-C. [2013]. Parallel Direct Simulation Monte Carlo (DSMC) Methods for Modeling Rarefied Gas Dynamics. PhD Thesis, National Chiao Tung University, Taiwan) and the derived column densities have been compared to observations of the water vapour distribution found by using infrared imaging spectrometer on the Deep Impact spacecraft (Feaga, L.M., A’Hearn, M.F., Sunshine, J.M., Groussin, O., Farnham, T.L. [2007]. Icarus 191(2), 134-145. http://dx.doi.org/10.1016/j.icarus.2007.04.038). Modelled total production rates are also compared to various observations made at the time of the Deep Impact encounter. Three different models were tested. For all models, the shape model constructed from the Deep Impact observations by Thomas et al. (Thomas, P.C., Veverka, J., Belton, M.J.S., Hidy, A., A’Hearn, M.F., Farnham, T.L., et al. [2007]. Icarus, 187(1), 4-15. http://dx.doi.org/10.1016/j.icarus.2006.12.013) was used. Outgassing depending only on the cosine of the solar insolation angle on each shape model facet is shown to provide an unsatisfactory model. Models constructed on the basis of active areas suggested by Kossacki and Szutowicz (Kossacki, K., Szutowicz, S. [2008]. Icarus, 195(2), 705-724. http://dx.doi.org/10.1016/j.icarus.2007.12.014) are shown to be superior. The Kossacki and Szutowicz model, however, also shows deficits which we have sought to improve upon. For the best model we investigate the properties of the outflow.
Aerodynamic characteristics of the upper stages of a launch vehicle in low-density regime
NASA Astrophysics Data System (ADS)
Oh, Bum Seok; Lee, Joon Ho
2016-11-01
Aerodynamic characteristics of the orbital block (the configuration remaining after separation of the nose fairing and the 1st and 2nd stages) and the upper 2-3 stage (the configuration after separation of the 1st stage) of the three-stage launch vehicle KSLV-II (Korea Space Launch Vehicle) at high altitude in the low-density regime are analyzed with the SMILE code, which is based on the DSMC (Direct Simulation Monte Carlo) method. To validate the SMILE code, the axial and normal force coefficients of the Apollo capsule are also calculated, and the results agree very well with data predicted by others. For additional validation and application of the DSMC code, aerodynamic calculations for simple plate and wedge shapes in the low-density regime are also introduced. Aerodynamic characteristics in the low-density regime generally differ from those in the continuum regime. To understand these differences, aerodynamic coefficients of the upper stages (the upper 2-3 stage and the orbital block) in the low-density regime are analyzed as a function of Mach number and altitude. The predicted axial force coefficients of the upper stages are very high compared to those in the continuum regime. For the orbital block, which flies at very high altitude (above 250 km), all aerodynamic coefficients depend more on velocity variations than on altitude variations. For the upper 2-3 stage, which flies at altitudes of 80-150 km, the axial force coefficients and the locations of the center of pressure change little with Knudsen number (altitude), while the normal force and pitching moment coefficients are more strongly affected by Knudsen number (altitude) variations.
GPS-UTC Time Synchronization
McKenzie, C. H.; Feess, W. A.; Lucas, R. H.; Holtz, H.; Satin, A. L. (The Aerospace Corporation, El Segundo, California)
1989-11-01
Two automatic algorithms for synchronizing the GPS time standard to the UTC time standard are evaluated. Both algorithms control GPS-UTC offset, as GPS is required to synchronize its broadcast time standard to within one microsecond of the time standard maintained by the US Naval Observatory.
Accuracy metrics for judging time scale algorithms
NASA Technical Reports Server (NTRS)
Douglas, R. J.; Boulanger, J.-S.; Jacques, C.
1994-01-01
Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10(exp -15) for periods of 30-100 days.
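Stability statements like those above are usually expressed through the Allan deviation of fractional-frequency data. A minimal overlapping estimator is sketched below, exercised with a hypothetical white-FM noise level; it is one of the basic statistics behind the noise-model terms (white FM, flicker FM, random-walk FM) listed above.

```python
import numpy as np

def overlapping_adev(y, m):
    """Overlapping Allan deviation of fractional-frequency samples y at
    averaging factor m (tau = m * tau0)."""
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # m-sample means
    d = ybar[m:] - ybar[:-m]
    return np.sqrt(0.5 * np.mean(d**2))

rng = np.random.default_rng(0)
y = 1e-13 * rng.standard_normal(100_000)   # white FM noise, hypothetical level
print(overlapping_adev(y, m=100))          # white FM scales as 1/sqrt(m)
```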
ERIC Educational Resources Information Center
Nanna, Robert J.
2016-01-01
Algorithms and representations have been an important aspect of the work of mathematics, especially for understanding concepts and communicating ideas about concepts and mathematical relationships. They have played a key role in various mathematics standards documents, including the Common Core State Standards for Mathematics. However, there have…
Improved Bat Algorithm Applied to Multilevel Image Thresholding
2014-01-01
Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the computational time required for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adapted one of the latest swarm intelligence algorithms, the bat algorithm, to the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm with modifications that add elements from differential evolution and from the artificial bee colony algorithm. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733
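For readers unfamiliar with the base method, the core update of the standard bat algorithm is sketched below; the paper's improvements graft differential-evolution and artificial-bee-colony elements onto this loop, and those are not reproduced here.

```python
import numpy as np

def bat_update(X, V, best, f_min=0.0, f_max=2.0, rng=np.random.default_rng()):
    """Position/velocity update of the standard bat algorithm: each bat draws
    a random frequency and moves toward the current global best solution.
    Loudness and pulse-rate bookkeeping are omitted in this sketch."""
    beta = rng.uniform(size=(X.shape[0], 1))
    f = f_min + (f_max - f_min) * beta   # per-bat frequency
    V = V + (X - best) * f
    return X + V, V

# For multilevel thresholding, each row of X is a vector of candidate
# thresholds scored by an objective such as Otsu's between-class variance.
```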
Digital health technology and trauma: development of an app to standardize care.
Hsu, Jeremy M
2015-04-01
Standardized practice results in less variation, therefore reducing errors and improving outcome. Optimal trauma care is achieved through standardization, as is evidenced by the widespread adoption of the Advanced Trauma Life Support approach. The challenge for an individual institution is how does one educate and promulgate these standardized processes widely and efficiently? In today's world, digital health technology must be considered in the process. The aim of this study was to describe the process of developing an app, which includes standardized trauma algorithms. The objective of the app was to allow easy, real-time access to trauma algorithms, and therefore reduce omissions/errors. A set of trauma algorithms, relevant to the local setting, was derived from the best available evidence. After obtaining grant funding, a collaborative endeavour was undertaken with an external specialist app developing company. The process required 6 months to translate the existing trauma algorithms into an app. The app contains 32 separate trauma algorithms, formatted as a single-page flow diagram. It utilizes specific smartphone features such as 'pinch to zoom', jump-words and pop-ups to allow rapid access to the desired information. Improvements in trauma care outcomes result from reducing variation. By incorporating digital health technology, a trauma app has been developed, allowing easy and intuitive access to evidenced-based algorithms. © 2015 Royal Australasian College of Surgeons.
Quantitative Imaging Biomarkers: A Review of Statistical Methods for Computer Algorithm Comparisons
2014-01-01
Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. PMID:24919829
Comparison of snow depth retrieval algorithm in Northeastern China based on AMSR2 and FY3B-MWRI data
NASA Astrophysics Data System (ADS)
Fan, Xintong; Gu, Lingjia; Ren, Ruizhi; Zhou, Tingting
2017-09-01
Snow accumulation has a very important influence on the natural environment and human activities, and improving the estimation accuracy of passive microwave snow depth (SD) retrieval is currently a research hotspot. Northeastern China is a typical snow study area covering many different land cover types, such as forest, grassland, and farmland, and it has relatively stable snow accumulation every January. The brightness temperatures observed by the Advanced Microwave Scanning Radiometer 2 (AMSR2) on GCOM-W1 and by the FengYun3B Microwave Radiation Imager (FY3B-MWRI) over the same period in 2013 are selected as the study data. The snow depth retrieval results using the AMSR2 standard algorithm and Jiang's FY operational algorithm are compared. To validate the accuracy of the two algorithms, the retrieval results are compared with SD data observed at national meteorological stations in Northeastern China, and the retrieved SD is also compared with the AMSR2 and FY standard SD products. The root mean square error (RMSE) results of the AMSR2 standard algorithm and the FY operational algorithm are close over forest, at 6.33 cm and 6.28 cm, respectively. However, the FY operational algorithm performs better than the AMSR2 standard algorithm over grassland and farmland, with RMSE values of 2.44 cm and 6.13 cm, respectively.
Efficient model learning methods for actor-critic control.
Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik
2012-06-01
We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
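The local linear regression (LLR) building block the two algorithms share can be sketched compactly: fit an affine model on the k nearest stored samples and evaluate it at the query. In the actor-critic setting, inputs are (state, action) pairs and outputs next states, giving the learned process model; this is a generic LLR sketch, not the authors' exact implementation.

```python
import numpy as np

def llr_predict(memory_in, memory_out, query, k=10):
    """Local linear regression: fit an affine model to the k nearest stored
    samples (rows of memory_in -> memory_out) and evaluate it at the query."""
    d = np.linalg.norm(memory_in - query, axis=1)
    idx = np.argsort(d)[:k]                            # k nearest neighbors
    X = np.hstack([memory_in[idx], np.ones((k, 1))])   # affine basis
    beta, *_ = np.linalg.lstsq(X, memory_out[idx], rcond=None)
    return np.append(query, 1.0) @ beta
```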
Status Report on the First Round of the Development of the Advanced Encryption Standard
Nechvatal, James; Barker, Elaine; Dodson, Donna; Dworkin, Morris; Foti, James; Roback, Edward
1999-01-01
In 1997, the National Institute of Standards and Technology (NIST) initiated a process to select a symmetric-key encryption algorithm to be used to protect sensitive (unclassified) Federal information in furtherance of NIST’s statutory responsibilities. In 1998, NIST announced the acceptance of 15 candidate algorithms and requested the assistance of the cryptographic research community in analyzing the candidates. This analysis included an initial examination of the security and efficiency characteristics for each algorithm. NIST has reviewed the results of this research and selected five algorithms (MARS, RC6™, Rijndael, Serpent and Twofish) as finalists. The research results and rationale for the selection of the finalists are documented in this report. The five finalists will be the subject of further study before the selection of one or more of these algorithms for inclusion in the Advanced Encryption Standard.
Andreini, Daniele; Lin, Fay Y; Rizvi, Asim; Cho, Iksung; Heo, Ran; Pontone, Gianluca; Bartorelli, Antonio L; Mushtaq, Saima; Villines, Todd C; Carrascosa, Patricia; Choi, Byoung Wook; Bloom, Stephen; Wei, Han; Xing, Yan; Gebow, Dan; Gransar, Heidi; Chang, Hyuk-Jae; Leipsic, Jonathon; Min, James K
2018-06-01
Motion artifact can reduce the diagnostic accuracy of coronary CT angiography (CCTA) for coronary artery disease (CAD). The purpose of this study was to compare the diagnostic performance of an algorithm dedicated to correcting coronary motion artifact with the performance of standard reconstruction methods in a prospective international multicenter study. Patients referred for clinically indicated invasive coronary angiography (ICA) for suspected CAD prospectively underwent an investigational CCTA examination free from heart rate-lowering medications before they underwent ICA. Blinded core laboratory interpretations of motion-corrected and standard reconstructions for obstructive CAD (≥ 50% stenosis) were compared with ICA findings. Segments unevaluable owing to artifact were considered obstructive. The primary endpoint was per-subject diagnostic accuracy of the intracycle motion correction algorithm for obstructive CAD found at ICA. Among 230 patients who underwent CCTA with the motion correction algorithm and standard reconstruction, 92 (40.0%) had obstructive CAD on the basis of ICA findings. At a mean heart rate of 68.0 ± 11.7 beats/min, the motion correction algorithm reduced the number of nondiagnostic scans compared with standard reconstruction (20.4% vs 34.8%; p < 0.001). Diagnostic accuracy for obstructive CAD with the motion correction algorithm (62%; 95% CI, 56-68%) was not significantly different from that of standard reconstruction on a per-subject basis (59%; 95% CI, 53-66%; p = 0.28) but was superior on a per-vessel basis: 77% (95% CI, 74-80%) versus 72% (95% CI, 69-75%) (p = 0.02). The motion correction algorithm was superior in subgroups of patients with severely obstructive (≥ 70%) stenosis, heart rate ≥ 70 beats/min, and vessels in the atrioventricular groove. The motion correction algorithm studied reduces artifacts and improves diagnostic performance for obstructive CAD on a per-vessel basis and in selected subgroups on a per-subject basis.
DSMC Simulations in Support of the Columbia Shuttle Orbiter Accident Investigation
NASA Technical Reports Server (NTRS)
Boyles, Katie; LeBeau, Gerald J.; Gallis, Michael A.
2004-01-01
Three-dimensional Direct Simulation Monte Carlo simulations of Columbia Shuttle Orbiter flight STS-107 are presented. The aim of this work is to determine the aerodynamic and heating behavior of the Orbiter during aerobraking maneuvers and to provide piecewise integration of key scenario events to assess the plausibility of the candidate failure scenarios. The flight of the Orbiter is examined at two altitudes: 350-kft and 300-kft. The flowfield around the Orbiter and the heat transfer to it are calculated for the undamaged configuration. The flow inside the wing for an assumed damage to the leading edge in the form of a 10- inch hole is studied.
Direct simulation with vibration-dissociation coupling
NASA Technical Reports Server (NTRS)
Hash, David B.; Hassan, H. A.
1992-01-01
The majority of implementations of the Direct Simulation Monte Carlo (DSMC) method of Bird do not account for vibration-dissociation coupling. Haas and Boyd have proposed the vibrationally-favored dissociation model to accomplish this task. This model requires measurements of induction distance to determine model constants. A more general expression has been derived that does not require any experimental input. The model is used to calculate one-dimensional shock waves in nitrogen and the flow past a lunar transfer vehicle (LTV). For the conditions considered in the simulation, the influence of vibration-dissociation coupling on heat transfer in the stagnation region of the LTV can be significant.
N2 Temperature of Vibration instrument for sounding rocket observation in the lower thermosphere
NASA Astrophysics Data System (ADS)
Kurihara, J.; Iwagami, N.; Oyama, K.-I.
2013-11-01
The N2 Temperature of Vibration (NTV) instrument was developed to study the energetics and structure of the lower thermosphere by applying the Electron Beam Fluorescence (EBF) technique to measurements of the vibrational temperature, rotational temperature, and number density of atmospheric N2. Sounding rocket experiments using this instrument have been conducted four times, including one failure of the electron gun. Aerodynamic effects on the measurement caused by the supersonic motion of the rocket were analyzed quantitatively using three-dimensional Direct Simulation Monte Carlo (DSMC) simulations, and the absolute density profile was obtained after correcting for the spin modulation.
Using Chaotic System in Encryption
NASA Astrophysics Data System (ADS)
Findik, Oğuz; Kahramanli, Şirzat
In this paper, chaotic systems and the RSA encryption algorithm are combined to develop an encryption algorithm that meets modern standards. The Lorenz equations, formulated by E. Lorenz for weather forecasting and widely used to simulate nonlinear systems, are utilized to create a chaotic map, and this map can be used to generate random numbers. To meet current standards and to support both online and offline use, a new encryption technique combining chaotic systems with the RSA encryption algorithm has been developed; the combination of the RSA algorithm with chaotic systems forms the resulting encryption system.
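The abstract leaves the key-stream construction unspecified. As a rough illustration of how Lorenz-type dynamics can seed a pseudo-random byte stream (the parameter values, transient length, and byte-extraction rule below are assumptions for illustration, not the authors' scheme, and are not cryptographically vetted):

def lorenz_bytes(n, x=0.1, y=0.0, z=0.0,
                 sigma=10.0, rho=28.0, beta=8.0/3.0, dt=0.01):
    # Integrate the Lorenz system with explicit Euler; discard a transient
    # so the orbit settles onto the attractor before harvesting bytes.
    for _ in range(1000):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
    out = bytearray()
    while len(out) < n:
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        # map the fast-varying fractional part of x into one byte
        out.append(int(abs(x) * 1e6) % 256)
    return bytes(out)

pad = lorenz_bytes(16)  # e.g., stream material to combine with an RSA-based scheme

In a hybrid design of the kind described, such a chaotic stream would typically supply bulk randomness while RSA protects the key exchange.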
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, K; Huang, T; Buttler, D
We present the C-Cat Wordnet package, an open-source library for using and modifying Wordnet. The package includes four key features: an API for modifying Synsets; implementations of standard similarity metrics; implementations of well-known Word Sense Disambiguation algorithms; and an implementation of the Castanet algorithm. The library is easily extendible and usable in many runtime environments. We demonstrate its use on two standard Word Sense Disambiguation tasks and apply the Castanet algorithm to a corpus.
Walking Distance Estimation Using Walking Canes with Inertial Sensors
Suh, Young Soo
2018-01-01
A walking distance estimation algorithm for cane users is proposed using an inertial sensor unit attached to various positions on the cane. A standard inertial navigation algorithm using an indirect Kalman filter was applied to update the velocity and position of the cane during movement. For quadripod canes, a standard zero-velocity measurement-updating method is proposed. For standard canes, a velocity-updating method based on an inverted pendulum model is proposed. The proposed algorithms were verified by three walking experiments with two different types of canes and different positions of the sensor module. PMID:29342971
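The abstract does not detail the indirect Kalman filter; the following one-dimensional sketch (scalar state, assumed noise levels q and r, and an externally supplied stationarity flag, all assumptions rather than the paper's filter) shows only the core idea of a zero-velocity pseudo-measurement update:

import numpy as np

def zupt_velocity(acc, dt, stationary, q=1e-3, r=1e-4):
    # Integrate acceleration to velocity; when 'stationary' flags a sample,
    # fuse the pseudo-measurement v = 0 with a scalar Kalman update.
    v, p = 0.0, 1.0                 # velocity estimate and its variance
    out = []
    for a, still in zip(acc, stationary):
        v += a * dt                 # propagate the state
        p += q                      # propagate the variance
        if still:
            k = p / (p + r)         # Kalman gain for the v = 0 measurement
            v -= k * v              # innovation is (0 - v)
            p *= (1.0 - k)
        out.append(v)
    return np.array(out)

The quadripod-cane case in the paper corresponds to flagging the cane's ground-contact phases as stationary.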
NASA Astrophysics Data System (ADS)
Wu, Fan; Cao, Pin; Yang, Yongying; Li, Chen; Chai, Huiting; Zhang, Yihui; Xiong, Haoliang; Xu, Wenlin; Yan, Kai; Zhou, Lin; Liu, Dong; Bai, Jian; Shen, Yibing
2016-11-01
The inspection of surface defects is a significant part of optical surface quality evaluation. Based on microscopic scattering dark-field imaging together with sub-aperture scanning and stitching, the Surface Defects Evaluating System (SDES) can acquire a full-aperture image of defects on an optical element's surface and then extract the geometric size and position of each defect with image processing such as feature recognition. However, optical distortion in the SDES badly degrades the inspection precision. In this paper, a distortion correction algorithm based on a standard lattice pattern is proposed. Feature extraction, polynomial fitting, and bilinear interpolation, in combination with adjacent sub-aperture stitching, are employed to correct the optical distortion of the SDES automatically and with high accuracy. Subsequently, in order to evaluate surface defects digitally against the American military standard MIL-PRF-13830B, an American standard-based digital evaluation algorithm is proposed, which mainly comprises a judgment method for surface defect concentration. The judgment method establishes a weight region for each defect and calculates defect concentration from the overlap of weight regions. This algorithm takes full advantage of the convenience of matrix operations and has the merits of low complexity and fast execution, which makes it well suited for high-efficiency inspection of surface defects. Finally, various experiments are conducted and the correctness of these algorithms is verified. At present, these algorithms are in use in the SDES.
Rogers, Melinda C.; Gawron, Andrew; Grande, David; Keswani, Rajesh N.
2017-01-01
Background and study aims: Incomplete colonoscopy may occur as a result of colon angulation (adhesions or diverticulosis), endoscope looping, or both. Specialty endoscopes/devices have been shown to successfully complete prior incomplete colonoscopies, but may not be widely available. Radiographic or other image-based evaluations have been shown to be effective but may miss small or flat lesions, and colonoscopy is often still indicated if a large lesion is identified. The purpose of this study was to develop and validate an algorithm to determine the optimum endoscope to ensure completion of the examination in patients with prior incomplete colonoscopy. Patients and methods: This was a prospective cohort study of 175 patients with prior incomplete colonoscopy who were referred to a single endoscopist at a single academic medical center over a 3-year period from 2012 through 2015. Colonoscopy outcomes from the initial 50 patients were used to develop an algorithm to determine the optimal standard endoscope and technique to achieve cecal intubation. The algorithm was validated on the subsequent 125 patients. Results: The overall repeat colonoscopy success rate using a standard endoscope was 94 %. The initial standard endoscope specified by the algorithm was used and completed the colonoscopy in 90 % of patients. Conclusions: This study identifies an effective strategy for completing colonoscopy in patients with prior incomplete examination, using widely available standard endoscopes and an algorithm based on patient characteristics and reasons for prior incomplete colonoscopy. PMID:28924595
Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A
2017-12-01
The onset of muscle activity, as measured by electromyography (EMG), is a commonly applied metric in biomechanics. Intramuscular EMG is often used to examine deep musculature, and there are currently no studies examining the effectiveness of algorithms for intramuscular EMG onset. The present study examines standard surface EMG onset algorithms (linear envelope, Teager-Kaiser Energy Operator, and sample entropy) and novel algorithms (time series mean-variance analysis, sequential/batch processing with parametric and nonparametric methods, and Bayesian changepoint analysis). Thirteen male and five female subjects had intramuscular EMG collected during isolated biceps brachii and vastus lateralis contractions, resulting in 103 trials. EMG onset was visually determined twice by 3 blinded reviewers. Since the reliability of visual onset was high (ICC(1,1) = 0.92), the mean of the 6 visual assessments was contrasted with the algorithmic approaches. Poorly performing algorithms were stepwise eliminated via (1) root mean square error analysis, (2) algorithm failure to identify onset/premature onset, (3) linear regression analysis, and (4) Bland-Altman plots. The top performing algorithms were all based on Bayesian changepoint analysis of rectified EMG and were statistically indistinguishable from visual analysis. Bayesian changepoint analysis has the potential to produce more reliable, accurate, and objective intramuscular EMG onset results than standard methodologies.
Shrestha, Swastina; Dave, Amish J; Losina, Elena; Katz, Jeffrey N
2016-07-07
Administrative health care data are frequently used to study disease burden and treatment outcomes in many conditions, including osteoarthritis (OA). OA is a chronic condition with significant disease burden, affecting over 27 million adults in the US. There are few studies examining the performance of administrative data algorithms to diagnose OA. The purpose of this study is to perform a systematic review of administrative data algorithms for OA diagnosis and to evaluate the diagnostic characteristics of algorithms based on restrictiveness and reference standards. Two reviewers independently screened English-language articles published in the Medline, Embase, PubMed, and Cochrane databases that used administrative data to identify OA cases. Each algorithm was classified as restrictive or less restrictive based on the number and type of administrative codes required to satisfy the case definition. We recorded the sensitivity and specificity of the algorithms and calculated the positive likelihood ratio (LR+) and positive predictive value (PPV) based on an assumed OA prevalence of 0.1, 0.25, and 0.50. The search identified 7 studies that used 13 algorithms. Of these 13 algorithms, 5 were classified as restrictive and 8 as less restrictive. Restrictive algorithms had lower median sensitivity and higher median specificity compared to less restrictive algorithms when the reference standards were self-report and American College of Rheumatology (ACR) criteria. The algorithms compared to the reference standard of physician diagnosis had higher sensitivity and specificity than those compared to self-reported diagnosis or ACR criteria. Restrictive algorithms are more specific for OA diagnosis and can be used to identify cases when false positives have higher costs (e.g., interventional studies). Less restrictive algorithms are more sensitive and suited to studies that attempt to identify all cases (e.g., screening programs).
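The derived quantities follow directly from Bayes' rule given sensitivity, specificity, and an assumed prevalence; a minimal sketch (the example sensitivity and specificity are illustrative numbers, not values from the review):

def lr_plus(sens, spec):
    # positive likelihood ratio
    return sens / (1.0 - spec)

def ppv(sens, spec, prev):
    # Bayes' rule: P(disease | positive test)
    tp = sens * prev                    # true-positive probability mass
    fp = (1.0 - spec) * (1.0 - prev)    # false-positive probability mass
    return tp / (tp + fp)

for prev in (0.1, 0.25, 0.5):           # the prevalences assumed in the review
    print(prev, round(ppv(0.8, 0.9, prev), 3))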
Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.
Obuchowski, Nancy A; Reeves, Anthony P; Huang, Erich P; Wang, Xiao-Feng; Buckler, Andrew J; Kim, Hyun J Grace; Barnhart, Huiman X; Jackson, Edward F; Giger, Maryellen L; Pennello, Gene; Toledano, Alicia Y; Kalpathy-Cramer, Jayashree; Apanasovich, Tatiyana V; Kinahan, Paul E; Myers, Kyle J; Goldgof, Dmitry B; Barboriak, Daniel P; Gillies, Robert J; Schwartz, Lawrence H; Sullivan, Daniel C
2015-02-01
Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research.
Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm
NASA Astrophysics Data System (ADS)
Hasançebi, O.; Kazemzadeh Azad, S.
2014-01-01
This article presents a methodology for design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that a standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to problems from discrete size optimum design of steel trusses. A reformulation of the algorithm is proposed and implemented for design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in practical design optimization of truss structures.
Guidelines and algorithms for managing the difficult airway.
Gómez-Ríos, M A; Gaitini, L; Matter, I; Somri, M
2018-01-01
The difficult airway constitutes a continuous challenge for anesthesiologists. Guidelines and algorithms are key to preserving patient safety by recommending specific plans and strategies that address a predicted or unexpected difficult airway. However, there are currently no "gold standard" algorithms or universally accepted standards. The aim of this article is to present a synthesis of the recommendations of the main guidelines and difficult airway algorithms.
A hybrid Jaya algorithm for reliability-redundancy allocation problems
NASA Astrophysics Data System (ADS)
Ghavidel, Sahand; Azizivahed, Ali; Li, Li
2018-04-01
This article proposes an improved hybrid Jaya algorithm based on time-varying acceleration coefficients (TVACs) and the learning phase introduced in teaching-learning-based optimization (TLBO), named the LJaya-TVAC algorithm, for solving various types of nonlinear mixed-integer reliability-redundancy allocation problems (RRAPs) and standard real-parameter test functions. RRAPs include series, series-parallel, complex (bridge) and overspeed protection systems. The search power of the proposed LJaya-TVAC algorithm for finding the optimal solutions is first tested on standard real-parameter unimodal and multi-modal functions with dimensions of 30-100, and then on various types of nonlinear mixed-integer RRAPs. The results are compared with the original Jaya algorithm and the best results reported in the recent literature. The optimal results obtained with the proposed LJaya-TVAC algorithm provide evidence of better optimization performance than the original Jaya algorithm and the other reported optimal results.
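For orientation, the basic Jaya move that the paper hybridizes is sketched below, per Rao's original formulation and without the TVAC or TLBO learning-phase modifications; the sphere function stands in for the test objective and all parameter choices are assumptions:

import numpy as np

def jaya_step(pop, fitness, rng):
    # One basic Jaya iteration on a population (rows are candidate solutions):
    # x' = x + r1*(best - |x|) - r2*(worst - |x|); keep x' only if it improves.
    best = pop[np.argmin(fitness)]
    worst = pop[np.argmax(fitness)]
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    cand = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    cand_fit = (cand ** 2).sum(axis=1)        # sphere function as the objective
    better = cand_fit < fitness
    pop[better], fitness[better] = cand[better], cand_fit[better]
    return pop, fitness

rng = np.random.default_rng(1)
pop = rng.uniform(-100.0, 100.0, (20, 30))    # 20 candidates, 30 dimensions
fit = (pop ** 2).sum(axis=1)
for _ in range(500):
    pop, fit = jaya_step(pop, fit, rng)
print(fit.min())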
VDLLA: A virtual daddy-long legs optimization
NASA Astrophysics Data System (ADS)
Yaakub, Abdul Razak; Ghathwan, Khalil I.
2016-08-01
Swarm intelligence provides strong optimization algorithms based on the biological behavior of insects or animals. The success of any optimization algorithm depends on the balance between exploration and exploitation. In this paper, we present a new swarm intelligence algorithm with virtual behavior, inspired by the daddy-long-legs spider (VDLLA). In VDLLA, each agent (spider) has nine positions representing the spider's legs, and each position represents one solution. The proposed VDLLA is tested on four standard functions using mean fitness, median fitness, and standard deviation. Its results are compared against Particle Swarm Optimization (PSO), Differential Evolution (DE), and the Bat Inspired Algorithm (BA). Additionally, t-tests were conducted to show the significance of the differences between the proposed algorithm and the others. VDLLA showed very promising results on benchmark test functions for unconstrained optimization problems and significantly improved on the original swarm algorithms.
Reaction rates for a generalized reaction-diffusion master equation
Hellander, Stefan; Petzold, Linda
2016-01-19
It has been established that there is an inherent limit to the accuracy of the reaction-diffusion master equation. Specifically, there exists a fundamental lower bound on the mesh size, below which the accuracy deteriorates as the mesh is refined further. In this paper we extend the standard reaction-diffusion master equation to allow molecules occupying neighboring voxels to react, in contrast to the traditional approach in which molecules react only when occupying the same voxel. We derive reaction rates, in two dimensions as well as three dimensions, to obtain an optimal match to the more fine-grained Smoluchowski model, and show in two numerical examples that the extended algorithm is accurate for a wide range of mesh sizes, allowing us to simulate systems that are intractable with the standard reaction-diffusion master equation. In addition, we show that for mesh sizes above the fundamental lower limit of the standard algorithm, the generalized algorithm reduces to the standard algorithm. We derive a lower limit for the generalized algorithm which, in both two dimensions and three dimensions, is on the order of the reaction radius of a reacting pair of molecules.
Algorithms of maximum likelihood data clustering with applications
NASA Astrophysics Data System (ADS)
Giada, Lorenzo; Marsili, Matteo
2002-12-01
We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.
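The likelihood construction itself is the paper's contribution; the fact that everything enters through Pearson correlations can, however, be illustrated with a generic correlation-distance clustering (scipy's hierarchical linkage here is a stand-in for, and is not, the authors' maximum-likelihood procedure):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def correlation_clusters(X, n_clusters):
    # Cluster time series (rows of X) using the distance d = 1 - correlation.
    C = np.corrcoef(X)                           # all pairwise Pearson coefficients
    d = 1.0 - C[np.triu_indices_from(C, k=1)]    # condensed distance vector
    Z = linkage(d, method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")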
NASA Astrophysics Data System (ADS)
Lehmann, Thomas M.
2002-05-01
Reliable evaluation of medical image processing is of major importance for routine applications. Nonetheless, evaluation is often omitted or methodically defective when novel approaches or algorithms are introduced. Adopted from medical diagnosis, we define the following criteria to classify reference standards: 1. Reliance, if the generation or capturing of test images for evaluation follows an exactly determined and reproducible protocol. 2. Equivalence, if the image material or relationships considered within an algorithmic reference standard equal real-life data with respect to structure, noise, or other parameters of importance. 3. Independence, if any reference standard relies on a different procedure than that to be evaluated, or on other images or image modalities than those used routinely; this criterion bans the simultaneous use of one image for both the training and test phases. 4. Relevance, if the algorithm to be evaluated is self-reproducible; if random parameters or optimization strategies are applied, the reliability of the algorithm must be shown before the reference standard is applied for evaluation. 5. Significance, if the number of reference standard images used for evaluation is sufficiently large to enable statistically founded analysis. We demand that a true gold standard satisfy Criteria 1 to 3. Any standard satisfying only two criteria, i.e., Criteria 1 and 2 or Criteria 1 and 3, is referred to as a silver standard. Other standards are termed plastic standards. Before exhaustive evaluation based on gold or silver standards is performed, relevance must be shown (Criterion 4) and sufficient tests must be carried out to ground the statistical analysis (Criterion 5). In this paper, examples are given for each class of reference standards.
An Evaluation of the Sniffer Global Optimization Algorithm Using Standard Test Functions
NASA Astrophysics Data System (ADS)
Butler, Roger A. R.; Slaminka, Edward E.
1992-03-01
The performance of Sniffer—a new global optimization algorithm—is compared with that of Simulated Annealing. Using the number of function evaluations as a measure of efficiency, the new algorithm is shown to be significantly better at finding the global minimum of seven standard test functions. Several of the test functions used have many local minima and very steep walls surrounding the global minimum. Such functions are intended to thwart global minimization algorithms.
Gaussian-input Gaussian mixture model for representing density maps and atomic models.
Kawabata, Takeshi
2018-07-01
A new Gaussian mixture model (GMM) has been developed for better representation of both atomic models and electron microscopy 3D density maps. The standard GMM algorithm employs an EM algorithm to determine the parameters, accepting as input a set of weighted 3D points corresponding to voxel or atomic centers. Although the standard algorithm works reasonably well, it has three problems. First, it ignores the size (voxel width or atomic radius) of the input, and can therefore yield a GMM with a smaller spread than the input. Second, it has a singularity problem: the iterative procedure sometimes stops because a Gaussian function collapses to almost zero variance. Third, a map with a large number of voxels requires a long computation time for conversion to a GMM. To solve these problems, we have introduced a Gaussian-input GMM algorithm, which treats the input atoms or voxels as a set of Gaussian functions; the standard EM algorithm of the GMM was extended to optimize this new model. The new GMM has a radius of gyration identical to that of the input, and does not stop prematurely due to the singularity problem. For fast computation, we have introduced down-sampled Gaussian functions (DSG), formed by merging neighboring voxels into an anisotropic Gaussian function, which provide a GMM with thousands of Gaussian functions in a short computation time. We also have introduced a DSG-input GMM: the Gaussian-input GMM with the DSG as the input. This new algorithm is much faster than the standard algorithm.
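For context, one E/M iteration of the standard point-input GMM that the paper generalizes, written in one dimension with weighted inputs (a textbook sketch, not the authors' Gaussian-input extension):

import numpy as np

def em_step(x, w, mu, var, pi):
    # E-step: responsibilities r[i, k] proportional to pi_k * N(x_i | mu_k, var_k)
    diff = x[:, None] - mu[None, :]
    pdf = np.exp(-0.5 * diff**2 / var) / np.sqrt(2.0 * np.pi * var)
    r = pi * pdf
    r /= r.sum(axis=1, keepdims=True)
    r *= w[:, None]                       # fold in the input point weights
    # M-step: weighted parameter updates
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk   # var can collapse toward
    pi = nk / nk.sum()                                  # zero: the 'singularity
    return mu, var, pi                                  # problem' the paper fixes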
NASA Astrophysics Data System (ADS)
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-12-01
We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about a 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
Review of TRMM/GPM Rainfall Algorithm Validation
NASA Technical Reports Server (NTRS)
Smith, Eric A.
2004-01-01
A review is presented concerning current progress on evaluation and validation of standard Tropical Rainfall Measuring Mission (TRMM) precipitation retrieval algorithms and the prospects for implementing an improved validation research program for the next generation Global Precipitation Measurement (GPM) Mission. All standard TRMM algorithms are physical in design, and are thus based on fundamental principles of microwave radiative transfer and its interaction with semi-detailed cloud microphysical constituents. They are evaluated for consistency and degree of equivalence with one another, as well as intercompared to radar-retrieved rainfall at TRMM's four main ground validation sites. Similarities and differences are interpreted in the context of the radiative and microphysical assumptions underpinning the algorithms. Results indicate that the current accuracies of the TRMM Version 6 algorithms are approximately 15% at zonal-averaged / monthly scales, with precisions of approximately 25% for full resolution / instantaneous rain rate estimates (i.e., level 2 retrievals). Strengths and weaknesses of the TRMM validation approach are summarized. Because the degree of convergence of level 2 TRMM algorithms is being used as a guide for setting validation requirements for the GPM mission, it is important that the GPM algorithm validation program be improved to ensure concomitant improvement in the standard GPM retrieval algorithms. An overview of the GPM Mission's validation plan is provided, including a description of a new type of physical validation model using an analytic 3-dimensional radiative transfer model.
Performance analysis of structured gradient algorithm. [for adaptive beamforming linear arrays
NASA Technical Reports Server (NTRS)
Godara, Lal C.
1990-01-01
The structured gradient algorithm uses a structured estimate of the array correlation matrix (ACM) to estimate the gradient required for the constrained least-mean-square (LMS) algorithm. This structure reflects the structure of the exact array correlation matrix for an equispaced linear array and is obtained by spatial averaging of the elements of the noisy correlation matrix. In its standard form the LMS algorithm does not exploit the structure of the array correlation matrix. The gradient is estimated by multiplying the array output with the receiver outputs. An analysis of the two algorithms is presented to show that the covariance of the gradient estimated by the structured method is less sensitive to the look direction signal than that estimated by the standard method. The effect of the number of elements on the signal sensitivity of the two algorithms is studied.
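The exact ACM of an equispaced linear array is Toeplitz, so a structured estimate can be formed by averaging the noisy sample matrix along its diagonals; a minimal sketch of that spatial-averaging step (the paper's estimator may differ in detail):

import numpy as np

def toeplitz_structure(R):
    # Replace each diagonal of the (Hermitian) sample matrix R with its
    # average, enforcing the Toeplitz structure of the exact ACM of an
    # equispaced linear array.
    n = R.shape[0]
    S = np.empty_like(R)
    for k in range(n):
        m = np.mean(np.diagonal(R, offset=k))
        idx = np.arange(n - k)
        S[idx, idx + k] = m            # k-th upper diagonal
        S[idx + k, idx] = np.conj(m)   # matching lower diagonal
    return S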
Validation of neural spike sorting algorithms without ground-truth information.
Barnett, Alex H; Magland, Jeremy F; Greengard, Leslie F
2016-05-01
The throughput of electrophysiological recording is growing rapidly, allowing thousands of simultaneous channels, and there is a growing variety of spike sorting algorithms designed to extract neural firing events from such data. This creates an urgent need for standardized, automatic evaluation of the quality of neural units output by such algorithms. We introduce a suite of validation metrics that assess the credibility of a given automatic spike sorting algorithm applied to a given dataset. By rerunning the spike sorter two or more times, the metrics measure stability under various perturbations consistent with variations in the data itself, making no assumptions about the internal workings of the algorithm, and minimal assumptions about the noise. We illustrate the new metrics on standard sorting algorithms applied to both in vivo and ex vivo recordings, including a time series with overlapping spikes. We compare the metrics to existing quality measures, and to ground-truth accuracy in simulated time series. We provide a software implementation. Metrics have until now relied on ground-truth, simulated data, internal algorithm variables (e.g. cluster separation), or refractory violations. By contrast, by standardizing the interface, our metrics assess the reliability of any automatic algorithm without reference to internal variables (e.g. feature space) or physiological criteria. Stability is a prerequisite for reproducibility of results. Such metrics could reduce the significant human labor currently spent on validation, and should form an essential part of large-scale automated spike sorting and systematic benchmarking of algorithms.
Bare, Kimberly; Drain, Jerri; Timko-Progar, Monica; Stallings, Bobbie; Smith, Kimberly; Ward, Naomi; Wright, Sandra
Many nurses have limited experience with ostomy management. We sought to provide a standardized approach to ostomy education and management to support nurses in the early identification of stomal and peristomal complications and pouching problems, and to provide standardized solutions for managing ostomy care in general while improving utilization of formulary products. This article describes the development and testing of an ostomy algorithm tool.
Schoenberg, Mike R; Lange, Rael T; Saklofske, Donald H
2007-11-01
Establishing a comparison standard in neuropsychological assessment is crucial to determining change in function. There is no available method to estimate premorbid intellectual functioning for the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV). The WISC-IV provides normative data for both American and Canadian children aged 6 to 16 years. This study developed regression algorithms as a proposed method to estimate full-scale intelligence quotient (FSIQ) for the Canadian WISC-IV. Participants were the Canadian WISC-IV standardization sample (n = 1,100). The sample was randomly divided into two groups (development and validation groups). The development group was used to generate regression algorithms; one algorithm included only demographics, and 11 combined demographic variables with WISC-IV subtest raw scores. The algorithms accounted for 18% to 70% of the variance in FSIQ (standard error of estimate, SEE = 8.6 to 14.2). Estimated FSIQ significantly correlated with actual FSIQ (r = .30 to .80), and the majority of individual FSIQ estimates were within +/-10 points of actual FSIQ. The demographic-only algorithm was less accurate than the algorithms combining demographic variables with subtest raw scores. The current algorithms yielded accurate estimates of current FSIQ for Canadian individuals aged 6-16 years. The potential application of the algorithms to estimate premorbid FSIQ is reviewed. While promising, clinical validation of the algorithms in a sample of children and/or adolescents with known neurological dysfunction is needed to establish these algorithms as a premorbid estimation procedure.
An extended CFD model to predict the pumping curve in low pressure plasma etch chamber
NASA Astrophysics Data System (ADS)
Zhou, Ning; Wu, Yuanhao; Han, Wenbin; Pan, Shaowu
2014-12-01
A continuum-based CFD model is extended with a slip-wall approximation and a rarefaction correction to the viscosity, in an attempt to predict the pumping flow characteristics in low-pressure plasma etch chambers. The flow regime inside the chamber ranges from slip flow (Kn ≈ 0.01) up to free molecular flow (Kn = 10). The momentum accommodation coefficient and the parameters of the Kn-modified viscosity are first calibrated against one set of measured pumping curves. The validity of the calibrated CFD model is then demonstrated by comparison with additional pumping curves measured in chambers of different geometry configurations. A more detailed comparison against a DSMC model for flow conductance over slits with contraction and expansion sections is also discussed.
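The abstract does not state which slip model is used; a common first-order choice, given here only as a plausible stand-in, is the Maxwell slip condition

u_s - u_w = \frac{2 - \sigma_v}{\sigma_v}\,\lambda \left. \frac{\partial u}{\partial n} \right|_w ,

where \sigma_v is the momentum accommodation coefficient the authors calibrate and \lambda is the mean free path. Kn-modified viscosity models likewise typically rescale the continuum viscosity by a function of the Knudsen number, e.g. \mu_{\mathrm{eff}} = \mu / (1 + a\,\mathrm{Kn}) with a fitted coefficient a (again an assumed form, not necessarily the paper's).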
1988-11-01
[OCR-damaged record; the recoverable content references an Air Force Tech Order Management System (AFTOMS) Final Report, the DLA CALS 1988 Implementation Plan, and the Air Force AFTOMS Automation Plan, described as having excellent discussions of the expected wider application.]
Assessment of predictive capabilities for aerodynamic heating in hypersonic flow
NASA Astrophysics Data System (ADS)
Knight, Doyle; Chazot, Olivier; Austin, Joanna; Badr, Mohammad Ali; Candler, Graham; Celik, Bayram; Rosa, Donato de; Donelli, Raffaele; Komives, Jeffrey; Lani, Andrea; Levin, Deborah; Nompelis, Ioannis; Panesi, Marco; Pezzella, Giuseppe; Reimann, Bodo; Tumuklu, Ozgur; Yuceil, Kemal
2017-04-01
The capability for CFD prediction of hypersonic shock wave laminar boundary layer interaction was assessed for a double wedge model at Mach 7.1 in air and nitrogen at 2.1 MJ/kg and 8 MJ/kg. Simulations were performed by seven research organizations encompassing both Navier-Stokes and Direct Simulation Monte Carlo (DSMC) methods as part of the NATO STO AVT Task Group 205 activity. Comparison of the CFD simulations with experimental heat transfer and schlieren visualization suggests the need for accurate modeling of the tunnel startup process in short-duration hypersonic test facilities, and the importance of fully 3-D simulations of nominally 2-D (i.e., non-axisymmetric) experimental geometries.
Jayaraman, Chandrasekaran; Mummidisetty, Chaithanya Krishna; Mannix-Slobig, Alannah; McGee Koch, Lori; Jayaraman, Arun
2018-03-13
Monitoring physical activity and leveraging wearable sensor technologies to facilitate active living in individuals with neurological impairment has been shown to yield benefits in terms of health and quality of living. In this context, accurate measurement of physical activity estimates from these sensors is vital. However, wearable sensor manufacturers generally provide only standard proprietary algorithms, developed on data from healthy individuals, to estimate physical activity metrics, which may lead to inaccurate estimates in populations with neurological impairments such as stroke and incomplete spinal cord injury (iSCI). The main objective of this cross-sectional investigation was to evaluate the validity of physical activity estimates provided by standard proprietary algorithms for individuals with stroke and iSCI. Two research-grade wearable sensors used in clinical settings were chosen, and the outcome metrics estimated using standard proprietary algorithms were validated against designated gold standard measures (Cosmed K4B2 for energy expenditure and metabolic equivalent, and manual tallying for step counts). The influence of sensor location, sensor type, and activity characteristics was also studied. 28 participants (healthy (n = 10); incomplete SCI (n = 8); stroke (n = 10)) performed a spectrum of activities in a laboratory setting using two wearable sensors (ActiGraph and Metria-IH1) at different body locations. Manufacturer-provided standard proprietary algorithms estimated the step count, energy expenditure (EE), and metabolic equivalent (MET). These estimates were compared with the estimates from the gold standard measures. To verify validity, a series of Kruskal-Wallis ANOVA tests (with Games-Howell multiple comparisons for post-hoc analyses) were conducted to compare the mean rank and absolute agreement of the outcome metrics estimated by each of the devices with the designated gold standard measurements. The sensor type, sensor location, activity characteristics, and the population-specific condition influence the validity of physical activity metrics estimated using standard proprietary algorithms. Implementing population-specific customized algorithms that account for the influences of sensor location, type, and activity characteristics when estimating physical activity metrics in individuals with stroke and iSCI could be beneficial.
A Log-Scaling Fault Tolerant Agreement Algorithm for a Fault Tolerant MPI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hursey, Joshua J; Naughton, III, Thomas J; Vallee, Geoffroy R
The lack of fault tolerance is becoming a limiting factor for application scalability in HPC systems. MPI does not provide standardized fault tolerance interfaces and semantics. The MPI Forum's Fault Tolerance Working Group is proposing a collective fault tolerant agreement algorithm for the next MPI standard. Such algorithms play a central role in many fault tolerant applications. This paper combines a log-scaling two-phase commit agreement algorithm with a reduction operation to provide the necessary functionality for the new collective without any additional messages. Error handling mechanisms are described that preserve the fault tolerance properties while maintaining overall scalability.
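The protocol details are in the paper; as a toy illustration of the underlying idea (an agreement decision piggy-backed on a tree reduction, so the decision costs O(log P) message rounds rather than O(P)), here is a pure-Python stand-in, not MPI code:

def tree_agree(flags):
    # Simulate an AND-reduction up a binary tree followed by a broadcast
    # down: every process learns whether all processes voted 'commit'.
    votes = list(flags)
    p, step = len(votes), 1
    while step < p:                      # reduction phase (up the tree)
        for i in range(0, p, 2 * step):
            if i + step < p:
                votes[i] = votes[i] and votes[i + step]
        step *= 2
    return [votes[0]] * p                # broadcast phase (down the tree)

print(tree_agree([True, True, False, True]))   # -> [False, False, False, False]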
Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai
2005-10-01
This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
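The abstract describes the recursion verbally; in the usual FOCUSS notation (given as background only, not necessarily the paper's exact update, which adds the standardization step), each iterate solves a re-weighted minimum-norm problem

W_k = \operatorname{diag}\!\big(|\hat{s}_{k-1}|\big), \qquad \hat{s}_k = W_k \,(L\, W_k)^{+} m ,

where L is the lead-field matrix, m the measured EEG, (\cdot)^{+} the pseudo-inverse, and \hat{s}_0 the sLORETA estimate used for initialization.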
Open Source Software Openfoam as a New Aerodynamical Simulation Tool for Rocket-Borne Measurements
NASA Astrophysics Data System (ADS)
Staszak, T.; Brede, M.; Strelnikov, B.
2015-09-01
Sounding rockets provide the only way to perform in-situ measurements, which are very important experimental studies for atmospheric science, in the mesosphere/lower thermosphere (MLT). The drawback of using rockets is the shock wave that appears because of the very high speed of the rocket (typically about 1000 m/s). This shock wave disturbs the density, temperature, and velocity fields in the vicinity of the rocket relative to the undisturbed values in the atmosphere. This effect, however, can be quantified, and the measured data must be corrected, not just to make them more precise but to make them usable at all. The commonly accepted and widely used tool for these calculations is the Direct Simulation Monte Carlo (DSMC) technique developed by G. A. Bird, which is available as a stand-alone program limited to a single processor. Apart from complications in simulating flows around bodies related to the different flow regimes in the MLT altitude range, which arise because the density changes exponentially by several orders of magnitude, a particular hardware configuration introduces significant difficulty for aerodynamic calculations: the choice of grid sizes depends both on the demands of an adequate DSMC and on good resolution of geometries whose scales differ by several orders of magnitude. This makes the calculation time unreasonably long or even prevents the calculation algorithm from converging. In this paper we apply the free open-source software OpenFOAM (licensed under the GNU GPL) to a three-dimensional CFD simulation of the flow around a sounding rocket instrument. An advantage of this software package, among other things, is that it can run on high-performance clusters, which are easily scalable. We present the first results and discuss the potential of the new tool in applications for sounding rockets.
Verification of IEEE Compliant Subtractive Division Algorithms
NASA Technical Reports Server (NTRS)
Miner, Paul S.; Leathrum, James F., Jr.
1996-01-01
A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE standard for floating point arithmetic. The utility of the general specification is illustrated using a number of different instances of the general algorithm.
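As background on what "subtractive" means here, a radix-2 restoring division recurrence in integer form (the verified algorithms operate on IEEE-754 significands with rounding, which this toy sketch omits):

def restoring_divide(num, den, bits):
    # Radix-2 restoring division: one quotient bit per trial subtraction.
    assert den != 0
    q, r = 0, 0
    for i in range(bits - 1, -1, -1):
        r = (r << 1) | ((num >> i) & 1)   # bring down the next dividend bit
        if r >= den:                      # trial subtraction succeeds
            r -= den
            q |= 1 << i
    return q, r                           # num == q*den + r with 0 <= r < den

print(restoring_divide(23, 5, 8))          # -> (4, 3)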
40 CFR 51.357 - Test procedures and standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
... invalid test condition, unsafe conditions, fast pass/fail algorithms, or, in the case of the on-board... using approved fast pass or fast fail algorithms and multiple pass/fail algorithms may be used during the test cycle to eliminate false failures. The transient test procedure, including algorithms and...
40 CFR 51.357 - Test procedures and standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
... invalid test condition, unsafe conditions, fast pass/fail algorithms, or, in the case of the on-board... using approved fast pass or fast fail algorithms and multiple pass/fail algorithms may be used during the test cycle to eliminate false failures. The transient test procedure, including algorithms and...
40 CFR 51.357 - Test procedures and standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... invalid test condition, unsafe conditions, fast pass/fail algorithms, or, in the case of the on-board... using approved fast pass or fast fail algorithms and multiple pass/fail algorithms may be used during the test cycle to eliminate false failures. The transient test procedure, including algorithms and...
40 CFR 51.357 - Test procedures and standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... invalid test condition, unsafe conditions, fast pass/fail algorithms, or, in the case of the on-board... using approved fast pass or fast fail algorithms and multiple pass/fail algorithms may be used during the test cycle to eliminate false failures. The transient test procedure, including algorithms and...
Robust tuning of robot control systems
NASA Technical Reports Server (NTRS)
Minis, I.; Uebel, M.
1992-01-01
The computed torque control problem is examined for a robot arm with flexible, geared, joint drive systems which are typical in many industrial robots. The standard computed torque algorithm is not directly applicable to this class of manipulators because of the dynamics introduced by the joint drive system. The proposed approach to computed torque control combines a computed torque algorithm with a torque controller at each joint. Three such control schemes are proposed. The first scheme uses the joint torque control system currently implemented on the robot arm and a novel form of the computed torque algorithm. The other two use the standard computed torque algorithm and a novel model-following torque control system. Standard tasks and performance indices are used to evaluate the performance of the controllers. Both numerical simulations and experiments are used in evaluation. The study shows that all three proposed systems lead to improved tracking performance over a conventional PD controller.
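For reference, the rigid-body computed torque law that the paper adapts (standard form; the joint-drive extensions are the paper's contribution) is

\tau = M(q)\big(\ddot{q}_d + K_v \dot{e} + K_p e\big) + C(q,\dot{q})\,\dot{q} + g(q), \qquad e = q_d - q ,

which, when the model is exact, cancels the manipulator dynamics and leaves the linear error dynamics \ddot{e} + K_v \dot{e} + K_p e = 0.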
Towards improving the NASA standard soil moisture retrieval algorithm and product
NASA Astrophysics Data System (ADS)
Mladenova, I. E.; Jackson, T. J.; Njoku, E. G.; Bindlish, R.; Cosh, M. H.; Chan, S.
2013-12-01
Soil moisture mapping using passive microwave remote sensing techniques has proven to be one of the most effective ways of acquiring reliable global soil moisture information on a routine basis. An important step in this direction was made by the launch of the Advanced Microwave Scanning Radiometer (AMSR-E) on NASA's Earth Observing System Aqua satellite. Along with the standard NASA algorithm and operational AMSR-E product, the easy access and availability of AMSR-E data promoted the development and distribution of alternative retrieval algorithms and products. Several evaluation studies have demonstrated issues with the standard NASA AMSR-E product, such as a dampened temporal response and a limited range of the final retrievals, and noted that the available global passive-based algorithms, even though based on the same electromagnetic principles, produce different results in terms of accuracy and temporal dynamics. Our goal is to identify the theoretical causes of the reduced sensitivity of the NASA AMSR-E product and to outline ways to improve the operational NASA algorithm, if possible. Properly identifying the underlying reasons for the above features of the NASA AMSR-E product and for the differences between the alternative algorithms requires a careful examination of the theoretical basis of each approach, specifically the simplifying assumptions and parametrization approaches adopted by each algorithm to reduce the dimensionality of the unknowns and characterize the observing system. Statistically based error analyses, which are useful and necessary, provide information on the relative accuracy of each product but give very little information on the theoretical causes, knowledge that is essential for algorithm improvement. Thus, we are currently examining the possibility of improving the standard NASA AMSR-E global soil moisture product by conducting a thorough theoretically based review of, and inter-comparisons between, several well-established global retrieval techniques. A detailed discussion focused on the theoretical basis of each approach and each algorithm's sensitivity to assumptions and parametrization approaches will be presented.
GPU implementation of prior image constrained compressed sensing (PICCS)
NASA Astrophysics Data System (ADS)
Nett, Brian E.; Tang, Jie; Chen, Guang-Hong
2010-04-01
The Prior Image Constrained Compressed Sensing (PICCS) algorithm (Med. Phys. 35, pg. 660, 2008) has been applied to several computed tomography applications with both standard CT systems and flat-panel based systems designed for guiding interventional procedures and radiation therapy treatment delivery. The PICCS algorithm typically utilizes a prior image reconstructed via the standard Filtered Backprojection (FBP) algorithm. The algorithm then iteratively solves for the image volume that matches the measured data while simultaneously assuring that the image is similar to the prior image. The PICCS algorithm has demonstrated utility in several applications, including improved temporal resolution reconstruction, 4D respiratory phase-specific reconstructions for radiation therapy, and cardiac reconstruction from data acquired on an interventional C-arm. One disadvantage of the PICCS algorithm, as with other iterative algorithms, is the long computation time typically associated with reconstruction. For an algorithm to gain clinical acceptance, reconstruction must be achievable in minutes rather than hours. In this work the PICCS algorithm has been implemented on the GPU in order to significantly reduce its reconstruction time. The Compute Unified Device Architecture (CUDA) was used in this implementation.
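The cited PICCS formulation is commonly written as the constrained minimization

\min_{x}\; \alpha \,\lVert \Psi_1 (x - x_p) \rVert_1 + (1-\alpha)\,\lVert \Psi_2\, x \rVert_1 \quad \text{subject to} \quad A x = y ,

where x_p is the FBP prior image, A the projection operator, y the measured data, \Psi_{1,2} sparsifying transforms (typically spatial gradients, giving total variation), and \alpha the prior-image weight. The forward and backprojections involving A dominate each iteration, and these are what the GPU implementation parallelizes.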
NASA Astrophysics Data System (ADS)
Brajard, J.; Moulin, C.; Thiria, S.
2008-10-01
This paper presents a comparison of the atmospheric correction accuracy between the standard Sea-viewing Wide Field-of-view Sensor (SeaWiFS) algorithm and the NeuroVaria algorithm for the ocean off the Indian coast in March 1999. NeuroVaria is a general method developed to retrieve aerosol optical properties and water-leaving reflectances for all types of aerosols, including absorbing ones. It has been applied to SeaWiFS images of March 1999, during an episode of transport of absorbing aerosols coming from pollutant sources in India. Water-leaving reflectances and aerosol optical thickness estimated by the two methods were extracted along a transect across the aerosol plume for three days. The comparison showed that NeuroVaria allows the retrieval of oceanic properties in the presence of absorbing aerosols with better spatial and temporal stability than the standard SeaWiFS algorithm. NeuroVaria was then applied to the available SeaWiFS images over a two-week period. The NeuroVaria algorithm retrieves ocean products for a larger number of pixels than the standard one and eliminates most of the discontinuities and artifacts associated with the standard algorithm in the presence of absorbing aerosols.
Designing image segmentation studies: Statistical power, sample size and reference standard quality.
Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C
2017-12-01
Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study.
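The paper derives a segmentation-specific calculation; for orientation, the generic two-sample form such derivations build on relates the per-group sample size n to the detectable accuracy difference \Delta:

n = \frac{\big(z_{1-\alpha/2} + z_{1-\beta}\big)^2 \left(\sigma_1^2 + \sigma_2^2\right)}{\Delta^2} ,

with the segmentation-specific terms (reference-standard error, voxel-level correlation) entering through the variances. This generic form is background, not the paper's exact formula.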
Hus, Vanessa; Lord, Catherine
2014-08-01
The recently published Autism Diagnostic Observation Schedule, 2nd edition (ADOS-2) includes revised diagnostic algorithms and standardized severity scores for modules used to assess younger children. A revised algorithm and severity scores are not yet available for Module 4, used with verbally fluent adults. The current study revises the Module 4 algorithm and calibrates raw overall and domain totals to provide metrics of autism spectrum disorder (ASD) symptom severity. Sensitivity and specificity of the revised Module 4 algorithm exceeded 80 % in the overall sample. Module 4 calibrated severity scores provide quantitative estimates of ASD symptom severity that are relatively independent of participant characteristics. These efforts increase comparability of ADOS scores across modules and should facilitate efforts to examine symptom trajectories from toddler to adulthood.
Lipinski, Doug; Mohseni, Kamran
2010-03-01
A ridge tracking algorithm for the computation and extraction of Lagrangian coherent structures (LCS) is developed. This algorithm takes advantage of the spatial coherence of LCS by tracking the ridges which form LCS to avoid unnecessary computations away from the ridges. We also make use of the temporal coherence of LCS by approximating the time dependent motion of the LCS with passive tracer particles. To justify this approximation, we provide an estimate of the difference between the motion of the LCS and that of tracer particles which begin on the LCS. In addition to the speedup in computational time, the ridge tracking algorithm uses less memory and results in smaller output files than the standard LCS algorithm. Finally, we apply our ridge tracking algorithm to two test cases, an analytically defined double gyre as well as the more complicated example of the numerical simulation of a swimming jellyfish. In our test cases, we find up to a 35 times speedup when compared with the standard LCS algorithm.
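LCS are commonly extracted as ridges of the finite-time Lyapunov exponent (FTLE) field; although the abstract does not restate it, the standard definition underlying such ridge computations is

\sigma_{t_0}^{T}(x) = \frac{1}{\lvert T \rvert} \ln \sqrt{\lambda_{\max}\!\left( \left(\frac{\partial \Phi_{t_0}^{t_0+T}}{\partial x}\right)^{\!\top} \frac{\partial \Phi_{t_0}^{t_0+T}}{\partial x} \right)} ,

where \Phi_{t_0}^{t_0+T} is the flow map of the velocity field over the integration time T.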
Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A
2017-01-01
The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope, and sample entropy were the established methods evaluated, while general time series mean/variance analysis, sequential and batch processing with parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations in which the prior parameter (p0) was zero. The best performing Bayesian algorithms used p0 = 0 and a posterior probability for onset determination of 60-90%. While existing algorithms performed reasonably well, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms performs equally well when the time series has multiple bursts of muscle activity.
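The priors used in the study are in the paper; a minimal single-changepoint posterior for a mean shift in a rectified-EMG-like series (flat prior over onset location and known noise level, both simplifying assumptions of this sketch) can be written as:

import numpy as np

def changepoint_posterior(x, sigma=1.0):
    # Posterior over the index k at which the mean of x shifts, assuming
    # Gaussian noise with known sigma and a flat prior on k.
    n = len(x)
    loglik = np.full(n, -np.inf)
    for k in range(1, n - 1):                 # candidate onset locations
        m1, m2 = x[:k].mean(), x[k:].mean()
        resid = np.r_[x[:k] - m1, x[k:] - m2]
        loglik[k] = -0.5 * np.sum(resid**2) / sigma**2
    post = np.exp(loglik - loglik.max())      # normalize in a stable way
    return post / post.sum()

rng = np.random.default_rng(0)
x = np.r_[rng.normal(0, 1, 200), rng.normal(2, 1, 200)]   # true onset at 200
print(np.argmax(changepoint_posterior(x)))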
A comparison of the fractal and JPEG algorithms
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Shahshahani, M.
1991-01-01
A proprietary fractal image compression algorithm and the Joint Photographic Experts Group (JPEG) industry standard algorithm for image compression are compared. In every case, the JPEG algorithm was superior to the fractal method at a given compression ratio according to a root mean square criterion and a peak signal to noise criterion.
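The two comparison criteria are simple enough to state in code. A minimal sketch, with random placeholder images standing in for an original/decompressed pair:

```python
import numpy as np

def rmse(original, decoded):
    """Root mean square error between two images."""
    err = original.astype(np.float64) - decoded.astype(np.float64)
    return np.sqrt(np.mean(err**2))

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    r = rmse(original, decoded)
    return np.inf if r == 0 else 20.0 * np.log10(peak / r)

# Placeholder 8-bit images standing in for an original and a decompressed copy.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64))
noisy = np.clip(img + rng.normal(0, 3, img.shape), 0, 255)
print(f"RMSE = {rmse(img, noisy):.2f}, PSNR = {psnr(img, noisy):.1f} dB")
```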
Gate-Level Commercial Microelectronics Verification with Standard Cell Recognition
2015-03-26
[Table-of-contents fragment from the report: 2.2.1.4 Algorithm Insufficiencies as Applied to DARPA's Circuit Verification Efforts; 4.2 Discussion of SCR Algorithm and Code; 4.2.1 Explication of SCR Algorithm; 4.2.2 Algorithm Attributes; 4.3 Advantages of Transistor-level Verification with SCR.]
Screening Algorithm to Guide Decisions on Whether to Conduct a Health Impact Assessment
Provides a visual aid in the form of a decision algorithm that helps guide discussions about whether to proceed with an HIA. The algorithm can help structure, standardize, and document the decision process.
Plume-Free Stream Interaction Heating Effects During Orion Crew Module Reentry
NASA Technical Reports Server (NTRS)
Marichalar, J.; Lumpkin, F.; Boyles, K.
2012-01-01
During reentry of the Orion Crew Module (CM), vehicle attitude control will be performed by firing reaction control system (RCS) thrusters. Simulation of RCS plumes and their interaction with the oncoming flow has been difficult for the analysis community due to the large scarf angles of the RCS thrusters and the unsteady nature of the Orion capsule backshell environments. The model for the aerothermal database has thus relied on wind tunnel test data to capture the heating effects of thruster plume interactions with the freestream. These data are only valid for the continuum flow regime of the reentry trajectory. A Direct Simulation Monte Carlo (DSMC) analysis was performed to study the vehicle heating effects that result from the RCS thruster plume interaction with the oncoming freestream flow at high altitudes during Orion CM reentry. The study was performed with the DSMC Analysis Code (DAC). The inflow boundary conditions for the jets were obtained from Data Parallel Line Relaxation (DPLR) computational fluid dynamics (CFD) solutions. Simulations were performed for the roll, yaw, pitch-up and pitch-down jets at altitudes of 105 km, 125 km and 160 km as well as vacuum conditions. For comparison purposes (see Figure 1), the freestream conditions were based on previous DAC simulations performed without active RCS to populate the aerodynamic database for the Orion CM. Other inputs to the analysis included a constant orbital reentry velocity of 7.5 km/s and an angle of attack of 160 degrees. The results of the study showed that the interaction effects decrease quickly with increasing altitude. Also, jets with highly scarfed nozzles cause more severe heating compared to nozzles with lower scarf angles. The difficulty of performing these simulations was driven by the maximum number density and the ratio of number densities between the freestream and the plume for each simulation. The lowest altitude solutions required a substantial amount of computational resources (up to 1800 processors) to simulate approximately 2 billion molecules for the refined (adapted) solutions.
Aero-thermo-dynamic analysis of the Spaceliner-7.1 vehicle in high altitude flight
NASA Astrophysics Data System (ADS)
Zuppardi, Gennaro; Morsa, Luigi; Sippel, Martin; Schwanekamp, Tobias
2014-12-01
SpaceLiner, designed by DLR, is a visionary, extremely fast passenger transportation concept. It consists of two stages: a winged booster and a winged passenger vehicle. After separation of the two stages, the booster makes a controlled re-entry and returns to the launch site. According to the current version of the project, SpaceLiner-7.1, the vehicle is to be brought to an altitude of 75 km and then released to begin its descent. In view of the possibility that the SpaceLiner-7.1 vehicle could be released at altitudes above 75 km, e.g. 100 km or higher, and also for speculative purposes, this paper computes the aerodynamic parameters of the SpaceLiner-7.1 vehicle across the whole transitional regime, from continuum low-density to free molecular flow. Computer simulations were carried out with three codes: two DSMC codes, DS3V in the altitude interval 100-250 km for the evaluation of the global aerodynamic coefficients and DS2V at an altitude of 60 km for the evaluation of the heat flux and pressure distributions along the vehicle nose, and the DLR HOTSOSE code for the evaluation of the global aerodynamic coefficients in continuum, hypersonic flow at an altitude of 44.6 km. The effectiveness of the flaps at a deflection angle of -35 deg. was evaluated over the above-mentioned altitude interval. The vehicle showed longitudinal stability over the whole altitude interval even with no flap deflection. The global bridging formulae proved suitable for evaluating the aerodynamic coefficients in the altitude interval 80-100 km, where the computations can be fulfilled neither by CFD, because the classical equations for the transport coefficients fail there, nor by DSMC, because of the very high computer resources required, both in core storage (a large number of simulated molecules is needed) and in processing time.
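The abstract does not reproduce the bridging formulae used. One frequently quoted global bridging relation in the rarefied-flow literature blends the continuum and free-molecular values of an aerodynamic coefficient through a sine-squared function of the Knudsen number; the sketch below implements that form purely as an illustration, and both the formula and its Knudsen-number limits are assumptions here, not taken from the paper.

```python
import numpy as np

def bridge(c_cont, c_fm, kn):
    """Sine-squared bridging of an aerodynamic coefficient across the
    transitional regime (continuum below Kn = 1e-3, free molecular above
    Kn = 10); an illustrative form, not the paper's formula."""
    kn_clipped = np.clip(kn, 1e-3, 10.0)
    phi = np.sin(np.pi / 8.0 * (3.0 + np.log10(kn_clipped)))**2
    return c_cont + (c_fm - c_cont) * phi

# Example: a drag coefficient bridged between assumed continuum and
# free-molecular values over the transitional gap mentioned in the abstract.
for kn in (1e-3, 1e-2, 1e-1, 1.0, 10.0):
    cd = float(bridge(1.0, 2.0, kn))
    print(f"Kn={kn:g}: CD={cd:.3f}")
```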
ERIC Educational Resources Information Center
Gerlach, Vernon S.; And Others
An algorithm is defined here as an unambiguous procedure which will always produce the correct result when applied to any problem of a given class of problems. This paper gives an extended discussion of the definition of an algorithm. It also explores in detail the elements of an algorithm, the representation of algorithms in standard prose, flow…
NASA Technical Reports Server (NTRS)
Esaias, Wayne E.; Abbott, Mark; Carder, Kendall; Campbell, Janet; Clark, Dennis; Evans, Robert; Brown, Otis; Kearns, Ed; Kilpatrick, Kay; Balch, W.
2003-01-01
Simplistic models relating global satellite ocean color, temperature, and light to ocean net primary production (ONPP) are sensitive to the accuracy and limitations of the satellite estimate of chlorophyll and other input fields, as well as the primary productivity model. The standard MODIS ONPP product uses the new semi-analytic chlorophyll algorithm as its input for two ONPP indexes. The three primary chlorophyll estimates from MODIS, as well as the SeaWiFS chlorophyll product, were used to assess global and regional performance in estimating ONPP for the full mission, concentrating on 2001. The two standard ONPP algorithms were examined at 8-day and 39 kilometer resolution to quantify the chlorophyll-algorithm dependency of ONPP. Ancillary data (MLD from FNMOC, MODIS SST, and PAR from the GSFC DAO) were identical. The standard MODIS ONPP estimates for annual production in 2001 were 59 and 58 Gt C for the two ONPP algorithms. Differences in ONPP using alternate chlorophylls were on the order of 10% for global annual ONPP, but ranged up to 100% regionally. On all scales the differences in ONPP were smaller between MODIS and SeaWiFS than between ONPP models, or among chlorophyll algorithms within MODIS. The largest regional ONPP differences were found in the Southern Ocean (SO). In the SO, application of the semi-analytic chlorophyll resulted not only in a magnitude difference in ONPP (2x), but also in a temporal shift in the time of maximum production compared to empirical algorithms when summed over standard oceanic areas. The resulting increase in global ONPP (6-7 Gt C) is supported by the better performance of the semi-analytic chlorophyll in the SO and other high-chlorophyll regions. The differences are significant in terms of understanding regional differences and the dynamics of ocean carbon transformations.
Noël, Peter B; Engels, Stephan; Köhler, Thomas; Muenzel, Daniela; Franz, Daniela; Rasper, Michael; Rummeny, Ernst J; Dobritz, Martin; Fingerle, Alexander A
2018-01-01
Background The explosive growth of computed tomography (CT) has led to a growing public health concern about patient and population radiation dose. A recently introduced technique for dose reduction, which can be combined with tube-current modulation, over-beam reduction, and organ-specific dose reduction, is iterative reconstruction (IR). Purpose To evaluate the quality, at different radiation dose levels, of three reconstruction algorithms for diagnostics of patients with proven liver metastases under tumor follow-up. Material and Methods A total of 40 thorax-abdomen-pelvis CT examinations acquired from 20 patients in a tumor follow-up were included. All patients were imaged using the standard-dose and a specific low-dose CT protocol. Reconstructed slices were generated by using three different reconstruction algorithms: a classical filtered back projection (FBP); a first-generation iterative noise-reduction algorithm (iDose4); and a next-generation model-based IR algorithm (IMR). Results The overall detection of liver lesions tended to be higher with the IMR algorithm than with FBP or iDose4. The IMR dataset at standard dose yielded the highest overall detectability, while the low-dose FBP dataset showed the lowest detectability. For the low-dose protocols, significantly improved detectability of liver lesions can be reported for IMR compared to FBP or iDose4 (P = 0.01). The radiation dose decreased by an approximate factor of 5 between the standard-dose and the low-dose protocol. Conclusion The latest generation of IR algorithms significantly improved the diagnostic image quality and provided virtually noise-free images for ultra-low-dose CT imaging.
Analytic continuation of quantum Monte Carlo data by stochastic analytical inference.
Fuchs, Sebastian; Pruschke, Thomas; Jarrell, Mark
2010-05-01
We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data which is strictly based on principles of Bayesian statistical inference. Within this framework we are able to obtain an explicit expression for the calculation of a weighted average over possible energy spectra, which can be evaluated by standard Monte Carlo simulations, yielding as a by-product the distribution function of the regularization parameter. Our algorithm thus avoids the usual ad hoc assumptions introduced in similar algorithms to fix the regularization parameter. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum-entropy calculation.
Combinatorial algorithms for design of DNA arrays.
Hannenhalli, Sridhar; Hubell, Earl; Lipshutz, Robert; Pevzner, Pavel A
2002-01-01
Optimal design of DNA arrays requires the development of algorithms with two-fold goals: reducing the effects caused by unintended illumination (the border length minimization problem) and reducing the complexity of masks (the mask decomposition problem). We describe algorithms that reduce the number of rectangles in mask decomposition by 20-30% as compared to a standard array design, under the assumption that the arrangement of oligonucleotides on the array is fixed. This algorithm produces a provably optimal solution for all studied real instances of array design. We also address the difficult problem of finding an arrangement which minimizes the border length, and propose a new idea of threading that significantly reduces the border length as compared to standard designs.
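For intuition about the first objective: in the synchronous-synthesis setting, the border length of a fixed arrangement counts, over all pairs of grid-adjacent probes, the positions at which their sequences differ, since each mismatch exposes a mask border. A minimal sketch, assuming equal-length probes on a square grid:

```python
import itertools

def border_length(grid):
    """Total border length of a probe placement: for every pair of
    horizontally or vertically adjacent equal-length probes, count the
    positions at which their sequences differ."""
    rows, cols = len(grid), len(grid[0])
    total = 0
    for r, c in itertools.product(range(rows), range(cols)):
        for dr, dc in ((0, 1), (1, 0)):        # right and down neighbours
            rr, cc = r + dr, c + dc
            if rr < rows and cc < cols:
                total += sum(a != b for a, b in zip(grid[r][c], grid[rr][cc]))
    return total

# Toy 2x2 arrangement of length-4 oligonucleotides.
arrangement = [["ACGT", "ACGA"],
               ["ACGG", "TCGA"]]
print(border_length(arrangement))
```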
An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.
1991-01-01
The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. An implementation is presented of a look-ahead version of the Lanczos algorithm that, except for the very special situation of an incurable breakdown, overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products as the standard Lanczos process without look-ahead.
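To make the breakdown issue concrete, here is a minimal sketch of the standard two-sided Lanczos iteration without look-ahead; it stops at the (near-)vanishing biorthogonality inner product that the look-ahead variant is designed to skip over. This is an illustration under one conventional scaling choice, not the authors' implementation.

```python
import numpy as np

def two_sided_lanczos(A, v0, w0, m, tol=1e-12):
    """Standard two-sided (nonsymmetric) Lanczos without look-ahead.

    Builds biorthogonal bases V, W (w_i^T v_j = delta_ij in exact
    arithmetic) and the tridiagonal entries of T = W^T A V.  Stops at
    the (near-)breakdown s^T r ~ 0 that look-ahead variants skip over.
    """
    v = v0 / np.linalg.norm(v0)
    w = w0 / (w0 @ v)                        # enforce w^T v = 1
    v_prev = w_prev = np.zeros_like(v)
    beta_prev = gamma_prev = 0.0
    V, W, alphas, betas, gammas = [], [], [], [], []
    for j in range(m):
        V.append(v); W.append(w)
        Av = A @ v
        alpha = w @ Av
        alphas.append(alpha)
        r = Av - alpha * v - beta_prev * v_prev        # next right vector
        s = A.T @ w - alpha * w - gamma_prev * w_prev  # next left vector
        d = s @ r
        if abs(d) < tol:          # breakdown (or lucky termination)
            break
        gamma = np.sqrt(abs(d))   # one conventional scaling choice
        beta = d / gamma
        gammas.append(gamma); betas.append(beta)
        v_prev, w_prev, beta_prev, gamma_prev = v, w, beta, gamma
        v, w = r / gamma, s / beta
    return np.array(V).T, np.array(W).T, alphas, betas, gammas

# Small demonstration on a random nonsymmetric matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
V, W, a, b, g = two_sided_lanczos(A, rng.normal(size=8), rng.normal(size=8), 6)
print(np.round(W.T @ V, 6))       # approximately the identity while no breakdown
```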
Improved classification accuracy by feature extraction using genetic algorithms
NASA Astrophysics Data System (ADS)
Patriarche, Julia; Manduca, Armando; Erickson, Bradley J.
2003-05-01
A feature extraction algorithm has been developed for the purposes of improving classification accuracy. The algorithm uses a genetic algorithm / hill-climber hybrid to generate a set of linearly recombined features, which may be of reduced dimensionality compared with the original set. The genetic algorithm performs the global exploration, and a hill climber explores local neighborhoods. Hybridizing the genetic algorithm with a hill climber improves both the rate of convergence and the final overall cost function value; it also reduces the sensitivity of the genetic algorithm to parameter selection. The genetic algorithm includes the operators crossover, mutation, and deletion / reactivation, the last of which effects dimensionality reduction. The feature extractor is supervised, and is capable of deriving a separate feature space for each tissue (which are reintegrated during classification). A non-anatomical digital phantom was developed as a gold standard for testing purposes. In tests with the phantom, and with images of multiple sclerosis patients, classification using feature-extractor-derived features yielded lower error rates than classification using standard pulse sequences or features derived using principal components analysis. Using the multiple sclerosis patient data, the algorithm resulted in a mean 31% reduction in classification error of pure tissues.
COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R826238)
This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard methods that we ...
Report on the Development of the Advanced Encryption Standard (AES).
Nechvatal, J; Barker, E; Bassham, L; Burr, W; Dworkin, M; Foti, J; Roback, E
2001-01-01
In 1997, the National Institute of Standards and Technology (NIST) initiated a process to select a symmetric-key encryption algorithm to be used to protect sensitive (unclassified) Federal information in furtherance of NIST's statutory responsibilities. In 1998, NIST announced the acceptance of 15 candidate algorithms and requested the assistance of the cryptographic research community in analyzing the candidates. This analysis included an initial examination of the security and efficiency characteristics for each algorithm. NIST reviewed the results of this preliminary research and selected MARS, RC6™, Rijndael, Serpent and Twofish as finalists. Having reviewed further public analysis of the finalists, NIST has decided to propose Rijndael as the Advanced Encryption Standard (AES). The research results and rationale for this selection are documented in this report.
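The report documents the selection process rather than code, but as a usage illustration, the hedged sketch below encrypts and decrypts with AES in CBC mode using the third-party Python cryptography package; the key size, IV handling, and padding choices are illustrative assumptions, and no message authentication is included.

```python
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)      # AES-256 key; AES also supports 128- and 192-bit keys
iv = os.urandom(16)       # CBC needs a fresh random 16-byte IV per message

def encrypt(plaintext: bytes) -> bytes:
    """AES-CBC encryption with PKCS7 padding (illustrative only)."""
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(padded) + enc.finalize()

def decrypt(ciphertext: bytes) -> bytes:
    """Invert encrypt(): AES-CBC decryption followed by PKCS7 unpadding."""
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = dec.update(ciphertext) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

ct = encrypt(b"sensitive (unclassified) data")
assert decrypt(ct) == b"sensitive (unclassified) data"
```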
An efficient Cellular Potts Model algorithm that forbids cell fragmentation
NASA Astrophysics Data System (ADS)
Durand, Marc; Guesnet, Etienne
2016-11-01
The Cellular Potts Model (CPM) is a lattice-based modeling technique which is widely used for simulating cellular patterns such as foams or biological tissues. Despite its realism and generality, the standard Monte Carlo algorithm used in the scientific literature to evolve this model preserves the connectivity of cells only over a limited range of simulation temperatures. We present a new algorithm in which cell fragmentation is forbidden at all simulation temperatures. This significantly enhances the realism of the simulated patterns. It also increases the computational efficiency compared with the standard CPM algorithm, even at the same simulation temperature, thanks to the time saved by not attempting unrealistic moves. Moreover, our algorithm restores the detailed balance equation, ensuring that the long-term state is independent of the chosen acceptance rate and the chosen path in temperature space.
Ckmeans.1d.dp: Optimal k-means Clustering in One Dimension by Dynamic Programming.
Wang, Haizhou; Song, Mingzhou
2011-12-01
The heuristic k-means algorithm, widely used for cluster analysis, does not guarantee optimality. We developed a dynamic programming algorithm for optimal one-dimensional clustering. The algorithm is implemented as an R package called Ckmeans.1d.dp. We demonstrate its advantage in optimality and runtime over the standard iterative k-means algorithm.
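The dynamic programming idea is compact enough to sketch: once the data are sorted, some optimal clustering has contiguous clusters, and prefix sums give each segment's within-cluster sum of squares in constant time. Below is a minimal O(k n^2) Python sketch of the recurrence; the released R package uses a faster and more careful implementation.

```python
import numpy as np

def ckmeans_1d(x, k):
    """Optimal 1-D k-means by dynamic programming (O(k n^2) sketch).

    Sorted data admit an optimal solution with contiguous clusters, so
    D[m][j] = min_i D[m-1][i-1] + ssq(i, j), where ssq is the within-
    segment sum of squared deviations computed from prefix sums.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    s1 = np.concatenate(([0.0], np.cumsum(x)))
    s2 = np.concatenate(([0.0], np.cumsum(x * x)))

    def ssq(i, j):                     # segment x[i..j], 0-indexed inclusive
        cnt = j - i + 1
        su = s1[j + 1] - s1[i]
        return (s2[j + 1] - s2[i]) - su * su / cnt

    D = np.full((k + 1, n + 1), np.inf)
    back = np.zeros((k + 1, n + 1), dtype=int)
    D[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(m, n + 1):      # j = number of points used so far
            for i in range(m, j + 1):  # cluster m covers x[i-1 .. j-1]
                cost = D[m - 1][i - 1] + ssq(i - 1, j - 1)
                if cost < D[m][j]:
                    D[m][j], back[m][j] = cost, i
    # Recover cluster boundaries from the backpointers.
    bounds, j = [], n
    for m in range(k, 0, -1):
        i = back[m][j]
        bounds.append((i - 1, j - 1))
        j = i - 1
    return x, bounds[::-1], D[k][n]

x, clusters, cost = ckmeans_1d([1.0, 1.1, 5.0, 5.1, 9.0, 9.2], 3)
print(clusters, round(cost, 3))   # three contiguous pairs, tiny total cost
```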
Van Herpe, Tom; De Brabanter, Jos; Beullens, Martine; De Moor, Bart; Van den Berghe, Greet
2008-01-01
Introduction Blood glucose (BG) control performed by intensive care unit (ICU) nurses is becoming standard practice for critically ill patients. New (semi-automated) 'BG control' algorithms (or 'insulin titration' algorithms) are under development, but these require stringent validation before they can replace the currently used algorithms. Existing methods for objectively comparing different insulin titration algorithms show weaknesses. In the current study, a new approach for appropriately assessing the adequacy of different algorithms is proposed. Methods Two ICU patient populations (with different baseline characteristics) were studied, both treated with a similar 'nurse-driven' insulin titration algorithm targeting BG levels of 80 to 110 mg/dl. A new method for objectively evaluating BG deviations from normoglycemia was founded on a smooth penalty function. Next, the performance of this new evaluation tool was compared with the current standard assessment methods, on an individual as well as a population basis. Finally, the impact of four selected parameters (the average BG sampling frequency, the duration of algorithm application, the severity of disease, and the type of illness) on the performance of an insulin titration algorithm was determined by multiple regression analysis. Results The glycemic penalty index (GPI) was proposed as a tool for assessing the overall glycemic control behavior in ICU patients. The GPI of a patient is the average of all penalties that are individually assigned to each measured BG value based on the optimized smooth penalty function. The computation of this index returns a number between 0 (no penalty) and 100 (the highest penalty). For some patients, the assessment of the BG control behavior using the traditional standard evaluation methods was different from the evaluation with GPI. Two parameters were found to have a significant impact on GPI: the BG sampling frequency and the duration of algorithm application. A higher BG sampling frequency and a longer algorithm application duration resulted in an apparently better performance, as indicated by a lower GPI. Conclusion The GPI is an alternative method for evaluating the performance of BG control algorithms. The blood glucose sampling frequency and the duration of algorithm application should be similar when comparing algorithms. PMID:18302732
Kirişli, H A; Schaap, M; Metz, C T; Dharampal, A S; Meijboom, W B; Papadopoulou, S L; Dedic, A; Nieman, K; de Graaf, M A; Meijs, M F L; Cramer, M J; Broersen, A; Cetin, S; Eslami, A; Flórez-Valencia, L; Lor, K L; Matuszewski, B; Melki, I; Mohr, B; Oksüz, I; Shahzad, R; Wang, C; Kitslaar, P H; Unal, G; Katouzian, A; Örkisz, M; Chen, C M; Precioso, F; Najman, L; Masood, S; Ünay, D; van Vliet, L; Moreno, R; Goldenberg, R; Vuçini, E; Krestin, G P; Niessen, W J; van Walsum, T
2013-12-01
Though conventional coronary angiography (CCA) has been the standard of reference for diagnosing coronary artery disease in the past decades, computed tomography angiography (CTA) has rapidly emerged, and is nowadays widely used in clinical practice. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of the algorithms devised to detect and quantify the coronary artery stenoses, and to segment the coronary artery lumen in CTA data. The objective of this evaluation framework is to demonstrate the feasibility of dedicated algorithms to: (1) (semi-)automatically detect and quantify stenosis on CTA, in comparison with quantitative coronary angiography (QCA) and CTA consensus reading, and (2) (semi-)automatically segment the coronary lumen on CTA, in comparison with expert's manual annotation. A database consisting of 48 multicenter multivendor cardiac CTA datasets with corresponding reference standards are described and made available. The algorithms from 11 research groups were quantitatively evaluated and compared. The results show that (1) some of the current stenosis detection/quantification algorithms may be used for triage or as a second-reader in clinical practice, and that (2) automatic lumen segmentation is possible with a precision similar to that obtained by experts. The framework is open for new submissions through the website, at http://coronary.bigr.nl/stenoses/. Copyright © 2013 Elsevier B.V. All rights reserved.
Tweedell, Andrew J.; Haynes, Courtney A.
2017-01-01
The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60–90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine whether this class of algorithms performs equally well when the time series has multiple bursts of muscle activity. PMID:28489897
Atmospheric correction of SeaWiFS imagery for turbid coastal and inland waters.
Ruddick, K G; Ovidio, F; Rijkeboer, M
2000-02-20
The standard SeaWiFS atmospheric correction algorithm, designed for open ocean water, has been extended for use over turbid coastal and inland waters. Failure of the standard algorithm over turbid waters can be attributed to invalid assumptions of zero water-leaving radiance for the near-infrared bands at 765 and 865 nm. In the present study these assumptions are replaced by the assumptions of spatial homogeneity of the 765:865-nm ratios for aerosol reflectance and for water-leaving reflectance. These two ratios are imposed as calibration parameters after inspection of the Rayleigh-corrected reflectance scatterplot. The performance of the new algorithm is demonstrated for imagery of Belgian coastal waters and yields physically realistic water-leaving radiance spectra. A preliminary comparison with in situ radiance spectra for the Dutch Lake Markermeer shows significant improvement over the standard atmospheric correction algorithm. An analysis is made of the sensitivity of results to the choice of calibration parameters, and perspectives for application of the method to other sensors are briefly discussed.
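The two homogeneity assumptions reduce the per-pixel correction to a 2x2 linear solve. A minimal sketch follows; the values of the aerosol ratio alpha and the water ratio beta below are illustrative placeholders, whereas the actual method calibrates them per scene from the Rayleigh-corrected reflectance scatterplot.

```python
def turbid_correction(rho_c_765, rho_c_865, alpha=1.06, beta=1.9):
    """Split Rayleigh-corrected reflectance at 765/865 nm into aerosol and
    water-leaving parts, assuming spatially uniform band ratios:
        rho_a(765) = alpha * rho_a(865),  rho_w(765) = beta * rho_w(865).
    alpha and beta are illustrative placeholders here; the actual method
    sets them per scene from the Rayleigh-corrected scatterplot.
    """
    # rho_c(865) = rho_a865 + rho_w865
    # rho_c(765) = alpha * rho_a865 + beta * rho_w865
    rho_w_865 = (rho_c_765 - alpha * rho_c_865) / (beta - alpha)
    rho_w_865 = max(rho_w_865, 0.0)        # clip unphysical negatives
    rho_a_865 = rho_c_865 - rho_w_865
    return rho_a_865, rho_w_865

print(turbid_correction(0.020, 0.015))     # (aerosol, water-leaving) at 865 nm
```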
Schoenberg, Mike R; Lange, Rael T; Brickell, Tracey A; Saklofske, Donald H
2007-04-01
Neuropsychologic evaluation requires current test performance be contrasted against a comparison standard to determine if change has occurred. An estimate of premorbid intelligence quotient (IQ) is often used as a comparison standard. The Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) is a commonly used intelligence test. However, there is no method to estimate premorbid IQ for the WISC-IV, limiting the test's utility for neuropsychologic assessment. This study develops algorithms to estimate premorbid Full Scale IQ scores. Participants were the American WISC-IV standardization sample (N = 2172). The sample was randomly divided into 2 groups (development and validation). The development group was used to generate 12 algorithms. These algorithms were accurate predictors of WISC-IV Full Scale IQ scores in healthy children and adolescents. These algorithms hold promise as a method to predict premorbid IQ for patients with known or suspected neurologic dysfunction; however, clinical validation is required.
Patel, Sanjay R.; Weng, Jia; Rueschman, Michael; Dudley, Katherine A.; Loredo, Jose S.; Mossavar-Rahmani, Yasmin; Ramirez, Maricelle; Ramos, Alberto R.; Reid, Kathryn; Seiger, Ashley N.; Sotres-Alvarez, Daniela; Zee, Phyllis C.; Wang, Rui
2015-01-01
Study Objectives: While actigraphy is considered objective, the process of setting rest intervals to calculate sleep variables is subjective. We sought to evaluate the reproducibility of actigraphy-derived measures of sleep using a standardized algorithm for setting rest intervals. Design: Observational study. Setting: Community-based. Participants: A random sample of 50 adults aged 18–64 years free of severe sleep apnea participating in the Sueño sleep ancillary study to the Hispanic Community Health Study/Study of Latinos. Interventions: N/A. Measurements and Results: Participants underwent 7 days of continuous wrist actigraphy and completed daily sleep diaries. Studies were scored twice by each of two scorers. Rest intervals were set using a standardized hierarchical approach based on event marker, diary, light, and activity data. Sleep/wake status was then determined for each 30-sec epoch using a validated algorithm, and this was used to generate 11 variables: mean nightly sleep duration, nap duration, 24-h sleep duration, sleep latency, sleep maintenance efficiency, sleep fragmentation index, sleep onset time, sleep offset time, sleep midpoint time, standard deviation of sleep duration, and standard deviation of sleep midpoint. Intra-scorer intraclass correlation coefficients (ICCs) were high, ranging from 0.911 to 0.995 across all 11 variables. Similarly, inter-scorer ICCs were high, also ranging from 0.911 to 0.995, and mean inter-scorer differences were small. Bland-Altman plots did not reveal any systematic disagreement in scoring. Conclusions: With use of a standardized algorithm to set rest intervals, scoring of actigraphy for the purpose of generating a wide array of sleep variables is highly reproducible. Citation: Patel SR, Weng J, Rueschman M, Dudley KA, Loredo JS, Mossavar-Rahmani Y, Ramirez M, Ramos AR, Reid K, Seiger AN, Sotres-Alvarez D, Zee PC, Wang R. Reproducibility of a standardized actigraphy scoring algorithm for sleep in a US Hispanic/Latino population. SLEEP 2015;38(9):1497–1503. PMID:25845697
Transient heat transfer in viscous rarefied gas between concentric cylinders. Effect of curvature
NASA Astrophysics Data System (ADS)
Gospodinov, P.; Roussinov, V.; Dankov, D.
2015-10-01
The thermoacoustic waves arising in cylindrical or planar Couette flow of a rarefied gas between rotating cylinders are studied for cases in which the velocity direction of the active cylinder wall is suddenly switched. In the limit of an unbounded increase in the inner cylinder radius, the flow can be interpreted as Couette flow between two flat plates. Transient processes in the gas phase are examined using the Navier-Stokes-Fourier (NSF) model and the Direct Simulation Monte Carlo (DSMC) method developed in previous publications, together with their numerical solutions. Macroscopic flow characteristics (velocity, density, temperature) are obtained. Cylindrical flow cases with fixed velocity and temperature of both walls are considered. The effects of curvature on wave propagation and attenuation are studied numerically.
Rarefaction effects on Galileo probe aerodynamics
NASA Technical Reports Server (NTRS)
Moss, James N.; LeBeau, Gerald J.; Blanchard, Robert C.; Price, Joseph M.
1996-01-01
Solutions of aerodynamic characteristics are presented for the Galileo Probe entering Jupiter's hydrogen-helium atmosphere at a nominal relative velocity of 47.4 km/s. Focus is on predicting the aerodynamic drag coefficient during the transitional flow regime using the direct simulation Monte Carlo (DSMC) method. Accuracy of the probe's drag coefficient directly impacts the inferred atmospheric properties that are being extracted from the deceleration measurements made by onboard accelerometers as part of the Atmospheric Structure Experiment. The range of rarefaction considered in the present study extends from the free molecular limit to continuum conditions. Comparisons made with previous calculations and experimental measurements show the present results for drag to merge well with Navier-Stokes and experimental results for the least rarefied conditions considered.
NASA Astrophysics Data System (ADS)
Huang, Z.; Jia, X.; Rubin, M.; Fougere, N.; Gombosi, T. I.; Tenishev, V.; Combi, M. R.; Bieler, A. M.; Toth, G.; Hansen, K. C.; Shou, Y.
2014-12-01
We study the plasma environment of the comet Churyumov-Gerasimenko, which is the target of the Rosetta mission, by performing large scale numerical simulations. Our model is based on BATS-R-US within the Space Weather Modeling Framework that solves the governing multifluid MHD equations, which describe the behavior of the cometary heavy ions, the solar wind protons, and electrons. The model includes various mass loading processes, including ionization, charge exchange, dissociative ion-electron recombination, as well as collisional interactions between different fluids. The neutral background used in our MHD simulations is provided by a kinetic Direct Simulation Monte Carlo (DSMC) model. We will simulate how the cometary plasma environment changes at different heliocentric distances.
A particle-particle hybrid method for kinetic and continuum equations
NASA Astrophysics Data System (ADS)
Tiwari, Sudarshan; Klar, Axel; Hardt, Steffen
2009-10-01
We present a coupling procedure for two different types of particle methods for the Boltzmann and the Navier-Stokes equations. A variant of the DSMC method is applied to simulate the Boltzmann equation, whereas a meshfree Lagrangian particle method, similar to the SPH method, is used for simulations of the Navier-Stokes equations. An automatic domain decomposition approach is used with the help of a continuum breakdown criterion. We apply adaptive spatial and time meshes. The classical Sod 1D shock tube problem is solved for a large range of Knudsen numbers. Results from the Boltzmann, Navier-Stokes and hybrid solvers are compared. The hybrid solver is 3-4 times faster in CPU time than the Boltzmann solver.
DSMC analysis of species separation in rarefied nozzle flows
NASA Technical Reports Server (NTRS)
Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren; Penko, Paul F.
1992-01-01
The direct-simulation Monte Carlo method has been used to investigate the behavior of a small amount of a harmful species in the plume and the backflow region of nuclear thermal propulsion rockets. Species separation due to pressure diffusion and nonequilibrium effects due to rapid expansion into a surrounding low-density environment are the most important factors in this type of flow. It is shown that a relatively large amount of the lighter species is scattered into the backflow region and the heavier species becomes negligible in this region due to the extreme separation between species. It is also shown that the type of molecular interaction between the species can have a substantial effect on separation of the species.
COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R825173)
This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard...
Interband coding extension of the new lossless JPEG standard
NASA Astrophysics Data System (ADS)
Memon, Nasir D.; Wu, Xiaolin; Sippy, V.; Miller, G.
1997-01-01
Due to the perceived inadequacy of current standards for lossless image compression, the JPEG committee of the International Standards Organization (ISO) has been developing a new standard. A baseline algorithm, called JPEG-LS, has already been completed and is awaiting approval by national bodies. The JPEG-LS baseline algorithm, despite being simple, is surprisingly efficient, and provides compression performance that is within a few percent of the best and more sophisticated techniques reported in the literature. Extensive experimentation performed by the authors seems to indicate that an overall improvement of more than 10 percent in compression performance will be difficult to obtain even at the cost of great complexity, at least not with traditional approaches to lossless image compression. However, if we allow inter-band decorrelation and modeling in the baseline algorithm, nearly 30 percent improvement in compression gains for specific images in the test set becomes possible with a modest computational cost. In this paper we propose and investigate a few techniques for exploiting inter-band correlations in multi-band images. These techniques have been designed within the framework of the baseline algorithm, and require minimal changes to the basic architecture of the baseline, retaining its essential simplicity.
NASA Astrophysics Data System (ADS)
Martens, Koen J. A.; Bader, Arjen N.; Baas, Sander; Rieger, Bernd; Hohlbein, Johannes
2018-03-01
We present a fast and model-free 2D and 3D single-molecule localization algorithm that allows more than 3 × 10^6 localizations per second to be calculated on a standard multi-core central processing unit with localization accuracies in line with the most accurate algorithms currently available. Our algorithm converts the region of interest around a point spread function to two phase vectors (phasors) by calculating the first Fourier coefficients in both the x- and y-direction. The angles of these phasors are used to localize the center of the single fluorescent emitter, and the ratio of the magnitudes of the two phasors is a measure for astigmatism, which can be used to obtain depth information (z-direction). Our approach can be used both as a stand-alone algorithm for maximizing localization speed and as a first estimator for more time consuming iterative algorithms.
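The phase-to-position mapping can be sketched in a few lines: the first Fourier coefficients of the ROI along x and y are the phasors, their phase angles encode the emitter position, and the magnitude ratio tracks astigmatism. A minimal 2D sketch (not the authors' optimized implementation), using a synthetic Gaussian spot:

```python
import numpy as np

def phasor_localize(roi):
    """Localize a single emitter from the first Fourier coefficients (phasors).

    For an n x n ROI with a point source at (x, y), the phase of F[0, 1]
    (resp. F[1, 0]) is -2*pi*x/n (resp. -2*pi*y/n); the magnitude ratio of
    the two phasors serves as an astigmatism measure for z estimation.
    """
    n = roi.shape[0]
    F = np.fft.fft2(roi)
    px, py = F[0, 1], F[1, 0]          # first coefficients along x and y
    x = (-np.angle(px)) % (2 * np.pi) * n / (2 * np.pi)
    y = (-np.angle(py)) % (2 * np.pi) * n / (2 * np.pi)
    astig = np.abs(py) / np.abs(px)    # ~1 for a symmetric PSF
    return x, y, astig

# Synthetic Gaussian PSF centred at (4.3, 6.7) in a 12x12 ROI.
n, x0, y0, sigma = 12, 4.3, 6.7, 1.2
yy, xx = np.mgrid[0:n, 0:n]
roi = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2))
print(phasor_localize(roi))            # x ~ 4.3, y ~ 6.7, ratio ~ 1
```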
Bouslimi, D; Coatrieux, G; Roux, Ch
2011-01-01
In this paper, we propose a new joint watermarking/encryption algorithm for the purpose of verifying the reliability of medical images in both the encrypted and the spatial domain. It combines a substitutive watermarking algorithm, quantization index modulation (QIM), with a block cipher algorithm, the Advanced Encryption Standard (AES), in the CBC mode of operation. The proposed solution gives access to the outcomes of the image integrity check and of its origin verification even while the image is stored encrypted. Experimental results achieved on 8-bit encoded ultrasound images illustrate the overall performance of the proposed scheme. By making use of the AES block cipher in CBC mode, the proposed solution is compliant with, or transparent to, the DICOM standard.
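For intuition about the watermarking half, QIM embeds one bit per sample by snapping the value onto one of two interleaved quantizer lattices, and the decoder picks the lattice with the nearer point. A minimal grayscale sketch; the step size DELTA trades robustness against distortion, and the value used here is an assumption:

```python
import numpy as np

DELTA = 8.0   # quantization step (assumed): larger = more robust, more distortion

def qim_embed(values, bits):
    """Embed one bit per sample with two interleaved quantizers
    (offset 0 for bit 0, offset DELTA/2 for bit 1)."""
    offs = np.asarray(bits) * (DELTA / 2.0)
    return np.round((np.asarray(values, float) - offs) / DELTA) * DELTA + offs

def qim_decode(values):
    """Recover bits by choosing the quantizer with the nearer lattice point."""
    v = np.asarray(values, float)
    d0 = np.abs(v - np.round(v / DELTA) * DELTA)
    d1 = np.abs(v - (np.round((v - DELTA / 2) / DELTA) * DELTA + DELTA / 2))
    return (d1 < d0).astype(int)

pixels = np.array([100.0, 101.0, 102.0, 103.0])
bits = np.array([1, 0, 1, 1])
marked = qim_embed(pixels, bits)
assert (qim_decode(marked) == bits).all()
print(marked)
```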
An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices, part 1
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.
1990-01-01
The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. We present an implementation of a look-ahead version of the Lanczos algorithm which overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and is not restricted to steps of length 2, as earlier implementations are. Also, our implementation has the feature that it requires roughly the same number of inner products as the standard Lanczos process without look-ahead.
Hus, Vanessa; Lord, Catherine
2014-01-01
The Autism Diagnostic Observation Schedule, 2nd Edition includes revised diagnostic algorithms and standardized severity scores for modules used to assess children and adolescents of varying language abilities. Comparable revisions have not yet been applied to the Module 4, used with verbally fluent adults. The current study revises the Module 4 algorithm and calibrates raw overall and domain totals to provide metrics of ASD symptom severity. Sensitivity and specificity of the revised Module 4 algorithm exceeded 80% in the overall sample. Module 4 calibrated severity scores provide quantitative estimates of ASD symptom severity that are relatively independent of participant characteristics. These efforts increase comparability of ADOS scores across modules and should facilitate efforts to increase understanding of adults with ASD. PMID:24590409
Postmortem validation of breast density using dual-energy mammography
Molloi, Sabee; Ducote, Justin L.; Ding, Huanjun; Feig, Stephen A.
2014-01-01
Purpose: Mammographic density has been shown to be an indicator of breast cancer risk and also reduces the sensitivity of screening mammography. Currently, there is no accepted standard for measuring breast density. Dual energy mammography has been proposed as a technique for accurate measurement of breast density. The purpose of this study is to validate its accuracy in postmortem breasts and compare it with other existing techniques. Methods: Forty postmortem breasts were imaged using a dual energy mammography system. Glandular and adipose equivalent phantoms of uniform thickness were used to calibrate a dual energy basis decomposition algorithm. Dual energy decomposition was applied after scatter correction to calculate breast density. Breast density was also estimated using radiologist reader assessment, standard histogram thresholding and a fuzzy C-mean algorithm. Chemical analysis was used as the reference standard to assess the accuracy of different techniques to measure breast composition. Results: Breast density measurements using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean algorithm, and dual energy were in good agreement with the measured fibroglandular volume fraction using chemical analysis. The standard error estimates using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean, and dual energy were 9.9%, 8.6%, 7.2%, and 4.7%, respectively. Conclusions: The results indicate that dual energy mammography can be used to accurately measure breast density. The variability in breast density estimation using dual energy mammography was lower than reader assessment rankings, standard histogram thresholding, and fuzzy C-mean algorithm. Improved quantification of breast density is expected to further enhance its utility as a risk factor for breast cancer. PMID:25086548
Postmortem validation of breast density using dual-energy mammography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molloi, Sabee, E-mail: symolloi@uci.edu; Ducote, Justin L.; Ding, Huanjun
2014-08-15
Purpose: Mammographic density has been shown to be an indicator of breast cancer risk and also reduces the sensitivity of screening mammography. Currently, there is no accepted standard for measuring breast density. Dual energy mammography has been proposed as a technique for accurate measurement of breast density. The purpose of this study is to validate its accuracy in postmortem breasts and compare it with other existing techniques. Methods: Forty postmortem breasts were imaged using a dual energy mammography system. Glandular and adipose equivalent phantoms of uniform thickness were used to calibrate a dual energy basis decomposition algorithm. Dual energy decomposition was applied after scatter correction to calculate breast density. Breast density was also estimated using radiologist reader assessment, standard histogram thresholding and a fuzzy C-mean algorithm. Chemical analysis was used as the reference standard to assess the accuracy of different techniques to measure breast composition. Results: Breast density measurements using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean algorithm, and dual energy were in good agreement with the measured fibroglandular volume fraction using chemical analysis. The standard error estimates using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean, and dual energy were 9.9%, 8.6%, 7.2%, and 4.7%, respectively. Conclusions: The results indicate that dual energy mammography can be used to accurately measure breast density. The variability in breast density estimation using dual energy mammography was lower than reader assessment rankings, standard histogram thresholding, and fuzzy C-mean algorithm. Improved quantification of breast density is expected to further enhance its utility as a risk factor for breast cancer.
NASA Astrophysics Data System (ADS)
Chen, Buxin; Zhang, Zheng; Sidky, Emil Y.; Xia, Dan; Pan, Xiaochuan
2017-11-01
Optimization-based algorithms for image reconstruction in multispectral (or photon-counting) computed tomography (MCT) remain a topic of active research. The challenge of optimization-based image reconstruction in MCT stems from the inherently non-linear data model that can lead to a non-convex optimization program for which no mathematically exact solver seems to exist for achieving globally optimal solutions. In this work, based upon a non-linear data model, we design a non-convex optimization program, derive its first-order-optimality conditions, and propose an algorithm to solve the program for image reconstruction in MCT. In addition to consideration of image reconstruction for the standard scan configuration, the emphasis is on investigating the algorithm's potential for enabling non-standard scan configurations with no or minimal hardware modification to existing CT systems, which has potential practical implications for lowered hardware cost, enhanced scanning flexibility, and reduced imaging dose/time in MCT. Numerical studies are carried out for verification of the algorithm and its implementation, and for a preliminary demonstration and characterization of the algorithm in reconstructing images and in enabling non-standard configurations with varying scanning angular range and/or x-ray illumination coverage in MCT.
Simultaneous and semi-alternating projection algorithms for solving split equality problems.
Dong, Qiao-Li; Jiang, Dan
2018-01-01
In this article, we first introduce two simultaneous projection algorithms for solving the split equality problem by using a new choice of the stepsize, and then propose two semi-alternating projection algorithms. The weak convergence of the proposed algorithms is analyzed under standard conditions. As applications, we extend the results to solve the split feasibility problem. Finally, a numerical example is presented to illustrate the efficiency and advantage of the proposed algorithms.
Improvement of the cost-benefit analysis algorithm for high-rise construction projects
NASA Astrophysics Data System (ADS)
Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir
2018-03-01
The specific nature of high-rise investment projects, which entail long-term construction, high risks, etc., implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. To develop the improved cost-benefit algorithm for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped adapt the original algorithm to the feasibility objectives of high-rise construction. The authors assembled the cost-benefit algorithm for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the "Project analysis scenario" flowchart, which improves the quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping, for better cost-benefit project analysis given the broad range of risks in high-rise construction; and analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in the implementation of high-rise projects.
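As a concrete anchor for the weighted-average-cost-of-capital and dynamic cost-benefit steps, the sketch below discounts a project cash-flow series at the WACC and reports net present value under a few risk scenarios. All figures are placeholders, not values from the article:

```python
def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    """Weighted average cost of capital, including the debt tax shield."""
    total = equity + debt
    return (equity / total) * cost_equity + (debt / total) * cost_debt * (1 - tax_rate)

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the t=0 outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Placeholder capital structure and project cash flows (not from the article).
r = wacc(equity=60e6, debt=40e6, cost_equity=0.14, cost_debt=0.08, tax_rate=0.20)
base = [-100e6, 20e6, 30e6, 40e6, 45e6, 45e6]
for name, scale in (("optimistic", 1.15), ("base", 1.0), ("pessimistic", 0.8)):
    flows = [base[0]] + [cf * scale for cf in base[1:]]
    print(f"{name:12s} WACC={r:.3f}  NPV={npv(r, flows) / 1e6:,.1f} M")
```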
NVU dynamics. I. Geodesic motion on the constant-potential-energy hypersurface.
Ingebrigtsen, Trond S; Toxvaerd, Søren; Heilmann, Ole J; Schrøder, Thomas B; Dyre, Jeppe C
2011-09-14
An algorithm is derived for computer simulation of geodesics on the constant-potential-energy hypersurface of a system of N classical particles. First, a basic time-reversible geodesic algorithm is derived by discretizing the geodesic stationarity condition and implementing the constant-potential-energy constraint via standard Lagrangian multipliers. The basic NVU algorithm is tested by single-precision computer simulations of the Lennard-Jones liquid. Excellent numerical stability is obtained if the force cutoff is smoothed and the two initial configurations have identical potential energy within machine precision. Nevertheless, just as for NVE algorithms, stabilizers are needed for very long runs in order to compensate for the accumulation of numerical errors that eventually lead to "entropic drift" of the potential energy towards higher values. A modification of the basic NVU algorithm is introduced that ensures potential-energy and step-length conservation; center-of-mass drift is also eliminated. Analytical arguments confirmed by simulations demonstrate that the modified NVU algorithm is absolutely stable. Finally, we present simulations showing that the NVU algorithm and the standard leap-frog NVE algorithm have identical radial distribution functions for the Lennard-Jones liquid. © 2011 American Institute of Physics
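For contrast with the NVU scheme, the standard leap-frog NVE step that the paper compares against fits in a few lines. The sketch below uses a generic force function and a toy harmonic oscillator; the Lennard-Jones specifics, the smoothed cutoff, and the stabilizers discussed in the abstract are omitted:

```python
import numpy as np

def leapfrog_step(x, v_half, force, m, dt):
    """One standard leap-frog NVE step (velocities at half-integer times).

    v_half enters as v(t - dt/2) and returns as v(t + dt/2).
    """
    v_half = v_half + (dt / m) * force(x)   # kick to t + dt/2
    x = x + dt * v_half                     # drift to t + dt
    return x, v_half

# Toy check on a 1-D harmonic oscillator: the trajectory stays bounded.
k, m, dt = 1.0, 1.0, 0.01
force = lambda x: -k * x
x, v = np.array([1.0]), np.array([0.0])
v_half = v - 0.5 * (dt / m) * force(x)      # shift v(0) back to t = -dt/2
for _ in range(10_000):
    x, v_half = leapfrog_step(x, v_half, force, m, dt)
print(x, v_half)                            # remains on the unit-amplitude orbit
```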
Report on the Development of the Advanced Encryption Standard (AES)
Nechvatal, James; Barker, Elaine; Bassham, Lawrence; Burr, William; Dworkin, Morris; Foti, James; Roback, Edward
2001-01-01
In 1997, the National Institute of Standards and Technology (NIST) initiated a process to select a symmetric-key encryption algorithm to be used to protect sensitive (unclassified) Federal information in furtherance of NIST’s statutory responsibilities. In 1998, NIST announced the acceptance of 15 candidate algorithms and requested the assistance of the cryptographic research community in analyzing the candidates. This analysis included an initial examination of the security and efficiency characteristics for each algorithm. NIST reviewed the results of this preliminary research and selected MARS, RC6™, Rijndael, Serpent and Twofish as finalists. Having reviewed further public analysis of the finalists, NIST has decided to propose Rijndael as the Advanced Encryption Standard (AES). The research results and rationale for this selection are documented in this report. PMID:27500035
An Augmentation of G-Guidance Algorithms
NASA Technical Reports Server (NTRS)
Carson, John M. III; Acikmese, Behcet
2011-01-01
The original G-Guidance algorithm provided an autonomous guidance and control policy for small-body proximity operations that took into account uncertainty and dynamics disturbances. However, it lacked robustness with regard to object proximity while in autonomous mode. The modified G-Guidance algorithm was augmented with a second operational mode that allows switching into a safety hover mode. This causes the spacecraft to hover in place until a mission-planning algorithm can compute a safe new trajectory; no state or control constraints are violated. When a new, feasible state trajectory is calculated, the spacecraft returns to standard mode and maneuvers toward the target. The main goal of this augmentation is to protect the spacecraft in the event that a landing surface or obstacle is closer or farther than anticipated. The algorithm can be used for the mitigation of any unexpected trajectory or state changes that occur during standard mode operations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
AISL-CRYPTO is a library of cryptography functions supporting other AISL software. It provides various crypto functions for Common Lisp, including Digital Signature Algorithm, Data Encryption Standard, Secure Hash Algorithm, and public-key cryptography.
A portable MPI-based parallel vector template library
NASA Technical Reports Server (NTRS)
Sheffler, Thomas J.
1995-01-01
This paper discusses the design and implementation of a polymorphic collection library for distributed address-space parallel computers. The library provides a data-parallel programming model for C++ by providing three main components: a single generic collection class, generic algorithms over collections, and generic algebraic combining functions. Collection elements are the fourth component of a program written using the library and may be either of the built-in types of C or of user-defined types. Many ideas are borrowed from the Standard Template Library (STL) of C++, although a restricted programming model is proposed because of the distributed address-space memory model assumed. Whereas the STL provides standard collections and implementations of algorithms for uniprocessors, this paper advocates standardizing interfaces that may be customized for different parallel computers. Just as the STL attempts to increase programmer productivity through code reuse, a similar standard for parallel computers could provide programmers with a standard set of algorithms portable across many different architectures. The efficacy of this approach is verified by examining performance data collected from an initial implementation of the library running on an IBM SP-2 and an Intel Paragon.
A Portable MPI-Based Parallel Vector Template Library
NASA Technical Reports Server (NTRS)
Sheffler, Thomas J.
1995-01-01
This paper discusses the design and implementation of a polymorphic collection library for distributed address-space parallel computers. The library provides a data-parallel programming model for C++ by providing three main components: a single generic collection class, generic algorithms over collections, and generic algebraic combining functions. Collection elements are the fourth component of a program written using the library and may be either of the built-in types of C or of user-defined types. Many ideas are borrowed from the Standard Template Library (STL) of C++, although a restricted programming model is proposed because of the distributed address-space memory model assumed. Whereas the STL provides standard collections and implementations of algorithms for uniprocessors, this paper advocates standardizing interfaces that may be customized for different parallel computers. Just as the STL attempts to increase programmer productivity through code reuse, a similar standard for parallel computers could provide programmers with a standard set of algorithms portable across many different architectures. The efficacy of this approach is verified by examining performance data collected from an initial implementation of the library running on an IBM SP-2 and an Intel Paragon.
A Trajectory Algorithm to Support En Route and Terminal Area Self-Spacing Concepts
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
2007-01-01
This document describes an algorithm for the generation of a four dimensional aircraft trajectory. Input data for this algorithm are similar to an augmented Standard Terminal Arrival Route (STAR) with the augmentation in the form of altitude or speed crossing restrictions at waypoints on the route. Wind data at each waypoint are also inputs into this algorithm. The algorithm calculates the altitude, speed, along path distance, and along path time for each waypoint.
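A stripped-down illustration of the along-path bookkeeping: given waypoints carrying a segment distance, a target speed, and an along-track wind component, accumulate distance and time segment by segment. The waypoint fields and the trapezoidal groundspeed averaging are assumptions for illustration, not the algorithm specified in the document:

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    name: str
    dist_nm: float      # along-path distance from the previous waypoint (nm)
    tas_kt: float       # target true airspeed at the waypoint (knots)
    wind_kt: float      # along-track wind component at the waypoint (+ tailwind)

def along_path_profile(waypoints):
    """Accumulate along-path distance and time using the average
    groundspeed over each segment (trapezoidal approximation)."""
    t_hr, d_nm, rows = 0.0, 0.0, []
    prev_gs = None
    for wp in waypoints:
        gs = wp.tas_kt + wp.wind_kt
        if prev_gs is not None:
            seg_gs = 0.5 * (prev_gs + gs)     # average segment groundspeed
            d_nm += wp.dist_nm
            t_hr += wp.dist_nm / seg_gs
        rows.append((wp.name, d_nm, t_hr * 3600.0))
        prev_gs = gs
    return rows

# Hypothetical augmented-STAR-like route (names and numbers are made up).
star = [Waypoint("ENTRY", 0.0, 450.0, -20.0),
        Waypoint("FIX1", 35.0, 380.0, -10.0),
        Waypoint("FIX2", 28.0, 250.0, 5.0),
        Waypoint("RWY", 15.0, 140.0, 8.0)]
for name, d, t in along_path_profile(star):
    print(f"{name:6s} {d:6.1f} nm  {t:7.1f} s")
```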
[Standard algorithm of molecular typing of Yersinia pestis strains].
Eroshenko, G A; Odinokov, G N; Kukleva, L M; Pavlova, A I; Krasnov, Ia M; Shavina, N Iu; Guseva, N P; Vinogradova, N A; Kutyrev, V V
2012-01-01
The aim was to develop a standard algorithm for molecular typing of Yersinia pestis that establishes the subspecies, biovar, and focus membership of a studied isolate, and to determine the characteristic genotypes of plague agent strains of the main and nonmain subspecies from various natural plague foci of the Russian Federation and the near abroad. Genotyping of 192 natural Y. pestis strains of main and nonmain subspecies was performed using PCR methods, multilocus sequencing, and multilocus variable-number tandem-repeat analysis. A standard algorithm of molecular typing of the plague agent was developed, comprising several stages of Yersinia pestis differentiation: by main or nonmain subspecies, by biovar within the main subspecies, by specific subspecies, and by natural focus and geographic territory. The algorithm is based on 3 typing methods (PCR, multilocus sequence typing, and multilocus variable-number tandem-repeat analysis) using standard DNA targets: life-support genes (terC, ilvN, inv, glpD, napA, rhaS and araC) and 7 variable-tandem-repeat loci (ms01, ms04, ms06, ms07, ms46, ms62, ms70). The effectiveness of the developed algorithm is shown on a large number of natural Y. pestis strains. Characteristic sequence types of Y. pestis strains of various subspecies and biovars, as well as MLVA7 genotypes of strains from natural plague foci of the Russian Federation and the near abroad, were established. Application of the developed algorithm will increase the effectiveness of epidemiologic monitoring of the plague agent and of the analysis of plague epidemics and outbreaks, establishing the source of a strain and the routes of introduction of the infection.
Machine-Learning Algorithms to Code Public Health Spending Accounts
Leider, Jonathon P.; Resnick, Beth A.; Alfonso, Y. Natalia; Bishai, David
2017-01-01
Objectives: Government public health expenditure data sets require time- and labor-intensive manipulation to summarize results that public health policy makers can use. Our objective was to compare the performances of machine-learning algorithms with manual classification of public health expenditures to determine if machines could provide a faster, cheaper alternative to manual classification. Methods: We used machine-learning algorithms to replicate the process of manually classifying state public health expenditures, using the standardized public health spending categories from the Foundational Public Health Services model and a large data set from the US Census Bureau. We obtained a data set of 1.9 million individual expenditure items from 2000 to 2013. We collapsed these data into 147 280 summary expenditure records, and we followed a standardized method of manually classifying each expenditure record as public health, maybe public health, or not public health. We then trained 9 machine-learning algorithms to replicate the manual process. We calculated recall, precision, and coverage rates to measure the performance of individual and ensembled algorithms. Results: Compared with manual classification, the machine-learning random forests algorithm produced 84% recall and 91% precision. With algorithm ensembling, we achieved our target criterion of 90% recall by using a consensus ensemble of ≥6 algorithms while still retaining 93% coverage, leaving only 7% of the summary expenditure records unclassified. Conclusions: Machine learning can be a time- and cost-saving tool for estimating public health spending in the United States. It can be used with standardized public health spending categories based on the Foundational Public Health Services model to help parse public health expenditure information from other types of health-related spending, provide data that are more comparable across public health organizations, and evaluate the impact of evidence-based public health resource allocation. PMID:28363034
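The consensus-ensembling rule is easy to state concretely: a record receives the majority label only when at least a threshold number of the classifiers agree, and otherwise stays unclassified, trading coverage for recall. A schematic sketch with toy labels, not the study's data or models:

```python
import numpy as np

def consensus_ensemble(votes, threshold=6):
    """Assign the majority label when >= threshold classifiers agree,
    otherwise leave the record unclassified (None).

    votes: per-classifier label lists, shape (n_algos, n_records).
    """
    preds = np.asarray(votes)
    labels = []
    for col in preds.T:                       # one column per record
        vals, counts = np.unique(col, return_counts=True)
        best = counts.argmax()
        labels.append(vals[best] if counts[best] >= threshold else None)
    return labels

# Toy votes from 9 classifiers on 3 expenditure records.
votes = [
    ["PH",  "not",   "PH"],
    ["PH",  "not",   "PH"],
    ["PH",  "not",   "PH"],
    ["PH",  "not",   "not"],
    ["PH",  "not",   "not"],
    ["PH",  "not",   "not"],
    ["PH",  "maybe", "not"],
    ["not", "maybe", "maybe"],
    ["not", "maybe", "maybe"],
]
out = consensus_ensemble(votes)
coverage = sum(label is not None for label in out) / len(out)
print(out, f"coverage={coverage:.0%}")        # third record stays unclassified
```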
A Revised Trajectory Algorithm to Support En Route and Terminal Area Self-Spacing Concepts
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
2010-01-01
This document describes an algorithm for the generation of a four dimensional trajectory. Input data for this algorithm are similar to an augmented Standard Terminal Arrival (STAR) with the augmentation in the form of altitude or speed crossing restrictions at waypoints on the route. This version of the algorithm accommodates descent Mach values that are different from the cruise Mach values. Wind data at each waypoint are also inputs into this algorithm. The algorithm calculates the altitude, speed, along path distance, and along path time for each waypoint.
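The along-path bookkeeping described here can be sketched compactly. The snippet below is a simplified illustration, not the NASA algorithm itself: it assumes ground speeds have already been resolved from airspeed and wind at each waypoint, and it ignores the altitude and speed crossing-restriction logic.

```python
import math

# Simplified sketch of along-path distance and time accumulation over a
# waypoint route. Waypoint fields and values are hypothetical; the actual
# algorithm also handles crossing restrictions and Mach/CAS transitions.

def along_path_profile(waypoints):
    """waypoints: list of (x_nmi, y_nmi, ground_speed_kt); returns
    cumulative (distance_nmi, time_hr) at each waypoint."""
    dist, time = 0.0, 0.0
    profile = [(0.0, 0.0)]
    for (x0, y0, v0), (x1, y1, v1) in zip(waypoints, waypoints[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        v_avg = 0.5 * (v0 + v1)          # trapezoidal speed over the segment
        dist += seg
        time += seg / v_avg
        profile.append((dist, time))
    return profile

print(along_path_profile([(0, 0, 450), (60, 0, 440), (90, 40, 250)]))
```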
Information filtering via biased heat conduction.
Liu, Jian-Guo; Zhou, Tao; Guo, Qiang
2011-09-01
The process of heat conduction has recently found application in personalized recommendation [Zhou et al., Proc. Natl. Acad. Sci. USA 107, 4511 (2010)], which is of high diversity but low accuracy. By decreasing the temperatures of small-degree objects, we present an improved algorithm, called biased heat conduction, which could simultaneously enhance the accuracy and diversity. Extensive experimental analyses demonstrate that the accuracy on MovieLens, Netflix, and Delicious datasets could be improved by 43.5%, 55.4% and 19.2%, respectively, compared with the standard heat conduction algorithm and also the diversity is increased or approximately unchanged. Further statistical analyses suggest that the present algorithm could simultaneously identify users' mainstream and special tastes, resulting in better performance than the standard heat conduction algorithm. This work provides a creditable way for highly efficient information filtering.
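A single heat-conduction pass on the user-object bipartite network is easy to sketch. In the following illustration the bias is implemented as a tunable exponent on object degree so that small-degree objects receive lower temperatures; the paper's exact biasing formula may differ.

```python
import numpy as np

# Sketch of one heat-conduction pass on a user-object bipartite network,
# with the bias applied as an exponent on object degree. This illustrates
# the mechanism only; the paper's formulation may differ in detail.

def biased_heat_conduction(A, user_idx, lam=0.8):
    """A: binary user-object matrix (n_users x n_objects);
    returns recommendation scores for one user."""
    f0 = A[user_idx].astype(float)           # initial temperatures on objects
    k_users = A.sum(axis=1)                  # user degrees
    k_objects = A.sum(axis=0)                # object degrees
    # Step 1: each user's temperature = mean over the objects it collected.
    t_users = (A @ f0) / np.maximum(k_users, 1)
    # Step 2: each object's temperature = (biased) mean over its users.
    scores = (A.T @ t_users) / np.maximum(k_objects, 1) ** lam
    scores[A[user_idx] > 0] = 0.0            # do not re-recommend collected items
    return scores

A = np.array([[1, 1, 0, 0], [1, 0, 1, 0], [0, 1, 1, 1]])
print(biased_heat_conduction(A, user_idx=0))
```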
A Christoffel function weighted least squares algorithm for collocation approximations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayan, Akil; Jakeman, John D.; Zhou, Tao
2016-11-28
Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
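In one dimension on [-1, 1], the idea reduces to sampling from the arcsine (Chebyshev) equilibrium density and weighting each row of the least-squares system by the inverse Christoffel function. The sketch below illustrates this for orthonormal Legendre polynomials; the paper treats the general weighted pluripotential setting.

```python
import numpy as np

# 1-D sketch of Christoffel-weighted least squares on [-1, 1]: sample from the
# arcsine equilibrium density and weight rows by the inverse of the Christoffel
# quantity sum_k p_k(x)^2 built from orthonormal Legendre polynomials.

rng = np.random.default_rng(0)
N, M = 20, 200                                   # polynomial degree, sample count
x = np.cos(np.pi * rng.random(M))                # draws from the arcsine density

V = np.polynomial.legendre.legvander(x, N)       # Legendre Vandermonde
V *= np.sqrt(2 * np.arange(N + 1) + 1)           # orthonormal wrt uniform density
w = (N + 1) / np.sum(V**2, axis=1)               # inverse Christoffel weights

f = np.exp(x) * np.sin(3 * x)                    # target function samples
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f, rcond=None)

xt = np.linspace(-1, 1, 5)
Vt = np.polynomial.legendre.legvander(xt, N) * np.sqrt(2 * np.arange(N + 1) + 1)
print(np.max(np.abs(Vt @ coef - np.exp(xt) * np.sin(3 * xt))))
```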
NASA Astrophysics Data System (ADS)
Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin
2017-04-01
An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.
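The wavelet preprocessing step, i.e., splitting an image into one low-frequency and three high-frequency subbands, can be reproduced with a single-level 2-D DWT. The sketch below uses PyWavelets; the choice of the 'haar' wavelet is an assumption, since the abstract does not fix the wavelet family.

```python
import numpy as np
import pywt

# Sketch of the wavelet preprocessing step: a single-level 2-D DWT splits an
# input image into the four subbands used for feature training (low frequency,
# horizontal, vertical, and diagonal high frequency). The 'haar' wavelet is an
# assumed choice for illustration.

image = np.random.default_rng(1).random((128, 128))
LL, (LH, HL, HH) = pywt.dwt2(image, 'haar')

# Features for dictionary training would be extracted from patches of these
# subbands rather than from raw image patches.
subbands = {"low": LL, "horizontal": LH, "vertical": HL, "diagonal": HH}
for name, band in subbands.items():
    print(name, band.shape)
```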
Allaire, Brett T; DePaolis Kaluza, M Clara; Bruno, Alexander G; Samelson, Elizabeth J; Kiel, Douglas P; Anderson, Dennis E; Bouxsein, Mary L
2017-01-01
Current standard methods to quantify disc height, namely distortion compensated Roentgen analysis (DCRA), have mostly been utilized in the lumbar and cervical spine and have strict exclusion criteria. Specifically, discs adjacent to a vertebral fracture are excluded from measurement, thus limiting the use of DCRA in studies that include older populations with a high prevalence of vertebral fractures. We therefore developed and tested a modified DCRA algorithm that does not depend on vertebral shape. Participants included 1186 men and women from the Framingham Heart Study Offspring and Third Generation Multidetector CT Study. Lateral CT scout images were used to place 6 morphometry points around each vertebra at 13 vertebral levels in each participant. Disc heights were calculated from these morphometry points using DCRA methodology and our modified version of DCRA, which requires information from fewer morphometry points than the standard DCRA. Modified DCRA and standard DCRA measures of disc height are highly correlated, with concordance correlation coefficients above 0.999. Both measures demonstrate good inter- and intra-operator reproducibility. 13.9% of available disc heights were not evaluable or were excluded using the standard DCRA algorithm, while only 3.3% of disc heights were not evaluable using our modified DCRA algorithm. Using our modified DCRA algorithm, it is not necessary to exclude vertebrae with fracture or other deformity from disc height measurements as in the standard DCRA. Modified DCRA also yields essentially identical measurements to the standard DCRA. Thus, the use of modified DCRA for quantitative assessment of disc height will lead to less missing data without any loss of accuracy, making it a preferred alternative to the current standard methodology.
Algorithms and programming tools for image processing on the MPP:3
NASA Technical Reports Server (NTRS)
Reeves, Anthony P.
1987-01-01
This is the third and final report on the work done for NASA Grant 5-403 on Algorithms and Programming Tools for Image Processing on the MPP:3. All the work done for this grant is summarized in the introduction. Work done since August 1986 is reported in detail. Research for this grant falls under the following headings: (1) fundamental algorithms for the MPP; (2) programming utilities for the MPP; (3) the Parallel Pascal Development System; and (4) performance analysis. In this report, the results of two efforts are reported: region growing, and performance analysis of important characteristic algorithms. In each case, timing results from MPP implementations are included. A paper is included in which parallel algorithms for region growing on the MPP are discussed. These algorithms permit different-sized regions to be merged in parallel. Details on the implementation and performance of several important MPP algorithms are given. These include a number of standard permutations, the FFT, convolution, arbitrary data mappings, image warping, and pyramid operations, all of which have been implemented on the MPP. The permutation and image warping functions have been included in the standard development system library.
Domain Decomposition Algorithms for First-Order System Least Squares Methods
NASA Technical Reports Server (NTRS)
Pavarino, Luca F.
1996-01-01
Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.
NASA Astrophysics Data System (ADS)
Masoumi, Massoud; Raissi, Farshid; Ahmadian, Mahmoud; Keshavarzi, Parviz
2006-01-01
We propose that the recently introduced semiconductor-nanowire-molecular architecture (CMOL) is an optimal platform on which to realize encryption algorithms. The basic modules of the Advanced Encryption Standard algorithm (Rijndael) have been designed using the CMOL architecture. The performance of this design has been evaluated with respect to chip area and speed. It is observed that CMOL provides considerable improvement over an implementation in regular CMOS architecture, even with a 20% defect rate. Pseudo-optimum gate placement and routing are provided for the Rijndael building blocks, and the possibility of designing high-speed, attack-tolerant, long-key encryption is discussed.
Applying a Genetic Algorithm to Reconfigurable Hardware
NASA Technical Reports Server (NTRS)
Wells, B. Earl; Weir, John; Trevino, Luis; Patrick, Clint; Steincamp, Jim
2004-01-01
This paper investigates the feasibility of applying genetic algorithms to solve optimization problems that are implemented entirely in reconfigurable hardware. The paper highlights the performance/design-space trade-offs that must be understood to effectively implement a standard genetic algorithm within a modern Field Programmable Gate Array (FPGA) reconfigurable hardware environment and presents a case study where this stochastic search technique is applied to standard test-case problems taken from the technical literature. In this research, the targeted FPGA-based platform and high-level design environment was the Starbridge Hypercomputing platform, which incorporates multiple Xilinx Virtex II FPGAs, and the Viva™ graphical hardware description language.
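As a software reference for the hardware design, a standard generational GA is short enough to sketch in full. The following toy implementation (tournament selection, single-point crossover, bitwise mutation, one-max fitness) stands in for the standard test-case problems; it is not the paper's FPGA pipeline.

```python
import random

# Software reference sketch of a standard generational GA on fixed-length
# bitstrings: tournament selection, single-point crossover, bitwise mutation.
# The one-max fitness function is an illustrative stand-in.

POP, BITS, GENS, PC, PM = 32, 24, 50, 0.9, 1.0 / 24

def fitness(ind):
    return sum(ind)                       # one-max: count of set bits

def tournament(pop):
    a, b = random.sample(pop, 2)
    return max(a, b, key=fitness)

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    while len(nxt) < POP:
        p1, p2 = tournament(pop), tournament(pop)
        if random.random() < PC:          # single-point crossover
            cut = random.randrange(1, BITS)
            p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
        nxt += [[b ^ (random.random() < PM) for b in c] for c in (p1, p2)]
    pop = nxt[:POP]
print(max(map(fitness, pop)))
```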
A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding
NASA Astrophysics Data System (ADS)
Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae
2017-12-01
High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it effectively searches the good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement it in the hardware. This paper proposes a new integer motion estimation algorithm which is designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimations of all prediction unit (PU) partitions. The algorithm consists of the three phases of zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). Then, all redundant search points are removed prior to the estimation of the motion costs, and the best search points are then selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm significantly decreases the Bjøntegaard Delta bitrate (BD-BR) by 0.84%, and it also reduces the computational complexity by 54.54%.
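The core of the proposed modification, collecting the search points requested by all PU partitions, removing duplicates, and evaluating each point once, can be sketched as follows. The diamond point pattern and the single shared cost function are simplifications; in the real encoder each PU has its own motion cost.

```python
# Sketch of the redundant-search-point removal at the start of a search phase:
# gather the zonal-search points requested for every PU partition in the CU,
# deduplicate them, evaluate each candidate once, then pick the best point per
# PU. Point patterns and the cost function are simplified placeholders.

def diamond_points(center, stride):
    cx, cy = center
    return [(cx + stride, cy), (cx - stride, cy), (cx, cy + stride), (cx, cy - stride)]

def phase_search(pu_predictors, strides, motion_cost):
    # 1. Collect the points every PU wants, removing duplicates across PUs.
    wanted = {p for c in pu_predictors for s in strides for p in diamond_points(c, s)}
    # 2. Evaluate each unique point exactly once.
    costs = {p: motion_cost(p) for p in wanted}
    # 3. Each PU keeps the best of the points it requested.
    best = {}
    for i, c in enumerate(pu_predictors):
        pts = [p for s in strides for p in diamond_points(c, s)]
        best[i] = min(pts, key=costs.get)
    return best

cost = lambda p: abs(p[0] - 3) + abs(p[1] + 1)   # toy SAD-like cost
print(phase_search([(0, 0), (2, 0)], strides=[1, 2, 4], motion_cost=cost))
```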
Navigation strategy and filter design for solar electric missions
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Hagar, H., Jr.
1972-01-01
Methods which have been proposed to improve the navigation accuracy for low-thrust space vehicles include modifications to the standard sequential- and batch-type orbit determination procedures and the use of inertial measuring units (IMUs), which measure directly the acceleration applied to the vehicle. The navigation accuracy obtained using one of the more promising modifications to the orbit determination procedures is compared with that of a combined IMU-Standard Orbit Determination algorithm. The unknown accelerations are approximated as both first-order and second-order Gauss-Markov processes. The comparison is based on numerical results obtained in a study of the navigation requirements of a numerically simulated 152-day low-thrust mission to the asteroid Eros. The results obtained in the simulation indicate that the DMC (dynamic model compensation) algorithm will yield a significant improvement over the navigation accuracies achieved with previous estimation algorithms. In addition, the DMC algorithm will yield better navigation accuracies than the IMU-Standard Orbit Determination algorithm, except for extremely precise IMU measurements, i.e., gyro platform alignment of 0.01 deg and accelerometer signal-to-noise ratio of 0.07. Unless these accuracies are achieved, the IMU navigation accuracies are generally unacceptable.
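A first-order Gauss-Markov process of the kind used to approximate the unknown accelerations is an exponentially correlated random sequence. The sketch below simulates one; the time constant and standard deviation are illustrative values, not mission parameters.

```python
import numpy as np

# Sketch of a first-order Gauss-Markov model for unknown accelerations:
# a_{k+1} = exp(-dt/tau) * a_k + w_k, with w_k white Gaussian noise whose
# variance keeps the process stationary.

def gauss_markov_1st_order(n, dt, tau, sigma, seed=0):
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / tau)
    q = sigma * np.sqrt(1.0 - phi**2)        # stationary driving-noise std
    a = np.zeros(n)
    for k in range(n - 1):
        a[k + 1] = phi * a[k] + q * rng.standard_normal()
    return a

accel = gauss_markov_1st_order(n=1000, dt=60.0, tau=3600.0, sigma=1e-7)
print(accel[:5])
```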
Standard and Robust Methods in Regression Imputation
ERIC Educational Resources Information Center
Moraveji, Behjat; Jafarian, Koorosh
2014-01-01
The aim of this paper is to provide an introduction to new imputation algorithms for estimating missing values in larger official-statistics data sets during data pre-processing, including in the presence of outliers. The goal is to propose a new algorithm called IRMI (iterative robust model-based imputation). This algorithm is able to deal with all challenges like…
2014-01-01
We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of convergence speed and steady-state error by incorporating a smooth approximation l0-norm (SL0) penalty on the coefficients into the standard APA cost function, which gives rise to a zero attractor that promotes the sparsity of the channel taps in the channel estimation and hence accelerates convergence and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware variants in terms of both the convergence speed and the steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have a smaller steady-state error than previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588
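The zero-attractor mechanism is easiest to see in a simplified setting. The sketch below grafts the SL0 attraction term onto a plain LMS update rather than the full affine projection recursion of the paper; the added term, the gradient of the smoothed l0 penalty, is the same mechanism.

```python
import numpy as np

# Illustration of the SL0 zero attractor added to an adaptive-filter update.
# For brevity this uses an LMS-style update rather than the full affine
# projection recursion; the added term -rho * d/dw [1 - exp(-a|w|)] is the
# piece that pulls near-zero taps toward zero.

rng = np.random.default_rng(2)
L, n_iter = 16, 4000
w_true = np.zeros(L); w_true[[2, 9]] = [1.0, -0.5]    # sparse channel

w = np.zeros(L)
mu, rho, a = 0.01, 5e-5, 10.0
x_buf = np.zeros(L)
for _ in range(n_iter):
    x_buf = np.roll(x_buf, 1); x_buf[0] = rng.standard_normal()
    d = w_true @ x_buf + 0.01 * rng.standard_normal()
    e = d - w @ x_buf
    zero_attractor = a * np.sign(w) * np.exp(-a * np.abs(w))
    w += mu * e * x_buf - rho * zero_attractor
print(np.round(w, 2))
```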
Wynant, Willy; Abrahamowicz, Michal
2016-11-01
Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
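In its generic form, the ACE idea is an alternating optimization over mutually exclusive parameter subsets. The toy sketch below alternates between two parameters of a model that is nonlinear in one of them; the paper's application is to flexible hazard models, not this least-squares example.

```python
import numpy as np
from scipy.optimize import minimize

# Generic sketch of alternating conditional estimation: optimize one subset of
# parameters at a time, holding the other fixed, until the objective stops
# improving. The model below is an illustrative toy, not the paper's model.

rng = np.random.default_rng(3)
x = rng.random(200)
y = 2.0 * np.sin(1.5 * x) + 0.05 * rng.standard_normal(200)

def loss(amp, freq):
    return np.mean((y - amp * np.sin(freq * x)) ** 2)

amp, freq, prev = 1.0, 1.0, np.inf
while prev - loss(amp, freq) > 1e-10:
    prev = loss(amp, freq)
    amp = minimize(lambda t: loss(t[0], freq), [amp]).x[0]    # step 1
    freq = minimize(lambda t: loss(amp, t[0]), [freq]).x[0]   # step 2
print(round(amp, 3), round(freq, 3))
```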
Arterial cannula shape optimization by means of the rotational firefly algorithm
NASA Astrophysics Data System (ADS)
Tesch, K.; Kaczorowska, K.
2016-03-01
This article presents global optimization results of arterial cannula shapes by means of the newly modified firefly algorithm. The search for the optimal arterial cannula shape is necessary in order to minimize losses and prepare the flow that leaves the circulatory support system of a ventricle (i.e. blood pump) before it reaches the heart. A modification of the standard firefly algorithm, the so-called rotational firefly algorithm, is introduced. It is shown that the rotational firefly algorithm allows for better exploration of search spaces which results in faster convergence and better solutions in comparison with its standard version. This is particularly pronounced for smaller population sizes. Furthermore, it maintains greater diversity of populations for a longer time. A small population size and a low number of iterations are necessary to keep to a minimum the computational cost of the objective function of the problem, which comes from numerical solution of the nonlinear partial differential equations. Moreover, both versions of the firefly algorithm are compared to the state of the art, namely the differential evolution and covariance matrix adaptation evolution strategies.
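For reference, the standard firefly update that the rotational variant modifies can be sketched as follows; the objective and all parameters are illustrative, and the rotational modification itself is not reproduced.

```python
import numpy as np

# Sketch of the standard firefly algorithm: each firefly moves toward every
# brighter one with attractiveness decaying in distance, plus a random step.

def firefly_minimize(f, dim=2, n=15, iters=100, beta0=1.0, gamma=1.0, alpha=0.2, seed=4):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-2, 2, (n, dim))
    for _ in range(iters):
        I = np.array([f(x) for x in X])           # brightness = cost (lower is brighter)
        for i in range(n):
            for j in range(n):
                if I[j] < I[i]:                   # j is brighter (lower cost)
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
        alpha *= 0.97                             # cool the random step
    return min(X, key=f)

sphere = lambda x: float(np.sum(x ** 2))
print(firefly_minimize(sphere))
```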
Prosthetic joint infection development of an evidence-based diagnostic algorithm.
Mühlhofer, Heinrich M L; Pohlig, Florian; Kanz, Karl-Georg; Lenze, Ulrich; Lenze, Florian; Toepfer, Andreas; Kelch, Sarah; Harrasser, Norbert; von Eisenhart-Rothe, Rüdiger; Schauwecker, Johannes
2017-03-09
Increasing rates of prosthetic joint infection (PJI) have presented challenges for general practitioners, orthopedic surgeons and the health care system in the recent years. The diagnosis of PJI is complex; multiple diagnostic tools are used in the attempt to correctly diagnose PJI. Evidence-based algorithms can help to identify PJI using standardized diagnostic steps. We reviewed relevant publications between 1990 and 2015 using a systematic literature search in MEDLINE and PUBMED. The selected search results were then classified into levels of evidence. The keywords were prosthetic joint infection, biofilm, diagnosis, sonication, antibiotic treatment, implant-associated infection, Staph. aureus, rifampicin, implant retention, pcr, maldi-tof, serology, synovial fluid, c-reactive protein level, total hip arthroplasty (THA), total knee arthroplasty (TKA) and combinations of these terms. From an initial 768 publications, 156 publications were stringently reviewed. Publications with class I-III recommendations (EAST) were considered. We developed an algorithm for the diagnostic approach to display the complex diagnosis of PJI in a clear and logically structured process according to ISO 5807. The evidence-based standardized algorithm combines modern clinical requirements and evidence-based treatment principles. The algorithm provides a detailed transparent standard operating procedure (SOP) for diagnosing PJI. Thus, consistently high, examiner-independent process quality is assured to meet the demands of modern quality management in PJI diagnosis.
An enhanced fast scanning algorithm for image segmentation
NASA Astrophysics Data System (ADS)
Ismael, Ahmed Naser; Yusof, Yuhanis binti
2015-12-01
Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features. This transforms the image for better image analysis and evaluation. An important benefit of segmentation is the identification of the region of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to the upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold. Such an approach leads to weak reliability and shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. The function is based on the gray values of the image's pixels and their variance. In addition, pixel levels above the threshold are converted into intensity values between 0 and 1, while the remaining values are set to an intensity of zero. The proposed enhanced Fast Scanning algorithm is evaluated on images of public and private transportation in Iraq. Evaluation is made by comparing the images produced by the proposed algorithm with those of the standard Fast Scanning algorithm. The results show that the proposed algorithm is faster, in terms of running time, than the standard Fast Scanning algorithm.
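The raster-scan clustering at the heart of Fast Scanning can be sketched as below. The adaptive threshold function proposed in the paper is reduced here to a fixed value, and full cluster merging when both neighbors match is omitted, so this is a simplified illustration of the scan-and-attach step only.

```python
import numpy as np

# Simplified sketch of Fast Scanning clustering: scan pixels row by row and
# attach each pixel to the upper or left cluster whose running mean is within
# a threshold, otherwise start a new cluster. The paper's adaptive threshold
# and full cluster merging are omitted for brevity.

def fast_scan(img, thr=20.0):
    h, w = img.shape
    labels = -np.ones((h, w), dtype=int)
    sums, counts, nxt = {}, {}, 0
    for y in range(h):
        for x in range(w):
            best = None
            for ny, nx_ in ((y - 1, x), (y, x - 1)):      # upper, left neighbors
                if ny >= 0 and nx_ >= 0:
                    lab = labels[ny, nx_]
                    mean = sums[lab] / counts[lab]
                    if abs(float(img[y, x]) - mean) <= thr:
                        best = lab
            if best is None:
                best, nxt = nxt, nxt + 1
                sums[best], counts[best] = 0.0, 0
            labels[y, x] = best
            sums[best] += float(img[y, x]); counts[best] += 1
    return labels

img = np.array([[10, 12, 200], [11, 13, 205], [90, 92, 210]], dtype=float)
print(fast_scan(img))
```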
Hatt, Mathieu; Lee, John A.; Schmidtlein, Charles R.; Naqa, Issam El; Caldwell, Curtis; De Bernardi, Elisabetta; Lu, Wei; Das, Shiva; Geets, Xavier; Gregoire, Vincent; Jeraj, Robert; MacManus, Michael P.; Mawlawi, Osama R.; Nestle, Ursula; Pugachev, Andrei B.; Schöder, Heiko; Shepherd, Tony; Spezi, Emiliano; Visvikis, Dimitris; Zaidi, Habib; Kirov, Assen S.
2017-01-01
Purpose The purpose of this educational report is to provide an overview of the present state-of-the-art PET auto-segmentation (PET-AS) algorithms and their respective validation, with an emphasis on providing the user with help in understanding the challenges and pitfalls associated with selecting and implementing a PET-AS algorithm for a particular application. Approach A brief description of the different types of PET-AS algorithms is provided using a classification based on method complexity and type. The advantages and the limitations of the current PET-AS algorithms are highlighted based on current publications and existing comparison studies. A review of the available image datasets and contour evaluation metrics in terms of their applicability for establishing a standardized evaluation of PET-AS algorithms is provided. The performance requirements for the algorithms and their dependence on the application, the radiotracer used and the evaluation criteria are described and discussed. Finally, a procedure for algorithm acceptance and implementation, as well as the complementary role of manual and auto-segmentation are addressed. Findings A large number of PET-AS algorithms have been developed within the last 20 years. Many of the proposed algorithms are based on either fixed or adaptively selected thresholds. More recently, numerous papers have proposed the use of more advanced image analysis paradigms to perform semi-automated delineation of the PET images. However, the level of algorithm validation is variable and for most published algorithms is either insufficient or inconsistent which prevents recommending a single algorithm. This is compounded by the fact that realistic image configurations with low signal-to-noise ratios (SNR) and heterogeneous tracer distributions have rarely been used. Large variations in the evaluation methods used in the literature point to the need for a standardized evaluation protocol. Conclusions Available comparison studies suggest that PET-AS algorithms relying on advanced image analysis paradigms provide generally more accurate segmentation than approaches based on PET activity thresholds, particularly for realistic configurations. However, this may not be the case for simple shape lesions in situations with a narrower range of parameters, where simpler methods may also perform well. Recent algorithms which employ some type of consensus or automatic selection between several PET-AS methods have potential to overcome the limitations of the individual methods when appropriately trained. In either case, accuracy evaluation is required for each different PET scanner and scanning and image reconstruction protocol. For the simpler, less robust approaches, adaptation to scanning conditions, tumor type, and tumor location by optimization of parameters is necessary. The results from the method evaluation stage can be used to estimate the contouring uncertainty. All PET-AS contours should be critically verified by a physician. A standard test, i.e., a benchmark dedicated to evaluating both existing and future PET-AS algorithms needs to be designed, to aid clinicians in evaluating and selecting PET-AS algorithms and to establish performance limits for their acceptance for clinical use. The initial steps toward designing and building such a standard are undertaken by the task group members. PMID:28120467
Charlton, Peter H; Bonnici, Timothy; Tarassenko, Lionel; Clifton, David A; Beale, Richard; Watkinson, Peter J
2016-04-01
Over 100 algorithms have been proposed to estimate respiratory rate (RR) from the electrocardiogram (ECG) and photoplethysmogram (PPG). As they have never been compared systematically it is unclear which algorithm performs the best. Our primary aim was to determine how closely algorithms agreed with a gold standard RR measure when operating under ideal conditions. Secondary aims were: (i) to compare algorithm performance with IP, the clinical standard for continuous respiratory rate measurement in spontaneously breathing patients; (ii) to compare algorithm performance when using ECG and PPG; and (iii) to provide a toolbox of algorithms and data to allow future researchers to conduct reproducible comparisons of algorithms. Algorithms were divided into three stages: extraction of respiratory signals, estimation of RR, and fusion of estimates. Several interchangeable techniques were implemented for each stage. Algorithms were assembled using all possible combinations of techniques, many of which were novel. After verification on simulated data, algorithms were tested on data from healthy participants. RRs derived from ECG, PPG and IP were compared to reference RRs obtained using a nasal-oral pressure sensor using the limits of agreement (LOA) technique. 314 algorithms were assessed. Of these, 270 could operate on either ECG or PPG, and 44 on only ECG. The best algorithm had 95% LOAs of -4.7 to 4.7 bpm and a bias of 0.0 bpm when using the ECG, and -5.1 to 7.2 bpm and 1.0 bpm when using PPG. IP had 95% LOAs of -5.6 to 5.2 bpm and a bias of -0.2 bpm. Four algorithms operating on ECG performed better than IP. All high-performing algorithms consisted of novel combinations of time domain RR estimation and modulation fusion techniques. Algorithms performed better when using ECG than PPG. The toolbox of algorithms and data used in this study are publicly available.
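A minimal instance of the study's three-stage structure (respiratory-signal extraction, RR estimation, no fusion) might look like the following: band-pass the PPG in the respiratory band and take the dominant spectral peak. This is one illustrative combination, not the best-performing algorithm of the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Sketch of one simple RR algorithm: extract a respiratory signal from the
# PPG by band-pass filtering in the 6-30 breaths/min band, then estimate RR
# from the dominant FFT frequency.

def estimate_rr(ppg, fs):
    sos = butter(2, [0.1, 0.5], btype="band", fs=fs, output="sos")
    resp = sosfiltfilt(sos, ppg)
    spec = np.abs(np.fft.rfft(resp * np.hanning(len(resp))))
    freqs = np.fft.rfftfreq(len(resp), 1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.5)
    return 60.0 * freqs[band][np.argmax(spec[band])]

fs = 125
t = np.arange(0, 60, 1.0 / fs)
# Synthetic PPG: 72 bpm pulse plus a 0.25 Hz respiratory baseline component.
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 0.25 * t)
print(round(estimate_rr(ppg, fs), 1))   # ~15 breaths/min
```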
A hybrid method with deviational particles for spatial inhomogeneous plasma
NASA Astrophysics Data System (ADS)
Yan, Bokai
2016-03-01
In this work we propose a Hybrid method with Deviational Particles (HDP) for a plasma modeled by the inhomogeneous Vlasov-Poisson-Landau system. We split the distribution into a Maxwellian part evolved by a grid based fluid solver and a deviation part simulated by numerical particles. These particles, named deviational particles, could be both positive and negative. We combine the Monte Carlo method proposed in [31], a Particle in Cell method and a Macro-Micro decomposition method [3] to design an efficient hybrid method. Furthermore, coarse particles are employed to accelerate the simulation. A particle resampling technique on both deviational particles and coarse particles is also investigated and improved. This method is applicable in all regimes and significantly more efficient compared to a PIC-DSMC method near the fluid regime.
Multi-Species Fluxes for the Parallel Quiet Direct Simulation (QDS) Method
NASA Astrophysics Data System (ADS)
Cave, H. M.; Lim, C.-W.; Jermy, M. C.; Krumdieck, S. P.; Smith, M. R.; Lin, Y.-J.; Wu, J.-S.
2011-05-01
Fluxes of multiple species are implemented in the Quiet Direct Simulation (QDS) scheme for gas flows. Each molecular species streams independently. All species are brought to local equilibrium at the end of each time step. The multi species scheme is compared to DSMC simulation, on a test case of a Mach 20 flow of a xenon/helium mixture over a forward facing step. Depletion of the heavier species in the bow shock and the near-wall layer are seen. The multi-species QDS code is then used to model the flow in a pulsed-pressure chemical vapour deposition reactor set up for carbon film deposition. The injected gas is a mixture of methane and hydrogen. The temporal development of the spatial distribution of methane over the substrate is tracked.
NASA Astrophysics Data System (ADS)
Shimamura, Kohei
2016-09-01
To reduce the computational cost of the particle method for numerical simulation of laser plasma, we examined a simplification of the laser absorption process. Because the laser frequency is sufficiently higher than the collision frequency between electrons and heavy particles, we assumed that the electrons gain a constant energy from the laser irradiation. First, the simplified laser absorption process was verified by comparing the EEDF and the laser absorptivity with those of a PIC-FDTD method. Second, the laser plasma induced by a TEA CO2 laser in an argon atmosphere was modeled using the 1D3V DSMC method with the simplified laser absorption. As a result, the LSDW was observed, with the typical electron and neutral density distributions.
Rarefaction Effects in Hypersonic Aerodynamics
NASA Astrophysics Data System (ADS)
Riabov, Vladimir V.
2011-05-01
The Direct Simulation Monte-Carlo (DSMC) technique is used for numerical analysis of rarefied-gas hypersonic flows near a blunt plate, wedge, two side-by-side plates, disk, torus, and rotating cylinder. The role of various similarity parameters (Knudsen and Mach numbers, geometrical and temperature factors, specific heat ratios, and others) in aerodynamics of the probes is studied. Important kinetic effects that are specific for the transition flow regime have been found: non-monotonic lift and drag of plates, strong repulsive force between side-by-side plates and cylinders, dependence of drag on torus radii ratio, and the reverse Magnus effect on the lift of a rotating cylinder. The numerical results are in a good agreement with experimental data, which were obtained in a vacuum chamber at low and moderate Knudsen numbers from 0.01 to 10.
A Parametric Study of Jet Interactions with Rarefied Flow
NASA Technical Reports Server (NTRS)
Glass, C. E.
2004-01-01
Three-dimensional computational techniques, in particular the uncoupled CFD-DSMC approach of the present study, are available to be applied to problems such as jet interactions with variable-density regions ranging from a continuum jet to a rarefied free stream. When the jet-to-free-stream momentum flux ratio is approximately greater than 2000 for a sharp-leading-edge flat plate, forward separation vortices induced by the jet interaction are present near the surface. Also, as the free stream number density n (infinity) decreases, the extent and magnitude of the normalized pressure increase and move upstream of the nozzle exit. Thus, for the flat plate model, the effect of decreasing n (infinity) is to change the sign of the moment caused by the jet interaction on the flat plate surface.
Numerical Investigation of Physical Processes in High-Temperature MEMS-based Nozzle Flows
NASA Astrophysics Data System (ADS)
Alexeenko, A. A.; Levin, D. A.; Gimelshein, S. F.; Reed, B. D.
2003-05-01
Three-dimensional high-temperature flows in a MEMS-based micronozzle have been modeled using the DSMC method for throat Reynolds numbers from 30 to 440 and two different propellants. For these conditions, the gas flow and thrust performance are strongly influenced by surface effects, including friction and heat transfer losses. The calculated specific impulse is about 170 sec for Re=440 and about 120 sec for Re=43. In addition, the gas-surface interaction is the main mechanism for the change in vibrational energy of molecules in such flows. The calculated infrared spectra for the LAX112 propellant suggest that the infrared signal from such plumes can be detected and used to determine the influence of the cold-wall boundary layer on the flow parameters at the nozzle exit.
Numerical Simulation of Rarefied Plume Flow Exhausting from a Small Nozzle
NASA Astrophysics Data System (ADS)
Hyakutake, Toru; Yamamoto, Kyoji
2003-05-01
This paper describes numerical studies of a rarefied plume flow expanding through a nozzle into a vacuum, focusing in particular on the nozzle performance, the angular distributions of molecular flux in the nozzle plume, and the influence of backflow contamination for variations of nozzle geometry and gas/surface interaction models. The direct simulation Monte Carlo (DSMC) method is employed to determine the flow both inside the nozzle and in the nozzle plume. The simulation results indicate that the half-angle of the diverging section that yields the highest thrust coefficient is 25° - 30°, and that this value varies with the expansion ratio of the nozzle. Decreasing the half-angle brings about an increase in the number of molecules that are scattered into the backflow region.
Implementation of an Algorithm for Prosthetic Joint Infection: Deviations and Problems.
Mühlhofer, Heinrich M L; Kanz, Karl-Georg; Pohlig, Florian; Lenze, Ulrich; Lenze, Florian; Toepfer, Andreas; von Eisenhart-Rothe, Ruediger; Schauwecker, Johannes
The outcome of revision surgery in arthroplasty is based on a precise diagnosis. In addition, the treatment varies based on whether the prosthetic failure is caused by aseptic or septic loosening. Algorithms can help to identify periprosthetic joint infections (PJI) and standardize diagnostic steps, however, algorithms tend to oversimplify the treatment of complex cases. We conducted a process analysis during the implementation of a PJI algorithm to determine problems and deviations associated with the implementation of this algorithm. Fifty patients who were treated after implementing a standardized algorithm were monitored retrospectively. Their treatment plans and diagnostic cascades were analyzed for deviations from the implemented algorithm. Each diagnostic procedure was recorded, compared with the algorithm, and evaluated statistically. We detected 52 deviations while treating 50 patients. In 25 cases, no discrepancy was observed. Synovial fluid aspiration was not performed in 31.8% of patients (95% confidence interval [CI], 18.1%-45.6%), while white blood cell counts (WBCs) and neutrophil differentiation were assessed in 54.5% of patients (95% CI, 39.8%-69.3%). We also observed that the prolonged incubation of cultures was not requested in 13.6% of patients (95% CI, 3.5%-23.8%). In seven of 13 cases (63.6%; 95% CI, 35.2%-92.1%), arthroscopic biopsy was performed; 6 arthroscopies were performed in discordance with the algorithm (12%; 95% CI, 3%-21%). Self-critical analysis of diagnostic processes and monitoring of deviations using algorithms are important and could increase the quality of treatment by revealing recurring faults.
Gradient Optimization for Analytic conTrols - GOAT
NASA Astrophysics Data System (ADS)
Assémat, Elie; Machnes, Shai; Tannor, David; Wilhelm-Mauch, Frank
Quantum optimal control becomes a necessary step in a number of studies in the quantum realm. Recent experimental advances have shown that superconducting qubits can be controlled with impressive accuracy. However, most standard optimal control algorithms are not designed to deliver such high accuracy. To tackle this issue, a novel quantum optimal control algorithm has been introduced: Gradient Optimization for Analytic conTrols (GOAT). It avoids the piecewise-constant approximation of the control pulse used by standard algorithms, which allows an efficient implementation of very high accuracy optimization. It also includes a novel method to compute the gradient that provides many advantages, e.g., the absence of backpropagation and a natural route to optimizing the robustness of the control pulses. This talk will present the GOAT algorithm and a few applications to transmon systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, John D.; Narayan, Akil; Zhou, Tao
2017-06-22
We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.
Design optimization of steel frames using an enhanced firefly algorithm
NASA Astrophysics Data System (ADS)
Carbas, Serdar
2016-12-01
Mathematical modelling of real-world-sized steel frames under the Load and Resistance Factor Design-American Institute of Steel Construction (LRFD-AISC) steel design code provisions, where the steel profiles for the members are selected from a table of steel sections, turns out to be a discrete nonlinear programming problem. Finding the optimum design of such design optimization problems using classical optimization techniques is difficult. Metaheuristic algorithms provide an alternative way of solving such problems. The firefly algorithm (FFA) belongs to the swarm intelligence group of metaheuristics. The standard FFA has the drawback of being caught up in local optima in large-sized steel frame design problems. This study attempts to enhance the performance of the FFA by suggesting two new expressions for the attractiveness and randomness parameters of the algorithm. Two real-world-sized design examples are designed by the enhanced FFA and its performance is compared with standard FFA as well as with particle swarm and cuckoo search algorithms.
Bare-Bones Teaching-Learning-Based Optimization
Zou, Feng; Wang, Lei; Hei, Xinhong; Chen, Debao; Jiang, Qiaoyong; Li, Hongye
2014-01-01
Teaching-learning-based optimization (TLBO), which simulates the teaching-learning process of the classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, which is a hybridization of the teacher-phase learning strategy of standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner in the learner phase employs either the learner-phase strategy of standard TLBO or the new neighborhood search strategy. To verify the performance of our approach, 20 benchmark functions and two real-world problems are utilized. The conducted experiments show that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms. PMID:25013844
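The teacher phase that BBTLBO hybridizes with Gaussian neighborhood sampling is compact enough to sketch. The snippet below shows only the standard TLBO teacher phase on a toy objective; the bare-bones Gaussian and neighborhood-search variants are not reproduced.

```python
import numpy as np

# Sketch of the teacher phase of standard TLBO: each learner moves toward the
# teacher (current best) relative to the class mean, with greedy selection.
# Objective and population sizes are illustrative.

rng = np.random.default_rng(5)
sphere = lambda x: np.sum(x ** 2, axis=-1)

X = rng.uniform(-5, 5, (20, 4))                    # class of 20 learners
for _ in range(100):
    teacher = X[np.argmin(sphere(X))]
    TF = rng.integers(1, 3)                        # teaching factor in {1, 2}
    X_new = X + rng.random(X.shape) * (teacher - TF * X.mean(axis=0))
    improved = sphere(X_new) < sphere(X)           # greedy selection
    X[improved] = X_new[improved]
print(sphere(X).min())
```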
ERIC Educational Resources Information Center
Fofonoff, N. P.; Millard, R. C., Jr.
Algorithms for computation of fundamental properties of seawater, based on the practicality salinity scale (PSS-78) and the international equation of state for seawater (EOS-80), are compiled in the present report for implementing and standardizing computer programs for oceanographic data processing. Sample FORTRAN subprograms and tables are given…
Quantification of HIV-1 DNA using real-time recombinase polymerase amplification.
Crannell, Zachary Austin; Rohrman, Brittany; Richards-Kortum, Rebecca
2014-06-17
Although recombinase polymerase amplification (RPA) has many advantages for the detection of pathogenic nucleic acids in point-of-care applications, RPA has not yet been implemented to quantify sample concentration using a standard curve. Here, we describe a real-time RPA assay with an internal positive control and an algorithm that analyzes real-time fluorescence data to quantify HIV-1 DNA. We show that DNA concentration and the onset of detectable amplification are correlated by an exponential standard curve. In a set of experiments in which the standard curve and algorithm were used to analyze and quantify additional DNA samples, the algorithm predicted an average concentration within 1 order of magnitude of the correct concentration for all HIV-1 DNA concentrations tested. These results suggest that quantitative RPA (qRPA) may serve as a powerful tool for quantifying nucleic acids and may be adapted for use in single-sample point-of-care diagnostic systems.
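Quantification from such a standard curve reduces to a log-linear fit of onset time against concentration, inverted for unknowns. The sketch below uses invented onset times and concentrations purely for illustration; they are not data from the paper.

```python
import numpy as np

# Sketch of quantification from a real-time amplification standard curve: fit
# the onset time of detectable amplification against log10 concentration (an
# exponential relationship in concentration, linear on the log scale), then
# invert the fit to predict unknowns. All values are invented placeholders.

std_conc = np.array([1e2, 1e3, 1e4, 1e5])      # copies per reaction
onset_min = np.array([11.8, 9.6, 7.5, 5.3])    # time to threshold (minutes)

slope, intercept = np.polyfit(np.log10(std_conc), onset_min, 1)

def quantify(onset):
    """Predict concentration (copies/reaction) from an onset time."""
    return 10 ** ((onset - intercept) / slope)

print(f"{quantify(8.6):.0f} copies")            # between 1e3 and 1e4
```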
Comparison of genetic algorithms with conjugate gradient methods
NASA Technical Reports Server (NTRS)
Bosworth, J. L.; Foo, N. Y.; Zeigler, B. P.
1972-01-01
Genetic algorithms for mathematical function optimization are modeled on search strategies employed in natural adaptation. Comparisons of genetic algorithms with conjugate gradient methods, which were made on an IBM 1800 digital computer, show that genetic algorithms display superior performance over gradient methods for functions which are poorly behaved mathematically, for multimodal functions, and for functions obscured by additive random noise. Genetic methods offer performance comparable to gradient methods for many of the standard functions.
Algorithmic complexity of quantum capacity
NASA Astrophysics Data System (ADS)
Oskouei, Samad Khabbazi; Mancini, Stefano
2018-04-01
We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.
Nickel, Katelin B; Wallace, Anna E; Warren, David K; Ball, Kelly E; Mines, Daniel; Fraser, Victoria J; Olsen, Margaret A
2016-08-16
Accurate identification of underlying health conditions is important to fully adjust for confounders in studies using insurer claims data. Our objective was to evaluate the ability of four modifications to a standard claims-based measure to estimate the prevalence of select comorbid conditions compared with national prevalence estimates. In a cohort of 11,973 privately insured women aged 18-64 years with mastectomy from 1/04-12/11 in the HealthCore Integrated Research Database, we identified diabetes, hypertension, deficiency anemia, smoking, and obesity from inpatient and outpatient claims for the year prior to surgery using four different algorithms. The standard comorbidity measure was compared to revised algorithms which included outpatient medications for diabetes, hypertension and smoking; an expanded timeframe encompassing the mastectomy admission; and an adjusted time interval and number of required outpatient claims. A χ2 test of proportions was used to compare prevalence estimates for 5 conditions in the mastectomy population to national health survey datasets (Behavioral Risk Factor Surveillance System and the National Health and Nutrition Examination Survey). Medical record review was conducted for a sample of women to validate the identification of smoking and obesity. Compared to the standard claims algorithm, use of the modified algorithms increased prevalence from 4.79 to 6.79 % for diabetes, 14.75 to 24.87 % for hypertension, 4.23 to 6.65 % for deficiency anemia, 1.78 to 12.87 % for smoking, and 1.14 to 6.31 % for obesity. The revised estimates were more similar, but not statistically equivalent, to nationally reported prevalence estimates. Medical record review revealed low sensitivity (17.86 %) to capture obesity in the claims, moderate negative predictive value (NPV, 71.78 %) and high specificity (99.15 %) and positive predictive value (PPV, 90.91 %); the claims algorithm for current smoking had relatively low sensitivity (62.50 %) and PPV (50.00 %), but high specificity (92.19 %) and NPV (95.16 %). Modifications to a standard comorbidity measure resulted in prevalence estimates that were closer to expected estimates for non-elderly women than the standard measure. Adjustment of the standard claims algorithm to identify underlying comorbid conditions should be considered depending on the specific conditions and the patient population studied.
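One of the described modifications, adding outpatient medication claims to the diagnosis-based lookback, can be sketched as a simple flagging function. The diagnosis-code stems and drug names below are hypothetical placeholders, not the study's actual code lists.

```python
from datetime import date, timedelta

# Sketch of one modification described in the study: flag a comorbidity if a
# qualifying diagnosis appears in claims during the year before surgery, OR if
# an outpatient medication claim for the condition appears in that window.
# Codes and records below are hypothetical placeholders.

DIABETES_DX = {"250", "E11"}                  # hypothetical ICD-9/10 stems
DIABETES_RX = {"metformin", "insulin"}        # hypothetical drug names

def has_diabetes(claims, rx_claims, surgery_date):
    start = surgery_date - timedelta(days=365)
    dx_hit = any(start <= c["date"] <= surgery_date and
                 any(c["dx"].startswith(s) for s in DIABETES_DX)
                 for c in claims)
    rx_hit = any(start <= r["date"] <= surgery_date and
                 r["drug"].lower() in DIABETES_RX
                 for r in rx_claims)
    return dx_hit or rx_hit

claims = [{"date": date(2011, 3, 2), "dx": "E11.9"}]
rx = [{"date": date(2011, 5, 10), "drug": "Metformin"}]
print(has_diabetes(claims, rx, surgery_date=date(2011, 8, 1)))
```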
Reference-free automatic quality assessment of tracheoesophageal speech.
Huang, Andy; Falk, Tiago H; Chan, Wai-Yip; Parsa, Vijay; Doyle, Philip
2009-01-01
Evaluation of the quality of tracheoesophageal (TE) speech using machines instead of human experts can enhance the voice rehabilitation process for patients who have undergone total laryngectomy and voice restoration. Towards the goal of devising a reference-free TE speech quality estimation algorithm, we investigate the efficacy of speech signal features that are used in standard telephone-speech quality assessment algorithms, in conjunction with a recently introduced speech modulation spectrum measure. Tests performed on two TE speech databases demonstrate that the modulation spectral measure and a subset of features in the standard ITU-T P.563 algorithm estimate TE speech quality with better correlation (up to 0.9) than previously proposed features.
A sequential quadratic programming algorithm using an incomplete solution of the subproblem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, W.; Prieto, F.J.
1993-05-01
We analyze sequential quadratic programming (SQP) methods to solve nonlinear constrained optimization problems that are more flexible in their definition than standard SQP methods. The type of flexibility introduced is motivated by the necessity to deviate from the standard approach when solving large problems. Specifically, we no longer require a minimizer of the QP subproblem to be determined or particular Lagrange multiplier estimates to be used. Our main focus is on an SQP algorithm that uses a particular augmented Lagrangian merit function. New results are derived for this algorithm under weaker conditions than previously assumed; in particular, it is not assumed that the iterates lie on a compact set.
NASA Astrophysics Data System (ADS)
Tomiwa, K. G.
2017-09-01
The search for new physics in the H → γγ + MET channel relies on how well the missing transverse energy is reconstructed. The MET algorithm used by the ATLAS experiment in turn uses input objects such as photons and jets, which depend on the reconstruction of the primary vertex. This document presents the performance of di-photon vertex reconstruction algorithms (the hardest-vertex method and the neural-network method). Comparing the performance of these algorithms on the nominal Standard Model sample and a Beyond the Standard Model sample, the neural-network method of primary vertex selection performs better overall than the hardest-vertex method.
An evolutionary algorithm that constructs recurrent neural networks.
Angeline, P J; Saunders, G M; Pollack, J B
1994-01-01
Standard methods for simultaneously inducing the structure and weights of recurrent neural networks limit every task to an assumed class of architectures. Such a simplification is necessary since the interactions between network structure and function are not well understood. Evolutionary computations, which include genetic algorithms and evolutionary programming, are population-based search methods that have shown promise in many similarly complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. GNARL's empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods.
Dynamic Inertia Weight Binary Bat Algorithm with Neighborhood Search.
Huang, Xingwang; Zeng, Xuewen; Han, Rui
2017-01-01
Binary bat algorithm (BBA) is a binary version of the bat algorithm (BA). It has been proven that BBA is competitive compared to other binary heuristic algorithms. Since the update processes of velocity in the algorithm are consistent with BA, in some cases, this algorithm also faces the premature convergence problem. This paper proposes an improved binary bat algorithm (IBBA) to solve this problem. To evaluate the performance of IBBA, standard benchmark functions and zero-one knapsack problems have been employed. The numeric results obtained by benchmark functions experiment prove that the proposed approach greatly outperforms the original BBA and binary particle swarm optimization (BPSO). Compared with several other heuristic algorithms on zero-one knapsack problems, it also verifies that the proposed algorithm is more able to avoid local minima.
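For readers unfamiliar with the binary update, here is a minimal sketch of a BBA-style iteration with a linearly decreasing ("dynamic") inertia weight and a sigmoid transfer function, applied to a toy one-max objective. The parameter values are assumptions, and the paper's neighborhood search component is omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    def onemax(x):                      # toy objective: count of ones
        return x.sum()

    n_bats, dim, iters = 20, 30, 100
    fmin, fmax = 0.0, 2.0
    X = rng.integers(0, 2, (n_bats, dim))
    V = np.zeros((n_bats, dim))
    best = X[np.argmax([onemax(x) for x in X])].copy()

    for t in range(iters):
        w = 0.9 - (0.9 - 0.4) * t / iters          # dynamic inertia weight (sketch)
        freq = fmin + (fmax - fmin) * rng.random((n_bats, 1))
        V = w * V + (X - best) * freq              # BA-style velocity update
        prob = 1.0 / (1.0 + np.exp(-V))            # sigmoid transfer function
        X = (rng.random((n_bats, dim)) < prob).astype(int)
        cand = X[np.argmax([onemax(x) for x in X])]
        if onemax(cand) > onemax(best):
            best = cand.copy()

    print(onemax(best), "of", dim)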
Testing algorithms for critical slowing down
NASA Astrophysics Data System (ADS)
Cossu, Guido; Boyle, Peter; Christ, Norman; Jung, Chulwoo; Jüttner, Andreas; Sanfilippo, Francesco
2018-03-01
We present the preliminary tests on two modifications of the Hybrid Monte Carlo (HMC) algorithm. Both algorithms are designed to travel much farther in the Hamiltonian phase space for each trajectory and reduce the autocorrelations among physical observables thus tackling the critical slowing down towards the continuum limit. We present a comparison of costs of the new algorithms with the standard HMC evolution for pure gauge fields, studying the autocorrelation times for various quantities including the topological charge.
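The baseline against which such modifications are measured is ordinary HMC. A minimal sketch for a one-dimensional Gaussian target follows; the step size and trajectory length are illustrative, and longer trajectories are precisely the knob the modified algorithms turn.

    import numpy as np

    rng = np.random.default_rng(1)

    # Minimal HMC sketch for a 1D standard Gaussian target, U(q) = q^2/2.
    def leapfrog(q, p, eps, n_steps):
        p = p - 0.5 * eps * q            # grad U(q) = q
        for _ in range(n_steps - 1):
            q = q + eps * p
            p = p - eps * q
        q = q + eps * p
        p = p - 0.5 * eps * q
        return q, p

    q, samples = 0.0, []
    for _ in range(5000):
        p = rng.normal()
        H0 = 0.5 * q * q + 0.5 * p * p
        q_new, p_new = leapfrog(q, p, eps=0.2, n_steps=10)
        H1 = 0.5 * q_new * q_new + 0.5 * p_new * p_new
        if rng.random() < np.exp(H0 - H1):   # Metropolis accept/reject
            q = q_new
        samples.append(q)

    print(np.mean(samples), np.var(samples))   # ~0 and ~1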
The FBI compression standard for digitized fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.
1996-10-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
FBI compression standard for digitized fingerprint images
NASA Astrophysics Data System (ADS)
Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas
1996-11-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
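The core of the wavelet/scalar quantization method can be illustrated in a few lines: a biorthogonal wavelet decomposition followed by uniform scalar quantization of each subband. The sketch below uses the PyWavelets package with a generic bior4.4 wavelet and a single global bin width; the actual WSQ specification fixes particular filters, a 64-subband structure, and per-subband quantizers.

    import numpy as np
    import pywt  # PyWavelets, assumed available

    def quantize(c, step):
        return np.round(c / step)

    def dequantize(q, step):
        return q * step

    img = np.random.rand(128, 128)                 # stand-in for a fingerprint
    coeffs = pywt.wavedec2(img, "bior4.4", level=3)
    step = 0.05
    # Uniformly quantize the approximation band and every detail band.
    qcoeffs = [quantize(coeffs[0], step)] + [
        tuple(quantize(c, step) for c in band) for band in coeffs[1:]
    ]
    rec = pywt.waverec2(
        [dequantize(qcoeffs[0], step)] + [
            tuple(dequantize(c, step) for c in band) for band in qcoeffs[1:]
        ],
        "bior4.4",
    )
    # Reconstruction error is on the order of the quantizer bin width.
    print(np.abs(img - rec[:128, :128]).max())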
A nudging-based data assimilation method: the Back and Forth Nudging (BFN) algorithm
NASA Astrophysics Data System (ADS)
Auroux, D.; Blum, J.
2008-03-01
This paper deals with a new data assimilation algorithm, called Back and Forth Nudging (BFN). The standard nudging technique consists in adding to the equations of the model a relaxation term that is supposed to force the model towards the observations. The BFN algorithm consists in repeatedly performing forward and backward integrations of the model with relaxation (or nudging) terms, using opposite signs in the direct and inverse integrations, so as to make the backward evolution numerically stable. This algorithm was first tested on the standard Lorenz model with discrete observations (perfect or noisy) and compared with the variational assimilation method. The same type of study was then performed on the viscous Burgers equation, comparing again with the variational method and focusing on the time evolution of the reconstruction error, i.e. the difference between the reference trajectory and the identified one over a time period composed of an assimilation period followed by a prediction period. The possible use of the BFN algorithm as an initialization for the variational method has also been investigated. Finally, the algorithm was tested on a layered quasi-geostrophic model with sea-surface height observations. The behaviours of the two algorithms were compared in the presence of perfect or noisy observations, and also for imperfect models. This has allowed us to reach a conclusion concerning the relative performances of the two algorithms.
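A minimal sketch of the forward/backward nudging loop on the Lorenz model is given below. The gain K, the explicit Euler steps, and the assumption that the full state is observed at every step are simplifications for illustration.

    import numpy as np

    # Back and Forth Nudging sketch on Lorenz-63: forward passes add
    # +K(obs - x); backward passes integrate the reversed dynamics with the
    # opposite-sign feedback, which also stabilizes the backward run.
    def lorenz(x, s=10.0, r=28.0, b=8.0/3.0):
        return np.array([s*(x[1]-x[0]), x[0]*(r-x[2])-x[1], x[0]*x[1]-b*x[2]])

    def bfn(obs, dt, K=10.0, sweeps=20):
        n = len(obs)
        x = obs[0].copy()                     # crude first guess
        for _ in range(sweeps):
            for k in range(n - 1):            # forward with nudging
                x = x + dt*(lorenz(x) + K*(obs[k] - x))
            for k in range(n - 1, 0, -1):     # backward with opposite sign
                x = x - dt*(lorenz(x) - K*(obs[k] - x))
        return x                              # estimate of the initial state

    dt, n = 0.01, 200
    truth = np.array([1.0, 1.0, 1.0])
    traj = [truth.copy()]
    for _ in range(n - 1):
        traj.append(traj[-1] + dt*lorenz(traj[-1]))
    obs = np.array(traj) + 0.1*np.random.default_rng(2).normal(size=(n, 3))
    print(np.abs(bfn(obs, dt) - truth))       # small residual error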
Spectral correction algorithm for multispectral CdTe x-ray detectors
NASA Astrophysics Data System (ADS)
Christensen, Erik D.; Kehres, Jan; Gu, Yun; Feidenhans'l, Robert; Olsen, Ulrik L.
2017-09-01
Compared to the dual-energy scintillator detectors widely used today, pixelated multispectral X-ray detectors show the potential to improve material identification in various radiography and tomography applications used for industrial and security purposes. However, detector effects, such as charge sharing and photon pileup, distort the measured spectra in high-flux pixelated multispectral detectors. These effects significantly reduce the detectors' capability to be used for material identification, which requires accurate spectral measurements. We have developed a semi-analytical computational algorithm for multispectral CdTe X-ray detectors which corrects the measured spectra for severe spectral distortions caused by the detector. The algorithm is developed for the Multix ME100 CdTe X-ray detector, but could potentially be adapted for any pixelated multispectral CdTe detector. The calibration of the algorithm is based on simple attenuation measurements of commercially available materials using standard laboratory sources, making the algorithm applicable in any X-ray setup. The validation of the algorithm has been done using experimental data acquired with both standard lab equipment and synchrotron radiation. The experiments show that the algorithm is fast, reliable even at X-ray fluxes up to 5 Mphotons/s/mm2, and greatly improves the accuracy of the measured X-ray spectra, making the algorithm very useful for both security and industrial applications where multispectral detectors are used.
Hazardous Traffic Event Detection Using Markov Blanket and Sequential Minimal Optimization (MB-SMO).
Yan, Lixin; Zhang, Yishi; He, Yi; Gao, Song; Zhu, Dunyao; Ran, Bin; Wu, Qing
2016-07-13
The ability to identify hazardous traffic events is already considered as one of the most effective solutions for reducing the occurrence of crashes. Only certain particular hazardous traffic events have been studied in previous studies, which were mainly based on dedicated video stream data and GPS data. The objective of this study is twofold: (1) the Markov blanket (MB) algorithm is employed to extract the main factors associated with hazardous traffic events; (2) a model is developed to identify hazardous traffic event using driving characteristics, vehicle trajectory, and vehicle position data. Twenty-two licensed drivers were recruited to carry out a natural driving experiment in Wuhan, China, and multi-sensor information data were collected for different types of traffic events. The results indicated that a vehicle's speed, the standard deviation of speed, the standard deviation of skin conductance, the standard deviation of brake pressure, turn signal, the acceleration of steering, the standard deviation of acceleration, and the acceleration in Z (G) have significant influences on hazardous traffic events. The sequential minimal optimization (SMO) algorithm was adopted to build the identification model, and the accuracy of prediction was higher than 86%. Moreover, compared with other detection algorithms, the MB-SMO algorithm was ranked best in terms of the prediction accuracy. The conclusions can provide reference evidence for the development of dangerous situation warning products and the design of intelligent vehicles.
Versatile and efficient pore network extraction method using marker-based watershed segmentation
NASA Astrophysics Data System (ADS)
Gostick, Jeff T.
2017-08-01
Obtaining structural information from tomographic images of porous materials is a critical component of porous media research. Extracting pore networks is particularly valuable since it enables pore network modeling simulations, which can be useful for a host of tasks from predicting transport properties to simulating the performance of entire devices. This work reports an efficient algorithm for extracting networks using only standard image analysis techniques. The algorithm was applied to several standard porous materials ranging from sandstone to fibrous mats, and in all cases agreed very well with established or known values for pore and throat sizes, capillary pressure curves, and permeability. In the case of sandstone, the present algorithm was compared to the network obtained using the current state-of-the-art algorithm, and very good agreement was achieved. Most importantly, the network extracted from an image of fibrous media correctly predicted the anisotropic permeability tensor, demonstrating the critical ability to detect key structural features. The highly efficient algorithm allows extraction on fairly large images of 500³ voxels in just over 200 s. The ability of one algorithm to match materials as varied as sandstone with 20% porosity and fibrous media with 75% porosity is a significant advancement. The source code for this algorithm is provided.
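The marker-based watershed step at the heart of such an extraction can be sketched with standard scientific Python tools: peaks of the pore-space distance transform serve as markers, and the watershed grows one region per pore. The toy geometry and the min_distance setting are assumptions; the published algorithm additionally merges spurious peaks and builds the throat network.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    rng = np.random.default_rng(3)
    im = ndi.gaussian_filter(rng.random((128, 128)), 4) > 0.5   # toy pore space
    dt = ndi.distance_transform_edt(im)                         # distance to solid
    coords = peak_local_max(dt, min_distance=5, labels=im)      # pore centers
    markers = np.zeros_like(im, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    regions = watershed(-dt, markers, mask=im)                  # one label per pore
    print(regions.max(), "pores found")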
Genetic Algorithms to Optimize Lecturer Assessment Criteria
NASA Astrophysics Data System (ADS)
Jollyta, Deny; Johan; Hajjah, Alyauma
2017-12-01
Lecturer assessment criteria are used as a measurement of a lecturer's performance in a college environment. Determining the value of each criterion is complicated and often leads to doubt. The absence of a standard value for each assessment criterion will affect the final results of the assessment and yield less representative data for college leadership when setting policies related to reward and punishment. The genetic algorithm is capable of solving such non-linear problems. Starting from chromosomes in a random initial population, here with a binary representation, it evaluates a fitness function and applies the crossover and mutation operators to obtain the desired offspring. The aim is to obtain the most optimal criteria values in terms of the fitness function of each chromosome. The training results show that the genetic algorithm is able to produce optimal values of the lecturer assessment criteria, which the college can then use as standard values for lecturer assessment.
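A minimal sketch of the binary-chromosome genetic algorithm loop described above follows, with roulette-wheel selection, one-point crossover, and bit-flip mutation. The toy target-matching fitness stands in for the paper's criteria-assessment fitness, which is not specified in the abstract.

    import numpy as np

    rng = np.random.default_rng(4)

    target = rng.integers(0, 2, 24)          # toy optimum to recover
    def fitness(pop):
        return (pop == target).sum(axis=1).astype(float)

    pop = rng.integers(0, 2, (40, 24))       # random binary initial population
    for gen in range(200):
        f = fitness(pop)
        probs = f / f.sum()                               # roulette wheel
        parents = pop[rng.choice(len(pop), size=len(pop), p=probs)]
        cut = rng.integers(1, pop.shape[1], size=len(pop) // 2)
        children = parents.copy()
        for i, c in enumerate(cut):                       # one-point crossover
            a, b = 2 * i, 2 * i + 1
            children[a, c:], children[b, c:] = parents[b, c:], parents[a, c:]
        flip = rng.random(children.shape) < 0.01          # bit-flip mutation
        pop = np.where(flip, 1 - children, children)

    print(fitness(pop).max(), "of", len(target))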
Iterative algorithms for a non-linear inverse problem in atmospheric lidar
NASA Astrophysics Data System (ADS)
Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto
2017-08-01
We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative, and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with a non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
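The flavor of such constrained iterations can be conveyed with a hedged sketch: projected gradient descent on the Poisson negative log-likelihood for a toy model y ~ Poisson(b·exp(-Ax)). This is not the paper's KKT-derived update; the operator, step size, and signal scale are all assumptions.

    import numpy as np

    rng = np.random.default_rng(5)

    # Toy model: counts y_i ~ Poisson(b_i * exp(-(Ax)_i)), x >= 0.
    # Negative log-likelihood gradient is A^T (y - lambda), projected onto
    # the non-negative orthant after each step.
    m, n = 200, 50
    A = rng.random((m, n)) * 0.05
    x_true = rng.random(n)
    b = 1e4 * np.ones(m)
    y = rng.poisson(b * np.exp(-A @ x_true))

    x, step = np.zeros(n), 2e-5
    for _ in range(2000):
        lam = b * np.exp(-A @ x)                          # model mean
        x = np.maximum(0.0, x - step * (A.T @ (y - lam))) # projected step
    print(np.abs(x - x_true).mean())                      # mean recovery error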
New Secure E-mail System Based on Bio-Chaos Key Generation and Modified AES Algorithm
NASA Astrophysics Data System (ADS)
Hoomod, Haider K.; Radi, A. M.
2018-05-01
E-mail messages are exchanged between the sender's and the recipient's mailboxes over open systems and insecure networks. These messages may be vulnerable to eavesdropping, which poses a real threat to privacy and data integrity from unauthorized persons. E-mail security includes the following properties: confidentiality, authentication, and message integrity. A safe encryption algorithm is needed to encrypt e-mail messages, such as the Advanced Encryption Standard (AES) or the Data Encryption Standard (DES), as well as biometric recognition and a chaotic system. The proposed secure e-mail system uses a modified AES algorithm with a secret bio-chaos key that combines a biometric input (fingerprint) with chaotic systems (Lu and Lorenz). This modification makes the proposed system more sensitive and random. The execution time for both encryption and decryption in the proposed system is much lower than for the original AES, in addition to being compatible with all mail servers.
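As a rough illustration of chaos-based key generation (not the paper's exact construction), the sketch below seeds a Lorenz trajectory from a hash of a fingerprint template and folds the chaotic states into key bytes. The fingerprint placeholder, integration step, and byte-extraction rule are assumptions.

    import hashlib

    # Illustrative bio-chaos key sketch: the biometric input seeds the chaotic
    # initial condition; fractional parts of the trajectory become key bytes.
    def bio_chaos_key(fingerprint_bytes, n_bytes=16):
        seed = hashlib.sha256(fingerprint_bytes).digest()
        x, y, z = [1.0 + b / 256.0 for b in seed[:3]]    # seed the chaotic state
        s, r, b, dt = 10.0, 28.0, 8.0 / 3.0, 0.01        # Lorenz parameters
        out = bytearray()
        for _ in range(200):                             # discard transient
            x, y, z = x + dt*s*(y-x), y + dt*(x*(r-z)-y), z + dt*(x*y-b*z)
        while len(out) < n_bytes:
            x, y, z = x + dt*s*(y-x), y + dt*(x*(r-z)-y), z + dt*(x*y-b*z)
            out.append(int(abs(x * 1e4) % 256.0))        # fold state into a byte
        return bytes(out)

    key = bio_chaos_key(b"minutiae-template-placeholder")  # hypothetical input
    print(key.hex())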
Reducing False Positives in Runtime Analysis of Deadlocks
NASA Technical Reports Server (NTRS)
Bensalem, Saddek; Havelund, Klaus; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper presents an improvement of a standard algorithm for detecting deadlock potentials in multi-threaded programs, in that it reduces the number of false positives. The standard algorithm works as follows. The multi-threaded program under observation is executed while lock and unlock events are observed. A graph of locks is built, with edges between locks symbolizing locking orders. Any cycle in the graph signifies a potential for a deadlock. The typical standard example is the group of dining philosophers sharing forks. The algorithm is interesting because it can catch deadlock potentials even though no deadlocks occur in the examined trace, and at the same time it scales very well in contrast to more formal approaches to deadlock detection. The algorithm, however, can yield false positives (as well as false negatives). The extension of the algorithm described in this paper reduces the amount of false positives for three particular cases: when a gate lock protects a cycle, when a single thread introduces a cycle, and when the code segments in different threads that cause the cycle cannot actually execute in parallel. The paper formalizes a theory for dynamic deadlock detection and compares it to model checking and static analysis techniques. It furthermore describes an implementation for analyzing Java programs and its application to two case studies: a planetary rover and a spacecraft attitude control system.
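The standard algorithm being improved is easy to sketch: record which locks a thread already holds when it acquires a new one, add the corresponding edges, and search the lock graph for cycles. The event-recording interface below is hypothetical; the paper's refinements for gate locks, single-thread cycles, and non-parallel segments would filter the raw cycles this sketch reports.

    from collections import defaultdict

    edges = defaultdict(set)          # lock-order graph: held_lock -> new_lock
    held = defaultdict(list)          # thread -> stack of held locks

    def on_lock(thread, lock):
        for h in held[thread]:
            edges[h].add(lock)        # h was held while lock was acquired
        held[thread].append(lock)

    def on_unlock(thread, lock):
        held[thread].remove(lock)

    def has_cycle():
        WHITE, GRAY, BLACK = 0, 1, 2
        color = defaultdict(int)
        def dfs(u):
            color[u] = GRAY
            for v in edges[u]:
                if color[v] == GRAY or (color[v] == WHITE and dfs(v)):
                    return True
            color[u] = BLACK
            return False
        return any(color[u] == WHITE and dfs(u) for u in list(edges))

    # Two dining philosophers taking forks in opposite orders:
    on_lock("T1", "fork1"); on_lock("T1", "fork2")
    on_unlock("T1", "fork2"); on_unlock("T1", "fork1")
    on_lock("T2", "fork2"); on_lock("T2", "fork1")
    print(has_cycle())                # True: fork1->fork2 and fork2->fork1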
Production of τ τ jj final states at the LHC and the TauSpinner algorithm: the spin-2 case
NASA Astrophysics Data System (ADS)
Bahmani, M.; Kalinowski, J.; Kotlarski, W.; Richter-Wąs, E.; Wąs, Z.
2018-01-01
The TauSpinner algorithm is a tool that allows one to modify the physics model of Monte Carlo generated samples to reflect changed assumptions about the event production dynamics, but without the need to re-generate events. With the help of weights, τ-lepton production or decay processes can be modified according to a new physics model. In a recent paper, a new version, TauSpinner ver.2.0.0, was presented, which includes a provision for introducing non-standard states and couplings and for studying their effects in vector-boson-fusion processes by exploiting the spin correlations of τ-lepton pair decay products in processes whose final states also include two hard jets. In the present paper we document how this can be achieved, taking as an example a non-standard spin-2 state that couples to Standard Model particles, with tree-level matrix elements carrying complete helicity information for the parton-parton scattering amplitudes into a τ-lepton pair and two outgoing partons. This implementation is prepared as the external (user-provided) routine for the TauSpinner algorithm. It exploits amplitudes generated by MadGraph5 and adapted to the TauSpinner algorithm format. Consistency tests of the implemented matrix elements, the re-weighting algorithm, and numerical results for observables sensitive to τ polarisation are presented.
Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.
Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward
2006-08-01
Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D Meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
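The rate-distortion-optimal idea can be sketched with a Lagrangian allocation: each slice independently picks the operating point minimizing D + λR, and λ is bisected until the total rate meets the budget. The synthetic R-D curves below are assumptions; the paper's optimal method operates on actual JPEG 2000 codeblock pass data.

    import numpy as np

    def allocate(slices, budget, iters=50):
        # slices: per-slice lists of (rate, distortion) operating points.
        lo, hi = 0.0, 1e6
        for _ in range(iters):
            lam = 0.5 * (lo + hi)
            picks = [min(pts, key=lambda p: p[1] + lam * p[0]) for pts in slices]
            total = sum(p[0] for p in picks)
            if total > budget:
                lo = lam          # too many bits: penalize rate more
            else:
                hi = lam
        return picks, total

    rng = np.random.default_rng(6)
    slices = []
    for _ in range(4):            # synthetic convex R-D curves: D ~ c * 2^(-2R)
        c = rng.uniform(1.0, 4.0)
        slices.append([(r, c * 2.0 ** (-2.0 * r)) for r in np.linspace(0.1, 4.0, 40)])
    picks, total = allocate(slices, budget=6.0)
    print(total, [round(p[0], 2) for p in picks])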
Warrick, P A; Precup, D; Hamilton, E F; Kearney, R E
2007-01-01
To develop a singular-spectrum analysis (SSA) based change-point detection algorithm applicable to fetal heart rate (FHR) monitoring to improve the detection of deceleration events. We present a method for decomposing a signal into near-orthogonal components via the discrete cosine transform (DCT) and apply this in a novel online manner to change-point detection based on SSA. The SSA technique forms models of the underlying signal that can be compared over time; models that are sufficiently different indicate signal change points. To adapt the algorithm to deceleration detection where many successive similar change events can occur, we modify the standard SSA algorithm to hold the reference model constant under such conditions, an approach that we term "base-hold SSA". The algorithm is applied to a database of 15 FHR tracings that have been preprocessed to locate candidate decelerations and is compared to the markings of an expert obstetrician. Of the 528 true and 1285 false decelerations presented to the algorithm, the base-hold approach improved on standard SSA, reducing the number of missed decelerations from 64 to 49 (21.9%) while maintaining the same reduction in false-positives (278). The standard SSA assumption that changes are infrequent does not apply to FHR analysis where decelerations can occur successively and in close proximity; our base-hold SSA modification improves detection of these types of event series.
Planning Under Uncertainty: Methods and Applications
2010-06-09
Research has begun into fundamental algorithms for optimization and re-optimization of continuous optimization problems (such as linear and quadratic...). The algorithm yields a 14.3% improvement over the original design while saving 68.2% of the simulation evaluations compared to standard sample-path... They provide tools for building and justifying computational algorithms for such problems.
Fast Back-Propagation Learning Using Steep Activation Functions and Automatic Weight
Tai-Hoon Cho; Richard W. Conners; Philip A. Araman
1992-01-01
In this paper, several back-propagation (BP) learning speed-up algorithms that employ the "gain" parameter, i.e., the steepness of the activation function, are examined. Simulations will show that increasing the gain seemingly increases the speed of convergence and that these algorithms can converge faster than the standard BP learning algorithm on some problems. However,...
Advancements in the Development of an Operational Lightning Jump Algorithm for GOES-R GLM
NASA Technical Reports Server (NTRS)
Shultz, Chris; Petersen, Walter; Carey, Lawrence
2011-01-01
Rapid increases in total lightning have been shown to precede the manifestation of severe weather at the surface. These rapid increases have been termed lightning jumps, and are the current focus of algorithm development for the GOES-R Geostationary Lightning Mapper (GLM). Recent lightning jump algorithm work has focused on evaluating the algorithms in three additional regions of the country, as well as markedly increasing the number of thunderstorms in order to evaluate each algorithm's performance on a larger population of storms. Lightning characteristics of just over 600 thunderstorms have been studied over the past four years. The 2σ lightning jump algorithm continues to show the most promise for an operational lightning jump algorithm, with a probability of detection of 82%, a false alarm rate of 35%, a critical success index of 57%, and a Heidke Skill Score of 0.73 on the entire population of thunderstorms. The average lead time of the 2σ algorithm on all severe weather is 21.15 minutes, with a standard deviation of +/- 14.68 minutes. Looking at tornadoes alone, the average lead time is 18.71 minutes, with a standard deviation of +/- 14.88 minutes. Moreover, removing the 2σ lightning jumps that occur after a jump has already been detected and before severe weather is detected at the ground, the 2σ lightning jump algorithm's false alarm rate drops from 35% to 21%. Cold season, low-topped, and tropical environments cause problems for the 2σ lightning jump algorithm, due to their relative dearth of lightning as compared to a supercellular or summertime airmass thunderstorm environment.
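A minimal sketch of a 2σ-style jump test follows: flag a jump when the latest time rate of change of the flash rate exceeds the recent mean by two standard deviations. The window length, time step, and toy flash-rate series are assumptions, not the operational configuration.

    import numpy as np

    def lightning_jumps(flash_rate, dt_min=2.0, history=5):
        dfrdt = np.diff(flash_rate) / dt_min          # flashes / min / min
        jumps = []
        for i in range(history, len(dfrdt)):
            mu = dfrdt[i-history:i].mean()
            sigma = dfrdt[i-history:i].std()
            if sigma > 0 and dfrdt[i] > mu + 2.0 * sigma:
                jumps.append(i)                       # 2-sigma exceedance
        return jumps

    rate = np.array([8, 9, 9, 10, 11, 10, 12, 30, 55, 60, 58], dtype=float)
    print(lightning_jumps(rate))                      # flags the rapid increase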
A comparison between physicians and computer algorithms for form CMS-2728 data reporting.
Malas, Mohammed Said; Wish, Jay; Moorthi, Ranjani; Grannis, Shaun; Dexter, Paul; Duke, Jon; Moe, Sharon
2017-01-01
The CMS-2728 form (Medical Evidence Report) assesses 23 comorbidities chosen to reflect poor outcomes and increased mortality risk. Previous studies questioned the validity of physician reporting on form CMS-2728. We hypothesize that reporting of comorbidities by computer algorithms identifies more comorbidities than physician completion and, therefore, is more reflective of the underlying disease burden. We collected data from CMS-2728 forms for all 296 patients who had an incident ESRD diagnosis and received chronic dialysis from 2005 through 2014 at Indiana University outpatient dialysis centers. We analyzed patients' data from electronic medical record systems that collated information from multiple health care sources. Previously utilized algorithms or natural language processing was used to extract data on 10 comorbidities for a period of up to 10 years prior to ESRD incidence. These algorithms incorporate billing codes, prescriptions, and other relevant elements. We compared the presence or unchecked status of these comorbidities on the forms to the presence or absence according to the algorithms. Computer algorithms had higher reporting of comorbidities compared to form completion by physicians. This remained true when decreasing the data span to one year and using only a single health center source. The algorithms' determinations were well accepted by a physician panel. Importantly, use of the algorithms significantly increased the expected deaths and lowered the standardized mortality ratios. Using computer algorithms showed superior identification of comorbidities for form CMS-2728 and altered standardized mortality ratios. Adapting similar algorithms in available EMR systems may offer more thorough evaluation of comorbidities and improve quality reporting. © 2016 International Society for Hemodialysis.
NASA Astrophysics Data System (ADS)
Basile, Vito; Guadagno, Gianluca; Ferrario, Maddalena; Fassi, Irene
2018-03-01
In this paper a parametric, modular and scalable algorithm allowing fully automated assembly of a backplane fiber-optic interconnection circuit is presented. This approach guarantees the optimization of the optical fiber routing inside the backplane with respect to specific criteria (i.e. bending power losses), addressing both transmission performance and overall cost. Graph theory has been exploited to simplify the complexity of the NxN full-mesh backplane interconnection topology, first into N independent sub-circuits and then, recursively, into a limited number of loops that are easier to generate. Afterwards, the proposed algorithm selects a set of geometrical and architectural parameters whose optimization identifies the optimal fiber-optic routing for each sub-circuit of the backplane. The topological and numerical information provided by the algorithm is then exploited to control a robot which performs the automated assembly of the backplane sub-circuits. The proposed routing algorithm can be extended to any array architecture and number of connections thanks to its modularity and scalability. Finally, the algorithm has been exploited for the automated assembly of an 8x8 optical backplane realized with standard multimode (MM) 12-fiber ribbons.
Cheng, Cynthia; Lee, Chadd W; Daskalakis, Constantine
2015-10-27
Capillaroscopy is a non-invasive, efficient, relatively inexpensive and easy-to-learn methodology for directly visualizing the microcirculation. The capillaroscopy technique can provide insight into a patient's microvascular health, leading to a variety of potentially valuable dermatologic, ophthalmologic, rheumatologic and cardiovascular clinical applications. In addition, tumor growth may be dependent on angiogenesis, which can be quantitated by measuring microvessel density within the tumor. However, there is currently little to no standardization of techniques, and only one publication to date reports the reliability of a currently available, complex computer-based algorithm for quantitating capillaroscopy data (1). This paper describes a new, simpler, reliable, standardized capillary counting algorithm for quantitating nailfold capillaroscopy data. A simple, reproducible computerized capillaroscopy algorithm such as this would facilitate more widespread use of the technique among researchers and clinicians. Many researchers currently analyze capillaroscopy images by hand, promoting user fatigue and subjectivity of the results. This paper describes a novel, easy-to-use automated image processing algorithm in addition to a reproducible, semi-automated counting algorithm. This algorithm enables analysis of images in minutes while reducing subjectivity; only a minimal amount of training time (in our experience, less than 1 hr) is needed to learn the technique.
Evaluating Algorithm Performance Metrics Tailored for Prognostics
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2009-01-01
Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of the system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess the key performance aspects expected of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically, four algorithms are compared: Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR). These algorithms vary in complexity and their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms differently and that, depending on the requirements and constraints, suitable metrics may be chosen. Beyond these results, the metrics offer ideas about how metrics suitable for prognostics may be designed so that the evaluation procedure can be standardized.
Genetics-based control of a mimo boiler-turbine plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimeo, R.M.; Lee, K.Y.
1994-12-31
A genetic algorithm is used to develop an optimal controller for a non-linear, multi-input/multi-output boiler-turbine plant. The algorithm is used to train a control system for the plant over a wide operating range in an effort to obtain better performance. The results of the genetic algorithm's controller are compared with those of a controller designed from the linearized plant model at a nominal operating point. Because the genetic algorithm is well suited to solving traditionally difficult optimization problems, it is found that the algorithm is capable of developing the controller based on input/output information only. This controller achieves a performance comparable to the standard linear quadratic regulator.
A novel hybrid meta-heuristic technique applied to the well-known benchmark optimization problems
NASA Astrophysics Data System (ADS)
Abtahi, Amir-Reza; Bijari, Afsane
2017-03-01
In this paper, a hybrid meta-heuristic algorithm, based on imperialistic competition algorithm (ICA), harmony search (HS), and simulated annealing (SA) is presented. The body of the proposed hybrid algorithm is based on ICA. The proposed hybrid algorithm inherits the advantages of the process of harmony creation in HS algorithm to improve the exploitation phase of the ICA algorithm. In addition, the proposed hybrid algorithm uses SA to make a balance between exploration and exploitation phases. The proposed hybrid algorithm is compared with several meta-heuristic methods, including genetic algorithm (GA), HS, and ICA on several well-known benchmark instances. The comprehensive experiments and statistical analysis on standard benchmark functions certify the superiority of the proposed method over the other algorithms. The efficacy of the proposed hybrid algorithm is promising and can be used in several real-life engineering and management problems.
Improved retrieval of cloud base heights from ceilometer using a non-standard instrument method
NASA Astrophysics Data System (ADS)
Wang, Yang; Zhao, Chuanfeng; Dong, Zipeng; Li, Zhanqing; Hu, Shuzhen; Chen, Tianmeng; Tao, Fa; Wang, Yuzhao
2018-04-01
Cloud-base height (CBH) is a basic cloud parameter but one that has not been measured accurately, especially under polluted conditions, due to the interference of aerosol. Taking advantage of a comprehensive field experiment in northern China in which a variety of advanced cloud-probing instruments were operated, different methods of detecting CBH are assessed. The Micro-Pulse Lidar (MPL) and the Vaisala ceilometer (CL51) provided two types of backscatter profiles. The latter has been employed widely as a standard means of measuring CBH, using the manufacturer's operational algorithm to generate standard CBH products (CL51 MAN), whose quality is rigorously assessed here in comparison with a research algorithm that we developed, named the value distribution equalization (VDE) algorithm. The VDE algorithm was applied to the lidar backscatter profiles from both instruments. It is found to produce more accurate estimates of CBH for both instruments and can cope well with heavy aerosol loading. By contrast, CL51 MAN overestimates CBH by 400 m and misses many low-level clouds under such conditions. These findings are important given that CL51 has been adopted operationally by many meteorological stations in China.
NASA Astrophysics Data System (ADS)
Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.
2016-03-01
The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross-validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.
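The evaluation pattern, though not the data, can be reproduced with scikit-learn: several classifiers scored by 10-fold cross-validation against ground truth. The synthetic features below are stand-ins for the MSI reflectance data and six tissue classes.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                               QuadraticDiscriminantAnalysis)
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in: 8 spectral features, 6 tissue classes.
    X, y = make_classification(n_samples=600, n_features=8, n_informative=6,
                               n_classes=6, n_clusters_per_class=1, random_state=0)
    models = {
        "KNN": KNeighborsClassifier(),
        "DT": DecisionTreeClassifier(random_state=0),
        "LDA": LinearDiscriminantAnalysis(),
        "QDA": QuadraticDiscriminantAnalysis(),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=10)   # 10-fold CV accuracy
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")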
Detection of new HIV infections in a multicentre HIV antiretroviral pre-exposure prophylaxis trial.
Fransen, Katrien; de Baetselier, Irith; Rammutla, Elizabeth; Ahmed, Khatija; Owino, Frederick; Agingu, Walter; Venter, Gustav; Deese, Jen; Van Damme, Lut; Crucitti, Tania
2017-08-01
Monthly specimens collected from FEM-PrEP, a Phase III trial [1], were investigated for the detection of acute HIV infection (AHI). The aims were to evaluate the efficiency of the study-specific HIV algorithm in detecting AHI, the performance of each of the serological and molecular tests used in diagnosing new infections, and their contribution to narrowing the window period. A total of 83 pre-seroconversion specimens from 61 seroconverters from the FEM-PrEP trial were further analyzed in a sub-study. During the trial, HIV seroconversion was diagnosed on site using a testing algorithm with simple/rapid tests (SRTs) and confirmed with a gold standard testing algorithm (see short communication: Fig. 1). The infection date was determined more accurately by the use of standard ELISAs and Nucleic Acid Amplification Tests (NAAT) in a look-back procedure. For this sub-study, the international central laboratory repeated the study algorithm using SRTs. RNA was detected in 35/61 seroconverters at the visit before the seroconversion visit as determined at the study sites. Four seroconversion dates were inaccurate at one study site, as the international central laboratory detected the HIV infection one visit earlier using the same test algorithm. Using the gold standard, an additional seroconversion was detected at an earlier visit. The combined antigen/antibody test and the single antigen test had higher sensitivity than the SRTs in detecting acute infections. In the FEM-PrEP trial, the international central laboratory detected a small number of seroconversions one month earlier than the study sites using the same study algorithm. Standard tests are still the most sensitive in detecting pre-seroconversion or acute HIV infection, but they are costly, time consuming, and not recommended for on-site use in a clinical trial. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kassianov, Evgueni I.; Flynn, Connor J.; Koontz, Annette S.
2013-09-11
Well-known cloud-screening algorithms, which are designed to remove cloud-contaminated aerosol optical depths (AOD) from AOD measurements, have shown great performance at many middle-to-low-latitude sites around the world. However, they may occasionally fail under challenging observational conditions, such as when the sun is low (near the horizon) or when optically thin clouds with small spatial inhomogeneity occur. Such conditions have been observed quite frequently at the high-latitude Atmospheric Radiation Measurement (ARM) North Slope of Alaska (NSA) sites. A slightly modified cloud-screening version of the standard algorithm is proposed here with a focus on the ARM-supported Multifilter Rotating Shadowband Radiometer (MFRSR) and Normal Incidence Multifilter Radiometer (NIMFR) data. The modified version uses approximately the same techniques as the standard algorithm, but it additionally examines the magnitude of the slant-path line-of-sight transmittance and eliminates points when the observed magnitude is below a specified threshold. Substantial improvement of the multi-year (1999-2012) aerosol product (AOD and its Angstrom exponent) is shown for the NSA sites when the modified version is applied. Moreover, this version reproduces the AOD product at the ARM Southern Great Plains (SGP) site, which was originally generated by the standard cloud-screening algorithms. The proposed minor modification is easy to implement, and its application to existing and future cloud-screening algorithms can be particularly beneficial for challenging observational conditions.
Automatic detection of end-diastolic and end-systolic frames in 2D echocardiography.
Zolgharni, Massoud; Negoita, Madalina; Dhutia, Niti M; Mielewczik, Michael; Manoharan, Karikaran; Sohaib, S M Afzal; Finegold, Judith A; Sacchi, Stefania; Cole, Graham D; Francis, Darrel P
2017-07-01
Correctly selecting the end-diastolic and end-systolic frames on a 2D echocardiogram is important and challenging, for both human experts and automated algorithms. Manual selection is time-consuming and subject to uncertainty, and may affect the results obtained, especially for advanced measurements such as myocardial strain. We developed and evaluated algorithms which can automatically extract global and regional cardiac velocity, and identify end-diastolic and end-systolic frames. We acquired apical four-chamber 2D echocardiographic video recordings, each at least 10 heartbeats long, acquired twice at frame rates of 52 and 79 frames/s from 19 patients, yielding 38 recordings. Five experienced echocardiographers independently marked end-systolic and end-diastolic frames for the first 10 heartbeats of each recording. The automated algorithm also did this. Using the average of time points identified by five human operators as the reference gold standard, the individual operators had a root mean square difference from that gold standard of 46.5 ms. The algorithm had a root mean square difference from the human gold standard of 40.5 ms (P<.0001). Put another way, the algorithm-identified time point was an outlier in 122/564 heartbeats (21.6%), whereas the average human operator was an outlier in 254/564 heartbeats (45%). An automated algorithm can identify the end-systolic and end-diastolic frames with performance indistinguishable from that of human experts. This saves staff time, which could therefore be invested in assessing more beats, and reduces uncertainty about the reliability of the choice of frame. © 2017, Wiley Periodicals, Inc.
Automatable algorithms to identify nonmedical opioid use using electronic data: a systematic review.
Canan, Chelsea; Polinski, Jennifer M; Alexander, G Caleb; Kowal, Mary K; Brennan, Troyen A; Shrank, William H
2017-11-01
Improved methods to identify nonmedical opioid use can help direct health care resources to individuals who need them. Automated algorithms that use large databases of electronic health care claims or records for surveillance are a potential means to achieve this goal. In this systematic review, we reviewed the utility, attempts at validation, and application of such algorithms to detect nonmedical opioid use. We searched PubMed and Embase for articles describing automatable algorithms that used electronic health care claims or records to identify patients or prescribers with likely nonmedical opioid use. We assessed algorithm development, validation, and performance characteristics and the settings where they were applied. Study variability precluded a meta-analysis. Of 15 included algorithms, 10 targeted patients, 2 targeted providers, 2 targeted both, and 1 identified medications with high abuse potential. Most patient-focused algorithms (67%) used prescription drug claims and/or medical claims, with diagnosis codes of substance abuse and/or dependence as the reference standard. Eleven algorithms were developed via regression modeling. Four used natural language processing, data mining, audit analysis, or factor analysis. Automated algorithms can facilitate population-level surveillance. However, there is no true gold standard for determining nonmedical opioid use. Users must recognize the implications of identifying false positives and, conversely, false negatives. Few algorithms have been applied in real-world settings. Automated algorithms may facilitate identification of patients and/or providers most likely to need more intensive screening and/or intervention for nonmedical opioid use. Additional implementation research in real-world settings would clarify their utility. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
NASA Astrophysics Data System (ADS)
Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.
2013-01-01
A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
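One chaos-tuned variant is easy to sketch: the standard firefly move with the attractiveness coefficient driven by a logistic map rather than held constant. The map, its seed, and the sphere test function are illustrative choices, not necessarily among the paper's twelve maps.

    import numpy as np

    rng = np.random.default_rng(9)

    def sphere(x):                                 # benchmark: min at the origin
        return (x**2).sum(axis=-1)

    n, dim, iters = 15, 5, 200
    X = rng.uniform(-5, 5, (n, dim))
    beta0, gamma, alpha = 1.0, 1.0, 0.2
    c = 0.7                                        # logistic-map state
    for _ in range(iters):
        f = sphere(X)
        c = 4.0 * c * (1.0 - c)                    # chaotic map in (0, 1)
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:                    # j is brighter (minimization)
                    r2 = ((X[i] - X[j])**2).sum()
                    beta = beta0 * c * np.exp(-gamma * r2)  # chaos-tuned attraction
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
        alpha *= 0.98                              # cool the random walk
    print(sphere(X).min())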
Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.
Serebrinsky, Santiago A
2011-03-01
We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
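For reference, the rejection-free (BKL) time scale in question is the exponential increment dt = -ln(u)/R_total. A minimal sketch, with toy event rates assumed:

    import numpy as np

    rng = np.random.default_rng(7)

    # Rejection-free (BKL) kinetic Monte Carlo: pick an event with probability
    # proportional to its rate, then advance physical time by an exponentially
    # distributed increment dt = -ln(u)/R_total.
    rates = np.array([1.0, 0.5, 0.1])      # toy event rates (1/s)
    t, counts = 0.0, np.zeros_like(rates)
    for _ in range(10000):
        R = rates.sum()
        i = rng.choice(len(rates), p=rates / R)   # select event by rate
        counts[i] += 1
        t += -np.log(rng.random()) / R            # physical time increment
    print(t, counts / t)                          # occurrence rates ~ input rates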
Algorithm for detection the QRS complexes based on support vector machine
NASA Astrophysics Data System (ADS)
Van, G. V.; Podmasteryev, K. V.
2017-11-01
The efficiency of computer ECG analysis depends on the accurate detection of QRS-complexes. This paper presents an algorithm for QRS complex detection based of support vector machine (SVM). The proposed algorithm is evaluated on annotated standard databases such as MIT-BIH Arrhythmia database. The QRS detector obtained a sensitivity Se = 98.32% and specificity Sp = 95.46% for MIT-BIH Arrhythmia database. This algorithm can be used as the basis for the software to diagnose electrical activity of the heart.
Schimpl, Michaela; Lederer, Christian; Daumer, Martin
2011-01-01
Walking speed is a fundamental indicator for human well-being. In a clinical setting, walking speed is typically measured by means of walking tests using different protocols. However, walking speed obtained in this way is unlikely to be representative of the conditions in a free-living environment. Recently, mobile accelerometry has opened up the possibility to extract walking speed from long-time observations in free-living individuals, but the validity of these measurements needs to be determined. In this investigation, we have developed algorithms for walking speed prediction based on 3D accelerometry data (actibelt®) and created a framework using a standardized data set with gold standard annotations to facilitate the validation and comparison of these algorithms. For this purpose 17 healthy subjects operated a newly developed mobile gold standard while walking/running on an indoor track. Subsequently, the validity of 12 candidate algorithms for walking speed prediction ranging from well-known simple approaches like combining step length with frequency to more sophisticated algorithms such as linear and non-linear models was assessed using statistical measures. As a result, a novel algorithm employing support vector regression was found to perform best with a concordance correlation coefficient of 0.93 (95%CI 0.92–0.94) and a coverage probability CP1 of 0.46 (95%CI 0.12–0.70) for a deviation of 0.1 m/s (CP2 0.78, CP3 0.94) when compared to the mobile gold standard while walking indoors. A smaller outdoor experiment confirmed those results with even better coverage probability. We conclude that walking speed thus obtained has the potential to help establish walking speed in free-living environments as a patient-oriented outcome measure.
A review of lossless audio compression standards and algorithms
NASA Astrophysics Data System (ADS)
Muin, Fathiah Abdul; Gunawan, Teddy Surya; Kartiwi, Mira; Elsheikh, Elsheikh M. A.
2017-09-01
Over the years, lossless audio compression has gained popularity as researchers and businesses have become more aware of the need for better quality and higher storage demand. This paper analyses various lossless audio coding algorithms and standards that are used and available in the market, focusing specifically on Linear Predictive Coding (LPC) due to its popularity and robustness in audio compression; nevertheless, other prediction methods are compared to verify this. Advanced representations of LPC such as LSP decomposition techniques are also discussed within this paper.
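The LPC core that these codecs share is compact: fit predictor coefficients from the signal autocorrelation and code the much smaller prediction residual. The sketch below solves the normal equations directly rather than via Levinson-Durbin, and the test tone is an assumption.

    import numpy as np

    def lpc(x, p=8):
        # Autocorrelation method: solve the Toeplitz normal equations R a = r.
        r = np.correlate(x, x, mode="full")[len(x)-1:len(x)+p]
        R = np.array([[r[abs(i-j)] for j in range(p)] for i in range(p)])
        return np.linalg.solve(R, r[1:p+1])       # predictor coefficients

    fs = 8000
    t = np.arange(2048) / fs
    x = np.sin(2*np.pi*440*t) + 0.01*np.random.default_rng(8).normal(size=t.size)
    p = 8
    a = lpc(x, p)
    resid = x.copy()
    for i in range(p, len(x)):
        resid[i] = x[i] - a @ x[i-p:i][::-1]      # one-step prediction error
    print(np.var(x), np.var(resid[p:]))           # residual variance is far smaller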
A voting-based star identification algorithm utilizing local and global distribution
NASA Astrophysics Data System (ADS)
Fan, Qiaoyun; Zhong, Xuyang; Sun, Junhua
2018-03-01
A novel star identification algorithm based on a voting scheme is presented in this paper. In the proposed algorithm, the global and local distributions of sensor stars are fully utilized, and a stratified voting scheme is adopted to obtain the candidates for sensor stars. Database optimization is employed to reduce the memory requirement and improve the robustness of the proposed algorithm. The simulation shows that the proposed algorithm exhibits a 99.81% identification rate with a 2-pixel standard deviation of positional noise and 0.322-Mv magnitude noise. Compared with two similar algorithms, the proposed algorithm is more robust towards noise, and its average identification time and required memory are lower. Furthermore, a real sky test shows that the proposed algorithm performs well on real star images.
Ship detection in satellite imagery using rank-order greyscale hit-or-miss transforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey, Neal R; Porter, Reid B; Theiler, James
2010-01-01
Ship detection from satellite imagery is something that has great utility in various communities. Knowing where ships are and their types provides useful intelligence information. However, detecting and recognizing ships is a difficult problem. Existing techniques suffer from too many false alarms. We describe approaches we have taken in trying to build ship detection algorithms that have reduced false alarms. Our approach uses a version of the grayscale morphological hit-or-miss transform. While this is well known and used in its standard form, we use a version in which we use a rank-order selection for the dilation and erosion parts of the transform, instead of the standard maximum and minimum operators. This provides some slack in the fitting that the algorithm employs and provides a method for tuning the algorithm's performance for particular detection problems. We describe our algorithms, show the effect of the rank-order parameter on the algorithm's performance, and illustrate the use of this approach for real ship detection problems with panchromatic satellite imagery.
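A rank-order hit-or-miss can be sketched with SciPy's percentile filter, which generalizes erosion (min, the 0th percentile) and dilation (max, the 100th percentile). The footprints, percentile choices, and toy "ship" image below are assumptions, not the paper's tuned templates.

    import numpy as np
    from scipy import ndimage as ndi

    # Rank-order greyscale hit-or-miss: low/high percentiles in place of
    # min/max tolerate a few misfitting pixels under each structuring element.
    def rank_hit_or_miss(img, fg, bg, lo=10, hi=90):
        fit_fg = ndi.percentile_filter(img, percentile=lo, footprint=fg)
        fit_bg = ndi.percentile_filter(img, percentile=hi, footprint=bg)
        return fit_fg - fit_bg        # large where fg is bright and bg is dark

    img = np.zeros((64, 64)); img[30:34, 20:44] = 1.0       # toy "ship" on dark sea
    fg = np.ones((3, 9), dtype=bool)                        # bright elongated core
    bg = np.zeros((9, 15), dtype=bool); bg[[0, -1], :] = True  # dark surround
    score = rank_hit_or_miss(img, fg, bg)
    print(score.max(), np.unravel_index(score.argmax(), score.shape))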
NASA Astrophysics Data System (ADS)
Prasetyo, H.; Alfatsani, M. A.; Fauza, G.
2018-05-01
The main issue in the vehicle routing problem (VRP) is finding the shortest route for product distribution from the depot to outlets so as to minimize the total cost of distribution. The Capacitated Closed Vehicle Routing Problem with Time Windows (CCVRPTW) is a variant of VRP that accommodates vehicle capacity and the distribution period. Since CCVRPTW is NP-hard, it requires an efficient and effective algorithm to solve. This study aimed to develop a Biased Random Key Genetic Algorithm (BRKGA) combined with local search to solve the CCVRPTW. The algorithm design was then coded in MATLAB. Using numerical tests, optimal algorithm parameters were set, and the algorithm was compared with a heuristic method and standard BRKGA on a case study of soft drink distribution. Results showed that BRKGA combined with local search produced a lower total distribution cost than the heuristic method. Moreover, the developed algorithm successfully improved on the performance of standard BRKGA.
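For readers unfamiliar with the BRKGA representation the study builds on, a minimal sketch follows: chromosomes are vectors of random keys decoded into a visiting order by sorting, and crossover is biased toward the elite parent. Capacity and time-window handling and the paper's local search are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n_outlets, pop_size, elite_bias = 10, 20, 0.7

def decode(keys: np.ndarray) -> np.ndarray:
    return np.argsort(keys)          # visiting order of outlets

def crossover(elite: np.ndarray, other: np.ndarray) -> np.ndarray:
    # each gene comes from the elite parent with probability elite_bias
    take_elite = rng.random(elite.size) < elite_bias
    return np.where(take_elite, elite, other)

population = rng.random((pop_size, n_outlets))   # random-key chromosomes in [0,1)
child = crossover(population[0], population[1])
print(decode(child))
```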
NASA Technical Reports Server (NTRS)
Munoz, Cesar; Butler, Ricky; Narkawicz, Anthony; Maddalon, Jeffrey; Hagen, George
2010-01-01
Distributed approaches for conflict resolution rely on analyzing the behavior of each aircraft to ensure that system-wide safety properties are maintained. This paper presents the criteria method, which increases the quality and efficiency of a safety assurance analysis for distributed air traffic concepts. The criteria standard is shown to provide two key safety properties: safe separation when only one aircraft maneuvers and safe separation when both aircraft maneuver at the same time. This approach is complemented with strong guarantees of correct operation through formal verification. To show that an algorithm is correct, i.e., that it always meets its specified safety property, one must only show that the algorithm satisfies the criteria. Once this is done, then the algorithm inherits the safety properties of the criteria. An important consequence of this approach is that there is no requirement that both aircraft execute the same conflict resolution algorithm. Therefore, the criteria approach allows different avionics manufacturers or even different airlines to use different algorithms, each optimized according to their own proprietary concerns.
A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems
NASA Astrophysics Data System (ADS)
Thammano, Arit; Teekeng, Wannaporn
2015-05-01
The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. This proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) the mutation operation with tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving the combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.
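For contrast with the paper's fuzzy variant, standard roulette-wheel selection looks as follows; the fuzzy version replaces these raw fitness proportions with fuzzy membership values, which the abstract does not specify.

```python
import numpy as np

def roulette_select(fitness: np.ndarray, k: int, rng) -> np.ndarray:
    p = fitness / fitness.sum()            # selection probability per individual
    return rng.choice(fitness.size, size=k, p=p)

rng = np.random.default_rng(4)
fitness = rng.uniform(1.0, 10.0, size=30)  # higher is better (maximization)
parents = roulette_select(fitness, k=2, rng=rng)
print(parents)
```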
The SEASAT altimeter wet tropospheric range correction revisited
NASA Technical Reports Server (NTRS)
Tapley, D. B.; Lundberg, J. B.; Born, G. H.
1984-01-01
An expanded set of radiosonde observations was used to calculate the wet tropospheric range correction for the brightness temperature measurements of the SEASAT scanning multichannel microwave radiometer (SMMR). The accuracy of the conventional algorithm for wet tropospheric range correction was evaluated. On the basis of the expanded observational data set, the algorithm was found to have a bias of about 1.0 cm and a standard deviation of 2.8 cm. In order to improve the algorithm, the exact linear, quadratic and logarithmic relationships between brightness temperatures and range corrections were determined. Various combinations of measurement parameters were used to reduce the standard deviation between SEASAT SMMR and radiosonde observations to about 2.1 cm. The performance of the various range correction formulas is compared in a table.
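The kind of regression being evaluated can be sketched as an ordinary least-squares fit of range correction against brightness-temperature terms; the channels and coefficients below are synthetic, not the SEASAT SMMR values.

```python
import numpy as np

rng = np.random.default_rng(5)
tb = rng.uniform(150.0, 280.0, size=(500, 2))                      # two channels [K]
dr = 0.05 * tb[:, 0] - 0.02 * tb[:, 1] + rng.normal(0, 2.0, 500)   # correction [cm]

# design matrix with intercept, linear, and logarithmic terms
A = np.column_stack([np.ones(500), tb, np.log(280.0 - tb + 1.0)])
coef, *_ = np.linalg.lstsq(A, dr, rcond=None)
resid = dr - A @ coef
print(coef, resid.std())   # standard deviation of the fit residual
```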
Optimal Budget Allocation for Sample Average Approximation
2011-06-01
an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to... regime for the optimization algorithm. ... Sample average approximation (SAA) is a frequently used approach to solving stochastic programs... appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample...
The openGL visualization of the 2D parallel FDTD algorithm
NASA Astrophysics Data System (ADS)
Walendziuk, Wojciech
2005-02-01
This paper presents a visualization of a two-dimensional version of the parallel FDTD algorithm. The visualization module was created on the basis of the OpenGL graphics standard with the use of the GLUT interface. In addition, the work includes results on the efficiency of the parallel algorithm in the form of speedup charts.
Immersed boundary method for Boltzmann model kinetic equations
NASA Astrophysics Data System (ADS)
Pekardan, Cem; Chigullapalli, Sruti; Sun, Lin; Alexeenko, Alina
2012-11-01
Three different immersed boundary method formulations are presented for Boltzmann model kinetic equations such as the Bhatnagar-Gross-Krook (BGK) and ellipsoidal statistical Bhatnagar-Gross-Krook (ESBGK) model equations. A 1D unsteady IBM solution for a moving piston is compared with DSMC results, and 2D quasi-steady microscale gas damping solutions are verified by a conformal finite volume method solver. Transient analysis for a sinusoidally moving beam is also carried out for different pressure conditions (1 atm, 0.1 atm and 0.01 atm) corresponding to Kn = 0.05, 0.5 and 5. The interrelaxation method (Method 2) is shown to provide faster convergence compared to the traditional interpolation scheme used in continuum IBM formulations. Unsteady damping in the rarefied regime is characterized by a significant phase lag which is not captured by quasi-steady approximations.
Spectral fitting, shock layer modeling, and production of nitrogen oxides and excited nitrogen
NASA Technical Reports Server (NTRS)
Blackwell, H. E.
1991-01-01
An analysis was made of N2 emission from an 8.72 MJ/kg shock layer at the 2.54, 1.91, and 1.27 cm positions, and vibrational state distributions, temperatures, and relative electronic state populations were obtained from the data sets. Other recorded arc jet N2 and air spectral data were reviewed and NO emission characteristics were studied. A review of the operational procedures of the DSMC code was made. Information on other appropriate codes and modifications, including ionization, was compiled, and the applicability of the reviewed codes to the task requirements was determined. A review was also made of computational procedures used in the CFD codes of Li and other codes on JSC computers. An analysis was made of problems associated with integrating the specific chemical kinetics applicable to the task into CFD codes.
Crystal Symmetry Algorithms in a High-Throughput Framework for Materials
NASA Astrophysics Data System (ADS)
Taylor, Richard
The high-throughput framework AFLOW that has been developed and used successfully over the last decade is improved to include fully-integrated software for crystallographic symmetry characterization. The standards used in the symmetry algorithms conform with the conventions and prescriptions given in the International Tables for Crystallography (ITC). A standard cell choice with standard origin is selected, and the space group, point group, Bravais lattice, crystal system, lattice system, and representative symmetry operations are determined. Following the conventions of the ITC, the Wyckoff sites are also determined and their labels and site symmetry are provided. The symmetry code makes no assumptions on the input cell orientation, origin, or reduction and has been integrated in the AFLOW high-throughput framework for materials discovery by adding to the existing code base and making use of existing classes and functions. The software is written in object-oriented C++ for flexibility and reuse. A performance analysis and an examination of the algorithms' scaling with cell size and symmetry are also reported.
NASA Technical Reports Server (NTRS)
Chadwick, C.
1984-01-01
This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
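The quantity being approximated is easy to state as a brute-force Monte Carlo reference, which is essentially what the paper's cheap approximations are checked against; the standard deviations below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
sigma = np.array([1.0, 2.0, 0.5])                 # per-axis std devs [m/s]
dv = rng.normal(0.0, sigma, size=(100_000, 3))    # zero-mean Gaussian components
mag = np.linalg.norm(dv, axis=1)                  # magnitude of each Delta-v sample

print(mag.mean(), mag.std())                      # mean and std dev of |Delta v|
print(np.quantile(mag, [0.5, 0.9, 0.99]))         # points of the inverse CDF
```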
Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong
2018-01-01
Aiming at the problem of network congestion caused by the large number of data transmissions in wireless routing nodes of wireless sensor network (WSN), this paper puts forward an algorithm based on standard particle swarm–neural PID congestion control (PNPID). Firstly, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of neurons was used to achieve online adjustment of weights to adjust the proportion, integral and differential parameters of the PID controller. Finally, the standard particle swarm optimization to neural PID (NPID) algorithm of initial values of proportion, integral and differential parameters and neuron learning rates were used for online optimization. This paper describes experiments and simulations which show that the PNPID algorithm effectively stabilized queue length near the expected value. At the same time, network performance, such as throughput and packet loss rate, was greatly improved, which alleviated network congestion and improved network QoS. PMID:29671822
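The control core of PNPID is an incremental PID acting on queue length; a minimal sketch with fixed gains follows, in which the neural weight adaptation and particle-swarm initialization described in the abstract are deliberately left out, and the traffic model is a toy.

```python
# Incremental PID loop on queue length; gains are placeholders (PNPID tunes them).
kp, ki, kd = 0.02, 0.005, 0.01
target_q = 50.0                        # desired queue length [packets]
queue, drop_p = 80.0, 0.0
e_prev, e_prev2 = 0.0, 0.0

for step in range(100):
    e = queue - target_q
    # incremental PID: change in drop probability from the last three errors
    drop_p += kp * (e - e_prev) + ki * e + kd * (e - 2 * e_prev + e_prev2)
    drop_p = min(max(drop_p, 0.0), 1.0)
    e_prev2, e_prev = e_prev, e
    arrivals, served = 10.0 * (1.0 - drop_p), 8.0   # toy traffic model
    queue = max(queue + arrivals - served, 0.0)

print(round(queue, 1), round(drop_p, 3))  # queue settles near the target
```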
Fluorescence intensity positivity classification of Hep-2 cells images using fuzzy logic
NASA Astrophysics Data System (ADS)
Sazali, Dayang Farzana Abang; Janier, Josefina Barnachea; May, Zazilah Bt.
2014-10-01
Indirect immunofluorescence (IIF) is the gold standard for antinuclear autoantibody (ANA) testing, using Hep-2 cells to determine specific diseases. Different classifier algorithms have been proposed in previous works; however, there is still no validated standard for classifying fluorescence intensity. This paper presents the use of fuzzy logic to classify fluorescence intensity and to determine the positivity of Hep-2 cell serum samples. The fuzzy algorithm involves image pre-processing by filtering noise and smoothing the image, converting the red, green and blue (RGB) color space of the images to the lightness and chromaticity layers "a" and "b" (LAB) color space, and extracting the mean values of the lightness layer and chromaticity layer "a", which are classified by a fuzzy logic algorithm based on the standard score ranges of ANA fluorescence intensity. Using 100 data sets of positive and intermediate fluorescence intensity to test the performance, the fuzzy logic classifier obtained accuracies of 85% and 87% for the intermediate and positive classes, respectively.
Schneider, Nadine; Sayle, Roger A; Landrum, Gregory A
2015-10-26
Finding a canonical ordering of the atoms in a molecule is a prerequisite for generating a unique representation of the molecule. The canonicalization of a molecule is usually accomplished by applying some sort of graph relaxation algorithm, the most common of which is the Morgan algorithm. There are known issues with that algorithm that lead to noncanonical atom orderings as well as problems when it is applied to large molecules like proteins. Furthermore, each cheminformatics toolkit or software provides its own version of a canonical ordering, most based on unpublished algorithms, which also complicates the generation of a universal unique identifier for molecules. We present an alternative canonicalization approach that uses a standard stable-sorting algorithm instead of a Morgan-like index. Two new invariants that allow canonical ordering of molecules with dependent chirality as well as those with highly symmetrical cyclic graphs have been developed. The new approach proved to be robust and fast when tested on the 1.45 million compounds of the ChEMBL 20 data set in different scenarios like random renumbering of input atoms or SMILES round tripping. Our new algorithm is able to generate a canonical order of the atoms of protein molecules within a few milliseconds. The novel algorithm is implemented in the open-source cheminformatics toolkit RDKit. With this paper, we provide a reference Python implementation of the algorithm that could easily be integrated in any cheminformatics toolkit. This provides a first step toward a common standard for canonical atom ordering to generate a universal unique identifier for molecules other than InChI.
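The algorithm ships in the open-source RDKit, so a canonical ranking and a canonical SMILES can be obtained through the standard API; a minimal usage example with an arbitrary input molecule:

```python
from rdkit import Chem

mol = Chem.MolFromSmiles("OCC1OC(O)C(O)C(O)C1O")   # glucose, arbitrary atom order
ranks = list(Chem.CanonicalRankAtoms(mol))          # canonical index per input atom
canonical = Chem.MolToSmiles(mol, canonical=True)   # order-independent output string

print(ranks)
print(canonical)
```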
Parameter expansion for estimation of reduced rank covariance matrices (Open Access publication)
Meyer, Karin
2008-01-01
Parameter expanded and standard expectation maximisation algorithms are described for reduced rank estimation of covariance matrices by restricted maximum likelihood, fitting the leading principal components only. Convergence behaviour of these algorithms is examined for several examples and contrasted to that of the average information algorithm, and implications for practical analyses are discussed. It is shown that expectation maximisation type algorithms are readily adapted to reduced rank estimation and converge reliably. However, as is well known for the full rank case, the convergence is linear and thus slow. Hence, these algorithms are most useful in combination with the quadratically convergent average information algorithm, in particular in the initial stages of an iterative solution scheme. PMID:18096112
US standards lab comes under fire
NASA Astrophysics Data System (ADS)
Cartlidge, Edwin
2014-09-01
America's National Institute of Standards and Technology is accused of bowing to the nation's spies in supporting an encryption algorithm that appears to contain a "back door", as Edwin Cartlidge reports.
Kenttä, Tuomas; Porthan, Kimmo; Tikkanen, Jani T; Väänänen, Heikki; Oikarinen, Lasse; Viitasalo, Matti; Karanko, Hannu; Laaksonen, Maarit; Huikuri, Heikki V
2015-07-01
Early repolarization (ER) is defined as an elevation of the QRS-ST junction in at least two inferior or lateral leads of the standard 12-lead electrocardiogram (ECG). Our purpose was to create an algorithm for the automated detection and classification of ER. A total of 6,047 electrocardiograms were manually graded for ER by two experienced readers. The automated detection of ER was based on quantification of the characteristic slurring or notching in ER-positive leads. The ER detection algorithm was tested and its results were compared with manual grading, which served as the reference. Readers graded 183 ECGs (3.0%) as ER positive, of which the algorithm detected 176 recordings, resulting in sensitivity of 96.2%. Of the 5,864 ER-negative recordings, the algorithm classified 5,281 as negative, resulting in 90.1% specificity. Positive and negative predictive values for the algorithm were 23.2% and 99.9%, respectively, and its accuracy was 90.2%. Inferior ER was correctly detected in 84.6% and lateral ER in 98.6% of the cases. As the automatic algorithm has high sensitivity, it could be used as a prescreening tool for ER; only the electrocardiograms graded positive by the algorithm would be reviewed manually. This would reduce the need for manual labor by 90%. © 2014 Wiley Periodicals, Inc.
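The reported figures are standard confusion-matrix statistics; the short computation below reconstructs them from the counts stated in the abstract.

```python
tp, fn = 176, 183 - 176          # algorithm hits among 183 manually graded ER ECGs
tn, fp = 5281, 5864 - 5281       # correct negatives among 5,864 ER-negative ECGs

sensitivity = tp / (tp + fn)                 # 176/183 ~ 0.962
specificity = tn / (tn + fp)                 # 5281/5864 ~ 0.901
ppv = tp / (tp + fp)                         # ~ 0.232
npv = tn / (tn + fn)                         # ~ 0.999
accuracy = (tp + tn) / (tp + tn + fp + fn)   # ~ 0.902

print(sensitivity, specificity, ppv, npv, accuracy)
```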
MacRae, J; Darlow, B; McBain, L; Jones, O; Stubbe, M; Turner, N; Dowell, A
2015-08-21
To develop a natural language processing software inference algorithm to classify the content of primary care consultations using electronic health record Big Data and subsequently test the algorithm's ability to estimate the prevalence and burden of childhood respiratory illness in primary care. Algorithm development and validation study. To classify consultations, the algorithm is designed to interrogate clinical narrative entered as free text, diagnostic (Read) codes created and medications prescribed on the day of the consultation. Thirty-six consenting primary care practices from a mixed urban and semirural region of New Zealand. Three independent sets of 1200 child consultation records were randomly extracted from a data set of all general practitioner consultations in participating practices between 1 January 2008-31 December 2013 for children under 18 years of age (n=754,242). Each consultation record within these sets was independently classified by two expert clinicians as respiratory or non-respiratory, and subclassified according to respiratory diagnostic categories to create three 'gold standard' sets of classified records. These three gold standard record sets were used to train, test and validate the algorithm. Sensitivity, specificity, positive predictive value and F-measure were calculated to illustrate the algorithm's ability to replicate judgements of expert clinicians within the 1200 record gold standard validation set. The algorithm was able to identify respiratory consultations in the 1200 record validation set with a sensitivity of 0.72 (95% CI 0.67 to 0.78) and a specificity of 0.95 (95% CI 0.93 to 0.98). The positive predictive value of algorithm respiratory classification was 0.93 (95% CI 0.89 to 0.97). The positive predictive value of the algorithm classifying consultations as being related to specific respiratory diagnostic categories ranged from 0.68 (95% CI 0.40 to 1.00; other respiratory conditions) to 0.91 (95% CI 0.79 to 1.00; throat infections). A software inference algorithm that uses primary care Big Data can accurately classify the content of clinical consultations. This algorithm will enable accurate estimation of the prevalence of childhood respiratory illness in primary care and resultant service utilisation. The methodology can also be applied to other areas of clinical care. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
A numerically-stable algorithm for calibrating single six-ports for national microwave reflectometry
NASA Astrophysics Data System (ADS)
Hodgetts, T. E.
1990-11-01
A full description and analysis is given of the numerically stable algorithm currently used for calibrating single six-ports or multi-states for national microwave reflectometry, employing as standards four one-port devices having known voltage reflection coefficients.
Falat, Lukas; Marcek, Dusan; Durisova, Maria
2016-01-01
This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with a K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making bad decisions in the decision-making process. PMID:26977450
Current Status of Japanese Global Precipitation Measurement (GPM) Research Project
NASA Astrophysics Data System (ADS)
Kachi, Misako; Oki, Riko; Kubota, Takuji; Masaki, Takeshi; Kida, Satoshi; Iguchi, Toshio; Nakamura, Kenji; Takayabu, Yukari N.
2013-04-01
The Global Precipitation Measurement (GPM) mission is led by the Japan Aerospace Exploration Agency (JAXA) and the National Aeronautics and Space Administration (NASA) in collaboration with many international partners, who will provide a constellation of satellites carrying microwave radiometer instruments. The GPM Core Observatory carries the Dual-frequency Precipitation Radar (DPR), developed by JAXA and the National Institute of Information and Communications Technology (NICT), and the GPM Microwave Imager (GMI), developed by NASA. The GPM Core Observatory is scheduled to be launched in early 2014. JAXA also provides the Global Change Observation Mission (GCOM) 1st - Water (GCOM-W1) satellite, named "SHIZUKU," as one of the constellation satellites. The SHIZUKU satellite was launched on 18 May 2012 from JAXA's Tanegashima Space Center, and public data release of the Advanced Microwave Scanning Radiometer 2 (AMSR2) on board the SHIZUKU satellite was planned for Level 1 products in January 2013 and for Level 2 products, including precipitation, in May 2013. The Japanese GPM research project conducts scientific activities on algorithm development, ground validation, and application research, including production of research products. In addition, we promote collaborative studies in Japan and Asian countries, and public relations activities to extend the potential users of satellite precipitation products. In the pre-launch phase, most of our activities are focused on algorithm development and the ground validation related to it. As the GPM standard products, JAXA develops the DPR Level 1 algorithm, and the NASA-JAXA Joint Algorithm Team develops the DPR Level 2 and the DPR-GMI combined Level 2 algorithms. JAXA also develops the Global Rainfall Map product as a national product, distributing an hourly, 0.1-degree horizontal resolution rainfall map. All standard algorithms, including the Japan-US joint algorithms, will be reviewed by the Japan-US Joint Precipitation Measuring Mission (PMM) Science Team (JPST) before release. The DPR Level 2 algorithm has been developed by the DPR Algorithm Team, led by Japan, which works under the NASA-JAXA Joint Algorithm Team. The Level 2 algorithms will provide KuPR-only products, KaPR-only products, and dual-frequency precipitation products, with estimated precipitation rate, radar reflectivity, and precipitation information such as drop size distribution and bright band height. The at-launch code was developed in December 2012. In addition, JAXA and NASA have provided synthetic DPR Level 1 data, and tests have been performed using them. The Japanese Global Rainfall Map algorithm for the GPM mission has been developed by the Global Rainfall Map Algorithm Development Team in Japan. The algorithm builds on the heritage of the Global Satellite Mapping of Precipitation (GSMaP) project, which was sponsored by the Japan Science and Technology Agency (JST) under the Core Research for Evolutional Science and Technology (CREST) framework between 2002 and 2007. The GSMaP near-real-time version and reanalysis version have been in operation at JAXA, with browse images and binary data available at the GSMaP web site (http://sharaku.eorc.jaxa.jp/GSMaP/). The GSMaP algorithm for GPM is developed in collaboration with the AMSR2 standard algorithm for the precipitation product, and their validation studies are closely related. As the JAXA GPM product, we will provide a 0.1-degree grid, hourly product for standard and near-real-time processing.
Outputs will include hourly rainfall, gauge-calibrated hourly rainfall, and several quality flags (a satellite information flag, a time information flag, and gauge quality information) over the global area from 60°S to 60°N. The at-launch code of GSMaP for GPM is under development and will be delivered to the JAXA GPM Mission Operation System by April 2013. It will include several updates of the microwave imager and sounder algorithms and databases, and the introduction of rain-gauge correction.
NASA Astrophysics Data System (ADS)
Sitko, Rafał
2008-11-01
Knowledge of X-ray tube spectral distribution is necessary in theoretical methods of matrix correction, i.e. in both fundamental parameter (FP) methods and theoretical influence coefficient algorithms. Thus, the influence of X-ray tube distribution on the accuracy of the analysis of thin films and bulk samples is presented. The calculations are performed using experimental X-ray tube spectra taken from the literature and theoretical X-ray tube spectra evaluated by three different algorithms proposed by Pella et al. (X-Ray Spectrom. 14 (1985) 125-135), Ebel (X-Ray Spectrom. 28 (1999) 255-266), and Finkelshtein and Pavlova (X-Ray Spectrom. 28 (1999) 27-32). In this study, Fe-Cr-Ni system is selected as an example and the calculations are performed for X-ray tubes commonly applied in X-ray fluorescence analysis (XRF), i.e., Cr, Mo, Rh and W. The influence of X-ray tube spectra on FP analysis is evaluated when quantification is performed using various types of calibration samples. FP analysis of bulk samples is performed using pure-element bulk standards and multielement bulk standards similar to the analyzed material, whereas for FP analysis of thin films, the bulk and thin pure-element standards are used. For the evaluation of the influence of X-ray tube spectra on XRF analysis performed by theoretical influence coefficient methods, two algorithms for bulk samples are selected, i.e. Claisse-Quintin (Can. Spectrosc. 12 (1967) 129-134) and COLA algorithms (G.R. Lachance, Paper Presented at the International Conference on Industrial Inorganic Elemental Analysis, Metz, France, June 3, 1981) and two algorithms (constant and linear coefficients) for thin films recently proposed by Sitko (X-Ray Spectrom. 37 (2008) 265-272).
Navigating a ship with a broken compass: evaluating standard algorithms to measure patient safety.
Hefner, Jennifer L; Huerta, Timothy R; McAlearney, Ann Scheck; Barash, Barbara; Latimer, Tina; Moffatt-Bruce, Susan D
2017-03-01
Agency for Healthcare Research and Quality (AHRQ) software applies standardized algorithms to hospital administrative data to identify patient safety indicators (PSIs). The objective of this study was to assess the validity of PSI flags and report reasons for invalid flagging. At a 6-hospital academic medical center, a retrospective analysis was conducted of all PSIs flagged in fiscal year 2014. A multidisciplinary PSI Quality Team reviewed each flagged PSI based on quarterly reports. The positive predictive value (PPV, the percent of clinically validated cases) was calculated for 12 PSI categories. The documentation for each reversed case was reviewed to determine the reasons for PSI reversal. Of 657 PSI flags, 185 were reversed. Seven PSI categories had a PPV below 75%. Four broad categories of reasons for reversal were AHRQ algorithm limitations (38%), coding misinterpretations (45%), present upon admission (10%), and documentation insufficiency (7%). AHRQ algorithm limitations included 2 subcategories: an "incident" was inherent to the procedure, or highly likely (eg, vascular tumor bleed), or an "incident" was nonsignificant, easily controlled, and/or no intervention was needed. These findings support previous research highlighting administrative data problems. Additionally, AHRQ algorithm limitations was an emergent category not considered in previous research. Herein we present potential solutions to address these issues. If, despite poor validity, US policy continues to rely on PSIs for incentive and penalty programs, improvements are needed in the quality of administrative data and the standardized PSI algorithms. These solutions require national motivation, research attention, and dissemination support. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
A high throughput architecture for a low complexity soft-output demapping algorithm
NASA Astrophysics Data System (ADS)
Ali, I.; Wasenmüller, U.; Wehn, N.
2015-11-01
Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and therefore they are a part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high-order modulation systems, and therefore low-complexity demapping algorithms are indispensable in low-power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in doubly iterative systems, for instance, MIMO and code-aided synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low-complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high-throughput, flexible, and area-efficient architecture. We describe architectures to execute the investigated algorithms and implement them on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best low-complexity algorithm identified, that delivers a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
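As background for the algorithms being compared, the max-log LLR approximation that low-complexity demappers typically build on can be sketched as follows; the mapping is a generic Gray-mapped 16-QAM, and none of the paper's specific simplifications are reproduced.

```python
import itertools

# Gray-mapped 16-QAM: per axis, bit pairs 00,01,11,10 map to levels -3,-1,+1,+3
levels = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
constellation = {bits: complex(levels[bits[:2]], levels[bits[2:]])
                 for bits in itertools.product((0, 1), repeat=4)}

def maxlog_llr(y: complex, noise_var: float) -> list:
    """Max-log LLR per bit: difference of squared distances to the nearest
    constellation point with that bit = 0 and with that bit = 1."""
    llrs = []
    for b in range(4):
        d0 = min(abs(y - s) ** 2 for bits, s in constellation.items() if bits[b] == 0)
        d1 = min(abs(y - s) ** 2 for bits, s in constellation.items() if bits[b] == 1)
        llrs.append((d1 - d0) / noise_var)  # positive values favour bit 0
    return llrs

print([round(l, 2) for l in maxlog_llr(2.7 - 0.9j, noise_var=0.5)])
```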
Harju, Inka; Lange, Christoph; Kostrzewa, Markus; Maier, Thomas; Rantakokko-Jalava, Kaisu; Haanperä, Marjo
2017-03-01
Reliable distinction of Streptococcus pneumoniae and viridans group streptococci is important because of the different pathogenic properties of these organisms. Differentiation between S. pneumoniae and closely related Streptococcus mitis species group streptococci has always been challenging, even when using such modern methods as 16S rRNA gene sequencing or matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry. In this study, a novel algorithm combined with an enhanced database was evaluated for differentiation between S. pneumoniae and S. mitis species group streptococci. One hundred one clinical S. mitis species group streptococcal strains and 188 clinical S. pneumoniae strains were identified by both the standard MALDI Biotyper database alone and that combined with a novel algorithm. The database update from 4,613 strains to 5,627 strains drastically improved the differentiation of S. pneumoniae and S. mitis species group streptococci: when the new database version containing 5,627 strains was used, only one of the 101 S. mitis species group isolates was misidentified as S. pneumoniae, whereas 66 of them were misidentified as S. pneumoniae when the earlier 4,613-strain MALDI Biotyper database version was used. The updated MALDI Biotyper database combined with the novel algorithm showed even better performance, producing no misidentifications of the S. mitis species group strains as S. pneumoniae. All S. pneumoniae strains were correctly identified as S. pneumoniae with both the standard MALDI Biotyper database and the standard MALDI Biotyper database combined with the novel algorithm. This new algorithm thus enables reliable differentiation between pneumococci and other S. mitis species group streptococci with the MALDI Biotyper. Copyright © 2017 American Society for Microbiology.
JavaGenes and Condor: Cycle-Scavenging Genetic Algorithms
NASA Technical Reports Server (NTRS)
Globus, Al; Langhirt, Eric; Livny, Miron; Ramamurthy, Ravishankar; Soloman, Marvin; Traugott, Steve
2000-01-01
A genetic algorithm code, JavaGenes, was written in Java and used to evolve pharmaceutical drug molecules and digital circuits. JavaGenes was run under the Condor cycle-scavenging batch system managing 100-170 desktop SGI workstations. Genetic algorithms mimic biological evolution by evolving solutions to problems using crossover and mutation. While most genetic algorithms evolve strings or trees, JavaGenes evolves graphs representing (currently) molecules and circuits. Java was chosen as the implementation language because the genetic algorithm requires random splitting and recombining of graphs, a complex data structure manipulation with ample opportunities for memory leaks, loose pointers, out-of-bound indices, and other hard to find bugs. Java's garbage-collection memory management, lack of pointer arithmetic, and array-bounds index checking prevent these bugs from occurring, substantially reducing development time. While a run-time performance penalty must be paid, the only unacceptable performance we encountered was using standard Java serialization to checkpoint and restart the code. This was fixed by a two-day implementation of custom checkpointing. JavaGenes is minimally integrated with Condor; in other words, JavaGenes must do its own checkpointing and I/O redirection. A prototype Java-aware version of Condor was developed using standard Java serialization for checkpointing. For the prototype to be useful, standard Java serialization must be significantly optimized. JavaGenes is approximately 8700 lines of code and a few thousand JavaGenes jobs have been run. Most jobs ran for a few days. Results include proof that genetic algorithms can evolve directed and undirected graphs, development of a novel crossover operator for graphs, a paper in the journal Nanotechnology, and another paper in preparation.
Programming Deep Brain Stimulation for Tremor and Dystonia: The Toronto Western Hospital Algorithms.
Picillo, Marina; Lozano, Andres M; Kou, Nancy; Munhoz, Renato Puppi; Fasano, Alfonso
2016-01-01
Deep brain stimulation (DBS) is an effective treatment for essential tremor (ET) and dystonia. After surgery, a number of extensive programming sessions are performed, relying mainly on the neurologist's personal experience, as no programming guidelines have been provided so far, with the exception of recommendations by groups of experts. Finally, less information is available for the management of DBS in ET and dystonia compared with Parkinson's disease. Our aim is to review the literature on initial and follow-up DBS programming procedures for ET and dystonia and integrate the results with our current practice at Toronto Western Hospital (TWH) to develop standardized DBS programming protocols. We conducted a literature search of PubMed from inception to July 2014 with the keywords "balance", "bradykinesia", "deep brain stimulation", "dysarthria", "dystonia", "gait disturbances", "initial programming", "loss of benefit", "micrographia", "speech", "speech difficulties" and "tremor". Seventy-six papers were considered for this review. Based on the literature review and our experience at TWH, we refined three algorithms for the management of ET, including: (1) initial programming, (2) management of balance and speech issues and (3) loss of stimulation benefit. We also depicted algorithms for the management of dystonia, including: (1) initial programming and (2) management of stimulation-induced hypokinesia (shuffling gait, micrographia and speech impairment). We propose five algorithms tailored to an individualized approach to managing ET and dystonia patients with DBS. We encourage the application of these algorithms in established as well as new DBS centers to test their clinical usefulness in supplementing current standards of care. Copyright © 2016 Elsevier Inc. All rights reserved.
Yeo, Lami; Romero, Roberto; Jodicke, Cristiano; Oggè, Giovanna; Lee, Wesley; Kusanovic, Juan Pedro; Vaisbuch, Edi; Hassan, Sonia S.
2010-01-01
Objective To describe a novel and simple algorithm (FAST Echo: Four chamber view And Swing Technique) to visualize standard diagnostic planes of fetal echocardiography from dataset volumes obtained with spatiotemporal image correlation (STIC) and applying a new display technology (OmniView). Methods We developed an algorithm to image standard fetal echocardiographic planes by drawing four dissecting lines through the longitudinal view of the ductal arch contained in a STIC volume dataset. Three of the lines are locked to provide simultaneous visualization of targeted planes, and the fourth line (unlocked) “swings” through the ductal arch image (“swing technique”), providing an infinite number of cardiac planes in sequence. Each line generated the following plane(s): 1) Line 1: three-vessels and trachea view; 2) Line 2: five-chamber view and long axis view of the aorta (obtained by rotation of the five-chamber view on the y-axis); 3) Line 3: four-chamber view; and 4) “Swing” line: three-vessels and trachea view, five-chamber view and/or long axis view of the aorta, four-chamber view, and stomach. The algorithm was then tested in 50 normal hearts (15.3 – 40 weeks of gestation) and visualization rates for cardiac diagnostic planes were calculated. To determine if the algorithm could identify planes that departed from the normal images, we tested the algorithm in 5 cases with proven congenital heart defects. Results In normal cases, the FAST Echo algorithm (3 locked lines and rotation of the five-chamber view on the y-axis) was able to generate the intended planes (longitudinal view of the ductal arch, pulmonary artery, three-vessels and trachea view, five-chamber view, long axis view of the aorta, four-chamber view): 1) individually in 100% of cases [except for the three-vessel and trachea view, which was seen in 98% (49/50)]; and 2) simultaneously in 98% (49/50). The “swing technique” was able to generate the three-vessels and trachea view, five-chamber view and/or long axis view of the aorta, four-chamber view, and stomach in 100% of normal cases. In the abnormal cases, the FAST Echo algorithm demonstrated the cardiac defects and displayed views that deviated from what was expected from the examination of normal hearts. The “swing technique” was useful in demonstrating the specific diagnosis due to visualization of an infinite number of cardiac planes in sequence. Conclusions This novel and simple algorithm can be used to visualize standard fetal echocardiographic planes in normal fetal hearts. The FAST Echo algorithm may simplify examination of the fetal heart and could reduce operator dependency. Using this algorithm, the inability to obtain expected views or the appearance of abnormal views in the generated planes should raise the index of suspicion for congenital heart disease. PMID:20878671
Yeo, L; Romero, R; Jodicke, C; Oggè, G; Lee, W; Kusanovic, J P; Vaisbuch, E; Hassan, S
2011-04-01
To describe a novel and simple algorithm (four-chamber view and 'swing technique' (FAST) echo) for visualization of standard diagnostic planes of fetal echocardiography from dataset volumes obtained with spatiotemporal image correlation (STIC) and applying a new display technology (OmniView). We developed an algorithm to image standard fetal echocardiographic planes by drawing four dissecting lines through the longitudinal view of the ductal arch contained in a STIC volume dataset. Three of the lines are locked to provide simultaneous visualization of targeted planes, and the fourth line (unlocked) 'swings' through the ductal arch image (swing technique), providing an infinite number of cardiac planes in sequence. Each line generates the following plane(s): (a) Line 1: three-vessels and trachea view; (b) Line 2: five-chamber view and long-axis view of the aorta (obtained by rotation of the five-chamber view on the y-axis); (c) Line 3: four-chamber view; and (d) 'swing line': three-vessels and trachea view, five-chamber view and/or long-axis view of the aorta, four-chamber view and stomach. The algorithm was then tested in 50 normal hearts in fetuses at 15.3-40 weeks' gestation and visualization rates for cardiac diagnostic planes were calculated. To determine whether the algorithm could identify planes that departed from the normal images, we tested the algorithm in five cases with proven congenital heart defects. In normal cases, the FAST echo algorithm (three locked lines and rotation of the five-chamber view on the y-axis) was able to generate the intended planes (longitudinal view of the ductal arch, pulmonary artery, three-vessels and trachea view, five-chamber view, long-axis view of the aorta, four-chamber view) individually in 100% of cases (except for the three-vessels and trachea view, which was seen in 98% (49/50)) and simultaneously in 98% (49/50). The swing technique was able to generate the three-vessels and trachea view, five-chamber view and/or long-axis view of the aorta, four-chamber view and stomach in 100% of normal cases. In the abnormal cases, the FAST echo algorithm demonstrated the cardiac defects and displayed views that deviated from what was expected from the examination of normal hearts. The swing technique was useful for demonstrating the specific diagnosis due to visualization of an infinite number of cardiac planes in sequence. This novel and simple algorithm can be used to visualize standard fetal echocardiographic planes in normal fetal hearts. The FAST echo algorithm may simplify examination of the fetal heart and could reduce operator dependency. Using this algorithm, inability to obtain expected views or the appearance of abnormal views in the generated planes should raise the index of suspicion for congenital heart disease. Copyright © 2011 ISUOG. Published by John Wiley & Sons, Ltd.
Finding minimum spanning trees more efficiently for tile-based phase unwrapping
NASA Astrophysics Data System (ADS)
Sawaf, Firas; Tatam, Ralph P.
2006-06-01
The tile-based phase unwrapping method employs an algorithm for finding the minimum spanning tree (MST) in each tile. We first examine the properties of a tile's representation from a graph theory viewpoint, observing that it is possible to make use of a more efficient class of MST algorithms. We then describe a novel linear time algorithm which reduces the size of the MST problem by half at the least, and solves it completely at best. We also show how this algorithm can be applied to a tile using a sliding window technique. Finally, we show how the reduction algorithm can be combined with any other standard MST algorithm to achieve a more efficient hybrid, using Prim's algorithm for empirical comparison and noting that the reduction algorithm takes only 0.1% of the time taken by the overall hybrid.
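Prim's algorithm, used in the paper for empirical comparison, is sketched below with a binary heap over an adjacency-dict graph; the tile-specific reduction step is not reproduced.

```python
import heapq

def prim_mst(graph: dict, start) -> list:
    """Grow a minimum spanning tree from `start`, always taking the cheapest
    edge leaving the visited set."""
    visited = {start}
    heap = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(heap)
    tree = []
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        tree.append((u, v, w))
        for nxt, w2 in graph[v].items():
            if nxt not in visited:
                heapq.heappush(heap, (w2, v, nxt))
    return tree

g = {"a": {"b": 1, "c": 4}, "b": {"a": 1, "c": 2}, "c": {"a": 4, "b": 2}}
print(prim_mst(g, "a"))   # [('a', 'b', 1), ('b', 'c', 2)]
```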
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demmel, James W.
This project addresses both communication-avoiding algorithms, and reproducible floating-point computation. Communication, i.e. moving data, either between levels of memory or processors over a network, is much more expensive per operation than arithmetic (measured in time or energy), so we seek algorithms that greatly reduce communication. We developed many new algorithms for both dense and sparse, and both direct and iterative linear algebra, attaining new communication lower bounds, and getting large speedups in many cases. We also extended this work in several ways: (1) We minimize writes separately from reads, since writes may be much more expensive than reads on emerging memory technologies, like Flash, sometimes doing asymptotically fewer writes than reads. (2) We extend the lower bounds and optimal algorithms to arbitrary algorithms that may be expressed as perfectly nested loops accessing arrays, where the array subscripts may be arbitrary affine functions of the loop indices (eg A(i), B(i,j+k, k+3*m-7, …) etc.). (3) We extend our communication-avoiding approach to some machine learning algorithms, such as support vector machines. This work has won a number of awards. We also address reproducible floating-point computation. We define reproducibility to mean getting bitwise identical results from multiple runs of the same program, perhaps with different hardware resources or other changes that should ideally not change the answer. Many users depend on reproducibility for debugging or correctness. However, dynamic scheduling of parallel computing resources, combined with nonassociativity of floating point addition, makes attaining reproducibility a challenge even for simple operations like summing a vector of numbers, or more complicated operations like the Basic Linear Algebra Subprograms (BLAS). We describe an algorithm that computes a reproducible sum of floating point numbers, independent of the order of summation. The algorithm depends only on a subset of the IEEE Floating Point Standard 754-2008, uses just 6 words to represent a “reproducible accumulator,” and requires just one read-only pass over the data, or one reduction in parallel. New instructions based on this work are being considered for inclusion in the future IEEE 754-2018 floating-point standard, and new reproducible BLAS are being considered for the next version of the BLAS standard.
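The reproducibility problem is easy to demonstrate: naive floating-point summation depends on the order of the addends, while an exactly rounded sum does not. Python's math.fsum is used below purely as a stand-in for that invariance; it is not the paper's one-pass, 6-word accumulator.

```python
import math
import random

vals = [random.uniform(-1e16, 1e16) for _ in range(10_000)] + [1.0] * 10_000
shuffled = vals[:]
random.shuffle(shuffled)

print(sum(vals) == sum(shuffled))              # often False: order matters
print(math.fsum(vals) == math.fsum(shuffled))  # True: exactly rounded sum
```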
UAV Mission Planning under Uncertainty
2006-06-01
algorithm, adapted from [13]. ... Robust Optimization considers only a subset of the feasible region. ... Overview of simulation with parameter... incorporates the robust optimization method suggested by Bertsimas and Sim [12], and is solved with a standard Branch-and-Cut algorithm. The chapter... algorithms, and the heuristic methods of Local Search and Simulated Annealing. With each method, we attempt to give a review of research that has...
Computer-Based Algorithmic Determination of Muscle Movement Onset Using M-Mode Ultrasonography
2017-05-01
contraction images were analyzed visually and with three different classes of algorithms: pixel standard deviation (SD), high-pass filter, and Teager-Kaiser... Linear relationships and agreements between computed and visual muscle onset were calculated. The top algorithms were high-pass filtered with a 30 Hz... suggest that computer-automated determination using high-pass filtering is a potential objective alternative to visual determination in human...
Secret Key Crypto Implementations
NASA Astrophysics Data System (ADS)
Bertoni, Guido Marco; Melzani, Filippo
This chapter presents the algorithm selected in 2001 as the Advanced Encryption Standard. This algorithm is the basis for implementing security and privacy based on symmetric key solutions in almost all new applications. Secret key algorithms are used in combination with modes of operation to provide different security properties. The most used modes of operation are presented in this chapter. Finally, an overview of the different techniques of software and hardware implementation is given.
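A minimal usage sketch with the Python cryptography package, showing AES under one common mode of operation (CTR); key and nonce handling is illustrative only, not a recommendation.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)        # AES-256 key
nonce = os.urandom(16)      # CTR nonce / initial counter block

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(b"attack at dawn") + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
print(decryptor.update(ciphertext) + decryptor.finalize())  # round-trips
```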
Reid, Aylin Y; St Germaine-Smith, Christine; Liu, Mingfu; Sadiq, Shahnaz; Quan, Hude; Wiebe, Samuel; Faris, Peter; Dean, Stafford; Jetté, Nathalie
2012-12-01
The objective of this study was to develop and validate coding algorithms for epilepsy using ICD-coded inpatient claims, physician claims, and emergency room (ER) visits. 720/2049 charts from 2003 and 1533/3252 charts from 2006 were randomly selected for review from 13 neurologists' practices as the "gold standard" for diagnosis. Epilepsy status in each chart was determined by 2 trained physicians. The optimal algorithm to identify epilepsy cases was developed by linking the reviewed charts with three administrative databases (ICD-9 and ICD-10 data from 2000 to 2008) including hospital discharges, ER visits and physician claims in a Canadian health region. Accepting the chart review data as the gold standard, we calculated the sensitivity, specificity, and positive and negative predictive values for each ICD-9 and ICD-10 administrative data algorithm (case definition). Of 18 algorithms assessed, the most accurate algorithm to identify epilepsy cases was "2 physician claims or 1 hospitalization in 2 years" (coded ICD-9 345 or ICD-10 G40/G41) and the most sensitive algorithm was "1 physician claim or 1 hospitalization or 1 ER visit in 2 years." Accurate and sensitive case definitions are available for research requiring the identification of epilepsy cases in administrative health data. Copyright © 2012 Elsevier B.V. All rights reserved.
An Effective Hybrid Evolutionary Algorithm for Solving the Numerical Optimization Problems
NASA Astrophysics Data System (ADS)
Qian, Xiaohong; Wang, Xumei; Su, Yonghong; He, Liu
2018-04-01
There are many different algorithms for solving complex optimization problems; each has been applied successfully to some problems but is inefficient on others. In this paper the Cauchy mutation and a multi-parent crossover operator are combined to propose a communication-based hybrid evolutionary algorithm (Mixed Evolutionary Algorithm based on Communication), hereinafter referred to as CMEA. The basic idea of the CMEA algorithm is that the initial population is divided into two subpopulations, which are evolved in parallel by the Cauchy mutation operator and the multi-parent crossover operator, respectively, until the termination conditions are met; when the subpopulations are reorganized, individuals and information are exchanged between them. The algorithm flow is given and the performance of the algorithm is compared on a number of standard test functions. Simulation results show that this algorithm converges significantly faster than the fast evolutionary programming (FEP) algorithm, has good global convergence and stability, and is superior to the other compared algorithms.
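Cauchy mutation, used in one CMEA subpopulation, differs from Gaussian mutation only in its heavy-tailed noise, which makes large escape jumps more likely; a minimal sketch with an arbitrary scale:

```python
import numpy as np

rng = np.random.default_rng(7)

def cauchy_mutate(x: np.ndarray, scale: float = 0.1) -> np.ndarray:
    # heavy-tailed perturbation: occasional large jumps out of local traps
    return x + scale * rng.standard_cauchy(x.size)

x = rng.uniform(-5, 5, size=10)     # one individual on a 10-D test function
print(cauchy_mutate(x))
```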
STAR Algorithm Integration Team - Facilitating operational algorithm development
NASA Astrophysics Data System (ADS)
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
Optimal Fungal Space Searching Algorithms.
Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V
2016-10-01
Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, or the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First-Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase appreciably with the size of the maze. These findings suggest that a systematic effort of harvesting the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.
An Enhanced Differential Evolution Algorithm Based on Multiple Mutation Strategies.
Xiang, Wan-li; Meng, Xue-lei; An, Mei-qing; Li, Yin-zhen; Gao, Ming-xia
2015-01-01
The differential evolution (DE) algorithm is a simple yet efficient metaheuristic for global optimization over continuous spaces. However, standard DE suffers from premature convergence, especially DE/best/1/bin. In order to take advantage of the direction guidance information of the best individual in DE/best/1/bin while avoiding local traps, an enhanced differential evolution algorithm based on multiple mutation strategies, named EDE, is proposed in this paper. The EDE algorithm integrates an initialization technique, opposition-based learning initialization, for improving the initial solution quality; a new combined mutation strategy composed of DE/current/1/bin together with DE/pbest/1/bin for accelerating standard DE and preventing DE from clustering around the global best individual; and a perturbation scheme for further avoiding premature convergence. In addition, we also introduce two linear time-varying functions, which are used to decide which solution search equation is chosen at the mutation and perturbation phases, respectively. Experimental results on twenty-five benchmark functions show that EDE is far better than standard DE. In further comparisons with five other state-of-the-art approaches, EDE is still superior to or at least equal to these methods on most of the benchmark functions.
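For reference, the DE/best/1/bin step that EDE modifies combines mutation around the best individual with binomial crossover; the sketch below is the textbook operator, not EDE's combined strategy.

```python
import numpy as np

rng = np.random.default_rng(8)
F, CR, pop_size, dim = 0.5, 0.9, 20, 5
pop = rng.uniform(-5, 5, size=(pop_size, dim))
fitness = (pop ** 2).sum(axis=1)          # sphere function, to be minimized
best = pop[fitness.argmin()]

i = 0                                      # target individual
r1, r2 = rng.choice(pop_size, size=2, replace=False)
mutant = best + F * (pop[r1] - pop[r2])    # DE/best/1 mutation

cross = rng.random(dim) < CR
cross[rng.integers(dim)] = True            # guarantee at least one gene crosses
trial = np.where(cross, mutant, pop[i])    # binomial crossover
print(trial)
```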
A maximum power point tracking algorithm for buoy-rope-drum wave energy converters
NASA Astrophysics Data System (ADS)
Wang, J. Q.; Zhang, X. C.; Zhou, Y.; Cui, Z. C.; Zhu, L. S.
2016-08-01
Maximum power point tracking control is the key to improving the energy conversion efficiency of wave energy converters (WEC). This paper presents a novel variable step size Perturb and Observe (P&O) maximum power point tracking algorithm with a power classification standard for control of a buoy-rope-drum WEC. The algorithm and the simulation model of the buoy-rope-drum WEC are presented in detail, together with simulation results. The results show that the algorithm tracks the maximum power point of the WEC quickly and accurately.
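An illustrative variable-step P&O tracker on a toy power curve. The quadratic power model and the step-size rule below are assumptions for demonstration only; the paper's WEC model and its power classification standard are not reproduced here.

def power(u):
    """Toy power curve with a single maximum at u = 3.0 (illustrative)."""
    return max(0.0, 10.0 - (u - 3.0) ** 2)

def po_mppt(u0=0.5, step=0.4, n_iter=40):
    u, p_prev, direction = u0, power(u0), +1
    for _ in range(n_iter):
        u_new = u + direction * step
        p_new = power(u_new)
        if p_new < p_prev:                      # overshot the peak: reverse
            direction = -direction
        # Variable step: shrink the perturbation as the power change shrinks,
        # so the tracker moves fast far from the peak and settles near it.
        step = max(0.01, 0.4 * min(1.0, abs(p_new - p_prev)))
        u, p_prev = u_new, p_new
    return u, p_prev

u_star, p_star = po_mppt()
print(f"tracked operating point u={u_star:.2f}, power={p_star:.2f}")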
NASA Technical Reports Server (NTRS)
Suarez, Max J. (Editor); Chang, Alfred T. C.; Chiu, Long S.
1997-01-01
Seventeen months of rainfall data (August 1987-December 1988) from nine satellite rainfall algorithms (Adler, Chang, Kummerow, Prabhakara, Huffman, Spencer, Susskind, and Wu) were analyzed to examine the uncertainty of satellite-derived rainfall estimates. The variability among algorithms, measured as the standard deviation computed across the ensemble of algorithms, shows that regions of high algorithm variability tend to coincide with regions of high rain rates. Histograms of the pattern correlation (PC) between algorithms suggest a bimodal distribution, with a separation at a PC value of about 0.85. Applying this threshold as a criterion for similarity, our analyses show that algorithms using the same sensor or satellite input tend to be similar, suggesting the dominance of sampling errors in these satellite estimates.
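A sketch of the two diagnostics described above: the inter-algorithm standard deviation at each grid point, and pairwise pattern correlations checked against the 0.85 similarity threshold. The rain fields here are synthetic stand-ins, not the satellite retrievals.

import numpy as np

rng = np.random.default_rng(1)
n_alg, ny, nx = 9, 20, 40
truth = rng.gamma(2.0, 2.0, (ny, nx))                   # synthetic rain field
fields = np.stack([truth * rng.uniform(0.7, 1.3)        # per-algorithm bias
                   + rng.normal(0, 0.5, (ny, nx))       # per-algorithm noise
                   for _ in range(n_alg)])

# Variability among algorithms: std across the ensemble at each grid point.
ensemble_std = fields.std(axis=0)

# Pattern correlation between every pair of algorithms.
flat = fields.reshape(n_alg, -1)
pcs = [np.corrcoef(flat[i], flat[j])[0, 1]
       for i in range(n_alg) for j in range(i + 1, n_alg)]
similar = sum(pc > 0.85 for pc in pcs)
print(f"mean ensemble std: {ensemble_std.mean():.2f}")
print(f"{similar}/{len(pcs)} algorithm pairs exceed PC = 0.85")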
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crow, A J
2009-07-07
Andrew Crow arrived at Lawrence Livermore National Laboratory with the intention of continuing work on the Complex Particle Kinetic (CPK) method developed by D. Larson and D. Hewett, having previously worked on duplicating D. Hewett's results. Since arriving, A. Crow has been working with D. Larson on a slightly different project. The current method, still under development, is a Particle-in-Cell (PIC) code with the following features: (1) All particles begin each timestep at a gridpoint. (2) Particles are then advanced in time using a standard advancement method; the exact method has not been decided upon, but there are many reliable methods from which to choose. (3) All particles within each cell undergo a simultaneous implicit collision step. This is the current area of focus: A. Crow is not aware of any existing method for performing implicit collisions over a large number of charged particles, although implicit methods for charged particle movement and for electron-electron collisions have been developed. The work of L. Pareschi and G. Russo on the Time Relaxed Direct Simulation Monte Carlo method also appears to be a good basis for implicit particle collisions. (4) Each individual particle will be divided into a set of particles with a Gaussian velocity distribution, in order to capture some of the thermal effects created by the collisions; this algorithm has not yet been created. (5) Particles will be projected onto the grid points. A linear weighting technique is currently intended, but has not been settled upon. (6) Once on the gridpoints, the particle number will be reduced using a set of quadrature points based on the third-order velocity moments of the particles. The method proposed by R. Fox has been programmed and shown to conserve energy, momentum, and mass to machine precision. In addition to reducing the number of particles, this method will quiet the simulation: it behaves as a higher-order version of the Quiet DSMC method proposed by B. Albright et al. (7) These quadrature points then become the new particles for the next timestep. The advantages of this method are many: the self-force on ions can be easily removed, since all particles begin on grid points; the size of the timesteps should not be limited by the collision rate, only by particle travel time through the cell; and the particle reduction technique should retain many of the higher-order features of the particle distribution while reducing the number of particles in the system and quieting the variance. The two largest unknowns at this time are how large a part numerical diffusion will play in the scheme and how computationally expensive each timestep will be.
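The particle-reduction step (6) is the heart of the scheme. As a simpler stand-in for the quadrature method, the sketch below merges all particles in a cell into two particles that exactly preserve mass, momentum, and kinetic energy in one dimension; the actual method additionally matches third-order velocity moments, which this sketch does not attempt.

import numpy as np

rng = np.random.default_rng(2)
m = rng.uniform(0.5, 1.5, 100)          # particle masses in one cell
v = rng.normal(2.0, 1.0, 100)           # particle velocities (1-D)

M = m.sum()                              # total mass
V = np.dot(m, v) / M                     # bulk velocity
E_th = 0.5 * np.dot(m, (v - V) ** 2)     # thermal (peculiar) energy
c = np.sqrt(2.0 * E_th / M)              # thermal speed of the merged pair

merged_m = np.array([M / 2, M / 2])      # two equal-mass particles at V +/- c
merged_v = np.array([V + c, V - c])

# Verify conservation to machine precision.
assert np.isclose(merged_m.sum(), M)
assert np.isclose(np.dot(merged_m, merged_v), np.dot(m, v))
assert np.isclose(0.5 * np.dot(merged_m, merged_v ** 2),
                  0.5 * np.dot(m, v ** 2))
print("mass, momentum, and energy conserved by the 2-particle merge")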
A Trajectory Algorithm to Support En Route and Terminal Area Self-Spacing Concepts: Third Revision
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
2012-01-01
This document describes an algorithm for the generation of a four-dimensional trajectory. Input data for this algorithm are similar to an augmented Standard Terminal Arrival (STAR), with the augmentation in the form of altitude or speed crossing restrictions at waypoints on the route. This version of the algorithm accommodates constant-radius turns and cruise-altitude waypoints with calibrated airspeed (rather than Mach) constraints. The algorithm calculates the altitude, speed, along-path distance, and along-path time for each waypoint. Wind data at each waypoint are also used for the calculation of ground speed and turn radius.
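A sketch of two of the per-waypoint quantities the last sentence mentions: ground speed derived from true airspeed and wind, and the radius of a constant-bank turn. The formulas are standard kinematics; the airspeed, wind, and bank values are illustrative, not the document's.

import math

def ground_speed(tas, course_deg, wind_speed, wind_from_deg):
    """Ground speed along a course (same units as tas), given wind."""
    course = math.radians(course_deg)
    wind_to = math.radians(wind_from_deg + 180.0)   # direction wind blows TO
    # Tailwind component along the course, plus the airspeed component left
    # over after crabbing to cancel the crosswind.
    tail = wind_speed * math.cos(wind_to - course)
    cross = wind_speed * math.sin(wind_to - course)
    return tail + math.sqrt(max(tas * tas - cross * cross, 0.0))

def turn_radius(gs_mps, bank_deg=25.0, g=9.81):
    """Constant-radius turn: r = V^2 / (g * tan(bank))."""
    return gs_mps ** 2 / (g * math.tan(math.radians(bank_deg)))

gs = ground_speed(tas=120.0, course_deg=90.0, wind_speed=15.0, wind_from_deg=180.0)
print(f"ground speed: {gs:.1f} m/s, turn radius: {turn_radius(gs):.0f} m")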
NASA Technical Reports Server (NTRS)
Thompson, C. P.; Leaf, G. K.; Vanrosendale, J.
1991-01-01
An algorithm is described for the solution of the laminar, incompressible Navier-Stokes equations. The basic algorithm is a multigrid method built on a robust, box-based smoothing step. Its most important feature is the incorporation of automatic, dynamic mesh refinement. The algorithm supports generalized simple domains, and the program is based on a standard staggered-grid formulation of the Navier-Stokes equations for robustness and efficiency. Special grid transfer operators were introduced at grid interfaces in the multigrid algorithm to ensure discrete mass conservation. Results are presented for three models: a driven cavity, a backward-facing step, and a sudden expansion/contraction.
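A bare-skeleton illustration of the multigrid idea: a recursive V-cycle for the 1-D Poisson problem -u'' = f with Gauss-Seidel smoothing, injection restriction, and linear prolongation. The solver described above uses box smoothing on a staggered 2-D grid with mass-conserving transfer operators; none of that machinery is reproduced here.

import numpy as np

def smooth(u, f, h, sweeps=3):
    """Gauss-Seidel sweeps on -u'' = f (Dirichlet ends held at 0)."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def v_cycle(u, f, h):
    u = smooth(u, f, h)                   # pre-smooth
    if len(u) <= 3:                       # coarsest grid: smoothing suffices
        return smooth(u, f, h, sweeps=20)
    r = np.zeros_like(u)                  # residual r = f + u''
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
    rc = r[::2].copy()                    # restrict residual (injection)
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)   # coarse-grid error solve
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)
    return smooth(u + e, f, h)            # prolong, correct, post-smooth

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)        # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())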
NASA Astrophysics Data System (ADS)
Wang, Geng; Zhou, Kexin; Zhang, Yeming
2018-04-01
The widely used Bouc-Wen hysteresis model can accurately simulate the voltage-displacement curves of piezoelectric actuators. To identify the unknown parameters of the Bouc-Wen model, an improved artificial bee colony (IABC) algorithm is proposed in this paper. The method introduces a guiding strategy for searching around the current optimal position of the food source, which helps balance the local search ability against the global exploration capability, and modifies the formula by which the scout bees search for food sources in order to increase the convergence speed. Experiments were conducted to verify the effectiveness of the IABC algorithm. The results show that the identified hysteresis model agrees well with the actual actuator response. Moreover, the identification results were compared with the standard particle swarm optimization (PSO) method, showing that the IABC algorithm converges faster than standard PSO.
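A bare-bones artificial bee colony minimizer with a best-guided update term, to show the flavor of a guiding strategy. The update formula, constants, and toy objective below are generic ABC choices and assumptions, not the IABC equations or the Bouc-Wen identification problem from the paper.

import numpy as np

rng = np.random.default_rng(3)
DIM, SN, LIMIT, ITERS = 4, 15, 30, 300
LO, HI = -5.0, 5.0
f = lambda x: float(np.sum((x - 1.0) ** 2))   # toy objective (minimum at 1)

foods = rng.uniform(LO, HI, (SN, DIM))
fit = np.array([f(x) for x in foods])
trials = np.zeros(SN, dtype=int)

for _ in range(ITERS):
    best = foods[np.argmin(fit)]
    for i in range(SN):                        # employed/onlooker step
        k = rng.integers(SN - 1)
        k += k >= i                            # random partner k != i
        j = rng.integers(DIM)
        cand = foods[i].copy()
        phi, psi = rng.uniform(-1, 1), rng.uniform(0, 1.5)
        # Guided update: random difference term plus a pull toward the best.
        cand[j] += phi * (foods[i][j] - foods[k][j]) + psi * (best[j] - foods[i][j])
        cand = np.clip(cand, LO, HI)
        if f(cand) < fit[i]:                   # greedy replacement
            foods[i], fit[i], trials[i] = cand, f(cand), 0
        else:
            trials[i] += 1
    # Scout phase: abandon exhausted sources and re-seed them randomly.
    for i in np.where(trials > LIMIT)[0]:
        foods[i] = rng.uniform(LO, HI, DIM)
        fit[i], trials[i] = f(foods[i]), 0

print("best objective:", fit.min())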
NASA Technical Reports Server (NTRS)
Folta, David; Bauer, Frank H. (Technical Monitor)
2001-01-01
The autonomous formation flying control algorithm developed by the Goddard Space Flight Center (GSFC) for the New Millennium Program (NMP) Earth Observing-1 (EO-1) mission is investigated for applicability to libration point orbit formations. In the EO-1 formation-flying algorithm, control is accomplished via linearization about a reference transfer orbit with a state transition matrix (STM) computed from state inputs. The effect of libration point orbit dynamics on this algorithm architecture is explored by computing STMs three ways: using the flight-proven code, using a monodromy matrix developed from an N-body model of a libration orbit, and using a standard STM developed from the gravitational and Coriolis effects as measured at the libration point. Formation flying Delta-Vs calculated from these methods are compared to a standard linear quadratic regulator (LQR) method. The universal 3-D approach is optimal in the sense that it can be accommodated as either an open-loop or a closed-loop control using only state information.
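A generic sketch of computing an STM numerically from state inputs: finite differences of integrated trajectories, column by column. The dynamics here are simple planar two-body motion, not the libration-point or N-body models above, and the integrator and step sizes are illustrative.

import numpy as np

MU = 1.0  # gravitational parameter (nondimensional)

def deriv(s):
    """Planar two-body dynamics: s = [x, y, vx, vy]."""
    r = np.hypot(s[0], s[1])
    return np.array([s[2], s[3], -MU * s[0] / r**3, -MU * s[1] / r**3])

def propagate(s0, t, dt=1e-3):
    """Fixed-step RK4 integration of the state for time t."""
    s = s0.astype(float).copy()
    for _ in range(int(round(t / dt))):
        k1 = deriv(s)
        k2 = deriv(s + 0.5 * dt * k1)
        k3 = deriv(s + 0.5 * dt * k2)
        k4 = deriv(s + dt * k3)
        s += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return s

def stm(s0, t, eps=1e-6):
    """STM Phi(t, 0), one column per state via central differences."""
    n = len(s0)
    phi = np.zeros((n, n))
    for j in range(n):
        dp = np.zeros(n); dp[j] = eps
        phi[:, j] = (propagate(s0 + dp, t) - propagate(s0 - dp, t)) / (2 * eps)
    return phi

s0 = np.array([1.0, 0.0, 0.0, 1.0])      # circular reference orbit
phi = stm(s0, t=1.0)
# A small initial displacement mapped through the STM should match the
# nonlinear propagation of the displaced state to first order.
d0 = 1e-5 * np.array([1.0, -1.0, 0.5, 0.0])
lin = phi @ d0
nonlin = propagate(s0 + d0, 1.0) - propagate(s0, 1.0)
print("linear vs nonlinear displacement error:", np.abs(lin - nonlin).max())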
Regularization Paths for Conditional Logistic Regression: The clogitL1 Package.
Reid, Stephen; Tibshirani, Rob
2014-07-01
We apply the cyclic coordinate descent algorithm of Friedman, Hastie, and Tibshirani (2010) to the fitting of a conditional logistic regression model with lasso (ℓ1) and elastic net penalties. The sequential strong rules of Tibshirani, Bien, Hastie, Friedman, Taylor, Simon, and Tibshirani (2012) are also used in the algorithm, and it is shown that these offer a considerable speed-up over the standard coordinate descent algorithm with warm starts. Once implemented, the algorithm is used in simulation studies to compare the variable selection and prediction performance of the conditional logistic regression model against that of its unconditional (standard) counterpart. We find that the conditional model performs admirably on datasets drawn from a suitable conditional distribution, outperforming its unconditional counterpart at variable selection. The conditional model is also fit to a small real-world dataset, demonstrating how we obtain regularization paths for the parameters of the model and how we apply cross validation for this method, where natural unconditional prediction rules are hard to come by.
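The coordinate-descent core is a cyclic sweep with a soft-threshold update on one coefficient at a time. The sketch below shows it for the simplest case, a lasso-penalized linear regression; clogitL1 applies the same mechanics to the conditional logistic likelihood, which is not reproduced here.

import numpy as np

def soft_threshold(z, g):
    return np.sign(z) * max(abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_sweeps=100):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta                     # residual, kept up to date
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * beta[j]       # remove coefficient j's contribution
            z = X[:, j] @ r / n
            beta[j] = soft_threshold(z, lam) / (col_ss[j] / n)
            r -= X[:, j] * beta[j]       # restore with the updated value
    return beta

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 10))
beta_true = np.array([3.0, -2.0] + [0.0] * 8)
y = X @ beta_true + rng.normal(scale=0.5, size=200)
for lam in (0.01, 0.1, 1.0):             # a short regularization path
    b = lasso_cd(X, y, lam)
    print(f"lambda={lam}: {np.sum(np.abs(b) > 1e-8)} nonzero coefficients")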
NASA Astrophysics Data System (ADS)
WANG, Qingrong; ZHU, Changfeng
2017-06-01
Integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed: a local ontology is produced by building the variable precision concept lattice for each subsystem, and a distributed generation algorithm for variable precision concept lattices over an ontology-based heterogeneous database is proposed, drawing on the special relationship between concept lattices and ontology construction. Finally, taking the main concept lattice generated from the existing heterogeneous database as a standard, a case study is carried out to test the feasibility and validity of the algorithm, and the differences between the main concept lattice and the standard concept lattice are compared. The results show that the proposed algorithm can automatically construct a distributed concept lattice over heterogeneous data sources.
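For readers unfamiliar with the underlying structure, the sketch below enumerates the formal concepts (closed extent/intent pairs) of a tiny binary context by brute force. It shows only the classical construction; the variable precision and distributed aspects of the paper's algorithm are not attempted, and the toy context is invented.

from itertools import combinations

objects = ["o1", "o2", "o3", "o4"]
attributes = ["a", "b", "c"]
incidence = {("o1", "a"), ("o1", "b"),
             ("o2", "b"), ("o2", "c"),
             ("o3", "a"), ("o3", "b"), ("o3", "c"),
             ("o4", "c")}

def intent(objs):
    """Attributes shared by all objects in objs."""
    return frozenset(a for a in attributes
                     if all((o, a) in incidence for o in objs))

def extent(attrs):
    """Objects having all attributes in attrs."""
    return frozenset(o for o in objects
                     if all((o, a) in incidence for a in attrs))

concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        att = intent(frozenset(objs))
        concepts.add((extent(att), att))   # closure yields a formal concept

for ext, att in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(set(ext) or "{}", "<->", set(att) or "{}")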
Use of chronic disease management algorithms in Australian community pharmacies.
Morrissey, Hana; Ball, Patrick; Jackson, David; Pilloto, Louis; Nielsen, Sharon
2015-01-01
In Australia, standardized chronic disease management algorithms are available for medical practitioners, nurse practitioners, and nurses through a range of sources, including prescribing software, manuals, and government and not-for-profit non-government organizations. There is currently no standardized algorithm for pharmacist intervention in the management of chronic diseases. The aim of this study was to investigate whether a collaborative community pharmacist and doctor model of care in chronic disease management could improve patient outcomes through ongoing monitoring of disease biochemical markers, robust self-management skills, and better medication adherence. This project was a pilot pragmatic study, measuring the effect of the intervention by comparing baseline and end-of-study patient health outcomes, to support future definitive studies. Algorithms for selected chronic conditions were designed based on the World Health Organisation STEPS™ process and the Central Australia Rural Practitioners' Association Standard Treatment Manual. They were evaluated in community pharmacies in 8 inland Australian small towns, most having only one pharmacy in order to avoid competition issues. The algorithms were reviewed by the Quality Use of Medicines committee of Murrumbidgee Medicare Local Ltd, New South Wales, Australia. They constitute a pharmacist-driven, doctor/pharmacist collaborative primary care model. Pharmacy owners volunteered to take part in the study, and patients were purposefully recruited by in-store invitation. Pharmacists at six of the 9 sites (67%) were fully capable of delivering the algorithm (each of these sites had 3 pharmacists); one site (11%), with 2 pharmacists, found it too difficult and withdrew from the study; and 2 sites (22%, with one pharmacist at each site) stated that they were personally capable of delivering the algorithm but unable to do so due to workflow demands. This primary care model can form the basis of a workable collaboration between doctors and pharmacists, ensuring continuity of care for patients. It has potential for rural and remote areas of Australia, where this continuity of care may be problematic. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zorila, Alexandru; Stratan, Aurel; Nemes, George
2018-01-01
We compare the ISO-recommended (standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials with the S-on-1 test against two newly suggested algorithms, both named "cumulative" methods (a regular one and a limit-case one), intended to perform better than the standard one in some respects. To avoid the additional errors of real experiments, a simulated test is performed, named the reverse approach. This approach simulates real damage experiments by generating artificial test data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function for inducing damage. In this work, a database of 12 sets of test data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value of the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each set of test data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare the algorithms were determined. These quantities characterize the accuracy and precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case cumulative method, and the goodness-of-fit estimator (adjusted R-squared) is almost the same for all three algorithms.
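A sketch of the reverse-approach idea: synthesize damaged/non-damaged test sites from an assumed true threshold, then recover the threshold with a simple reduction (a linear fit of damage probability versus fluence, extrapolated to zero probability). The logistic damage distribution, constants, and this particular reduction are illustrative assumptions; the paper's ISO and cumulative reductions are not reproduced.

import numpy as np

rng = np.random.default_rng(5)
F_TRUE = 2.0                     # assumed true damage threshold fluence (J/cm^2)
WIDTH = 0.3                      # spread of site-to-site damage behavior

def p_damage(fluence):
    """Assumed damage probability: logistic ramp above the threshold."""
    return 1.0 / (1.0 + np.exp(-(fluence - F_TRUE - 2 * WIDTH) / WIDTH))

fluences = np.linspace(1.0, 4.0, 16)          # test fluence levels
sites_per_level = 50
damaged_frac = np.array([rng.binomial(sites_per_level, p_damage(f)) / sites_per_level
                         for f in fluences])   # artificial damaged-site data

# Simple reduction: fit the rising part of the damage probability curve
# linearly and extrapolate the line down to zero damage probability.
rising = (damaged_frac > 0.05) & (damaged_frac < 0.95)
slope, icept = np.polyfit(fluences[rising], damaged_frac[rising], 1)
f_threshold = -icept / slope
print(f"true threshold {F_TRUE:.2f}, estimated {f_threshold:.2f} J/cm^2")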
Biomedical Terminology Mapper for UML projects.
Thibault, Julien C; Frey, Lewis
2013-01-01
As the biomedical community collects and generates more and more data, the need to describe these datasets for exchange and interoperability becomes crucial. This paper presents a mapping algorithm that can help developers expose local implementations described with UML through standard terminologies. The input UML class or attribute name is first normalized and tokenized, and lookups are then performed in a UMLS-based dictionary. For the evaluation of the algorithm, 142 UML projects were extracted from caGrid and automatically mapped to National Cancer Institute (NCI) terminology concepts. The resulting mappings at the UML class and attribute levels were compared to the manually curated annotations provided in caGrid. Results are promising and show that this type of algorithm could speed up the tedious process of mapping local implementations to standard biomedical terminologies.
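A minimal sketch of the pipeline described above: normalize and tokenize a UML name, then look the full phrase and its tokens up in a terminology dictionary. The dictionary entries and concept identifiers below are invented stand-ins for a real UMLS-based lookup table.

import re

# Hypothetical concept dictionary: normalized term -> concept identifier.
DICTIONARY = {
    "patient": "C0000001",
    "tissue specimen": "C0000002",
    "specimen": "C0000003",
}

def normalize(name):
    """Split camelCase and underscores, lowercase, collapse whitespace."""
    s = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", name)   # camelCase -> words
    s = re.sub(r"[_\W]+", " ", s)
    return s.lower().strip()

def map_term(uml_name):
    phrase = normalize(uml_name)
    if phrase in DICTIONARY:                 # exact phrase match first
        return phrase, DICTIONARY[phrase]
    for token in phrase.split():             # fall back to token lookups
        if token in DICTIONARY:
            return token, DICTIONARY[token]
    return phrase, None                      # unmapped: candidate for curation

for name in ("TissueSpecimen", "patient_id", "BiospecimenCollection"):
    term, concept = map_term(name)
    print(f"{name!r} -> {term!r} -> {concept}")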