Automatic mesh refinement and parallel load balancing for Fokker-Planck-DSMC algorithm
NASA Astrophysics Data System (ADS)
Küchlin, Stephan; Jenny, Patrick
2018-06-01
Recently, a parallel Fokker-Planck-DSMC algorithm for rarefied gas flow simulation in complex domains at all Knudsen numbers was developed by the authors. Fokker-Planck-DSMC (FP-DSMC) is an augmentation of the classical DSMC algorithm which mitigates the computational-cost deficiencies of pure DSMC in the near-continuum regime. At each time step, based on a local Knudsen number criterion, the discrete DSMC collision operator is dynamically switched to the Fokker-Planck operator, which integrates continuous stochastic processes in time and has a fixed computational cost per particle rather than per collision. In this contribution, we present an extension of the previous implementation with automatic local mesh refinement and parallel load balancing. In particular, we show how the properties of discrete approximations to space-filling curves enable an efficient implementation. Exemplary numerical studies highlight the capabilities of the new code.
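The space-filling-curve idea above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Morton (Z-order) encoding, the greedy splitting, and all function names are assumptions. Sorting cells by their curve index turns the 2D (or 3D) domain into a 1D ordering whose contiguous, equally weighted segments form spatially compact partitions.

```python
# Hypothetical sketch: load balancing via a Morton (Z-order) space-filling curve.
# Cells sorted by Morton index lie along a locality-preserving 1D curve; cutting
# that ordering into contiguous segments of near-equal particle weight yields
# compact per-rank partitions.

def morton2d(x, y, bits=16):
    """Interleave the bits of integer cell coordinates (x, y)."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def partition_by_curve(cells, weights, num_ranks):
    """Assign each cell (with its particle count as weight) to a rank so that
    consecutive cells along the curve form contiguous, balanced blocks."""
    order = sorted(range(len(cells)), key=lambda i: morton2d(*cells[i]))
    target = sum(weights) / num_ranks
    assignment, rank, acc = {}, 0, 0.0
    for i in order:
        if acc >= target and rank < num_ranks - 1:
            rank, acc = rank + 1, 0.0
        assignment[cells[i]] = rank
        acc += weights[i]
    return assignment
```

Because the curve preserves locality, neighbouring cells usually land on the same rank, which keeps communication volume low when particles move between cells.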
Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2014-01-01
Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well-established and based on Bird's 1994 algorithms written in Fortran 77 and has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Küchlin, Stephan, E-mail: kuechlin@ifd.mavt.ethz.ch; Jenny, Patrick
2017-01-01
A major challenge for the conventional Direct Simulation Monte Carlo (DSMC) technique lies in the fact that its computational cost becomes prohibitive in the near continuum regime, where the Knudsen number (Kn)—characterizing the degree of rarefaction—becomes small. In contrast, the Fokker–Planck (FP) based particle Monte Carlo scheme allows for computationally efficient simulations of rarefied gas flows in the low and intermediate Kn regime. The Fokker–Planck collision operator—instead of performing binary collisions employed by the DSMC method—integrates continuous stochastic processes for the phase space evolution in time. This allows for time step and grid cell sizes larger than the respective collisional scales required by DSMC. Dynamically switching between the FP and the DSMC collision operators in each computational cell is the basis of the combined FP-DSMC method, which has been proven successful in simulating flows covering the whole Kn range. Until recently, this algorithm had only been applied to two-dimensional test cases. In this contribution, we present the first general purpose implementation of the combined FP-DSMC method. Utilizing both shared- and distributed-memory parallelization, this implementation provides the capability for simulations involving many particles and complex geometries by exploiting state-of-the-art computer cluster technologies.
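A minimal sketch of the per-particle FP update described above, assuming a linear Langevin (Ornstein-Uhlenbeck) relaxation toward the cell mean velocity; the authors' cubic FP model adds further drift terms not shown here. The point is that one particle costs one update, independent of the collision rate.

```python
import math
import random

# Hedged sketch (linear Langevin model, not the authors' cubic FP drift) of the
# per-particle Fokker-Planck update: each velocity component relaxes toward the
# cell mean u with exact Ornstein-Uhlenbeck statistics over one time step, so the
# cost is fixed per particle rather than per collision.

def fp_update(v, u, tau, RT, dt, rng=random):
    """Advance one velocity component by dt; tau is the relaxation time and
    RT = kT/m sets the equilibrium (stationary) variance."""
    decay = math.exp(-dt / tau)
    sigma = math.sqrt(RT * (1.0 - decay * decay))
    return u + (v - u) * decay + sigma * rng.gauss(0.0, 1.0)
```

For dt much larger than tau the update simply draws from the local Maxwellian, which is why the scheme tolerates time steps larger than the collisional scales.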
All-Particle Multiscale Computation of Hypersonic Rarefied Flow
NASA Astrophysics Data System (ADS)
Jun, E.; Burt, J. M.; Boyd, I. D.
2011-05-01
This study examines a new hybrid particle scheme used as an alternative means of multiscale flow simulation. The hybrid particle scheme employs the direct simulation Monte Carlo (DSMC) method in rarefied flow regions and the low diffusion (LD) particle method in continuum flow regions. The numerical procedures of the low diffusion particle method are implemented within an existing DSMC algorithm. The performance of the LD-DSMC approach is assessed by studying Mach 10 nitrogen flow over a sphere with a global Knudsen number of 0.002. The hybrid scheme results show good overall agreement with results from standard DSMC and CFD computation. Subcell procedures are utilized to improve computational efficiency and reduce sensitivity to DSMC cell size in the hybrid scheme. This makes it possible to perform the LD-DSMC simulation on a much coarser mesh, which leads to a significant reduction in computation time.
DREAM: An Efficient Methodology for DSMC Simulation of Unsteady Processes
NASA Astrophysics Data System (ADS)
Cave, H. M.; Jermy, M. C.; Tseng, K. C.; Wu, J. S.
2008-12-01
A technique called the DSMC Rapid Ensemble Averaging Method (DREAM) for reducing the statistical scatter in the output from unsteady DSMC simulations is introduced. During post-processing by DREAM, the DSMC algorithm is re-run multiple times over a short period before the temporal point of interest, thus building up a combination of time- and ensemble-averaged sampling data. The particle data is regenerated several mean collision times before the output time using the particle data generated during the original DSMC run. This methodology conserves the original phase space data from the DSMC run and so is suitable for reducing the statistical scatter in highly non-equilibrium flows. In this paper, the DREAM-II method is investigated and verified in detail. Propagating shock waves at high Mach numbers (Mach 8 and 12) are simulated using a parallel DSMC code (PDSC) and then post-processed using DREAM. The ability of DREAM to obtain the correct particle velocity distribution in the shock structure is demonstrated and the reduction of statistical scatter in the output macroscopic properties is measured. DREAM is also used to reduce the statistical scatter in the results from the interaction of a Mach 4 shock with a square cavity and from the interaction of a Mach 12 shock with a wedge in a channel.
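The statistical benefit of combining short re-runs into an ensemble can be illustrated with a toy model. This is not the PDSC/DREAM implementation; the names and the noise model are assumptions. Averaging M independent re-runs of the same short sampling window shrinks the standard error of an unsteady output by roughly 1/sqrt(M).

```python
import random
import statistics

# Illustrative toy model of the DREAM idea: re-running a short, randomized
# sampling window M times and combining the ensembles reduces the scatter of an
# unsteady output without lengthening the time-averaging window.

def noisy_sample(true_value, scatter, n, rng):
    """One short DSMC-like sampling window: n scattered draws about the truth."""
    return statistics.fmean(true_value + rng.gauss(0.0, scatter) for _ in range(n))

def dream_average(true_value, scatter, n, ensembles, seed=0):
    """Combine `ensembles` independent re-runs of the same window."""
    rng = random.Random(seed)
    return statistics.fmean(noisy_sample(true_value, scatter, n, rng)
                            for _ in range(ensembles))
```

With 25 ensembles the scatter of the combined estimate should drop by about a factor of five relative to a single window, mirroring the scatter reduction DREAM measures in the macroscopic outputs.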
Dynamic load balance scheme for the DSMC algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jin; Geng, Xiangren; Jiang, Dingwu
The direct simulation Monte Carlo (DSMC) algorithm, devised by Bird, has been applied to a wide range of rarefied flow problems over the past 40 years. While DSMC is well suited to parallel implementation on multi-processor architectures, it also introduces a large load imbalance across the processor array, even for small examples. The load imposed on a processor by a DSMC calculation is determined largely by the number of simulator particles it holds. Since most flows are started impulsively from an initial particle distribution that differs markedly from the steady state, the number of simulator particles changes dramatically, and a load balance based on the initial distribution breaks down as the steady state is approached. This load imbalance, together with the large computational cost of DSMC, has limited its application to rarefied or simple transitional flows. In this paper, by taking advantage of METIS, a software package for partitioning unstructured graphs, and using the number of simulator particles in each cell as weight information, a repartitioning is achieved in which each processor handles approximately the same number of simulator particles. The computation pauses several times to update the per-processor particle counts and repartition the whole domain, so the load balance across the processor array holds for the duration of the computation and the parallel efficiency is improved. The benchmark case of a cylinder in hypersonic flow has been simulated, as has hypersonic flow past a complex wing-body configuration. For both cases, the results show that the computational time can be reduced by about 50%.
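The repartitioning principle can be sketched as follows. This is an illustrative stand-in: METIS graph partitioning is replaced by a simple contiguous split over a cell list, and all function names and the trigger threshold are assumptions.

```python
# Sketch of particle-weighted repartitioning: cells are re-assigned so every
# processor carries roughly the same total number of simulator particles, and
# the computation repartitions whenever the measured imbalance grows too large.

def repartition(particle_counts, num_procs):
    """Split the cell list into num_procs contiguous blocks of near-equal weight."""
    total = sum(particle_counts)
    bounds, acc, cut = [0], 0, 1
    for i, w in enumerate(particle_counts):
        acc += w
        if acc >= cut * total / num_procs and cut < num_procs:
            bounds.append(i + 1)
            cut += 1
    bounds.append(len(particle_counts))
    return [range(bounds[k], bounds[k + 1]) for k in range(num_procs)]

def imbalance(particle_counts, blocks):
    """Max-to-mean load ratio; exceeding ~1.2 would trigger a repartition."""
    loads = [sum(particle_counts[i] for i in b) for b in blocks]
    return max(loads) / (sum(loads) / len(loads))
```

In the paper's scheme the weighted graph partitioner also minimizes the number of cut edges (communication); the contiguous split above captures only the equal-weight principle.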
Collision partner selection schemes in DSMC: From micro/nano flows to hypersonic flows
NASA Astrophysics Data System (ADS)
Roohi, Ehsan; Stefanov, Stefan
2016-10-01
The motivation of this review paper is to present a detailed summary of different collision models developed in the framework of the direct simulation Monte Carlo (DSMC) method. The emphasis is put on a newly developed collision model, i.e., the Simplified Bernoulli trial (SBT), which permits efficient low-memory simulation of rarefied gas flows. The paper starts with a brief review of the governing equations of the rarefied gas dynamics including Boltzmann and Kac master equations and reiterates that the linear Kac equation reduces to a non-linear Boltzmann equation under the assumption of molecular chaos. An introduction to the DSMC method is provided, and principles of collision algorithms in the DSMC are discussed. A distinction is made between those collision models that are based on classical kinetic theory (time counter, no time counter (NTC), and nearest neighbor (NN)) and the other class that could be derived mathematically from the Kac master equation (pseudo-Poisson process, ballot box, majorant frequency, null collision, Bernoulli trials scheme and its variants). To provide a deeper insight, the derivation of both collision models, either from the principles of the kinetic theory or the Kac master equation, is provided with sufficient details. Some discussions on the importance of subcells in the DSMC collision procedure are also provided and different types of subcells are presented. 
The paper then focuses on the simplified version of the Bernoulli trials algorithm (SBT) and presents a detailed summary of the validation of the SBT family of collision schemes (SBT on transient adaptive subcells: SBT-TAS, and intelligent SBT: ISBT) in a broad spectrum of rarefied gas-flow test cases, ranging from low-speed internal micro- and nanoflows to external hypersonic flows. The emphasis is first on the accuracy of these new collision models and second on demonstrating that the SBT family of schemes, compared to other conventional and recent collision models, requires a smaller number of particles per cell to obtain sufficiently accurate solutions.
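As commonly described in the literature, the SBT selection loop can be sketched as below. This is a hedged illustration: the `sigma_cr` function, the parameter names, and the exact acceptance form are assumptions, not a restatement of the review's formulas.

```python
import random

# Hedged sketch of a Simplified Bernoulli Trials (SBT) selection loop: each
# particle i is tested against a single partner j drawn from the particles after
# it, with an acceptance probability proportional to (N - 1) * sigma * c_r * dt / Vc.
# This is what allows accurate collision sampling with few particles per cell.

def sbt_collisions(velocities, sigma_cr, f_num, dt, vol, rng=random):
    """Return accepted collision pairs for one cell and one time step.
    sigma_cr(vi, vj) is an assumed user-supplied cross-section-times-speed."""
    n = len(velocities)
    pairs = []
    for i in range(n - 1):
        j = rng.randrange(i + 1, n)            # one candidate partner per particle
        p = (n - 1) * f_num * sigma_cr(velocities[i], velocities[j]) * dt / vol
        if rng.random() < min(p, 1.0):
            pairs.append((i, j))
    return pairs
```

Unlike NTC, no per-cell maximum of sigma*c_r needs to be tracked, and the scheme remains well behaved for very small particle counts per (sub)cell.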
Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2008-01-01
Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in volume of the cells. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that how one defines the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to treat how the molecules are traced through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and axially symmetric nature of the problem.
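The continuous radial-weighting idea can be sketched as follows. This is a hedged illustration: the linear weight form and the probabilistic rounding are assumptions, not Bird's exact duplication-buffer procedure described above.

```python
import random

# Illustrative sketch of continuous radial weighting (assumed linear form
# W(r) = max(r, r_min) / r_ref): a simulated molecule at radius r represents
# W(r) * F_num real molecules, so a move from r_old to r_new must on average
# conserve real molecules by cloning or discarding copies probabilistically.

def radial_weight(r, r_ref, r_min=1e-6):
    """Continuous, linear weighting factor; r_min guards the axis singularity."""
    return max(r, r_min) / r_ref

def copies_after_move(r_old, r_new, r_ref, rng=random):
    """Expected copy count W_old / W_new, realized by probabilistic rounding:
    inward motion (> 1) duplicates, outward motion (< 1) may discard."""
    ratio = radial_weight(r_old, r_ref) / radial_weight(r_new, r_ref)
    base = int(ratio)
    return base + (1 if rng.random() < ratio - base else 0)
```

The duplication-buffer modification in the abstract addresses exactly the clones produced here: their velocities are varied while kinetic energy and axial symmetry are retained, to avoid correlated duplicate trajectories.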
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20,000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
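The VHS total cross section being calibrated has the form commonly quoted for DSMC; a hedged sketch follows (parameter names are the conventional ones, not necessarily the paper's notation). The effective diameter shrinks with relative speed, d = d_ref * (cr_ref / cr)^(omega - 1/2), so sigma_T = pi * d_ref^2 * (cr_ref / cr)^(2*omega - 1); in the VSS model the additional exponent alpha reshapes the post-collision deflection angle rather than sigma_T.

```python
import math

# Hedged sketch of the VHS total cross section used in DSMC: omega is the
# viscosity temperature exponent and d_ref the reference diameter measured at
# relative speed cr_ref. omega = 0.5 recovers the hard-sphere (HS) model.

def vhs_cross_section(cr, d_ref, omega, cr_ref):
    """Total cross section sigma_T(cr) for the VHS model."""
    return math.pi * d_ref ** 2 * (cr_ref / cr) ** (2.0 * omega - 1.0)
```

Calibration in the paper amounts to choosing d_ref, omega (and alpha for VSS) per collision pair so that the resulting collision integrals reproduce the ab initio values used by LAURA and DPLR.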
An Object-Oriented Serial DSMC Simulation Package
NASA Astrophysics Data System (ADS)
Liu, Hongli; Cai, Chunpei
2011-05-01
A newly developed three-dimensional direct simulation Monte Carlo (DSMC) simulation package, named GRASP ("Generalized Rarefied gAs Simulation Package"), is reported in this paper. This package utilizes the concept of a simulation engine, many C++ features, and software design patterns. The package has an open architecture which can benefit further development and maintenance of the code. In order to reduce the engineering time for three-dimensional models, a hybrid grid scheme, combined with a flexible data structure implemented in C++, is employed in this package. This scheme utilizes a local data structure based on the computational cell to achieve high performance on workstation processors. This data structure allows the DSMC algorithm to be very efficiently parallelized with domain decomposition and it provides much flexibility in terms of grid types. This package can utilize traditional structured, unstructured or hybrid grids within the framework of a single code to model arbitrarily complex geometries and to simulate rarefied gas flows. Benchmark test cases indicate that this package has satisfactory accuracy for complex rarefied gas flows.
Microscale Modeling of Porous Thermal Protection System Materials
NASA Astrophysics Data System (ADS)
Stern, Eric C.
Ablative thermal protection system (TPS) materials play a vital role in the design of entry vehicles. Most simulation tools for ablative TPS in use today take a macroscopic approach to modeling, which involves heavy empiricism. Recent work has suggested improving the fidelity of the simulations by taking a multi-scale approach to the physics of ablation. In this work, a new approach for modeling ablative TPS at the microscale is proposed, and its feasibility and utility are assessed. This approach uses the Direct Simulation Monte Carlo (DSMC) method to simulate the gas flow through the microstructure, as well as the gas-surface interaction. Application of the DSMC method to this problem allows the gas-phase dynamics, which are often rarefied, to be modeled to a high degree of fidelity. Furthermore, this method allows for sophisticated gas-surface interaction models to be implemented. In order to test this approach for realistic materials, a method for generating artificial microstructures which emulate those found in spacecraft TPS is developed. Additionally, a novel approach for allowing the surface to move under the influence of chemical reactions at the surface is developed. This approach is shown to be efficient and robust for performing coupled simulation of the oxidation of carbon fibers. The microscale modeling approach is first applied to simulating the steady flow of gas through the porous medium. Predictions of Darcy permeability for an idealized microstructure agree with empirical correlations from the literature, as well as with predictions from computational fluid dynamics (CFD) when the continuum assumption is valid. Expected departures are observed for conditions at which the continuum assumption no longer holds. Comparisons of simulations using a fabricated microstructure to experimental data for a real spacecraft TPS material show good agreement when similar microstructural parameters are used to build the geometry.
The approach is then applied to investigating the ablation of porous materials through oxidation. A simple gas surface interaction model is described, and an approach for coupling the surface reconstruction algorithm to the DSMC method is outlined. Simulations of single carbon fibers at representative conditions suggest this approach to be feasible for simulating the ablation of porous TPS materials at scale. Additionally, the effect of various simulation parameters on in-depth morphology is investigated for random fibrous microstructures.
dsmcFoam+: An OpenFOAM based direct simulation Monte Carlo solver
NASA Astrophysics Data System (ADS)
White, C.; Borg, M. K.; Scanlon, T. J.; Longshaw, S. M.; John, B.; Emerson, D. R.; Reese, J. M.
2018-03-01
dsmcFoam+ is a direct simulation Monte Carlo (DSMC) solver for rarefied gas dynamics, implemented within the OpenFOAM software framework, and parallelised with MPI. It is open-source and released under the GNU General Public License in a publicly available software repository that includes detailed documentation and tutorial DSMC gas flow cases. This release of the code includes many features not found in standard dsmcFoam, such as molecular vibrational and electronic energy modes, chemical reactions, and subsonic pressure boundary conditions. Since dsmcFoam+ is designed entirely within OpenFOAM's C++ object-oriented framework, it benefits from a number of key features: the code emphasises extensibility and flexibility so it is aimed first and foremost as a research tool for DSMC, allowing new models and test cases to be developed and tested rapidly. All DSMC cases are as straightforward as setting up any standard OpenFOAM case, as dsmcFoam+ relies upon the standard OpenFOAM dictionary based directory structure. This ensures that useful pre- and post-processing capabilities provided by OpenFOAM remain available even though the fully Lagrangian nature of a DSMC simulation is not typical of most OpenFOAM applications. We show that dsmcFoam+ compares well to other well-known DSMC codes and to analytical solutions in terms of benchmark results.
N-S/DSMC hybrid simulation of hypersonic flow over blunt body including wakes
NASA Astrophysics Data System (ADS)
Li, Zhonghua; Li, Zhihui; Li, Haiyan; Yang, Yanguang; Jiang, Xinyu
2014-12-01
A hybrid N-S/DSMC method is presented and applied to solve three-dimensional hypersonic transitional flows, employing the MPC (Modular Particle-Continuum) technique based on the N-S and DSMC methods. A sub-relaxation technique is adopted to handle information transfer between the N-S and DSMC regions. Hypersonic flows over a 70-deg spherically blunted cone at different Knudsen numbers are simulated using CFD, DSMC, and the hybrid N-S/DSMC method. The present computations are found to be in good agreement with DSMC and experimental results. The present method provides an efficient way to predict hypersonic aerodynamics in the near-continuum transitional flow regime.
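The sub-relaxation transfer can be illustrated schematically as an exponential moving average; the value of theta and the function names here are assumptions, not the authors' exact scheme. Each new instantaneous DSMC sample q nudges the running state Q handed to the N-S solver, filtering particle scatter while still tracking slow changes.

```python
# Schematic sketch of sub-relaxation averaging for hybrid information transfer:
# Q_n = (1 - theta) * Q_{n-1} + theta * q_n, with a small relaxation factor theta,
# smooths noisy DSMC samples before they are imposed on the continuum solver.

def subrelax(Q, q, theta=0.01):
    """One sub-relaxation update of the running state Q with a new sample q."""
    return (1.0 - theta) * Q + theta * q

def subrelax_series(samples, theta=0.01, Q0=0.0):
    """Filter a whole sample history; returns the running state after each step."""
    Q, history = Q0, []
    for q in samples:
        Q = subrelax(Q, q, theta)
        history.append(Q)
    return history
```

The effective averaging window is about 1/theta samples, so theta trades statistical smoothness against the lag with which the transferred state follows a transient.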
Parallel DSMC Solution of Three-Dimensional Flow Over a Finite Flat Plate
NASA Technical Reports Server (NTRS)
Nance, Robert P.; Wilmoth, Richard G.; Moon, Bongki; Hassan, H. A.; Saltz, Joel
1994-01-01
This paper describes a parallel implementation of the direct simulation Monte Carlo (DSMC) method. Runtime library support is used for scheduling and execution of communication between nodes, and domain decomposition is performed dynamically to maintain a good load balance. Performance tests are conducted using the code to evaluate various remapping and remapping-interval policies, and it is shown that a one-dimensional chain-partitioning method works best for the problems considered. The parallel code is then used to simulate the Mach 20 nitrogen flow over a finite-thickness flat plate. It is shown that the parallel algorithm produces results which compare well with experimental data. Moreover, it yields significantly faster execution times than the scalar code, as well as very good load-balance characteristics.
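A standard way to realize the one-dimensional chain-partitioning policy that the paper found best is to binary-search the bottleneck load; this is a generic sketch, not the runtime-library implementation used in the paper.

```python
# Sketch of one-dimensional chain partitioning: the per-column loads are split
# into p contiguous chains whose maximum chain load is minimized, found here by
# binary search on the bottleneck value.

def feasible(loads, p, cap):
    """Can the loads be packed into at most p contiguous chains of sum <= cap?"""
    chains, acc = 1, 0
    for w in loads:
        if w > cap:
            return False
        if acc + w > cap:
            chains, acc = chains + 1, 0
        acc += w
    return chains <= p

def chain_partition_bottleneck(loads, p):
    """Smallest achievable maximum per-processor load for contiguous chains."""
    lo, hi = max(loads), sum(loads)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(loads, p, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

Re-running this cheap computation at each remapping interval, with the per-column particle counts as loads, keeps the decomposition balanced as the flow develops.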
Investigation on a coupled CFD/DSMC method for continuum-rarefied flows
NASA Astrophysics Data System (ADS)
Tang, Zhenyu; He, Bijiao; Cai, Guobiao
2012-11-01
The purpose of the present work is to investigate the coupled CFD/DSMC method using the existing CFD and DSMC codes developed by the authors. The interface between the continuum and particle regions is determined by the gradient-length local Knudsen number. A coupling scheme combining both state-based and flux-based coupling methods is proposed in the current study. Overlapping grids are established between the different grid systems of CFD and DSMC codes. A hypersonic flow over a 2D cylinder has been simulated using the present coupled method. Comparison has been made between the results obtained from both methods, which shows that the coupled CFD/DSMC method can achieve the same precision as the pure DSMC method and obtain higher computational efficiency.
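The gradient-length local Knudsen number sensor mentioned above can be sketched in one dimension as Kn_GLL = lambda * |dQ/dx| / Q per cell; the 0.05 cutoff below is a commonly cited breakdown value and an assumption here, not necessarily the value used by the authors.

```python
# Sketch of the continuum-breakdown sensor: cells where the gradient-length
# local Knudsen number of any flow field (density, temperature, speed) exceeds
# a cutoff are assigned to the DSMC region; the rest stay with CFD.

def kn_gll(q, lam, dx):
    """Per-cell Kn_GLL for one scalar field q on a uniform 1D grid of spacing dx;
    lam holds the local mean free path per cell. Boundary cells are left at 0."""
    out = [0.0] * len(q)
    for i in range(1, len(q) - 1):
        grad = abs(q[i + 1] - q[i - 1]) / (2.0 * dx)
        out[i] = lam[i] * grad / q[i]
    return out

def dsmc_cells(fields, lam, dx, cutoff=0.05):
    """Flag cells where any field's Kn_GLL exceeds the cutoff."""
    sensors = [kn_gll(q, lam, dx) for q in fields]
    return [i for i in range(len(lam)) if max(s[i] for s in sensors) > cutoff]
```

In the coupled method the flagged region is padded with overlap cells so that state- and flux-based information can be exchanged across the hybrid interface.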
DSMC Simulation and Experimental Validation of Shock Interaction in Hypersonic Low Density Flow
2014-01-01
A direct simulation Monte Carlo (DSMC) study of shock interaction in hypersonic low-density flow is presented. Three molecular collision models, the hard sphere (HS), variable hard sphere (VHS), and variable soft sphere (VSS), are employed. Simulations of double-cone and Edney's type IV hypersonic shock interactions in low-density flow are performed and compared with experimental data. For the double-cone flow, all three collision models capture the trends of the pressure coefficient and the Stanton number, with the HS model showing the best agreement with experiment. For Edney's type IV interaction, the agreement between DSMC and experiment is generally good for the HS and VHS models but not for the VSS model. Both the double-cone and Edney's type IV shock interaction simulations show that the DSMC errors depend on the Knudsen number and on the model employed for intermolecular interaction. As the Knudsen number increases, the DSMC error decreases, and the error is smallest for the HS model compared with the VHS and VSS models. When the Knudsen number is on the order of 10^-4, the DSMC errors in the pressure coefficient, the Stanton number, and the scale of the interaction region are kept within 10%. PMID:24672360
Comparison of DAC and MONACO DSMC Codes with Flat Plate Simulation
NASA Technical Reports Server (NTRS)
Padilla, Jose F.
2010-01-01
Various implementations of the direct simulation Monte Carlo (DSMC) method exist in academia, government and industry. By comparing implementations, deficiencies and merits of each can be discovered. This document reports comparisons between DSMC Analysis Code (DAC) and MONACO. DAC is NASA's standard DSMC production code and MONACO is a research DSMC code developed in academia. These codes have various differences; in particular, they employ distinct computational grid definitions. In this study, DAC and MONACO are compared by having each simulate a blunted flat plate wind tunnel test, using an identical volume mesh. Simulation expense and DSMC metrics are compared. In addition, flow results are compared with available laboratory data. Overall, this study revealed that both codes, excluding grid adaptation, performed similarly. For parallel processing, DAC was generally more efficient. As expected, code accuracy was mainly dependent on physical models employed.
NASA Technical Reports Server (NTRS)
Macrossan, M. N.
1995-01-01
The direct simulation Monte Carlo (DSMC) method is the established technique for the simulation of rarefied gas flows. In some flows of engineering interest, such as occur for aero-braking spacecraft in the upper atmosphere, DSMC can become prohibitively expensive in CPU time because some regions of the flow, particularly on the windward side of blunt bodies, become collision dominated. As an alternative to using a hybrid DSMC and continuum gas solver (Euler or Navier-Stokes solver), this work is aimed at making the particle simulation method efficient in the high density regions of the flow. A high density, infinite collision rate limit of DSMC, the Equilibrium Particle Simulation method (EPSM), was proposed some 15 years ago. EPSM is developed here for the flow of a gas consisting of many different species of molecules and is shown to be computationally efficient (compared to DSMC) for high collision rate flows. It thus offers great potential as part of a hybrid DSMC/EPSM code which could handle flows in the transition regime between rarefied gas flows and fully continuum flows. As a first step towards this goal, a pure EPSM code is described. The next step of combining DSMC and EPSM is not attempted here but should be straightforward. EPSM and DSMC are applied to Taylor-Couette flow (Kn = 0.02 and 0.0133, S(omega) = 3). Toroidal vortices develop for both methods but some differences are found, as might be expected for the given flow conditions. EPSM appears to be less sensitive to the sequence of random numbers used in the simulation than is DSMC and may also be more dissipative. The question of the origin and the magnitude of the dissipation in EPSM is addressed. It is suggested that this analysis is also relevant to DSMC when the usual accuracy requirements on the cell size and decoupling time step are relaxed in the interests of computational efficiency.
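The infinite-collision-rate limit described above can be sketched for one velocity component as moment-matched resampling; this is an assumption about the mechanism, not Macrossan's exact formulation. Instead of computing any collisions, every particle velocity in the cell is redrawn from a Gaussian and then shifted and rescaled so the cell's momentum and thermal energy are conserved exactly.

```python
import math
import random

# Sketch of an EPSM-style equilibrium relaxation for one velocity component:
# the cell is driven to local equilibrium in a single step at a cost that is
# independent of the collision rate, unlike per-collision DSMC.

def epsm_relax(velocities, rng=random):
    """Replace all velocities in a cell by Maxwellian samples with the same
    mean and the same centered kinetic energy as the originals."""
    n = len(velocities)
    u = sum(velocities) / n
    energy = sum((v - u) ** 2 for v in velocities)
    fresh = [rng.gauss(0.0, 1.0) for _ in range(n)]
    fu = sum(fresh) / n
    fe = sum((f - fu) ** 2 for f in fresh)
    scale = math.sqrt(energy / fe)   # enforce exact energy conservation
    return [u + (f - fu) * scale for f in fresh]
```

Because the post-relaxation state depends only on the conserved moments, the method is far less sensitive to the random-number sequence than DSMC, consistent with the observation in the abstract.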
DSMC Shock Simulation of Saturn Entry Probe Conditions
NASA Technical Reports Server (NTRS)
Higdon, Kyle J.; Cruden, Brett A.; Brandis, Aaron; Liechty, Derek S.; Goldstein, David B.; Varghese, Philip L.
2016-01-01
This work describes the direct simulation Monte Carlo (DSMC) investigation of Saturn entry probe scenarios and the influence of non-equilibrium phenomena on Saturn entry conditions. The DSMC simulations coincide with rarefied hypersonic shock tube experiments of a hydrogen-helium mixture performed in the Electric Arc Shock Tube (EAST) at NASA Ames Research Center. The DSMC simulations are post-processed through the NEQAIR line-by-line radiation code to compare directly to the experimental results. Improved collision cross-sections, inelastic collision parameters, and reaction rates are determined for a high temperature DSMC simulation of a 7-species H2-He mixture and an electronic excitation model is implemented in the DSMC code. Simulation results for 27.8 and 27.4 km/s shock waves are obtained at 0.2 and 0.1 Torr, respectively, and compared to measured spectra in the VUV, UV, visible, and IR ranges. These results confirm the persistence of non-equilibrium for several centimeters behind the shock and the diffusion of atomic hydrogen upstream of the shock wave. Although the magnitude of the radiance did not match experiments and an ionization inductance period was not observed in the simulations, the discrepancies indicated where improvements are needed in the DSMC and NEQAIR models.
State resolved vibrational relaxation modeling for strongly nonequilibrium flows
NASA Astrophysics Data System (ADS)
Boyd, Iain D.; Josyula, Eswar
2011-05-01
Vibrational relaxation is an important physical process in hypersonic flows. Activation of the vibrational mode affects the fundamental thermodynamic properties and finite rate relaxation can reduce the degree of dissociation of a gas. Low fidelity models of vibrational activation employ a relaxation time to capture the process at a macroscopic level. High fidelity, state-resolved models have been developed for use in continuum gas dynamics simulations based on computational fluid dynamics (CFD). By comparison, such models are not as common for use with the direct simulation Monte Carlo (DSMC) method. In this study, a high fidelity, state-resolved vibrational relaxation model is developed for the DSMC technique. The model is based on the forced harmonic oscillator approach in which multi-quantum transitions may become dominant at high temperature. Results obtained for integrated rate coefficients from the DSMC model are consistent with the corresponding CFD model. Comparison of relaxation results obtained with the high-fidelity DSMC model shows significantly less excitation of upper vibrational levels in comparison to the standard, lower fidelity DSMC vibrational relaxation model. Application of the new DSMC model to a Mach 7 normal shock wave in carbon monoxide provides better agreement with experimental measurements than the standard DSMC relaxation model.
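The macroscopic relaxation-time baseline mentioned above is the Landau-Teller form, integrated exactly over one step in the sketch below; the state-resolved forced-harmonic-oscillator model replaces this single relaxation time with level-to-level transition rates, including multi-quantum jumps at high temperature.

```python
import math

# Sketch of the low-fidelity vibrational relaxation baseline (Landau-Teller):
# dEv/dt = (Ev_eq - Ev) / tau, where tau is the macroscopic vibrational
# relaxation time and Ev_eq the equilibrium vibrational energy. The exponential
# form below is the exact integral of this ODE over one time step dt.

def landau_teller_step(ev, ev_eq, tau, dt):
    """Advance the cell-averaged vibrational energy by one time step."""
    return ev_eq + (ev - ev_eq) * math.exp(-dt / tau)
```

The abstract's comparison amounts to contrasting the single-exponential approach to equilibrium produced by this model with the distinctly non-exponential, level-dependent populations predicted by the state-resolved model.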
DSMC Evaluation of the Navier-Stokes Shear Viscosity of a Granular Fluid
Montanero, José María; Santos, Andrés; Garzó, Vicente
2005-07-13
A method is proposed to measure the Navier–Stokes shear viscosity in a granular fluid described by the Enskog equation; the method is implemented in DSMC. Transport coefficients of the HCS have been measured from DSMC by using the associated Green–Kubo formulas [8].
Vectorization of a particle code used in the simulation of rarefied hypersonic flow
NASA Technical Reports Server (NTRS)
Baganoff, D.
1990-01-01
A limitation of the direct simulation Monte Carlo (DSMC) method is that it does not allow efficient use of the vector architectures that predominate in current supercomputers. Consequently, the problems that can be handled are limited to those of one- and two-dimensional flows. This work focuses on a reformulation of the DSMC method with the objective of designing a procedure that is optimized for the vector architectures found on machines such as the Cray-2. In addition, it focuses on finding a better balance between algorithmic complexity and the total number of particles employed in a simulation so that the overall performance of a particle simulation scheme can be greatly improved. Simulations of the flow about a 3D blunt body are performed with 10^7 particles and 4 × 10^5 mesh cells. Good statistics are obtained with time averaging over 800 time steps using 4.5 h of Cray-2 single-processor CPU time.
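A data layout that makes DSMC amenable to vector hardware is sorting particles by cell so that per-cell work operates on contiguous memory. A toy NumPy illustration of that layout (not Baganoff's actual reformulation):

```python
import numpy as np

# Toy sketch: sorting particles by cell index turns per-cell reductions into
# operations on contiguous slices, the access pattern vector (and modern
# SIMD) hardware favors. Sizes and values are illustrative.
rng = np.random.default_rng(0)
n_particles, n_cells = 1000, 16
cell = rng.integers(0, n_cells, n_particles)   # cell index of each particle
vel = rng.normal(size=(n_particles, 3))        # particle velocities

order = np.argsort(cell, kind="stable")        # gather particles into cell order
cell_sorted, vel_sorted = cell[order], vel[order]

# start offset of each cell's particle block (prefix sum of per-cell counts)
counts = np.bincount(cell_sorted, minlength=n_cells)
starts = np.concatenate(([0], np.cumsum(counts)))

# a per-cell mean velocity now reduces over one contiguous slice per cell
mean_v = np.array([vel_sorted[starts[i]:starts[i + 1]].mean(axis=0)
                   for i in range(n_cells)])
```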
DSMC simulations of shock tube experiments for the dissociation rate of nitrogen
NASA Astrophysics Data System (ADS)
Bird, G. A.
2012-11-01
The DSMC method has been used to simulate the flow associated with several experiments that led to predictions of the dissociation rate in nitrogen. One involved optical interferometry to determine the density behind a strong shock wave, and the other involved measurement of the shock tube end-wall pressure after the reflection of a similar shock wave. DSMC calculations for the un-reflected shock wave were made with the older TCE model, which converts rate coefficients to reaction cross-sections; with the newer Q-K model, which predicts the rates; and with a set of reaction cross-sections for nitrogen dissociation from QCT calculations. A comparison of the resulting density profiles with the measured profile provides a test of the validity of the DSMC chemistry models. The DSMC reaction rates were sampled directly in the DSMC calculation, both far downstream where the flow is in equilibrium and in the non-equilibrium region immediately behind the shock. This permits a critical evaluation of the data reduction procedures that were employed to deduce the dissociation rate from the measured quantities.
NASA Astrophysics Data System (ADS)
Mahieux, Arnaud; Goldstein, David B.; Varghese, Philip; Trafton, Laurence M.
2017-10-01
The vapor and particulate plumes arising from the southern polar regions of Enceladus are a key signature of what lies below the surface. Multiple Cassini instruments (INMS, CDA, CAPS, MAG, UVIS, VIMS, ISS) measured the gas-particle plume over the warm Tiger Stripe region, and there have been several close flybys. Numerous observations also exist of the near-vent regions in the visible and the IR. The most likely source for these extensive geysers is a subsurface liquid reservoir of somewhat saline water and other volatiles boiling off through crevasse-like conduits into the vacuum of space. In this work, we use a DSMC code to simulate the plume as it exits a vent, considering axisymmetric conditions, in a vertical domain extending up to 10 km. Above 10 km altitude, the flow is collisionless and well modeled by a separate free-molecular code. We perform a DSMC parametric and sensitivity study of the following vent parameters: vent diameter, outgassed flow density, water gas/water ice mass flow ratio, gas and ice speed, and ice grain diameter. We build parametric expressions of the plume characteristics at the 10 km upper boundary (number density, temperature, velocity) that will be used in a Bayesian inversion algorithm to constrain source conditions from fits to plume observations by various instruments on board the Cassini spacecraft and to assess parameter sensitivities.
Aspects of GPU performance in algorithms with random memory access
NASA Astrophysics Data System (ADS)
Kashkovsky, Alexander V.; Shershnev, Anton A.; Vashchenkov, Pavel V.
2017-10-01
A numerical code solving the Boltzmann equation on a hybrid computational cluster using the Direct Simulation Monte Carlo (DSMC) method showed that, on Tesla K40 accelerators, computational performance drops dramatically as the percentage of occupied GPU memory increases. Testing revealed that memory access time increases tens of times after a certain critical percentage of memory is occupied. Moreover, this appears to be a common problem of NVIDIA GPUs arising from their architecture. A few modifications of the numerical algorithm were suggested to overcome this problem. One of them, based on splitting the memory into "virtual" blocks, resulted in a 2.5-times speedup.
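The abstract does not spell out the "virtual" block scheme; one plausible reading is that particle storage is carved into fixed-size blocks handed out from a free list, so live data stays compact instead of spreading across a mostly-empty allocation. A toy sketch of that idea (block size, pool size, and the payload are all illustrative):

```python
# Hedged toy of a fixed-size "virtual block" pool: instead of one huge
# particle array whose growing occupancy degrades GPU memory-access speed,
# storage is split into equal blocks tracked by a free list.
BLOCK = 256  # particles per block (illustrative)

class BlockPool:
    def __init__(self, n_blocks):
        self.free = list(range(n_blocks))   # indices of unused blocks
        self.storage = [None] * n_blocks    # per-block particle payload

    def alloc(self):
        return self.free.pop()              # hand out a free block index

    def release(self, b):
        self.storage[b] = None              # return a block to the pool
        self.free.append(b)

pool = BlockPool(8)
b0 = pool.alloc()
pool.storage[b0] = [0.0] * BLOCK            # fill the block with particle data
```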
DOE Office of Scientific and Technical Information (OSTI.GOV)
Womble, David E.
Unified collision operator demonstrated for both radiation transport and PIC-DSMC. A side-by-side comparison between the DSMC method and the radiation transport method was conducted for photon attenuation in the atmosphere over 2 kilometers in physical distance with a reduction of photon density of six orders of magnitude. Both DSMC and traditional radiation transport agreed with theory to two digits. This indicates that PIC-DSMC operators can be unified with the radiation transport collision operators into a single code base and that physics kernels can remain unique to the actual collision pairs. This simulation example provides an initial validation of the unified collision theory approach that will later be implemented into EMPIRE.
A DSMC Study of Low Pressure Argon Discharge
NASA Astrophysics Data System (ADS)
Hash, David; Meyyappan, M.
1997-10-01
Work toward a self-consistent plasma simulation using the DSMC method for examination of the flowfields of low-pressure, high-density plasma reactors is presented. Presently, DSMC simulations for these applications involve either treating the electrons as a fluid or imposing experimentally determined values for the electron number density profile. In either approach, the electrons themselves are not physically simulated. Self-consistent plasma DSMC simulations have been conducted for aerospace applications, but at a severe computational cost due in part to the scalar architectures on which the codes were employed. The present work attempts to conduct such simulations at a more reasonable cost using a plasma version of the object-oriented parallel Cornell DSMC code, MONACO, on an IBM SP-2. Due to the availability of experimental data, the GEC reference cell is chosen for preliminary investigations. An argon discharge is examined, affording a simple chemistry set with eight gas-phase reactions and five species: Ar, Ar^+, Ar^*, Ar_2, and e, where Ar^* is a metastable state.
FDDO and DSMC analyses of rarefied gas flow through 2D nozzles
NASA Technical Reports Server (NTRS)
Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren; Penko, Paul F.
1992-01-01
Two different approaches, the finite-difference method coupled with the discrete-ordinate method (FDDO), and the direct-simulation Monte Carlo (DSMC) method, are used in the analysis of the flow of a rarefied gas expanding through a two-dimensional nozzle and into a surrounding low-density environment. In the FDDO analysis, by employing the discrete-ordinate method, the Boltzmann equation simplified by a model collision integral is transformed to a set of partial differential equations which are continuous in physical space but are point functions in molecular velocity space. The set of partial differential equations is solved by means of a finite-difference approximation. In the DSMC analysis, the variable hard sphere model is used as a molecular model and the no time counter method is employed as a collision sampling technique. The results of both the FDDO and the DSMC methods show good agreement. The FDDO method requires less computational effort than the DSMC method by factors of 10 to 40 in CPU time, depending on the degree of rarefaction.
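The no-time-counter (NTC) collision sampling mentioned above sets, in each cell and time step, the number of candidate collision pairs from a running maximum of (cross-section × relative speed). A minimal sketch of that standard formula, with illustrative numbers:

```python
# Hedged sketch of Bird's no-time-counter (NTC) candidate-pair count:
# N_pairs = (1/2) N (N-1) F_num (sigma * c_r)_max dt / V_cell.
# Each candidate pair is then accepted with probability
# (sigma * c_r) / (sigma * c_r)_max. Parameter values are illustrative.
def ntc_candidate_pairs(n, f_num, sigma_cr_max, dt, volume):
    """n: simulator particles in the cell; f_num: real molecules per
    simulator particle; sigma_cr_max: running max of (cross-section *
    relative speed) [m^3/s]; dt: time step [s]; volume: cell volume [m^3]."""
    return 0.5 * n * (n - 1) * f_num * sigma_cr_max * dt / volume

pairs = ntc_candidate_pairs(n=50, f_num=1e14, sigma_cr_max=5e-16,
                            dt=1e-6, volume=1e-9)
```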
Nonequilibrium hypersonic flows simulations with asymptotic-preserving Monte Carlo methods
NASA Astrophysics Data System (ADS)
Ren, Wei; Liu, Hong; Jin, Shi
2014-12-01
In the rarefied gas dynamics, the DSMC method is one of the most popular numerical tools. It performs satisfactorily in simulating hypersonic flows surrounding re-entry vehicles and micro-/nano- flows. However, the computational cost is expensive, especially when Kn → 0. Even for flows in the near-continuum regime, pure DSMC simulations require a number of computational efforts for most cases. Albeit several DSMC/NS hybrid methods are proposed to deal with this, those methods still suffer from the boundary treatment, which may cause nonphysical solutions. Filbet and Jin [1] proposed a framework of new numerical methods of Boltzmann equation, called asymptotic preserving schemes, whose computational costs are affordable as Kn → 0. Recently, Ren et al. [2] realized the AP schemes with Monte Carlo methods (AP-DSMC), which have better performance than counterpart methods. In this paper, AP-DSMC is applied in simulating nonequilibrium hypersonic flows. Several numerical results are computed and analyzed to study the efficiency and capability of capturing complicated flow characteristics.
DSMC Studies of the Richtmyer-Meshkov Instability
NASA Astrophysics Data System (ADS)
Gallis, M. A.; Koehler, T. P.; Torczynski, J. R.
2014-11-01
A new exascale-capable Direct Simulation Monte Carlo (DSMC) code, SPARTA, developed to be highly efficient on massively parallel computers, has extended the applicability of DSMC to challenging, transient three-dimensional problems in the continuum regime. Because DSMC inherently accounts for compressibility, viscosity, and diffusivity, it has the potential to improve the understanding of the mechanisms responsible for hydrodynamic instabilities. Here, the Richtmyer-Meshkov instability at the interface between two gases was studied parametrically using SPARTA. Simulations performed on Sequoia, an IBM Blue Gene/Q supercomputer at Lawrence Livermore National Laboratory, are used to investigate various Atwood numbers (0.33-0.94) and Mach numbers (1.2-12.0) for two-dimensional and three-dimensional perturbations. Comparisons with theoretical predictions demonstrate that DSMC accurately predicts the early-time growth of the instability. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
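Two quantities quoted in this abstract are worth pinning down: the Atwood number characterizing the density contrast at the interface, and the early-time growth rate, for which Richtmyer's impulsive model is the standard point of comparison (the paper's exact theoretical baseline is not stated here):

```python
# Hedged sketch of Richtmyer-Meshkov parameters. The impulsive model is a
# common reference, not necessarily the theory used in the paper above.
def atwood(rho_heavy, rho_light):
    """Atwood number A = (rho2 - rho1) / (rho2 + rho1)."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def impulsive_growth_rate(k, delta_u, a0, A):
    """Richtmyer's impulsive model: da/dt = k * delta_u * A * a0, with k the
    perturbation wavenumber, delta_u the interface velocity jump, and a0 and
    A taken post-shock. All values below are illustrative."""
    return k * delta_u * A * a0

A = atwood(5.0, 1.0)   # = 2/3, within the paper's 0.33-0.94 range
rate = impulsive_growth_rate(k=2.0, delta_u=100.0, a0=0.01, A=A)
```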
Implementation of a vibrationally linked chemical reaction model for DSMC
NASA Technical Reports Server (NTRS)
Carlson, A. B.; Bird, Graeme A.
1994-01-01
A new procedure closely linking dissociation and exchange reactions in air to the vibrational levels of the diatomic molecules has been implemented in both one- and two-dimensional versions of Direct Simulation Monte Carlo (DSMC) programs. The previous modeling of chemical reactions with DSMC was based on the continuum reaction rates for the various possible reactions. The new method is more closely related to the actual physics of dissociation and is more appropriate to the particle nature of DSMC. Two cases are presented: the relaxation to equilibrium of undissociated air initially at 10,000 K, and the axisymmetric calculation of shuttle forebody heating during reentry at 92.35 km and 7500 m/s. Although reaction rates are not used in determining the dissociations or exchange reactions, the new method produces rates which agree astonishingly well with the published rates derived from experiment. The results for gas properties and surface properties also agree well with the results produced by earlier DSMC models, equilibrium air calculations, and experiment.
Navier-Stokes Dynamics by a Discrete Boltzmann Model
NASA Technical Reports Server (NTRS)
Rubinstein, Robert
2010-01-01
This work investigates the possibility of particle-based algorithms for the Navier-Stokes equations and higher order continuum approximations of the Boltzmann equation; such algorithms would generalize the well-known Pullin scheme for the Euler equations. One such method is proposed in the context of a discrete velocity model of the Boltzmann equation. Preliminary results on shock structure are consistent with the expectation that the shock should be much broader than the near discontinuity predicted by the Pullin scheme, yet narrower than the prediction of the Boltzmann equation. We discuss the extension of this essentially deterministic method to a stochastic particle method that, like DSMC, samples the distribution function rather than resolving it completely.
Investigation of the DSMC Approach for Ion/neutral Species in Modeling Low Pressure Plasma Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng Hao; Li, Z.; Levin, D.
2011-05-20
Low pressure plasma reactors are important tools for ionized metal physical vapor deposition (IMPVD), a semiconductor plasma processing technology that is increasingly being applied to deposit Cu seed layers on semiconductor surfaces of trenches and vias with high aspect ratios (e.g., >5:1). A large fraction of ionized atoms produced by the IMPVD process leads to an anisotropic deposition flux towards the substrate, a feature which is critical for attaining a void-free and uniform fill. Modeling such devices is challenging due to their high plasma density and reactive environment combined with low gas pressure. A modular code developed by the Computational Optical and Discharge Physics Group, the Hybrid Plasma Equipment Model (HPEM), has been successfully applied to numerical investigations of IMPVD by modeling a hollow cathode magnetron (HCM) device. However, as the development of semiconductor devices progresses towards the lower pressure regime (e.g., <5 mTorr), the breakdown of the continuum assumption limits the application of the fluid model in HPEM and suggests the incorporation of a kinetic method, such as the direct simulation Monte Carlo (DSMC) method, in the plasma simulation. The DSMC method, which solves the Boltzmann transport equation, has been successfully applied in modeling micro-fluidic flows in MEMS devices with low Reynolds numbers, a feature shared with the HCM. The basic physical and chemical processes for ion/neutral species in plasmas have been modeled and implemented in DSMC, including ion motion due to the Lorentz force, electron impact reactions, charge exchange reactions, and charge recombination at the surface. The heating of neutrals due to collisions with ions and the heating of ions due to the electrostatic field are shown to be captured by the DSMC simulations. In this work, DSMC calculations were coupled with the modules from HPEM so that the plasma can be solved self-consistently. Differences in the Ar results, the dominant species in the reactor, produced by the coupled DSMC-HPEM simulation will be shown in comparison with the original HPEM results. The effects of the DSMC calculations for ion/neutral species on the HPEM plasma simulation will be further analyzed.
The direct simulation of acoustics on Earth, Mars, and Titan.
Hanford, Amanda D; Long, Lyle N
2009-02-01
With the recent success of the Huygens lander on Titan, a moon of Saturn, there has been renewed interest in further exploring the acoustic environments of the other planets in the solar system. The direct simulation Monte Carlo (DSMC) method is used here for modeling sound propagation in the atmospheres of Earth, Mars, and Titan at a variety of altitudes above the surface. DSMC is a particle method that describes gas dynamics through direct physical modeling of particle motions and collisions. The validity of DSMC for the entire range of Knudsen numbers (Kn), where Kn is defined as the mean free path divided by the wavelength, allows for the exploration of sound propagation in planetary environments for all values of Kn. DSMC results at a variety of altitudes on Earth, Mars, and Titan including the details of nonlinearity, absorption, dispersion, and molecular relaxation in gas mixtures are given for a wide range of Kn showing agreement with various continuum theories at low Kn and deviation from continuum theory at high Kn. Despite large computation time and memory requirements, DSMC is the method best suited to study high altitude effects or where continuum theory is not valid.
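The Knudsen number as defined in this abstract is the mean free path divided by the acoustic wavelength. A minimal sketch using a hard-sphere mean free path (the gas parameters below are illustrative, not the paper's planetary atmosphere models):

```python
import math

# Hedged sketch of the Knudsen number used above: Kn = mean free path /
# wavelength, with a hard-sphere mean free path lambda = kT/(sqrt(2) pi d^2 p).
# Gas parameters are illustrative air-like values, not the paper's models.
K_B = 1.380649e-23  # Boltzmann constant [J/K]

def mean_free_path(T, p, d):
    """Hard-sphere mean free path; T [K], p [Pa], molecular diameter d [m]."""
    return K_B * T / (math.sqrt(2.0) * math.pi * d * d * p)

def knudsen(lam, wavelength):
    return lam / wavelength

lam = mean_free_path(T=300.0, p=101325.0, d=3.7e-10)  # tens of nanometers
kn = knudsen(lam, wavelength=0.343)  # ~1 kHz sound at ~343 m/s
# at sea level Kn is tiny (continuum); at high altitude p drops and Kn grows
```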
Lattice Boltzmann accelerated direct simulation Monte Carlo for dilute gas flow simulations.
Di Staso, G; Clercx, H J H; Succi, S; Toschi, F
2016-11-13
Hybrid particle-continuum computational frameworks permit the simulation of gas flows by locally adjusting the resolution to the degree of non-equilibrium displayed by the flow in different regions of space and time. In this work, we present a new scheme that couples the direct simulation Monte Carlo (DSMC) with the lattice Boltzmann (LB) method in the limit of isothermal flows. The former handles strong non-equilibrium effects, as they typically occur in the vicinity of solid boundaries, whereas the latter is in charge of the bulk flow, where non-equilibrium can be dealt with perturbatively, i.e. according to Navier-Stokes hydrodynamics. The proposed concurrent multiscale method is applied to the dilute gas Couette flow, showing major computational gains when compared with the full DSMC scenarios. In addition, it is shown that the coupling with LB in the bulk flow can speed up the DSMC treatment of the Knudsen layer with respect to the full DSMC case. In other words, LB acts as a DSMC accelerator. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'.
Simulation of unsteady flows by the DSMC macroscopic chemistry method
NASA Astrophysics Data System (ADS)
Goldsworthy, Mark; Macrossan, Michael; Abdel-jawad, Madhat
2009-03-01
In the Direct Simulation Monte-Carlo (DSMC) method, a combination of statistical and deterministic procedures applied to a finite number of 'simulator' particles are used to model rarefied gas-kinetic processes. In the macroscopic chemistry method (MCM) for DSMC, chemical reactions are decoupled from the specific particle pairs selected for collisions. Information from all of the particles within a cell, not just those selected for collisions, is used to determine a reaction rate coefficient for that cell. Unlike collision-based methods, MCM can be used with any viscosity or non-reacting collision models and any non-reacting energy exchange models. It can be used to implement any reaction rate formulations, whether these be from experimental or theoretical studies. MCM has been previously validated for steady flow DSMC simulations. Here we show how MCM can be used to model chemical kinetics in DSMC simulations of unsteady flow. Results are compared with a collision-based chemistry procedure for two binary reactions in a 1-D unsteady shock-expansion tube simulation. Close agreement is demonstrated between the two methods for instantaneous, ensemble-averaged profiles of temperature, density and species mole fractions, as well as for the accumulated number of net reactions per cell.
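The core of the macroscopic chemistry method as described above is that a rate coefficient k(T) is evaluated from cell-averaged information and then converted into an expected number of reaction events for that cell and time step. A hedged toy of that bookkeeping (the rate parameters and numbers are illustrative, not MCM's actual inputs):

```python
import math

# Hedged toy of the MCM idea: evaluate an Arrhenius rate from a cell
# temperature, then convert it to expected reaction events per cell per step.
# All constants below are illustrative placeholders.
def arrhenius(T, A, n, Ea_over_k):
    """Modified Arrhenius rate coefficient k(T) = A * T^n * exp(-Ea/kT)."""
    return A * T**n * math.exp(-Ea_over_k / T)

def expected_reactions(k_T, n_a, n_b, volume, dt, f_num):
    """Bimolecular events in one cell: k(T) * n_A * n_B * V * dt real
    reactions, divided by f_num real molecules per simulator particle."""
    return k_T * n_a * n_b * volume * dt / f_num

k_T = arrhenius(T=8000.0, A=1e-14, n=0.5, Ea_over_k=59500.0)
n_events = expected_reactions(k_T, n_a=1e21, n_b=1e21, volume=1e-9,
                              dt=1e-8, f_num=1e12)
# fractional expectations would be handled stochastically in a real DSMC code
```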
NASA Astrophysics Data System (ADS)
Ivanov, M.; Zeitoun, D.; Vuillon, J.; Gimelshein, S.; Markelov, G.
1996-05-01
The problem of the transition of planar shock wave reflection over straight wedges in steady flows, from regular to Mach reflection and back, was studied numerically by the DSMC method for the Boltzmann equation and by a finite-difference method with an FCT algorithm for the Euler equations. It is shown that the transition from regular to Mach reflection takes place in accordance with the detachment criterion, while the opposite transition occurs at smaller angles. A hysteresis effect was observed with increasing and decreasing shock wave angle.
DSMC simulations of the Shuttle Plume Impingement Flight EXperiment (SPIFEX)
NASA Technical Reports Server (NTRS)
Stewart, Benedicte; Lumpkin, Forrest
2017-01-01
During orbital maneuvers and proximity operations, a spacecraft fires its thrusters, inducing plume impingement loads, heating and contamination to itself and to any other nearby spacecraft. These thruster firings are generally modeled using a combination of Computational Fluid Dynamics (CFD) and DSMC simulations. The Shuttle Plume Impingement Flight EXperiment (SPIFEX) produced data that can be compared to a high fidelity simulation. Due to the size of the Shuttle thrusters, this problem was too resource intensive to be solved with DSMC when the experiment flew in 1994.
DSMC simulation of the interaction between rarefied free jets
NASA Technical Reports Server (NTRS)
Dagum, Leonardo; Zhu, S. H. K.
1993-01-01
This paper presents a direct simulation Monte Carlo (DSMC) calculation of two interacting free jets exhausting into vacuum. The computed flow field is compared against available experimental data and shows excellent agreement everywhere except in the very near field (less than one orifice diameter downstream of the jet exhaust plane). The lack of agreement in this region is attributed to having assumed an inviscid boundary condition for the orifice lip. The results serve both to validate the DSMC code for a very complex, three dimensional non-equilibrium flow field, and to provide some insight as to the complicated nature of this flow.
A 3-D Coupled CFD-DSMC Solution Method With Application to the Mars Sample Return Orbiter
NASA Technical Reports Server (NTRS)
Glass, Christopher E.; Gnoffo, Peter A.
2000-01-01
A method to obtain coupled Computational Fluid Dynamics-Direct Simulation Monte Carlo (CFD-DSMC), 3-D flow field solutions for highly blunt bodies at low incidence is presented and applied to one concept of the Mars Sample Return Orbiter vehicle as a demonstration of the technique. CFD is used to solve the high-density blunt forebody flow defining an inflow boundary condition for a DSMC solution of the afterbody wake flow. By combining the two techniques in flow regions where most applicable, the entire mixed flow field is modeled in an appropriate manner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shevyrin, Alexander A.; Vashchenkov, Pavel V.; Bondar, Yevgeniy A.
An ionized flow around the RAM C-II vehicle in the range of altitudes from 73 to 81 km is studied by the Direct Simulation Monte Carlo (DSMC) method with three models of chemical reactions. It is demonstrated that vibration favoring in reactions of dissociation of neutral molecules affects significantly the predicted values of plasma density in the shock layer, and good agreement between the results of experiments and DSMC computations can be achieved in terms of the plasma density as a function of the flight altitude.
NASA Technical Reports Server (NTRS)
Prisbell, Andrew; Marichalar, J.; Lumpkin, F.; LeBeau, G.
2010-01-01
Plume impingement effects on the Orion Crew Service Module (CSM) were analyzed for various dual Reaction Control System (RCS) engine firings and various configurations of the solar arrays. The study was performed using a decoupled computational fluid dynamics (CFD) and Direct Simulation Monte Carlo (DSMC) approach. This approach included a single jet plume solution for the R1E RCS engine computed with the General Aerodynamic Simulation Program (GASP) CFD code. The CFD solution was used to create an inflow surface for the DSMC solution based on the Bird continuum breakdown parameter. The DSMC solution was then used to model the dual RCS plume impingement effects on the entire CSM geometry with deployed solar arrays. However, because the continuum breakdown parameter of 0.5 could not be achieved due to geometrical constraints, and because high resolution in the plume shock interaction region is desired, a focused DSMC simulation modeling only the plumes and the shock interaction region was performed. This high resolution intermediate solution was then used as the inflow to the larger DSMC solution to obtain plume impingement heating, forces, and moments on the CSM and the solar arrays for a total of 21 cases. The results of these simulations were used to populate the Orion CSM Aerothermal Database.
Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Wada, Takao
2014-07-01
A particle motion considering thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The difficulty of thermophoresis simulation is the computation time, which is proportional to the collision frequency: the time step interval becomes very small when the motion of a large particle is simulated. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, which represents the number of molecules colliding with the particle in one collision event. This weight factor permits a large time step interval, about a million times longer than the conventional DSMC time step when the particle size is 1 μm; the computation time is therefore reduced by about a factor of a million. We simulate graphite particle motion under thermophoretic force using DSMC-Neutrals (Particle-PLUS neutral module), commercial software adopting the DSMC method, with the above collision weight factor. The particle is a sphere of 1 μm diameter, and particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results; note that Gallis' analytical result in the continuum limit coincides with Waldmann's result.
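The collision weight factor described above is, in effect, the expected number of molecule-particle collisions lumped into one event. A hedged sketch of how that expectation can be estimated from kinetic theory (the gas numbers are illustrative, not the paper's conditions):

```python
import math

# Hedged sketch of the collision-weight-factor idea: for a micron-sized
# sphere, the kinetic-theory impingement rate gives the expected number of
# molecule hits per time step, which a single weighted collision event can
# represent. Gas parameters below are illustrative argon-like values.
K_B = 1.380649e-23  # Boltzmann constant [J/K]

def molecule_flux_on_sphere(n, T, m, radius):
    """Impingement rate on a sphere: (n <c> / 4) * 4 pi r^2, where
    <c> = sqrt(8 k T / (pi m)) is the mean thermal speed."""
    c_mean = math.sqrt(8.0 * K_B * T / (math.pi * m))
    return n * c_mean / 4.0 * 4.0 * math.pi * radius**2

nu = molecule_flux_on_sphere(n=3e21, T=300.0, m=6.63e-26, radius=0.5e-6)
weight = nu * 1e-6   # expected molecule hits in a 1-microsecond step
# many hits per step -> one weighted event replaces ~'weight' real collisions
```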
Application of a Modular Particle-Continuum Method to Partially Rarefied, Hypersonic Flow
NASA Astrophysics Data System (ADS)
Deschenes, Timothy R.; Boyd, Iain D.
2011-05-01
The Modular Particle-Continuum (MPC) method is used to simulate partially-rarefied, hypersonic flow over a sting-mounted planetary probe configuration. This hybrid method uses computational fluid dynamics (CFD) to solve the Navier-Stokes equations in regions that are continuum, while using direct simulation Monte Carlo (DSMC) in portions of the flow that are rarefied. The MPC method uses state-based coupling to pass information between the two flow solvers and decouples both time-step and mesh densities required by each solver. It is parallelized for distributed memory systems using dynamic domain decomposition and internal energy modes can be consistently modeled to be out of equilibrium with the translational mode in both solvers. The MPC results are compared to both full DSMC and CFD predictions and available experimental measurements. By using DSMC in only regions where the flow is nonequilibrium, the MPC method is able to reproduce full DSMC results down to the level of velocity and rotational energy probability density functions while requiring a fraction of the computational time.
Pressure measurements in a low-density nozzle plume for code verification
NASA Technical Reports Server (NTRS)
Penko, Paul F.; Boyd, Iain D.; Meissner, Dana L.; Dewitt, Kenneth J.
1991-01-01
Measurements of Pitot pressure were made in the exit plane and plume of a low-density, nitrogen nozzle flow. Two numerical computer codes were used to analyze the flow, including one based on continuum theory using the explicit MacCormack method, and the other on kinetic theory using the method of direct-simulation Monte Carlo (DSMC). The continuum analysis was carried to the nozzle exit plane and the results were compared to the measurements. The DSMC analysis was extended into the plume of the nozzle flow and the results were compared with measurements at the exit plane and axial stations 12, 24 and 36 mm into the near-field plume. Two experimental apparatus were used that differed in design and gave slightly different profiles of pressure measurements. The DSMC method compared well with the measurements from each apparatus at all axial stations and provided a more accurate prediction of the flow than the continuum method, verifying the validity of DSMC for such calculations.
State-to-state models of vibrational relaxation in Direct Simulation Monte Carlo (DSMC)
NASA Astrophysics Data System (ADS)
Oblapenko, G. P.; Kashkovsky, A. V.; Bondar, Ye A.
2017-02-01
In the present work, the application of state-to-state models of vibrational energy exchange to the Direct Simulation Monte Carlo (DSMC) method is considered. A state-to-state model for VT transitions of vibrational energy in nitrogen and oxygen, based on the application of the inverse Laplace transform to results of quasiclassical trajectory (QCT) calculations of vibrational energy transitions, along with the Forced Harmonic Oscillator (FHO) state-to-state model, is implemented in a DSMC code and applied to flows around blunt bodies. Comparisons are made with the widely used Larsen-Borgnakke model, and the influence of multi-quantum VT transitions is assessed.
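The Larsen-Borgnakke baseline mentioned above is usually applied in quantized form: a candidate post-collision vibrational level is drawn uniformly and accepted with a probability that depends on the collision energy and the VHS temperature exponent. A hedged sketch of that standard sampling (constants are illustrative; this is not the paper's FHO or inverse-Laplace model):

```python
import random

# Hedged sketch of quantized Larsen-Borgnakke sampling: candidate level i is
# uniform on [0, i_max], accepted with p = (1 - i*k*theta_v/E_c)^(3/2 - omega).
# E_c, theta_v (N2-like), and omega are illustrative values.
def sample_lb_level(E_c, theta_v, omega, rng):
    """E_c: collision energy [J]; theta_v: characteristic vibrational
    temperature [K]; omega: VHS temperature exponent. Returns a level
    i <= floor(E_c / (k * theta_v)) by acceptance-rejection."""
    k = 1.380649e-23
    i_max = int(E_c / (k * theta_v))
    while True:
        i = rng.randint(0, i_max)                       # candidate level
        p = (1.0 - i * k * theta_v / E_c) ** (1.5 - omega)
        if rng.random() < p:                            # i = 0 always accepted
            return i

rng = random.Random(42)
lvl = sample_lb_level(E_c=2e-19, theta_v=3371.0, omega=0.74, rng=rng)
```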
Analysis of Effectiveness of Phoenix Entry Reaction Control System
NASA Technical Reports Server (NTRS)
Dyakonov, Artem A.; Glass, Christopher E.; Desai, Prasun N.; VanNorman, John W.
2008-01-01
Interaction between the external flowfield and the reaction control system (RCS) thruster plumes of the Phoenix capsule during entry has been investigated. The analysis covered rarefied, transitional, hypersonic and supersonic flight regimes. Performance of pitch, yaw and roll control authority channels was evaluated, with specific emphasis on the yaw channel due to its low nominal yaw control authority. Because Phoenix had already been constructed and its RCS could not be modified before flight, an assessment of RCS efficacy along the trajectory was needed to determine possible issues and to make necessary software changes. Effectiveness of the system at various regimes was evaluated using a hybrid DSMC-CFD technique, based on DSMC Analysis Code (DAC) code and General Aerodynamic Simulation Program (GASP), the LAURA (Langley Aerothermal Upwind Relaxation Algorithm) code, and the FUN3D (Fully Unstructured 3D) code. Results of the analysis at hypersonic and supersonic conditions suggest a significant aero-RCS interference which reduced the efficacy of the thrusters and could likely produce control reversal. Very little aero-RCS interference was predicted in rarefied and transitional regimes. A recommendation was made to the project to widen controller system deadbands to minimize (if not eliminate) the use of RCS thrusters through hypersonic and supersonic flight regimes, where their performance would be uncertain.
NASA Astrophysics Data System (ADS)
Chen, Syuan-Yi; Gong, Sheng-Sian
2017-09-01
This study aims to develop an adaptive high-precision control system for controlling the speed of a vane-type air motor (VAM) pneumatic servo system. In practice, the rotor speed of a VAM depends on the input mass air flow, which can be controlled by the effective orifice area (EOA) of an electronic throttle valve (ETV). As the control variable of a second-order pneumatic system is the integral of the EOA, an observation-based adaptive dynamic sliding-mode control (ADSMC) system is proposed to derive the differential of the control variable, namely, the EOA control signal. In the ADSMC system, a proportional-integral-derivative fuzzy neural network (PIDFNN) observer is used to achieve an ideal dynamic sliding-mode control (DSMC), and a supervisor compensator is designed to eliminate the approximation error. As a result, the ADSMC incorporates the robustness of a DSMC and the online learning ability of a PIDFNN. To ensure the convergence of the tracking error, a Lyapunov-based analytical method is employed to obtain the adaptive algorithms required to tune the control parameters of the online ADSMC system. Finally, our experimental results demonstrate the precision and robustness of the ADSMC system for highly nonlinear and time-varying VAM pneumatic servo systems.
Evaluation of new collision-pair selection models in DSMC
NASA Astrophysics Data System (ADS)
Akhlaghi, Hassan; Roohi, Ehsan
2017-10-01
The current paper investigates new collision-pair selection procedures in the direct simulation Monte Carlo (DSMC) method. Collision-partner selection based on random selection from nearest-neighbor particles and deterministic selection of nearest-neighbor particles have already been introduced as schemes that provide accurate results in a wide range of problems. In the current research, new collision-pair selections based on the time spacing and the direction of the relative movement of particles are introduced and evaluated. Comparisons between the new and existing algorithms are made on appropriate test cases, including fluctuations in a homogeneous gas, 2D equilibrium flow, and the Fourier flow problem. Distribution functions for the number of particles and collisions in a cell, the velocity components, and the collisional parameters (collision separation, time spacing, relative velocity, and the angle between the relative movements of particles) are investigated and compared with existing analytical relations for each model. The capability of each model to predict the heat flux in the Fourier problem at different cell numbers, numbers of particles, and time steps is examined. For the new and existing collision-pair selection schemes, the effects of an alternative formula for the number of collision-pair selections and of avoiding repetitive collisions are investigated via the prediction of the Fourier heat flux. The simulation results demonstrate the advantages and weaknesses of each model in the different test cases.
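A minimal sketch of one such existing scheme, nearest-neighbor collision-partner selection within a cell (illustrative only; the paper's time-spacing and relative-direction criteria are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def select_collision_pairs(pos, n_pairs):
    """Illustrative nearest-neighbor collision-pair selection inside one DSMC
    cell. For each randomly chosen first particle, the partner is its nearest
    neighbor among the remaining unpaired particles, which reduces the mean
    collision separation relative to purely random pairing."""
    unpaired = list(range(len(pos)))
    pairs = []
    for _ in range(n_pairs):
        if len(unpaired) < 2:
            break
        i = unpaired.pop(int(rng.integers(len(unpaired))))  # random first particle
        d = np.linalg.norm(pos[unpaired] - pos[i], axis=1)  # distances to the rest
        j = unpaired.pop(int(np.argmin(d)))                 # nearest neighbor
        pairs.append((i, j))
    return pairs
```

Each particle participates in at most one pair per call, which is one simple way to avoid repetitive collisions.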
Mankodi, T K; Bhandarkar, U V; Puranik, B P
2017-08-28
A new ab initio based chemical model for a Direct Simulation Monte Carlo (DSMC) study, suitable for simulating rarefied flows with a high degree of non-equilibrium, is presented. To this end, Collision Induced Dissociation (CID) cross sections for N2 + N2 -> N2 + 2N are calculated and published using a global complete active space self-consistent field / complete active space second-order perturbation theory N4 potential energy surface and a quasi-classical trajectory algorithm for high-energy collisions (up to 30 eV). CID cross sections are calculated for only a selected set of ro-vibrational combinations of the two nitrogen molecules, and a fitting scheme based on spectroscopic weights is presented to interpolate the CID cross section for all possible ro-vibrational combinations. The new chemical model is validated by calculating equilibrium reaction rate coefficients, which compare well with existing shock-tube and computational results. High-enthalpy hypersonic nitrogen flows around a cylinder in the transition flow regime are simulated using DSMC to compare the predictions of the current ab initio based chemical model with the prevailing phenomenological model (the total collision energy model). The differences in the predictions are discussed.
NASA Astrophysics Data System (ADS)
Goldsworthy, M. J.
2012-10-01
One of the most useful tools for modelling rarefied hypersonic flows is the Direct Simulation Monte Carlo (DSMC) method. Simulator particle movement and collision calculations are combined with statistical procedures to model thermal non-equilibrium flow-fields described by the Boltzmann equation. The Macroscopic Chemistry Method for DSMC simulations was developed to simplify the inclusion of complex thermal non-equilibrium chemistry. The macroscopic approach uses statistical information which is calculated during the DSMC solution process in the modelling procedures. Here it is shown how inclusion of macroscopic information in models of chemical kinetics, electronic excitation, ionization, and radiation can enhance the capabilities of DSMC to model flow-fields where a range of physical processes occur. The approach is applied to the modelling of a 6.4 km/s nitrogen shock wave and results are compared with those from existing shock-tube experiments and continuum calculations. Reasonable agreement between the methods is obtained. The quality of the comparison is highly dependent on the set of vibrational relaxation and chemical kinetic parameters employed.
Effects of continuum breakdown on hypersonic aerothermodynamics for reacting flow
NASA Astrophysics Data System (ADS)
Holman, Timothy D.; Boyd, Iain D.
2011-02-01
This study investigates the effects of continuum breakdown on the surface aerothermodynamic properties (pressure, stress, and heat transfer rate) of a sphere in a Mach 25 flow of reacting air in regimes varying from continuum to a rarefied gas. Results are generated using both continuum [computational fluid dynamics (CFD)] and particle [direct simulation Monte Carlo (DSMC)] approaches. The DSMC method utilizes a chemistry model that calculates the backward rates from an equilibrium constant. A preferential dissociation model is modified in the CFD method to better compare with the vibrationally favored dissociation model that is utilized in the DSMC method. Tests of these models are performed to confirm their validity and to compare the chemistry models in both numerical methods. This study examines the effect of reacting air flow on continuum breakdown and the surface properties of the sphere. As the global Knudsen number increases, the amount of continuum breakdown in the flow and on the surface increases. This increase in continuum breakdown significantly affects the surface properties, causing an increase in the differences between CFD and DSMC. Explanations are provided for the trends observed.
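The continuum-breakdown indicator behind such hybrid CFD/DSMC comparisons is commonly the gradient-length-local Knudsen number; a 1-D sketch (the threshold 0.05 is the usual choice, and the field and mean-free-path values are illustrative):

```python
import numpy as np

def kn_gll(q, dx, mfp):
    """Gradient-length-local Knudsen number, Kn_GLL = (lambda/Q)|dQ/dx|,
    a common continuum-breakdown indicator (breakdown typically flagged
    where Kn_GLL > ~0.05). Sketch for a 1-D field: q is a positive flow
    quantity (e.g. density), dx the grid spacing, mfp the mean free path."""
    grad = np.gradient(q, dx)
    return mfp * np.abs(grad) / q

# usage: flag cells where the continuum (CFD) description is suspect
x = np.linspace(0.0, 1.0, 101)
rho = 1.0 + 0.5 * np.tanh((x - 0.5) / 0.02)   # steep shock-like profile
breakdown = kn_gll(rho, x[1] - x[0], mfp=5e-3) > 0.05
```

Cells where `breakdown` is true are the ones where CFD and DSMC surface predictions are expected to diverge.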
NASA Technical Reports Server (NTRS)
Borner, A.; Swaminathan-Gopalan, K.; Stephani, Kelly; Poovathingal, S.; Murray, V. J.; Minton, T. K.; Panerai, F.; Mansour, N. N.
2017-01-01
A collaborative effort between the University of Illinois at Urbana-Champaign (UIUC), NASA Ames Research Center (ARC), and Montana State University (MSU) succeeded in developing a new finite-rate carbon oxidation model from molecular beam scattering experiments on vitreous carbon (VC). We now aim to use the direct simulation Monte Carlo (DSMC) code SPARTA to apply the model to each fiber of the porous fibrous thermal protection system (TPS) material FiberForm (FF). The detailed micro-structure of FF was obtained from X-ray micro-tomography and then used in DSMC. Both experiments and simulations show that the CO/O product ratio increased at all temperatures from VC to FF. We postulate this is due to the larger number of collisions an O atom encounters inside the porous FF material compared to the flat surface of VC. For the simulations, we focused in particular on the lowest and highest temperatures studied experimentally, 1023 K and 1823 K, and found good agreement between the finite-rate DSMC simulations and the experiments.
Oxygen transport properties estimation by DSMC-CT simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruno, Domenico; Frezzotti, Aldo; Ghiroldi, Gian Pietro
Coupling DSMC simulations with classical trajectory calculations is emerging as a powerful tool to improve the predictive capabilities of computational rarefied gas dynamics. The considerable increase in computational effort noted in early applications of the method (Koura, 1997) can be compensated by running simulations on massively parallel computers. In particular, GPU acceleration has been found quite effective in reducing the computing time of DSMC-CT simulations (Ferrigni, 2012; Norman et al., 2013). The aim of the present work is to study rarefied oxygen flows by modeling binary collisions through an accurate potential energy surface obtained from molecular beam scattering (Aquilanti et al., 1999). The accuracy of the method is assessed by calculating molecular oxygen shear viscosity and heat conductivity following three different DSMC-CT simulation methods. In the first, transport properties are obtained from DSMC-CT simulations of the spontaneous fluctuations of an equilibrium state (Bruno et al., Phys. Fluids 23, 093104, 2011). In the second method, the collision trajectory calculation is incorporated into a Monte Carlo integration procedure to evaluate Taxman's expressions for the transport properties of polyatomic gases (Taxman, 1959). In the third, non-equilibrium zero- and one-dimensional rarefied gas dynamics simulations are adopted and the transport properties are computed from the non-equilibrium fluxes of momentum and energy. The three methods provide close values of the transport properties, with estimated statistical errors not exceeding 3%. The experimental values are slightly underestimated, the percentage deviation being, again, a few percent.
Particle kinetic simulation of high altitude hypervelocity flight
NASA Technical Reports Server (NTRS)
Haas, Brian L.
1993-01-01
In this grant period, the focus has been on enhancement and application of the direct simulation Monte Carlo (DSMC) particle method for computing hypersonic flows of re-entry vehicles. Enhancement efforts dealt with modeling gas-gas interactions for thermal non-equilibrium relaxation processes and gas-surface interactions for prediction of vehicle surface temperatures. Both are important for application to problems of engineering interest. The code was employed in a parametric study to improve future applications, and in simulations of aeropass maneuvers in support of the Magellan mission. Detailed comparisons between continuum models for internal energy relaxation and DSMC models reveal that several discrepancies exist. These include the definitions of relaxation parameters and the methodologies for implementing them in DSMC codes. These issues were clarified and all differences were rectified in a paper (Appendix A) submitted to Physics of Fluids A, featuring several key figures in the DSMC community as co-authors and B. Haas as first author. This material will be presented at the Fluid Dynamics meeting of the American Physical Society on November 21, 1993. The aerodynamics of space vehicles in highly rarefied flows are very sensitive to the vehicle surface temperatures. Rather than require prescribed temperature estimates for spacecraft, as is typically done in DSMC methods, a new technique was developed which couples the dynamic surface heat-transfer characteristics into the DSMC flow simulation code to compute surface temperatures directly. This model, when applied to thin planar bodies such as solar panels, was described in AIAA Paper No. 93-2765 (Appendix B) and was presented at the Thermophysics Conference in July 1993. The paper has been submitted to the Journal of Thermophysics and Heat Transfer. Application of the DSMC method to problems of practical interest requires a trade-off between solution accuracy and computational expense and limitations.
A parametric study was performed and reported in AIAA Paper No. 93-2806 (Appendix C) which assessed the accuracy penalties associated with simulations of varying grid resolution and flow domain size. The paper was also presented at the Thermophysics Conference and will be submitted to the journal shortly. Finally, the DSMC code was employed to assess the pitch, yaw, and roll aerodynamics of the Magellan spacecraft during entry into the Venus atmosphere at off-design attitudes. This work was in support of the Magellan aerobraking maneuver of May 25-Aug. 3, 1993. Furthermore, analysis of the roll characteristics of the configuration with canted solar panels was performed in support of the proposed 'Windmill' experiment. Results were reported in AIAA Paper No. 93-3676 (Appendix D) presented at the Atmospheric Flight Mechanics Conference in August 1993, and were submitted to Journal of Spacecraft and Rockets.
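The coupled surface-temperature technique described above amounts to an energy balance at the wall; a minimal radiative-equilibrium sketch (the emissivity and heat-load values are made up for illustration):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_equilibrium_temp(q_wall, emissivity=0.8):
    """Surface temperature at radiative equilibrium, q = eps*sigma*T^4:
    the kind of energy balance a coupled DSMC surface-temperature model
    enforces instead of assuming an isothermal wall (sketch only; a real
    model would also track conduction and thermal inertia of the panel)."""
    return (q_wall / (emissivity * SIGMA)) ** 0.25

# e.g. a 1 kW/m^2 rarefied heat load on a thin solar-panel-like surface
t_wall = radiative_equilibrium_temp(1000.0)
```

In a coupled simulation this balance would be re-evaluated each time step from the DSMC-sampled heat flux.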
Direct simulation Monte Carlo investigation of the Richtmyer-Meshkov instability
Gallis, Michail A.; Koehler, Timothy P.; Torczynski, John R.; ...
2015-08-14
The Rayleigh-Taylor instability (RTI) is investigated using the Direct Simulation Monte Carlo (DSMC) method of molecular gas dynamics. Here, fully resolved two-dimensional DSMC RTI simulations are performed to quantify the growth of flat and single-mode perturbed interfaces between two atmospheric-pressure monatomic gases as a function of the Atwood number and the gravitational acceleration. The DSMC simulations reproduce all qualitative features of the RTI and are in reasonable quantitative agreement with existing theoretical and empirical models in the linear, nonlinear, and self-similar regimes. At late times, the instability is seen to exhibit a self-similar behavior, in agreement with experimental observations. For the conditions simulated, diffusion can influence the initial instability growth significantly.
Shock-Wave/Boundary-Layer Interactions in Hypersonic Low Density Flows
NASA Technical Reports Server (NTRS)
Moss, James N.; Olejniczak, Joseph
2004-01-01
Results of numerical simulations of Mach 10 air flow over a hollow cylinder-flare and a double-cone are presented where viscous effects are significant. The flow phenomena include shock-shock and shock-boundary-layer interactions with accompanying flow separation, recirculation, and reattachment. The purpose of this study is to promote an understanding of the fundamental gas dynamics resulting from such complex interactions and to clarify the requirements for meaningful simulations of such flows when using the direct simulation Monte Carlo (DSMC) method. Particular emphasis is placed on the sensitivity of computed results to grid resolution. Comparisons of the DSMC results for the hollow cylinder-flare (30 deg.) configuration are made with the results of experimental measurements conducted in the ONERA R5Ch wind tunnel for heating, pressure, and the extent of separation. Agreement between computations and measurements for various quantities is good except for pressure. For the same flow conditions, the double-cone geometry (25 deg.-65 deg.) produces much stronger interactions, and these interactions are investigated numerically using both DSMC and Navier-Stokes codes. For the double-cone computations, a two-orders-of-magnitude variation in free-stream density (with Reynolds numbers from 247 to 24,719) is investigated using both computational methods. For this range of flow conditions, the computational results are in qualitative agreement for the extent of separation, with the DSMC method always predicting a smaller separation region. Results from the Navier-Stokes calculations suggest that the flow for the highest-density double-cone case may be unsteady; however, the DSMC solution does not show evidence of unsteadiness.
Particle/Continuum Hybrid Simulation in a Parallel Computing Environment
NASA Technical Reports Server (NTRS)
Baganoff, Donald
1996-01-01
The objective of this study was to modify an existing parallel particle code based on the direct simulation Monte Carlo (DSMC) method to include a Navier-Stokes (NS) calculation so that a hybrid solution could be developed. In carrying out this work, it was determined that the following five issues had to be addressed before extensive program development of a three dimensional capability was pursued: (1) find a set of one-sided kinetic fluxes that are fully compatible with the DSMC method, (2) develop a finite volume scheme to make use of these one-sided kinetic fluxes, (3) make use of the one-sided kinetic fluxes together with DSMC type boundary conditions at a material surface so that velocity slip and temperature slip arise naturally for near-continuum conditions, (4) find a suitable sampling scheme so that the values of the one-sided fluxes predicted by the NS solution at an interface between the two domains can be converted into the correct distribution of particles to be introduced into the DSMC domain, (5) carry out a suitable number of tests to confirm that the developed concepts are valid, individually and in concert for a hybrid scheme.
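Item (1), the one-sided kinetic fluxes, follows from half-range moments of a Maxwellian; a sketch of the standard one-sided number flux (the specific gas constant default is an assumption for air, and a full scheme would need the momentum and energy fluxes too):

```python
import math

def one_sided_number_flux(n, u, T, R=287.0):
    """One-sided (half-range) Maxwellian number flux across a plane: the
    quantity a kinetic-flux NS/DSMC interface must reproduce when converting
    continuum moments into particles crossing the boundary. Standard result:
        F = n*sqrt(2RT)/(2*sqrt(pi)) * [exp(-s^2) + sqrt(pi)*s*(1+erf(s))],
    with speed ratio s = u/sqrt(2RT); n is number density, u the bulk
    velocity normal to the plane, T temperature."""
    beta = math.sqrt(2.0 * R * T)
    s = u / beta
    return n * beta / (2.0 * math.sqrt(math.pi)) * (
        math.exp(-s * s) + math.sqrt(math.pi) * s * (1.0 + math.erf(s)))
```

At zero bulk velocity this reduces to the familiar effusion flux n*sqrt(RT/(2*pi)), a quick consistency check for any implementation.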
Molecular-Level Simulations of the Turbulent Taylor-Green Flow
NASA Astrophysics Data System (ADS)
Gallis, M. A.; Bitter, N. P.; Koehler, T. P.; Plimpton, S. J.; Torczynski, J. R.; Papadakis, G.
2017-11-01
The Direct Simulation Monte Carlo (DSMC) method, a statistical, molecular-level technique that provides accurate solutions to the Boltzmann equation, is applied to the turbulent Taylor-Green vortex flow. The goal of this work is to investigate whether DSMC can accurately simulate energy decay in a turbulent flow. If so, then simulating turbulent flows at the molecular level can provide new insights because the energy decay can be examined in detail from molecular to macroscopic length scales, thereby directly linking molecular relaxation processes to macroscopic transport processes. The DSMC simulations are performed on half a million cores of Sequoia, the 17 Pflop platform at Lawrence Livermore National Laboratory, and the kinetic-energy dissipation rate and the energy spectrum are computed directly from the molecular velocities. The DSMC simulations are found to reproduce the Kolmogorov -5/3 law and to agree with corresponding Navier-Stokes simulations obtained using a spectral method. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
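The spectral diagnostic used for such comparisons can be illustrated in 1-D (a simplified sketch, not the actual 3-D Taylor-Green post-processing):

```python
import numpy as np

def energy_spectrum_1d(u, L=2.0 * np.pi):
    """Kinetic-energy spectrum of a 1-D periodic velocity signal via FFT:
    the same kind of diagnostic (reduced to 1-D for brevity) used to check
    turbulence results against the Kolmogorov -5/3 scaling."""
    n = u.size
    uhat = np.fft.rfft(u) / n
    e_k = 0.5 * np.abs(uhat) ** 2
    e_k[1:] *= 2.0                 # fold in the negative wavenumbers
    if n % 2 == 0:
        e_k[-1] /= 2.0             # Nyquist bin has no conjugate partner
    k = np.arange(e_k.size) * 2.0 * np.pi / L
    return k, e_k
```

By Parseval's theorem the spectrum sums to the mean kinetic energy per unit mass, which makes a convenient sanity check before fitting any slope.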
Direct simulation Monte Carlo investigation of the Rayleigh-Taylor instability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallis, M. A.; Koehler, T. P.; Torczynski, J. R.
In this paper, the Rayleigh-Taylor instability (RTI) is investigated using the direct simulation Monte Carlo (DSMC) method of molecular gas dynamics. Here, fully resolved two-dimensional DSMC RTI simulations are performed to quantify the growth of flat and single-mode perturbed interfaces between two atmospheric-pressure monatomic gases as a function of the Atwood number and the gravitational acceleration. The DSMC simulations reproduce many qualitative features of the growth of the mixing layer and are in reasonable quantitative agreement with theoretical and empirical models in the linear, nonlinear, and self-similar regimes. In some of the simulations at late times, the instability enters the self-similar regime, in agreement with experimental observations. Finally, for the conditions simulated, diffusion can influence the initial instability growth significantly.
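The linear-regime theory such simulations are compared against is the classical inviscid RTI dispersion relation; a minimal sketch (diffusion, viscosity, and compressibility neglected):

```python
import math

def rti_linear_growth_rate(rho_heavy, rho_light, g, k):
    """Classical inviscid linear Rayleigh-Taylor growth rate,
    gamma = sqrt(A*g*k), with Atwood number
    A = (rho_heavy - rho_light)/(rho_heavy + rho_light):
    the benchmark for the linear regime of interface-growth simulations.
    g is the gravitational acceleration, k the perturbation wavenumber."""
    atwood = (rho_heavy - rho_light) / (rho_heavy + rho_light)
    return math.sqrt(atwood * g * k)
```

A small single-mode perturbation is then expected to grow like exp(gamma*t) until nonlinear saturation.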
DSMC Simulations of Disturbance Torque to ISS During Airlock Depressurization
NASA Technical Reports Server (NTRS)
Lumpkin, F. E., III; Stewart, B. S.
2015-01-01
The primary attitude control system on the International Space Station (ISS) is part of the United States On-orbit Segment (USOS) and uses Control Moment Gyroscopes (CMG). The secondary system is part of the Russian On-orbit Segment (RSOS) and uses a combination of gyroscopes and thrusters. Historically, events with significant disturbances, such as the airlock depressurizations associated with extra-vehicular activity (EVA), have been performed using the RSOS attitude control system. This avoids excessive propulsive "de-saturations" of the CMGs. However, transfer of attitude control is labor intensive and requires significant propellant. Predictions, employing NASA's DSMC Analysis Code (DAC), of the disturbance torque to the ISS for depressurization of the Pirs airlock on the RSOS will be presented [1]. These predictions were performed to assess the feasibility of using USOS control during these events. The ISS Pirs airlock is vented using a device known as a "T-vent", as shown in the inset in figure 1. By orienting two equal streams of gas in opposite directions, this device is intended to have no propulsive effect. However, disturbance force and torque to the ISS do occur due to plume impingement. The disturbance torque resulting from the Pirs depressurization during EVAs is estimated using a loosely coupled CFD/DSMC technique [2]. CFD is used to simulate the flow field in the nozzle and the near-field plume. DSMC is used to simulate the remaining flow field, using the CFD results to create an inflow boundary for the DSMC simulation. Due to the highly continuum nature of the flow field near the T-vent, two loosely coupled DSMC domains are employed. An 88.2 cubic meter inner domain contains the Pirs airlock and the T-vent. Inner-domain results are used to create an inflow boundary for an outer domain containing the remaining portions of the ISS.
Several orientations of the ISS solar arrays and radiators have been investigated to find cases that result in minimal disturbance torque. Figure 1 shows surface pressure contours on the ISS and a plane of number density contours for a particular case.
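Reducing a DSMC surface-pressure solution to a net disturbance torque is a surface integral over the impinged panels; a sketch over a discretized surface (pressure contribution only, shear omitted; all names are illustrative):

```python
import numpy as np

def impingement_torque(centroids, normals, areas, pressures, r_cg):
    """Net disturbance torque about the center of gravity from plume
    impingement, T = sum_i (r_i - r_cg) x F_i with F_i = -p_i * A_i * n_i
    (outward unit normals n_i, so pressure pushes into the surface).
    Inputs are per-facet arrays as a DSMC surface sampling would provide."""
    forces = -pressures[:, None] * areas[:, None] * normals
    return np.sum(np.cross(centroids - r_cg, forces), axis=0)
```

Sweeping this reduction over candidate solar-array and radiator orientations is one way to rank cases by disturbance torque, as done in the study above.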
NAS Experiences of Porting CM Fortran Codes to HPF on IBM SP2 and SGI Power Challenge
NASA Technical Reports Server (NTRS)
Saini, Subhash
1995-01-01
Current Connection Machine (CM) Fortran codes developed for the CM-2 and the CM-5 represent an important class of parallel applications. Several users have employed CM Fortran codes in production mode on the CM-2 and the CM-5 for the last five to six years, constituting a heavy investment in terms of cost and time. With Thinking Machines Corporation's decision to withdraw from the hardware business and with the decommissioning of many CM-2 and CM-5 machines, the best way to protect the substantial investment in CM Fortran codes is to port the codes to High Performance Fortran (HPF) on highly parallel systems. HPF is very similar to CM Fortran and thus represents a natural transition. Conversion issues involved in porting CM Fortran codes on the CM-5 to HPF are presented. In particular, the differences between the data distribution directives and the CM Fortran Utility Routines Library, as well as the equivalent functionality in the HPF Library, are discussed. Several CM Fortran codes (Cannon's algorithm for matrix-matrix multiplication, a linear solver for Ax=b, 1-D convolution on 2-D datasets, a Laplace's equation solver, and a direct simulation Monte Carlo (DSMC) code) have been ported to Subset HPF on the IBM SP2 and the SGI Power Challenge. Speedup ratios versus number of processors for the linear solver and the DSMC code are presented.
Direct Simulation Monte Carlo Simulations of Low Pressure Semiconductor Plasma Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gochberg, L. A.; Ozawa, T.; Deng, H.
2008-12-31
The two widely used plasma deposition tools for semiconductor processing are Ionized Metal Physical Vapor Deposition (IMPVD) of metals using either planar or hollow cathode magnetrons (HCM), and inductively coupled plasma (ICP) deposition of dielectrics in High Density Plasma Chemical Vapor Deposition (HDP-CVD) reactors. In these systems, the injected neutral gas flows are generally in the transonic to supersonic flow regime. The Hybrid Plasma Equipment Model (HPEM) has been developed and is strategically and beneficially applied to the design of these tools and their processes. For the most part, the model uses continuum-based techniques, and thus, as pressures decrease below 10 mTorr, the continuum approaches in the model become questionable. Modifications have previously been made to the HPEM to significantly improve its accuracy in this pressure regime. In particular, the Ion Monte Carlo Simulation (IMCS) was added, wherein a Monte Carlo simulation is used to obtain ion and neutral velocity distributions in much the same way as in direct simulation Monte Carlo (DSMC). As a further refinement, this work presents the first steps towards the adaptation of full DSMC calculations to replace part of the flow module within the HPEM. Six species (Ar, Cu, Ar*, Cu*, Ar+, and Cu+) are modeled in DSMC. To couple SMILE as a module to the HPEM, source functions for species, momentum, and energy from plasma sources will be provided by the HPEM. The DSMC module will then compute a quasi-converged flow field that will provide neutral and ion species densities, momenta, and temperatures. In this work, the HPEM results for a hollow cathode magnetron (HCM) IMPVD process using the Boltzmann distribution are compared with DSMC results using portions of those HPEM computations as an initial condition.
Development of a Detailed Surface Chemistry Framework in DSMC
NASA Technical Reports Server (NTRS)
Swaminathan-Gopalan, K.; Borner, A.; Stephani, K. A.
2017-01-01
Many of the current direct simulation Monte Carlo (DSMC) codes still employ only simple surface catalysis models. These include only basic mechanisms such as dissociation, recombination, and exchange reactions, without any provision for adsorption and finite-rate kinetics. Incorporating finite-rate chemistry at the surface is increasingly becoming a necessity for various applications such as high-speed re-entry flows over thermal protection systems (TPS), micro-electro-mechanical systems (MEMS), surface catalysis, etc. In recent years, relatively few works have examined finite-rate surface reaction modeling using the DSMC method. In this work, a generalized finite-rate surface chemistry framework incorporating a comprehensive list of reaction mechanisms is developed and implemented into the DSMC solver SPARTA. The various mechanisms include adsorption, desorption, Langmuir-Hinshelwood (LH), Eley-Rideal (ER), collision-induced (CI), condensation, sublimation, etc. The approach is to stochastically model the various competing reactions occurring on a set of active sites. Both gas-surface (e.g., ER, CI) and pure-surface (e.g., LH, desorption) reaction mechanisms are incorporated. The reaction mechanisms can also be catalytic or surface-altering based on the participation of bulk-phase species (e.g., bulk carbon atoms). Marschall and MacLean developed a general formulation in which multiple phases and surface sites are used, and we adopt a similar convention in the current work. Microscopic parameters of reaction probabilities (for gas-surface reactions) and frequencies (for pure-surface reactions) that are required for DSMC are computed from the surface properties and macroscopic parameters such as rate constants, sticking coefficients, etc. The energy and angular distributions of the products are determined by the reaction type and input parameters.
Thus, the user has the capability to model various surface reactions via user-specified reaction rate constants, surface properties and parameters.
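The conversion from macroscopic parameters to microscopic event probabilities can be sketched for two of the mechanisms above (an illustrative sketch of the general idea, not SPARTA's implementation; all parameter values are problem-specific):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def adsorption_probability(s0, coverage):
    """Per-collision adsorption probability on partially covered sites,
    p = S0*(1 - theta): a macroscopic sticking coefficient S0 becomes the
    microscopic probability that a DSMC particle hitting the wall sticks,
    scaled by the fraction of vacant active sites."""
    return s0 * (1.0 - coverage)

def desorption_probability(nu0, e_act, t_surf, dt):
    """Per-timestep desorption probability from an Arrhenius frequency
    nu = nu0*exp(-Ea/(kB*T_surf)): p = 1 - exp(-nu*dt), sampled once per
    adsorbed particle per DSMC step (pure-surface mechanism)."""
    nu = nu0 * math.exp(-e_act / (KB * t_surf))
    return 1.0 - math.exp(-nu * dt)
```

Each wall collision (or adsorbed particle, per step) then draws a uniform random number against these probabilities to decide which competing mechanism fires.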
Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp
The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.
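The Green-Kubo route to the viscosity coefficient can be sketched as an autocorrelation estimator over a recorded pressure-tensor time series (a generic sketch of the estimator, not the paper's code):

```python
import numpy as np

def green_kubo_viscosity(pxy, dt, volume, temperature, kb=1.380649e-23):
    """Shear viscosity from the Green-Kubo relation,
    eta = V/(kB*T) * integral_0^inf <P_xy(0) P_xy(t)> dt, estimated by
    autocorrelating a time series of the off-diagonal pressure tensor
    sampled from an equilibrium simulation. In practice the integral is
    truncated once the autocorrelation has decayed; here the full series
    is integrated for brevity."""
    n = pxy.size
    fluct = pxy - pxy.mean()
    # unbiased autocorrelation over all available lags
    acf = np.correlate(fluct, fluct, mode="full")[n - 1:] / np.arange(n, 0, -1)
    return volume / (kb * temperature) * np.sum(acf) * dt  # rectangle-rule integral
```

The same fluctuation-based machinery applies whether the underlying dynamics is classical DSMC or the U-U model equation, since only the sampled pressure tensor enters.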
Particle kinetic simulation of high altitude hypervelocity flight
NASA Technical Reports Server (NTRS)
Boyd, Iain; Haas, Brian L.
1994-01-01
Rarefied flows about hypersonic vehicles entering the upper atmosphere or through nozzles expanding into a near vacuum may only be simulated accurately with a direct simulation Monte Carlo (DSMC) method. Under this grant, researchers enhanced the models employed in the DSMC method and performed simulations in support of existing NASA projects or missions. DSMC models were developed and validated for simulating rotational, vibrational, and chemical relaxation in high-temperature flows, including effects of quantized anharmonic oscillators and temperature-dependent relaxation rates. State-of-the-art advancements were made in simulating coupled vibration-dissociation recombination for post-shock flows. Models were also developed to compute vehicle surface temperatures directly in the code rather than requiring isothermal estimates. These codes were instrumental in simulating aerobraking of NASA's Magellan spacecraft during orbital maneuvers to assess heat transfer and aerodynamic properties of the delicate satellite. NASA also depended upon simulations of entry of the Galileo probe into the atmosphere of Jupiter to provide drag and flow field information essential for accurate interpretation of an onboard experiment. Finally, the codes have been used extensively to simulate expanding nozzle flows in low-power thrusters in support of propulsion activities at NASA-Lewis. Detailed comparisons between continuum calculations and DSMC results helped to quantify the limitations of continuum CFD codes in rarefied applications.
New chemical-DSMC method in numerical simulation of axisymmetric rarefied reactive flow
NASA Astrophysics Data System (ADS)
Zakeri, Ramin; Kamali Moghadam, Ramin; Mani, Mahmoud
2017-04-01
The modified quantum kinetic (MQK) chemical reaction model introduced by Zakeri et al. is developed for applicable cases in axisymmetric reactive rarefied gas flows using the direct simulation Monte Carlo (DSMC) method. Although, the MQK chemical model uses some modifications in the quantum kinetic (QK) method, it also employs the general soft sphere collision model and Stockmayer potential function to properly select the collision pairs in the DSMC algorithm and capture both the attraction and repulsion intermolecular forces in rarefied gas flows. For assessment of the presented model in the simulation of more complex and applicable reacting flows, first, the air dissociation is studied in a single cell for equilibrium and non-equilibrium conditions. The MQK results agree well with the analytical and experimental data and they accurately predict the characteristics of the rarefied flowfield with chemical reaction. To investigate accuracy of the MQK chemical model in the simulation of the axisymmetric flow, air dissociation is also assessed in an axial hypersonic flow around two geometries, the sphere as a benchmark case and the blunt body (STS-2) as an applicable test case. The computed results including the transient, rotational and vibrational temperatures, species concentration in the stagnation line, and also the heat flux and pressure coefficient on the surface are compared with those of the other chemical methods like the QK and total collision energy (TCE) models and available analytical and experimental data. Generally, the MQK chemical model properly simulates the chemical reactions and predicts flowfield characteristics more accurate rather than the typical QK model. Although in some cases, results of the MQK approaches match with those of the TCE method, the main point is that the MQK does not need any experimental data or unrealistic assumption of specular boundary condition as used in the TCE method. 
Another advantage of the MQK model is a significant reduction in computational cost relative to the QK chemical model at the same accuracy, owing to its more appropriate collision model and the consequent decrease in the number of particle collisions.
DSMC Simulations of Hypersonic Flows and Comparison With Experiments
NASA Technical Reports Server (NTRS)
Moss, James N.; Bird, Graeme A.; Markelov, Gennady N.
2004-01-01
This paper presents computational results obtained with the direct simulation Monte Carlo (DSMC) method for several biconic test cases in which shock interactions and flow separation-reattachment are key features of the flow. Recent ground-based experiments have been performed for several biconic configurations, and surface heating rate and pressure measurements have been proposed for code validation studies. The present focus is to expand on the current validation activities for a relatively new DSMC code called DS2V, developed by Bird (the second author). Comparisons with experiments and other computations help clarify the agreement currently achieved between computations and experiments and identify the range of measurement variability of the proposed validation data when benchmarked against the current computations. For the test cases with significant vibrational nonequilibrium, the effect of vibrational energy surface accommodation on heating and other quantities is demonstrated.
Hypersonic simulations using open-source CFD and DSMC solvers
NASA Astrophysics Data System (ADS)
Casseau, V.; Scanlon, T. J.; John, B.; Emerson, D. R.; Brown, R. E.
2016-11-01
Hypersonic hybrid hydrodynamic-molecular gas flow solvers must satisfy the two essential requirements of any high-speed reacting code: physical accuracy and computational efficiency. The James Weir Fluids Laboratory at the University of Strathclyde is currently developing an open-source hybrid code that will eventually reconcile the direct simulation Monte Carlo method, making use of the OpenFOAM application called dsmcFoam, and the newly coded open-source two-temperature computational fluid dynamics solver named hy2Foam. In conjunction with employing the CVDV chemistry-vibration model in hy2Foam, novel use is made of the QK rates in a CFD solver. In this paper, further testing is performed, in particular with the CFD solver, to ensure its efficacy before considering more advanced test cases. The hy2Foam and dsmcFoam codes have been shown to compare reasonably well, thus providing a useful basis for other codes to compare against.
Comparisons of the Maxwell and CLL Gas/Surface Interaction Models Using DSMC
NASA Technical Reports Server (NTRS)
Hedahl, Marc O.
1995-01-01
Two contrasting models of gas-surface interactions are studied using the Direct Simulation Monte Carlo (DSMC) method. The DSMC calculations examine differences between the Maxwell and Cercignani-Lampis-Lord (CLL) models in the predicted aerodynamic forces and heat transfer for flat-plate configurations at freestream conditions corresponding to a 140 km orbit around Venus. The size of the flat plate is that of one of the solar panels on the Magellan spacecraft, and the freestream conditions are among those experienced during aerobraking maneuvers. Results are presented for both a single flat plate and a two-plate configuration as a function of angle of attack and gas-surface accommodation coefficients. The two-plate configuration is not representative of the Magellan geometry, but is studied to explore possible experiments that might be used to differentiate between the two gas-surface interaction models.
NASA Technical Reports Server (NTRS)
Liechty, Derek S.; Burt, Jonathan M.
2016-01-01
There are many flow fields that span a wide range of length scales, in which regions of both rarefied and continuum flow exist and neither direct simulation Monte Carlo (DSMC) nor computational fluid dynamics (CFD) provides the appropriate solution everywhere. Recently, a new viscous collision limited (VCL) DSMC technique was proposed to incorporate the effects of physical diffusion into collision limiter calculations, making the low Knudsen number regime, normally limited to CFD, more tractable for an all-particle technique. This original work was derived for a single-species gas. The current work extends the VCL-DSMC technique to gases with multiple species. Similar derivations were performed to equate numerical and physical transport coefficients; however, a more rigorous treatment of the mixture viscosity is applied. The original work's consideration of internal energy non-equilibrium is also extended in the current work to chemical non-equilibrium.
NLSE: Parameter-Based Inversion Algorithm
NASA Astrophysics Data System (ADS)
Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.
Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
NASA Technical Reports Server (NTRS)
Campbell, David; Wysong, Ingrid; Kaplan, Carolyn; Mott, David; Wadsworth, Dean; VanGilder, Douglas
2000-01-01
An AFRL/NRL team has recently been selected to develop a scalable, parallel, reacting, multidimensional (SUPREM) Direct Simulation Monte Carlo (DSMC) code for the DoD user community under the High Performance Computing Modernization Office (HPCMO) Common High Performance Computing Software Support Initiative (CHSSI). This paper will introduce the JANNAF Exhaust Plume community to this three-year development effort and present the overall goals, schedule, and current status of this new code.
Restricted Collision List method for faster Direct Simulation Monte-Carlo (DSMC) collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macrossan, Michael N., E-mail: m.macrossan@uq.edu.au
The 'Restricted Collision List' (RCL) method for speeding up the calculation of DSMC Variable Soft Sphere (VSS) collisions with Borgnakke-Larsen (BL) energy exchange is presented. The method considerably cuts down the number of random collision parameters that must be calculated (deflection and azimuthal angles, and the BL energy-exchange factors). A relatively short list of these parameters is generated, and the parameters required in any cell are selected from this list. The list is regenerated at intervals approximately equal to the smallest mean collision time in the flow, so the chance of any particle re-using the same collision parameters in two successive collisions is negligible. The results using this method are indistinguishable from those obtained with standard DSMC. The CPU time saving depends on how much of a DSMC calculation is devoted to collisions and how much to other tasks, such as moving particles and calculating particle interactions with flow boundaries. For one-dimensional calculations of flow in a tube, the new method saves 20% of the CPU time per collision for VSS scattering with no energy exchange. With RCL applied to rotational energy exchange, the saving can be greater; for small values of the rotational collision number, for which most collisions involve some rotational energy exchange, the CPU time may be reduced by 50% or more.
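The parameter-recycling idea behind the RCL method can be sketched as follows. This is a minimal illustration, not the paper's implementation: the list length, the VSS exponent, the uniform stand-in for the BL exchange factor, and all function names are assumptions for demonstration only.

```python
import math
import random

def make_collision_list(n, alpha=1.5):
    """Pre-generate a short list of random collision parameters.

    Each entry holds a deflection-angle cosine (VSS scattering with
    illustrative soft-sphere exponent alpha), an azimuthal angle, and a
    stand-in Borgnakke-Larsen energy-exchange factor.
    """
    entries = []
    for _ in range(n):
        cos_chi = 2.0 * random.random() ** (1.0 / alpha) - 1.0  # deflection
        eps = 2.0 * math.pi * random.random()                   # azimuthal angle
        bl_factor = random.random()                             # BL exchange stand-in
        entries.append((cos_chi, eps, bl_factor))
    return entries

# Collisions draw from the shared list instead of generating fresh random
# numbers; the list would be regenerated roughly once per smallest mean
# collision time (not shown) so parameter re-use is negligible.
collision_list = make_collision_list(1000)

def next_collision_params(counter):
    """Cycle through the restricted list of pre-computed parameters."""
    return collision_list[counter % len(collision_list)]

cos_chi, eps, bl_factor = next_collision_params(7)
```

The saving comes from replacing per-collision transcendental evaluations and random draws with a table lookup, at the cost of a small, periodically refreshed amount of storage.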
Predictive Modeling in Plasma Reactor and Process Design
NASA Technical Reports Server (NTRS)
Hash, D. B.; Bose, D.; Govindan, T. R.; Meyyappan, M.; Arnold, James O. (Technical Monitor)
1997-01-01
Research continues toward the improvement and increased understanding of high-density plasma tools. Such reactor systems are valued for their independent control of ion flux and energy, enabling high etch rates with low ion damage, and for their improved ion velocity anisotropy resulting from thin collisionless sheaths and low neutral pressures. Still, with the transition to 300 mm processing, achieving etch uniformity and high etch rates concurrently may be a formidable task for such large-diameter wafers, and computational modeling can play an important role in successful reactor and process design. The inductively coupled plasma (ICP) reactor is the focus of the present investigation, which attempts to understand the fundamental physical phenomena of such systems through computational modeling. Simulations will be presented using both computational fluid dynamics (CFD) techniques and the direct simulation Monte Carlo (DSMC) method for argon and chlorine discharges. ICP reactors generally operate at pressures on the order of 1 to 10 mTorr. At such low pressures, rarefaction can be significant to the degree that the constitutive relations used in typical CFD techniques become invalid and a particle simulation must be employed. This work will assess the extent to which CFD can be applied and evaluate the degree to which accuracy is lost in predicting the phenomenon of interest, i.e., etch rate. If the CFD approach is found reasonably accurate when benchmarked against DSMC and experimental results, it has the potential to serve as a design tool because of its rapid turnaround relative to DSMC. The continuum CFD simulation solves the governing equations for plasma flow using a finite-difference technique with an implicit Gauss-Seidel line relaxation method for time marching toward a converged solution. The equation set consists of mass conservation for each species, separate energy equations for the electrons and heavy species, and momentum equations for the gas.
The sheath is modeled by imposing the Bohm velocity to the ions near the walls. The DSMC method simulates each constituent of the gas as a separate species which would be analogous in CFD to employing separate species mass, momentum, and energy equations. All particles including electrons are moved and allowed to collide with one another with the stipulation that the electrons remain tied to the ions consistent with the concept of ambipolar diffusion. The velocities of the electrons are allowed to be modified during collisions and are not confined to a Maxwellian distribution. These benefits come at a price in terms of computational time and memory. The DSMC and CFD are made as consistent as possible by using similar chemistry and power deposition models. Although the comparison of CFD and DSMC is interesting, the main goal of this work is the increased understanding of high-density plasma flowfields that can then direct improvements in both techniques. This work is unique in the level of the physical models employed in both the DSMC and CFD for high-density plasma reactor applications. For example, the electrons are simulated in the present DSMC work which has not been done before for low temperature plasma processing problems. In the CFD approach, for the first time, the charged particle transport (discharge physics) has been self-consistently coupled to the gas flow and heat transfer.
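The sheath boundary condition mentioned above imposes the Bohm velocity, u_B = sqrt(k*T_e/m_i), on ions entering the sheath. A minimal sketch of that relation follows; the 3 eV electron temperature and the choice of argon ions are illustrative assumptions typical of ICP discharges, not values from this work.

```python
import math

K_B = 1.380649e-23                    # Boltzmann constant, J/K
M_AR = 39.948 * 1.66053906660e-27     # argon ion mass, kg (amu * kg/amu)

def bohm_speed(T_e_eV, m_i=M_AR):
    """Bohm velocity u_B = sqrt(k*T_e/m_i), with the electron
    temperature given in eV (1 eV corresponds to about 11604.5 K)."""
    T_e_K = T_e_eV * 11604.518
    return math.sqrt(K_B * T_e_K / m_i)

u = bohm_speed(3.0)   # a few eV is typical of low-pressure ICP discharges
```

For a 3 eV argon plasma this gives an ion speed of roughly 2.7 km/s at the sheath edge, which sets the ion flux boundary condition on the walls in the continuum model.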
Comparison of DSMC and CFD Solutions of Fire II Including Radiative Heating
NASA Technical Reports Server (NTRS)
Liechty, Derek S.; Johnston, Christopher O.; Lewis, Mark J.
2011-01-01
The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high-mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. These flows may also involve significant radiative heating. To prepare for these missions, NASA is developing the capability to simulate rarefied, ionized flows and to then calculate the resulting radiative heating to the vehicle's surface. In this study, the DSMC codes DAC and DS2V are used to obtain charge-neutral ionization solutions. NASA's direct simulation Monte Carlo code DAC is currently being updated to include the ability to simulate charge-neutral ionized flows, to take advantage of the recently introduced Quantum-Kinetic chemistry model, and to include electronic energy levels as an additional internal energy mode. The Fire II flight test is used in this study to assess these new capabilities. The 1634-second data point was chosen so that comparisons could be made with computational fluid dynamics (CFD) solutions. The Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid. It is shown that there can be considerable variability in the vibrational temperature inferred from DSMC solutions and that, given how radiative heating is computed, the electronic temperature is much better suited for radiative calculations. To include the radiative portion of heating, the flow-field solutions are post-processed by the non-equilibrium radiation code HARA. Acceptable agreement between CFD and DSMC flow-field solutions is demonstrated, and the progress of the updates to DAC, along with an appropriate radiative heating solution, is discussed. In addition, future plans to generate higher-fidelity radiative heat transfer solutions are discussed.
DSMC Simulations of Hypersonic Flows With Shock Interactions and Validation With Experiments
NASA Technical Reports Server (NTRS)
Moss, James N.; Bird, Graeme A.
2004-01-01
The capabilities of a relatively new direct simulation Monte Carlo (DSMC) code are examined for the problem of hypersonic laminar shock/shock and shock/boundary layer interactions, where boundary layer separation is an important feature of the flow. Flow about two model configurations is considered, where both configurations (a biconic and a hollow cylinder-flare) have recent published experimental measurements. The computations are made by using the DS2V code of Bird, a general two-dimensional/axisymmetric time accurate code that incorporates many of the advances in DSMC over the past decade. The current focus is on flows produced in ground-based facilities at Mach 12 and 16 test conditions with nitrogen as the test gas and the test models at zero incidence. Results presented highlight the sensitivity of the calculations to grid resolution, sensitivity to physical modeling parameters, and comparison with experimental measurements. Information is provided concerning the flow structure and surface results for the extent of separation, heating, pressure, and skin friction.
A continuum analysis of chemical nonequilibrium under hypersonic low-density flight conditions
NASA Technical Reports Server (NTRS)
Gupta, R. N.
1986-01-01
Results of employing the continuum model of the Navier-Stokes equations under low-density flight conditions are presented. These results are obtained with chemical nonequilibrium and multicomponent surface-slip boundary conditions. The conditions analyzed are those encountered by the nose region of the Space Shuttle Orbiter during reentry. A detailed comparison of the Navier-Stokes (NS) results is made with the viscous shock-layer (VSL) and direct simulation Monte Carlo (DSMC) predictions. With the inclusion of new surface-slip boundary conditions in the NS calculations, the surface heat transfer and other flowfield quantities adjacent to the surface compare favorably with the DSMC calculations from 75 km to 115 km in altitude. This suggests a much wider practical range of applicability for Navier-Stokes solutions than previously thought. This is appealing because the continuum (NS and VSL) methods are commonly used to solve fluid flow problems and are less demanding in computer resources than the noncontinuum (DSMC) methods.
Error estimation for CFD aeroheating prediction under rarefied flow condition
NASA Astrophysics Data System (ADS)
Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian
2014-12-01
Both direct simulation Monte Carlo (DSMC) and computational fluid dynamics (CFD) methods have become widely used for aerodynamic prediction as reentry vehicles experience different flow regimes during flight. The implementation of slip boundary conditions in the traditional CFD method under the Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into the transitional regime, with the benefit of much lower computational cost than DSMC simulation. Correspondingly, an increasing error arises in aeroheating calculations as the flow becomes more rarefied. To estimate the relative error of heat flux when applying this method to a rarefied flow in the transitional regime, a theoretical derivation is conducted and a dimensionless parameter ɛ is proposed by approximately analyzing the ratio of the second-order term to the first-order term in the heat flux expression of the Burnett equation. DSMC simulation of hypersonic flow over a cylinder in the transitional regime is performed to test the performance of the parameter ɛ, compared with two other parameters, Knρ and Ma·Knρ.
Measurement and analysis of a small nozzle plume in vacuum
NASA Technical Reports Server (NTRS)
Penko, P. F.; Boyd, I. D.; Meissner, D. L.; Dewitt, K. J.
1993-01-01
Pitot pressures and flow angles are measured in the plume of a nozzle flowing nitrogen and exhausting to a vacuum. Total pressures are measured with Pitot tubes sized for specific regions of the plume and flow angles measured with a conical probe. The measurement area for total pressure extends 480 mm (16 exit diameters) downstream of the nozzle exit plane and radially to 60 mm (1.9 exit diameters) off the plume axis. The measurement area for flow angle extends to 160 mm (5 exit diameters) downstream and radially to 60 mm. The measurements are compared to results from a numerical simulation of the flow that is based on kinetic theory and uses the direct-simulation Monte Carlo (DSMC) method. Comparisons of computed results from the DSMC method with measurements of flow angle display good agreement in the far-field of the plume and improve with increasing distance from the exit plane. Pitot pressures computed from the DSMC method are in reasonably good agreement with experimental results over the entire measurement area.
Comparison of a 3-D CFD-DSMC Solution Methodology With a Wind Tunnel Experiment
NASA Technical Reports Server (NTRS)
Glass, Christopher E.; Horvath, Thomas J.
2002-01-01
A solution method for problems that contain both continuum and rarefied flow regions is presented. The methodology is applied to flow about the 3-D Mars Sample Return Orbiter (MSRO) that has a highly compressed forebody flow, a shear layer where the flow separates from a forebody lip, and a low density wake. Because blunt body flow fields contain such disparate regions, employing a single numerical technique to solve the entire 3-D flow field is often impractical, or the technique does not apply. Direct simulation Monte Carlo (DSMC) could be employed to solve the entire flow field; however, the technique requires inordinate computational resources for continuum and near-continuum regions, and is best suited for the wake region. Computational fluid dynamics (CFD) will solve the high-density forebody flow, but continuum assumptions do not apply in the rarefied wake region. The CFD-DSMC approach presented herein may be a suitable way to obtain a higher fidelity solution.
Development of the ARISTOTLE webware for cloud-based rarefied gas flow modeling
NASA Astrophysics Data System (ADS)
Deschenes, Timothy R.; Grot, Jonathan; Cline, Jason A.
2016-11-01
Rarefied gas dynamics are important for a wide variety of applications, and improving the ability of general users to predict these gas flows will enable optimization of current processes and discovery of future ones. Despite this potential, most rarefied simulation software is designed by and for experts in the community, which has resulted in low adoption of the methods outside the immediate RGD community. This paper outlines an ongoing effort to create a rarefied gas dynamics simulation tool that can be used by a general audience. The tool leverages a direct simulation Monte Carlo (DSMC) library that is available to the entire community and a web-based simulation process that will enable all users to take advantage of high-performance computing capabilities. First, the DSMC library and simulation architecture are described. Then the DSMC library is used to predict a number of representative transient gas flows that are applicable to the rarefied gas dynamics community. The paper closes with a summary and future directions.
DSMC simulations of shock interactions about sharp double cones
NASA Astrophysics Data System (ADS)
Moss, James N.
2001-08-01
This paper presents the results of a numerical study of shock interactions resulting from Mach 10 flow about sharp double cones. Computations are made by using the direct simulation Monte Carlo (DSMC) method of Bird. The sensitivity and characteristics of the interactions are examined by varying flow conditions, model size, and configuration. The range of conditions investigated includes those for which experiments have been or will be performed in the ONERA R5Ch low-density wind tunnel and the Calspan-University of Buffalo Research Center (CUBRC) Large Energy National Shock (LENS) tunnel.
DSMC Simulations of Shock Interactions About Sharp Double Cones
NASA Technical Reports Server (NTRS)
Moss, James N.
2000-01-01
This paper presents the results of a numerical study of shock interactions resulting from Mach 10 flow about sharp double cones. Computations are made by using the direct simulation Monte Carlo (DSMC) method of Bird. The sensitivity and characteristics of the interactions are examined by varying flow conditions, model size, and configuration. The range of conditions investigated includes those for which experiments have been or will be performed in the ONERA R5Ch low-density wind tunnel and the Calspan-University of Buffalo Research Center (CUBRC) Large Energy National Shock (LENS) tunnel.
Effects of Chemistry on Blunt-Body Wake Structure
NASA Technical Reports Server (NTRS)
Dogra, Virendra K.; Moss, James N.; Wilmoth, Richard G.; Taylor, Jeff C.; Hassan, H. A.
1995-01-01
Results of a numerical study are presented for hypersonic low-density flow about a 70-deg blunt cone using direct simulation Monte Carlo (DSMC) and Navier-Stokes calculations. Particular emphasis is given to the effects of chemistry on the near-wake structure and on the surface quantities and the comparison of the DSMC results with the Navier-Stokes calculations. The flow conditions simulated are those experienced by a space vehicle at an altitude of 85 km and a velocity of 7 km/s during Earth entry. A steady vortex forms in the near wake for these freestream conditions for both chemically reactive and nonreactive air gas models. The size (axial length) of the vortex for the reactive air calculations is 25% larger than that of the nonreactive air calculations. The forebody surface quantities are less sensitive to the chemistry than the base surface quantities. The presence of the afterbody has no effect on the forebody flow structure or the surface quantities. The comparisons of DSMC and Navier-Stokes calculations show good agreement for the wake structure and the forebody surface quantities.
Second-Order Consensus in Multiagent Systems via Distributed Sliding Mode Control.
Yu, Wenwu; Wang, He; Cheng, Fei; Yu, Xinghuo; Wen, Guanghui
2016-11-22
In this paper, a new decoupled distributed sliding-mode control (DSMC) scheme is proposed for second-order consensus in multiagent systems, solving the long-standing open problem of sliding-mode control (SMC) design for coupled networked systems. A distributed full-order sliding-mode surface is designed, based on homogeneity with dilation, for reaching second-order consensus in multiagent systems, under which the sliding-mode states are decoupled. SMC is then applied to drive the decoupled sliding-mode states to their origin, the sliding-mode surface, in finite time. The states of the agents first reach the designed sliding-mode surface in finite time and then move along the surface to the second-order consensus state, also in finite time. The DSMC designed in this paper eliminates singularity problems and weakens chattering, both of which remain difficult issues in SMC systems. In addition, DSMC provides a general decoupling framework for designing SMC in networked multiagent systems. Simulations are presented to verify the theoretical results.
Experimental validation of a direct simulation by Monte Carlo molecular gas flow model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shufflebotham, P.K.; Bartel, T.J.; Berney, B.
1995-07-01
The Sandia direct simulation Monte Carlo (DSMC) molecular/transition gas flow simulation code has significant potential as a computer-aided design tool for the design of vacuum systems in low-pressure plasma processing equipment. The purpose of this work was to verify the accuracy of this code through direct comparison to experiment. To test the DSMC model, a fully instrumented, axisymmetric vacuum test cell was constructed, and spatially resolved pressure measurements were made in N2 at flows from 50 to 500 sccm. In a "blind" test, the DSMC code was used to model the experimental conditions directly, and the results were compared to the measurements. It was found that the model predicted all the experimental findings to a high degree of accuracy. Only one modeling issue was uncovered: the axisymmetric model showed localized low-pressure spots along the axis next to surfaces. Although this artifact did not significantly alter the accuracy of the results, it did add noise to the axial data. © 1995 American Vacuum Society.
Low-Density Nozzle Flow by the Direct Simulation Monte Carlo and Continuum Methods
NASA Technical Reports Server (NTRS)
Chung, Chang-Hong; Kim, Sku C.; Stubbs, Robert M.; Dewitt, Kenneth J.
1994-01-01
Two different approaches, the direct simulation Monte Carlo (DSMC) method based on molecular gasdynamics, and a finite-volume approximation of the Navier-Stokes equations, which are based on continuum gasdynamics, are employed in the analysis of a low-density gas flow in a small converging-diverging nozzle. The fluid experiences various kinds of flow regimes including continuum, slip, transition, and free-molecular. Results from the two numerical methods are compared with Rothe's experimental data, in which density and rotational temperature variations along the centerline and at various locations inside a low-density nozzle were measured by the electron-beam fluorescence technique. The continuum approach showed good agreement with the experimental data as far as density is concerned. The results from the DSMC method showed good agreement with the experimental data, both in the density and the rotational temperature. It is also shown that the simulation parameters, such as the gas/surface interaction model, the energy exchange model between rotational and translational modes, and the viscosity-temperature exponent, have substantial effects on the results of the DSMC method.
Direct simulation Monte Carlo method for gas flows in micro-channels with bends with added curvature
NASA Astrophysics Data System (ADS)
Tisovský, Tomáš; Vít, Tomáš
Gas flows in micro-channels are simulated using dsmcFoam, an open-source Direct Simulation Monte Carlo (DSMC) code for general application to rarefied gas flow, written within the framework of the open-source C++ toolbox OpenFOAM. The aim of this paper is to investigate the flow in a micro-channel bend with added curvature. Results are compared with flows in a channel without added curvature and in an equivalent straight channel. The effects of micro-channel bends were already thoroughly investigated by White et al.; the geometry proposed by White is also used here for reference.
Direct simulation of high-vorticity gas flows
NASA Technical Reports Server (NTRS)
Bird, G. A.
1987-01-01
The computational limitations associated with the molecular dynamics (MD) method and the direct simulation Monte Carlo (DSMC) method are reviewed in the context of the computation of dilute gas flows with high vorticity. It is concluded that the MD method is generally limited to the dense gas case in which the molecular diameter is one-tenth or more of the mean free path. It is shown that the cell size in DSMC calculations should be small in comparison with the mean free path, and that this may be facilitated by a new subcell procedure for the selection of collision partners.
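The subcell procedure referred to above selects collision partners that are close together, keeping the mean collision separation well below the mean free path even when the cell itself is coarse. A minimal one-dimensional sketch follows; the 4-subcell decomposition, the fallback to the whole cell, and all names are illustrative assumptions, not Bird's implementation.

```python
import random

def subcell_index(pos, cell_origin, cell_size, n_sub):
    """Map a 1-D particle position to a subcell index within its DSMC cell."""
    return min(int((pos - cell_origin) / cell_size * n_sub), n_sub - 1)

def pick_collision_partner(particles, i, cell_origin, cell_size, n_sub=4):
    """Prefer a partner from the same subcell as particle i, falling back
    to the whole cell when the subcell holds no other particle.  A
    production code would use 2-D/3-D subcell indices."""
    target = subcell_index(particles[i], cell_origin, cell_size, n_sub)
    same = [j for j in range(len(particles))
            if j != i and
            subcell_index(particles[j], cell_origin, cell_size, n_sub) == target]
    pool = same if same else [j for j in range(len(particles)) if j != i]
    return random.choice(pool)

# Particle 0 at 0.05 shares subcell 0 with particle 1 at 0.06, so the
# nearby particle is chosen rather than one across the cell.
parts = [0.05, 0.06, 0.55, 0.90]   # positions in a cell of size 1.0
j = pick_collision_partner(parts, 0, 0.0, 1.0)
```

Restricting partner selection this way reduces the artificial viscosity introduced by collisions between widely separated particles, without shrinking the sampling cells themselves.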
Simulations of Ground and Space-Based Oxygen Atom Experiments
NASA Technical Reports Server (NTRS)
Finchum, A. (Technical Monitor); Cline, J. A.; Minton, T. K.; Braunstein, M.
2003-01-01
A low-earth orbit (LEO) materials erosion scenario and the ground-based experiment designed to simulate it are compared using the direct-simulation Monte Carlo (DSMC) method. The DSMC model provides a detailed description of the interactions between the hyperthermal gas flow and a normally oriented flat plate for each case. We find that while the general characteristics of the LEO exposure are represented in the ground-based experiment, multi-collision effects can potentially alter the impact energy and directionality of the impinging molecules in the ground-based experiment. Multi-collision phenomena also affect downstream flux measurements.
Direct Simulation of Reentry Flows with Ionization
NASA Technical Reports Server (NTRS)
Carlson, Ann B.; Hassan, H. A.
1989-01-01
The Direct Simulation Monte Carlo (DSMC) method is applied in this paper to the study of rarefied, hypersonic, reentry flows. The assumptions and simplifications involved with the treatment of ionization, free electrons and the electric field are investigated. A new method is presented for the calculation of the electric field and handling of charged particles with DSMC. In addition, a two-step model for electron impact ionization is implemented. The flow field representing a 10 km/sec shock at an altitude of 65 km is calculated. The effects of the new modeling techniques on the calculation results are presented and discussed.
Search strategy in a complex and dynamic environment (the Indian Ocean case)
NASA Astrophysics Data System (ADS)
Loire, Sophie; Arbabi, Hassan; Clary, Patrick; Ivic, Stefan; Crnjaric-Zic, Nelida; Macesic, Senka; Crnkovic, Bojan; Mezic, Igor; UCSB Team; Rijeka Team
2014-11-01
The disappearance of Malaysia Airlines Flight 370 (MH370) in the early morning hours of 8 March 2014 exposed the disconcerting lack of efficient methods for identifying where and how to look for missing objects in a complex and dynamic environment. The search area for plane debris is a remote part of the Indian Ocean. Lawnmower-type searches have been unsuccessful so far. Lagrangian kinematics of mesoscale features are visible in hypergraph maps of the Indian Ocean surface currents. Without precise knowledge of the crash site, these maps give an estimate of the time evolution of any initial distribution of plane debris and permit the design of a search strategy. The Dynamic Spectral Multiscale Coverage search algorithm is modified to search a spatial distribution of targets that evolves with time following the dynamics of ocean surface currents. Trajectories are generated for multiple search agents such that their spatial coverage converges to the target distribution. Central to this DSMC algorithm is a metric for ergodicity.
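The role of the ergodicity metric can be illustrated with a simplified cell-based stand-in: a weighted squared distance between the agents' time-averaged visit distribution and the target distribution. The spectral algorithm uses weighted Fourier coefficients instead of cell fractions; this discrete version is an assumption for illustration only:

```python
def ergodicity_metric(visit_fractions, target_fractions, weights=None):
    """Simplified coverage/ergodicity metric: weighted squared distance
    between the fraction of time agents have spent in each spatial cell
    and the target probability mass of that cell. Zero means the
    trajectory's time averages match the target distribution.
    """
    if weights is None:
        weights = [1.0] * len(target_fractions)
    return sum(w * (c - mu) ** 2
               for w, c, mu in zip(weights, visit_fractions, target_fractions))
```

Driving agents so that this quantity decreases over time is, schematically, what makes their coverage converge to the evolving debris distribution.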
NASA Astrophysics Data System (ADS)
Hansen, K. C.; Fougere, N.; Bieler, A. M.; Altwegg, K.; Combi, M. R.; Gombosi, T. I.; Huang, Z.; Rubin, M.; Tenishev, V.; Toth, G.; Tzou, C. Y.
2015-12-01
We have previously published results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model and its characterization of the neutral coma of comet 67P/Churyumov-Gerasimenko through detailed comparison with data collected by the ROSINA/COPS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/COmet Pressure Sensor) instrument aboard the Rosetta spacecraft [Bieler, 2015]. Results from these DSMC models have been used to create an empirical model of the near-comet coma (<200 km) of comet 67P. The empirical model characterizes the neutral coma in a comet-centered, Sun-fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time, and declination. The model is a significant improvement over simpler empirical models, such as the Haser model. While the DSMC results are a more accurate representation of the coma at any given time, the advantage of a mean-state, empirical model is the ease and speed of use. One use of such an empirical model is in the calculation of a total cometary coma production rate from the ROSINA/COPS data. The COPS data are in situ measurements of gas density and velocity along the Rosetta spacecraft track. Converting the measured neutral density into a production rate requires knowledge of the neutral gas distribution in the coma. Our empirical model provides this information and therefore allows us to correct for the spacecraft location to calculate a production rate as a function of heliocentric distance. We will present the full empirical model as well as the calculated neutral production rate for the period of August 2014 - August 2015 (perihelion).
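For reference, the Haser model mentioned above reduces to a closed-form radial density profile: an inverse-square outflow attenuated by photodissociation. A minimal sketch follows; the parameter values in the comment are illustrative, not those of 67P:

```python
import math

def haser_density(r, Q, v, tau):
    """Haser-model number density [m^-3] at cometocentric distance r [m].

    Q: gas production rate [s^-1], v: radial outflow speed [m s^-1],
    tau: photodissociation lifetime [s]. For r << v*tau the profile is
    essentially the inverse-square law Q / (4*pi*v*r^2).
    """
    return Q / (4.0 * math.pi * v * r**2) * math.exp(-r / (v * tau))
```

The empirical model described in the abstract generalizes this kind of closed form by adding dependence on heliocentric distance, local time, and declination.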
Transient Macroscopic Chemistry in the DSMC Method
NASA Astrophysics Data System (ADS)
Goldsworthy, M. J.; Macrossan, M. N.; Abdel-Jawad, M.
2008-12-01
In the Direct Simulation Monte Carlo method, a combination of statistical and deterministic procedures applied to a finite number of `simulator' particles is used to model rarefied gas-kinetic processes. Traditionally, chemical reactions are modelled using information from specific colliding particle pairs. In the Macroscopic Chemistry Method (MCM), the reactions are decoupled from the specific particle pairs selected for collisions. Information from all of the particles within a cell is used to determine a reaction rate coefficient for that cell. MCM has previously been applied to steady-flow DSMC simulations. Here we show how MCM can be used to model chemical kinetics in DSMC simulations of unsteady flow. Results are compared with a collision-based chemistry procedure for two binary reactions in a 1-D unsteady shock-expansion tube simulation and during the unsteady development of 2-D flow through a cavity. For the shock tube simulation, close agreement is demonstrated between the two methods for instantaneous, ensemble-averaged profiles of temperature and species mole fractions. For the cavity flow, a high degree of thermal non-equilibrium is present and non-equilibrium reaction rate correction factors are employed in MCM. Very close agreement is demonstrated for ensemble-averaged mole fraction contours predicted by the particle and macroscopic methods at three different flow times. A comparison of the accumulated number of net reactions per cell shows that both methods compute identical numbers of reaction events. For the 2-D flow, MCM required CPU and memory resources similar to those of the particle chemistry method. The Macroscopic Chemistry Method is applicable to any general DSMC code using any viscosity or non-reacting collision models and any non-reacting energy exchange models. MCM can be used to implement any reaction rate formulation, whether derived from experimental or theoretical studies.
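The cell-based idea can be sketched as follows: a cell-averaged Arrhenius rate coefficient converts cell composition into an expected number of reaction events per time step, with probabilistic rounding so the long-run average is unbiased. All constants and names here are placeholders, not the paper's calibrated values:

```python
import math
import random

def mcm_reactions_in_cell(T, n_A, n_B, volume, dt, weight,
                          A=1.0e-16, n_exp=0.5, Ea_over_k=5.0e4):
    """Number of reaction *events* to perform in one cell this time step.

    T: cell temperature [K]; n_A, n_B: reactant number densities [m^-3];
    volume: cell volume [m^3]; dt: time step [s]; weight: real molecules
    per simulator particle. Illustrative Arrhenius parameters.
    """
    k = A * T**n_exp * math.exp(-Ea_over_k / T)      # rate coefficient [m^3/s]
    expected = k * n_A * n_B * volume * dt / weight  # simulator-particle events
    # Probabilistic rounding: realize floor or floor+1 so the mean is exact.
    n_events = int(expected)
    if random.random() < expected - n_events:
        n_events += 1
    return n_events
```

Decoupling the event count from individual collision pairs is what lets the same machinery drive both steady and unsteady simulations.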
Plume Impingement to the Lunar Surface: A Challenging Problem for DSMC
NASA Technical Reports Server (NTRS)
Lumpkin, Forrest; Marichalar, Jeremiah; Piplica, Anthony
2007-01-01
The President's Vision for Space Exploration calls for the return of human exploration of the Moon. The plans are ambitious and call for the creation of a lunar outpost. Lunar Landers will therefore be required to land near predeployed hardware, and the dust storm created by the Lunar Lander's plume impingement on the lunar surface presents a hazard. Knowledge of the number density, size distribution, and velocity of the grains in the dust cloud entrained into the flow is needed to develop mitigation strategies. An initial step toward acquiring such knowledge is simulating the associated plume impingement flow field. The following paper presents results from a loosely coupled continuum flow solver/Direct Simulation Monte Carlo (DSMC) technique for simulating the plume impingement of the Apollo Lunar Module on the lunar surface. These cases were chosen for initial study to allow for comparison with available Apollo video. The relatively high engine thrust and the desire to simulate interesting cases near touchdown result in flow that is almost entirely continuum. The DSMC region of the flow field was simulated using NASA's DSMC Analysis Code (DAC) and must begin upstream of the impingement shock for the loosely coupled technique to succeed. It was therefore impossible to achieve mean-free-path resolution with a reasonable number of molecules (say 100 million), as is shown. In order to mitigate accuracy and performance issues when using such large cells, advanced techniques such as collision limiting and nearest-neighbor collisions were employed. The final paper will assess the benefits and shortcomings of such techniques. In addition, the effects of plume orientation, plume altitude, and lunar topography, such as craters, on the flow field, the surface pressure distribution, and the surface shear stress distribution are presented.
NASA Astrophysics Data System (ADS)
Argha, Ahmadreza; Li, Li; Su, Steven W.
2017-04-01
This paper develops a novel stabilising sliding mode control for systems involving uncertainties as well as measurement data packet dropouts. In contrast to the existing literature, which designs the switching function using unavailable system states, a novel linear sliding function is constructed by employing only the available communicated system states for systems subject to measurement packet losses. This also makes it possible to build a novel switching component for discrete-time sliding mode control (DSMC) using only the available system states. Finally, using a numerical example, we evaluate the performance of the designed DSMC for networked systems.
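The basic mechanics of discrete-time sliding mode control (here the acronym DSMC refers to control, not the Monte Carlo method) can be sketched with a standard reaching law. This is a generic textbook construction, not the paper's networked/packet-loss design; the 2-state plant, gains, and reaching-law form (Gao's s_{k+1} = (1 - qT)s_k - epsT*sgn(s_k)) are all assumptions for illustration:

```python
def dsmc_step(x, A, B, C, q=5.0, eps=0.1, T=0.01):
    """One step of discrete-time sliding mode control on a 2-state plant
    x_{k+1} = A x_k + B u_k with sliding function s = C x.

    The control cancels the nominal drift of s and imposes the reaching
    law s_{k+1} = (1 - q*T)*s_k - eps*T*sign(s_k), driving s into a
    small quasi-sliding band around zero.
    """
    s = C[0] * x[0] + C[1] * x[1]
    CA = (C[0] * A[0][0] + C[1] * A[1][0],   # row vector C*A
          C[0] * A[0][1] + C[1] * A[1][1])
    CB = C[0] * B[0] + C[1] * B[1]           # scalar C*B (must be nonzero)
    sgn = (s > 0) - (s < 0)
    # Equivalent-control term plus switching term from the reaching law.
    u = -(CA[0] * x[0] + CA[1] * x[1] - (1 - q * T) * s + eps * T * sgn) / CB
    x_next = (A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
              A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u)
    return x_next, s
```

Iterating this step shrinks |s| geometrically until it chatters inside a band of width set by eps*T, the discrete-time analogue of an ideal sliding motion.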
NASA Technical Reports Server (NTRS)
Gupta, R. N.; Simmonds, A. L.
1986-01-01
Solutions of the Navier-Stokes equations with chemical nonequilibrium and multicomponent surface slip are presented along the stagnation streamline under low-density hypersonic flight conditions. The conditions analyzed are those encountered by the nose region of the Space Shuttle Orbiter during reentry. A detailed comparison of the Navier-Stokes (NS) results is made with the viscous shock-layer (VSL) and Direct Simulation Monte Carlo (DSMC) predictions. With the inclusion of surface-slip boundary conditions in the NS calculations, the surface heat transfer and other flow-field quantities adjacent to the surface compare favorably with the DSMC calculations from 75 km to 115 km in altitude. Therefore, the practical range of applicability of Navier-Stokes solutions is much wider than previously thought. This is appealing because the continuum (NS and VSL) methods are commonly used to solve fluid flow problems and are less demanding in terms of computer resources than the noncontinuum (DSMC) methods. The NS solutions agree well with the VSL results for altitudes less than 92 km. An assessment is made of the frozen-flow approximation employed in the VSL calculations.
Plume flowfield analysis of the shuttle primary Reaction Control System (RCS) rocket engine
NASA Technical Reports Server (NTRS)
Hueser, J. E.; Brock, F. J.
1990-01-01
A solution was generated for the physical properties of the Shuttle RCS 4000 N (900 lb) rocket engine exhaust plume flowfield. The modeled exhaust gas consists of the five most abundant molecular species: H2, N2, H2O, CO, and CO2. The solution is for a bare RCS engine firing into a vacuum; the only additional hardware surface in the flowfield is a cylinder (the engine mount) which coincides with the nozzle lip outer corner at X = 0, extends to the flowfield outer boundary at X = -137 m, and is coaxial with the negative symmetry axis. Continuum gas dynamic methods and the Direct Simulation Monte Carlo (DSMC) method were combined in an iterative procedure to produce a self-consistent solution. Continuum methods were used in the RCS nozzle and in the plume as far as the P = 0.03 breakdown contour; the DSMC method was used downstream of this continuum flow boundary. The DSMC flowfield extends beyond 100 m from the nozzle exit, and thus the solution includes the farfield flow properties; substantial information is also developed on lip flow dynamics, so results are presented for the flow properties in the vicinity of the nozzle lip as well.
NASA Astrophysics Data System (ADS)
Borges Sebastião, Israel; Kulakhmetov, Marat; Alexeenko, Alina
2017-01-01
This work evaluates high-fidelity vibrational-translational (VT) energy relaxation and dissociation models for pure O2 normal shockwave simulations with the direct simulation Monte Carlo (DSMC) method. The O2-O collisions are described using ab initio state-specific relaxation and dissociation models. The Macheret-Fridman (MF) dissociation model is adapted to the DSMC framework by modifying the standard implementation of the total collision energy (TCE) model. The O2-O2 dissociation is modeled with this TCE+MF approach, which is calibrated with O2-O ab initio data and experimental equilibrium dissociation rates. The O2-O2 vibrational relaxation is modeled via the Larsen-Borgnakke model, calibrated to experimental VT rates. All the present results are compared to experimental data and previous calculations available in the literature. It is found that, in general, the ab initio dissociation model is better than the TCE model at matching the shock experiments. Therefore, when available, efficient ab initio models are preferred over phenomenological models. We also show that the proposed TCE + MF formulation can be used to improve the standard TCE model results when ab initio data are not available or limited.
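The TCE idea referenced throughout — a reaction probability that is zero below the activation energy and grows with total collision energy — can be sketched schematically. The exponents and prefactor below are placeholders chosen only to exhibit the functional form, not the calibrated O2-O or O2-O2 parameters from this work:

```python
def tce_probability(Ec, Ea, zeta=2.0, omega=0.75, C=0.5):
    """Schematic total-collision-energy (TCE) reaction probability.

    Ec: total collision energy; Ea: activation energy (same units).
    Returns 0 below threshold, then a probability rising toward C as
    Ec greatly exceeds Ea. Exponent combines internal degrees of
    freedom (zeta) and the viscosity-law exponent (omega), mimicking
    the shape of the standard TCE expression.
    """
    if Ec <= Ea:
        return 0.0
    p = C * (1.0 - Ea / Ec) ** (zeta / 2.0 + 1.5 - omega)
    return min(p, 1.0)
```

The Macheret-Fridman adaptation described in the abstract replaces this purely phenomenological probability with one informed by ab initio state-specific data.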
Improved Regional Seismic Event Locations Using 3-D Velocity Models
1999-12-15
regional velocity model to estimate event hypocenters. Travel times for the regional phases are calculated using a sophisticated eikonal finite... can greatly improve estimates of event locations. Our algorithm calculates travel times using a finite difference approximation of the eikonal... such as IASP91 or J-B. 3-D velocity models require more sophisticated travel time modeling routines; thus, we use a 3-D eikonal equation solver
DSMC simulations of Mach 20 nitrogen flows about a 70 degree blunted cone and its wake
NASA Technical Reports Server (NTRS)
Moss, James N.; Dogra, Virendra K.; Wilmoth, Richard G.
1993-01-01
Numerical results obtained with the direct simulation Monte Carlo (DSMC) method are presented for Mach 20 nitrogen flow about a 70-deg blunted cone. The flow conditions simulated are those that can be obtained in existing low-density hypersonic wind tunnels. Three sets of flow conditions are simulated with freestream Knudsen numbers ranging from 0.03 to 0.001. The focus is to characterize the wake flow under rarefied conditions. This is accomplished by calculating the influence of rarefaction on wake structure along with the impact that an afterbody has on flow features. This data report presents extensive information concerning flowfield features and surface quantities.
Investigation of Thermal Stress Convection in Nonisothermal Gases under Microgravity Conditions
NASA Technical Reports Server (NTRS)
Mackowski, Daniel W.
1999-01-01
The project has sought to ascertain the veracity of the Burnett relations, as applied to slow moving, highly nonisothermal gases, by comparison of convection and stress predictions with those generated by the DSMC method. The Burnett equations were found to provide reasonable descriptions of the pressure distribution and normal stress in stationary gases with a 1-D temperature gradient. Continuum/Burnett predictions of thermal stress convection in 2-D heated enclosures, however, are not quantitatively supported by DSMC results. For such situations, it appears that thermal creep flows, generated at the boundaries of the enclosure, will be significantly larger than the flows resulting from thermal stress in the gas.
Hypersonic Shock Interactions About a 25 deg/65 deg Sharp Double Cone
NASA Technical Reports Server (NTRS)
Moss, James N.; LeBeau, Gerald J.; Glass, Christopher E.
2002-01-01
This paper presents the results of a numerical study of shock interactions resulting from Mach 10 air flow about a sharp double cone. Computations are made with the direct simulation Monte Carlo (DSMC) method by using two different codes: the G2 code of Bird and the DAC (DSMC Analysis Code) code of LeBeau. The flow conditions are the pretest nominal free-stream conditions specified for the ONERA R5Ch low-density wind tunnel. The focus is on the sensitivity of the interactions to grid resolution while providing information concerning the flow structure and surface results for the extent of separation, heating, pressure, and skin friction.
The solution of a model problem of the atmospheric entry of a small meteoroid
NASA Astrophysics Data System (ADS)
Zalogin, G. N.; Kusov, A. L.
2016-03-01
Direct simulation Monte Carlo modeling (DSMC) is used to solve the problem of the entry into the Earth's atmosphere of a small meteoroid. The main aspects of the physical theory of meteors, such as mass loss (ablation) and effects of aerodynamic and thermal shielding, are considered based on the numerical solution of the model problem of the atmospheric entry of an iron meteoroid. The DSMC makes it possible to obtain insight into the structure of the disturbed area around the meteoroid (coma) and trace its evolution depending on entry velocity and height (Knudsen number) in a transitional flow regime where calculation methods used for free molecular and continuum regimes are inapplicable.
DSMC Simulations of Apollo Capsule Aerodynamics for Hypersonic Rarefied Conditions
NASA Technical Reports Server (NTRS)
Moss, James N.; Glass, Christopher E.; Greene, Francis A.
2006-01-01
Direct simulation Monte Carlo (DSMC) simulations are performed for the Apollo capsule in the hypersonic, low-density transitional flow regime. The focus is on flow conditions similar to those experienced by the Apollo Command Module during the high-altitude portion of its reentry. Results for aerodynamic forces and moments are presented that demonstrate their sensitivity to rarefaction, that is, for free molecular to continuum conditions. Also, aerodynamic data are presented that show their sensitivity to a range of reentry velocities encompassing conditions that include reentry from low Earth orbit, lunar return, and Mars return velocities to km/s. The rarefied results are anchored in the continuum regime with data from Navier-Stokes simulations.
Molecular-level simulations of turbulence and its decay
Gallis, M. A.; Bitter, N. P.; Koehler, T. P.; ...
2017-02-08
Here, we provide the first demonstration that molecular-level methods based on gas kinetic theory and molecular chaos can simulate turbulence and its decay. The direct simulation Monte Carlo (DSMC) method, a molecular-level technique for simulating gas flows that resolves phenomena from molecular to hydrodynamic (continuum) length scales, is applied to simulate the Taylor-Green vortex flow. The DSMC simulations reproduce the Kolmogorov -5/3 law and agree well with the turbulent kinetic energy and energy dissipation rate obtained from direct numerical simulation of the Navier-Stokes equations using a spectral method. This agreement provides strong evidence that molecular-level methods for gases can be used to investigate turbulent flows quantitatively.
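Checking the Kolmogorov law on a computed spectrum amounts to fitting a slope in log-log coordinates and comparing it with -5/3. A minimal pure-Python sketch (the function and test data are illustrative, not the paper's spectra):

```python
import math

def fit_loglog_slope(ks, Es):
    """Least-squares slope of log E versus log k.

    For an energy spectrum obeying E(k) ~ k^(-5/3) in the inertial
    range, the fitted slope is -5/3.
    """
    xs = [math.log(k) for k in ks]
    ys = [math.log(E) for E in Es]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den
```

In practice the fit is restricted to the inertial-range wavenumbers, away from the forcing and dissipation scales.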
Numerical Simulations Of High-Altitude Aerothermodynamics Of A Prospective Spacecraft Model
NASA Astrophysics Data System (ADS)
Vashchenkov, P. V.; Kashkovsky, A. V.; Krylov, A. N.; Ivanov, M. S.
2011-05-01
The paper describes the computations of aerothermodynamic characteristics of a promising spacecraft (Prospective Piloted Transport System) along its descent trajectory at altitudes from 120 to 60 km. The computations are performed by the DSMC method with the use of the SMILE software system and by an engineering technique (the local bridging method) with the use of the RuSat software system. The influence of real gas effects (excitation of rotational and vibrational energy modes and chemical reactions) on the aerothermodynamic characteristics of the vehicle is studied. A comparison of results obtained by the approximate engineering method and the DSMC method allows the accuracy of prediction of aerodynamic characteristics by the local bridging method to be estimated.
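The local bridging idea is to blend an aerodynamic coefficient between its continuum and free-molecular limits as a function of Knudsen number. The sine-squared bridging function in log10(Kn) below is one common choice from the engineering literature; the regime limits (Kn = 1e-3 and 10) and the function itself are assumptions for illustration, not necessarily those used in RuSat:

```python
import math

def bridged_coefficient(Kn, C_cont, C_fm):
    """Bridged estimate of an aerodynamic coefficient at Knudsen number Kn.

    C_cont: continuum-limit value (Kn -> 0); C_fm: free-molecular-limit
    value (Kn -> inf). In between, a sine-squared weighting in
    log10(Kn) interpolates smoothly across the transitional regime.
    """
    if Kn <= 1e-3:
        return C_cont
    if Kn >= 10.0:
        return C_fm
    F = math.sin(math.pi * (3.0 + math.log10(Kn)) / 8.0) ** 2
    return C_cont + (C_fm - C_cont) * F
```

DSMC solutions at selected trajectory points serve as the reference against which such bridged estimates are judged.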
Comparisons of the Maxwell and CLL gas/surface interaction models using DSMC
NASA Technical Reports Server (NTRS)
Hedahl, Marc O.; Wilmoth, Richard G.
1995-01-01
The behavior of two different models of gas-surface interactions is studied using the Direct Simulation Monte Carlo (DSMC) method. The DSMC calculations examine differences in predictions of aerodynamic forces and heat transfer between the Maxwell and the Cercignani-Lampis-Lord (CLL) models for flat-plate configurations at freestream conditions corresponding to a 140 km orbit around Venus. The size of the flat plate represents one of the solar panels on the Magellan spacecraft, and the freestream conditions correspond to those experienced during aerobraking maneuvers. Results are presented for both a single flat plate and a two-plate configuration as a function of angle of attack and gas-surface accommodation coefficients. The two-plate system is not representative of the Magellan geometry but is studied to explore possible experiments that might be used to differentiate between the two gas-surface interaction models. The Maxwell and CLL models produce qualitatively similar results for the aerodynamic forces and heat transfer on a single flat plate. However, the flow fields produced with the two models are qualitatively different for both the single-plate and two-plate calculations. These differences in the flowfield lead to predictions of the angle of attack for maximum heat transfer in a two-plate configuration that are distinctly different for the two gas-surface interaction models.
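The Maxwell model itself is simple to state: with probability (1 - sigma) a molecule reflects specularly, otherwise it is re-emitted diffusely at the wall temperature. A minimal 2-D sketch follows, with the wall along y = 0 and inward normal +y; the geometry and function names are assumptions for illustration (the CLL model, by contrast, correlates incident and reflected velocities and is not shown):

```python
import math
import random

def maxwell_reflect(v, surface_temp, mass, sigma, k_B=1.380649e-23):
    """Maxwell gas-surface interaction for one molecule.

    v: incident (vx, vy) with vy < 0 (toward the wall); sigma:
    accommodation coefficient in [0, 1]. Specular reflection flips the
    normal component; diffuse re-emission samples a wall-temperature
    Maxwellian (Gaussian tangential, Rayleigh-distributed normal flux).
    """
    vx, vy = v
    if random.random() >= sigma:
        return (vx, -vy)            # specular: reverse normal component
    vt = math.sqrt(k_B * surface_temp / mass)   # thermal speed scale
    vx_new = random.gauss(0.0, vt)
    vy_new = vt * math.sqrt(-2.0 * math.log(1.0 - random.random()))
    return (vx_new, vy_new)
```

Setting sigma = 0 recovers a perfectly specular wall and sigma = 1 full accommodation, the two limits between which the aerodynamic predictions in the abstract vary.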
Consistent post-reaction vibrational energy redistribution in DSMC simulations using TCE model
NASA Astrophysics Data System (ADS)
Borges Sebastião, Israel; Alexeenko, Alina
2016-10-01
The direct simulation Monte Carlo (DSMC) method has been widely applied to study shockwaves, hypersonic reentry flows, and other nonequilibrium flow phenomena. Although there is currently active research on high-fidelity models based on ab initio data, the total collision energy (TCE) and Larsen-Borgnakke (LB) models remain the most often used chemistry and relaxation models in DSMC simulations, respectively. The conventional implementation of the discrete LB model, however, may not satisfy detailed balance when recombination and exchange reactions play an important role in the flow energy balance. This issue can become even more critical in reacting mixtures involving polyatomic molecules, such as in combustion. In this work, this important shortcoming is addressed and an empirical approach to consistently specify the post-reaction vibrational states close to thermochemical equilibrium conditions is proposed within the TCE framework. Following Bird's quantum-kinetic (QK) methodology for populating post-reaction states, the new TCE-based approach involves two main steps. The state-specific TCE reaction probabilities for a forward reaction are first pre-computed from equilibrium 0-D simulations. These probabilities are then employed to populate the post-reaction vibrational states of the corresponding reverse reaction. The new approach is illustrated by application to exchange and recombination reactions relevant to H2-O2 combustion processes.
Co-design of software and hardware to implement remote sensing algorithms
NASA Astrophysics Data System (ADS)
Theiler, James P.; Frigo, Janette R.; Gokhale, Maya; Szymanski, John J.
2002-01-01
Both for offline searches through large data archives and for onboard computation at the sensor head, there is a growing need for ever-more rapid processing of remote sensing data. For many algorithms of use in remote sensing, the bulk of the processing takes place in an ``inner loop'' with a large number of simple operations. For these algorithms, dramatic speedups can often be obtained with specialized hardware. The difficulty and expense of digital design continues to limit applicability of this approach, but the development of new design tools is making this approach more feasible, and some notable successes have been reported. On the other hand, it is often the case that processing can also be accelerated by adopting a more sophisticated algorithm design. Unfortunately, a more sophisticated algorithm is much harder to implement in hardware, so these approaches are often at odds with each other. With careful planning, however, it is sometimes possible to combine software and hardware design in such a way that each complements the other, and the final implementation achieves speedup that would not have been possible with a hardware-only or a software-only solution. We will in particular discuss the co-design of software and hardware to achieve substantial speedup of algorithms for multispectral image segmentation and for endmember identification.
Study of Plume Impingement Effects in the Lunar Lander Environment
NASA Technical Reports Server (NTRS)
Marichalar, Jeremiah; Prisbell, A.; Lumpkin, F.; LeBeau, G.
2010-01-01
Plume impingement effects from the descent and ascent engine firings of the Lunar Lander were analyzed in support of the Lunar Architecture Team under the Constellation Program. The descent stage analysis was performed to obtain shear and pressure forces on the lunar surface as well as velocity and density profiles in the flow field in an effort to understand lunar soil erosion and ejected soil impact damage which was analyzed as part of a separate study. A CFD/DSMC decoupled methodology was used with the Bird continuum breakdown parameter to distinguish the continuum flow from the rarefied flow. The ascent stage analysis was performed to ascertain the forces and moments acting on the Lunar Lander Ascent Module due to the firing of the main engine on take-off. The Reacting and Multiphase Program (RAMP) method of characteristics (MOC) code was used to model the continuum region of the nozzle plume, and the Direct Simulation Monte Carlo (DSMC) Analysis Code (DAC) was used to model the impingement results in the rarefied region. The ascent module (AM) was analyzed for various pitch and yaw rotations and for various heights in relation to the descent module (DM). For the ascent stage analysis, the plume inflow boundary was located near the nozzle exit plane in a region where the flow number density was large enough to make the DSMC solution computationally expensive. Therefore, a scaling coefficient was used to make the DSMC solution more computationally manageable. An analysis of the effectiveness of this scaling technique was performed by investigating various scaling parameters for a single height and rotation of the AM. Because the inflow boundary was near the nozzle exit plane, another analysis was performed investigating three different inflow contours to determine the effects of the flow expansion around the nozzle lip on the final plume impingement results.
Extension of a hybrid particle-continuum method for a mixture of chemical species
NASA Astrophysics Data System (ADS)
Verhoff, Ashley M.; Boyd, Iain D.
2012-11-01
Due to the physical accuracy and numerical efficiency achieved by analyzing transitional, hypersonic flow fields with hybrid particle-continuum methods, this paper describes a Modular Particle-Continuum (MPC) method and its extension to include multiple chemical species. Considerations that are specific to a hybrid approach for simulating gas mixtures are addressed, including a discussion of the Chapman-Enskog velocity distribution function (VDF) for near-equilibrium flows, and consistent viscosity models for the individual CFD and DSMC modules of the MPC method. Representative results for a hypersonic blunt-body flow are then presented, where the flow field properties, surface properties, and computational performance are compared for simulations employing full CFD, full DSMC, and the MPC method.
Poly-Gaussian model of randomly rough surface in rarefied gas flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aksenova, Olga A.; Khalidov, Iskander A.
2014-12-09
Surface roughness is simulated by a model of a non-Gaussian random process. Our results for the scattering of rarefied gas atoms from a rough surface, using a modified approach to the DSMC calculation of rarefied gas flow near a rough surface, are developed and generalized by applying the poly-Gaussian model, which represents the probability density as a mixture of Gaussian densities. The transformation of the scattering function due to the roughness is characterized by the roughness operator. Simulating the rough surface of the walls by a poly-Gaussian random field expressed as an integrated Wiener process, we derive a representation of the roughness operator that can be applied in numerical DSMC methods as well as in analytical investigations.
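Sampling from a poly-Gaussian (Gaussian-mixture) density is the basic operation such a roughness model needs: pick a component according to its weight, then draw from that Gaussian. A minimal sketch, with illustrative parameters rather than fitted roughness statistics:

```python
import random

def sample_poly_gaussian(weights, means, sigmas):
    """Draw one sample from a Gaussian mixture (poly-Gaussian density).

    weights must sum to 1; means/sigmas give each component's Gaussian.
    A component is chosen by inverse-CDF lookup on the weights, then
    sampled. Used here as a stand-in for a rough-surface height model.
    """
    u = random.random()
    acc = 0.0
    for w, m, s in zip(weights, means, sigmas):
        acc += w
        if u <= acc:
            return random.gauss(m, s)
    # Guard against floating-point underrun of the cumulative weights.
    return random.gauss(means[-1], sigmas[-1])
```

Mixtures of this kind can reproduce skewed or heavy-tailed height distributions that a single Gaussian cannot, which is the point of the poly-Gaussian generalization.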
MorphoHawk: Geometric-based Software for Manufacturing and More
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keith Arterburn
2001-04-01
Hollywood movies portray facial recognition as a perfected technology, but the reality is that sophisticated computers and algorithmic calculations are far from perfect. In fact, the most sophisticated and successful computer for recognizing faces and other imagery is still the human brain, with more than 10 billion nerve cells. Beginning at birth, humans process data and connect optical and sensory experiences, creating an unparalleled accumulation of data that lets people associate images with life experiences, emotions, and knowledge. Computers are powerful, rapid, and tireless, but still cannot compare to the highly sophisticated relational calculations and associations that the human computer can produce in connecting 'what we see with what we know.'
When Machines Think: Radiology's Next Frontier.
Dreyer, Keith J; Geis, J Raymond
2017-12-01
Artificial intelligence (AI), machine learning, and deep learning are terms now seen frequently, all of which refer to computer algorithms that change as they are exposed to more data. Many of these algorithms are surprisingly good at recognizing objects in images. The combination of large amounts of machine-consumable digital data, increased and cheaper computing power, and increasingly sophisticated statistical models combine to enable machines to find patterns in data in ways that are not only cost-effective but also potentially beyond humans' abilities. Building an AI algorithm can be surprisingly easy. Understanding the associated data structures and statistics, on the other hand, is often difficult and obscure. Converting the algorithm into a sophisticated product that works consistently in broad, general clinical use is complex and incompletely understood. To show how these AI products reduce costs and improve outcomes will require clinical translation and industrial-grade integration into routine workflow. Radiology has the chance to leverage AI to become a center of intelligently aggregated, quantitative, diagnostic information. Centaur radiologists, formed as a synergy of human plus computer, will provide interpretations using data extracted from images by humans and image-analysis computer algorithms, as well as the electronic health record, genomics, and other disparate sources. These interpretations will form the foundation of precision health care, or care customized to an individual patient. © RSNA, 2017.
Uniform rovibrational collisional N2 bin model for DSMC, with application to atmospheric entry flows
NASA Astrophysics Data System (ADS)
Torres, E.; Bondar, Ye. A.; Magin, T. E.
2016-11-01
A state-to-state model for internal energy exchange and molecular dissociation allows for high-fidelity DSMC simulations. Elementary reaction cross sections for the N2(v, J) + N system were previously extracted from a quantum-chemical database originally compiled at NASA Ames Research Center. Due to the high computational cost of simulating the full range of inelastic collision processes (approx. 23 million reactions), a coarse-grain model, called the Uniform RoVibrational Collisional (URVC) bin model, can be used instead. This reduces the original 9390 rovibrational levels of N2 to 10 energy bins. In the present work, this reduced model is used to simulate a 2D flow configuration, which more closely reproduces the conditions of high-speed entry into Earth's atmosphere. For this purpose, the URVC bin model had to be adapted for integration into the "Rarefied Gas Dynamics Analysis System" (RGDAS), a separate high-performance DSMC code capable of handling complex geometries and parallel computations. RGDAS was developed at the Institute of Theoretical and Applied Mechanics in Novosibirsk, Russia for use by the European Space Agency (ESA) and shares many features with the well-known SMILE code developed by the same group. We show that the reduced mechanism developed previously can be implemented in RGDAS, and the results exhibit nonequilibrium effects consistent with those observed in previous 1D simulations.
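The coarse-graining step can be illustrated schematically: map each internal level into one of a small number of equal-width energy intervals. This sketch shows only the uniform-energy binning idea behind reducing ~9390 levels to ~10 bins; the actual URVC model additionally defines bin-averaged cross sections and populations, which are not reproduced here:

```python
def assign_energy_bins(level_energies, n_bins):
    """Map each internal energy level to a uniform-width energy bin.

    Returns a list of bin indices (0 .. n_bins-1), one per level. The
    top edge of the range is folded into the last bin.
    """
    e_min, e_max = min(level_energies), max(level_energies)
    width = (e_max - e_min) / n_bins
    bins = []
    for E in level_energies:
        b = int((E - e_min) / width)
        bins.append(min(b, n_bins - 1))   # clamp the e_max edge case
    return bins
```

Collision outcomes are then tabulated per bin pair rather than per level pair, collapsing millions of elementary processes into a tractable set.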
Axisymmetric Plume Simulations with NASA's DSMC Analysis Code
NASA Technical Reports Server (NTRS)
Stewart, B. D.; Lumpkin, F. E., III
2012-01-01
A comparison of axisymmetric Direct Simulation Monte Carlo (DSMC) Analysis Code (DAC) results to analytic and Computational Fluid Dynamics (CFD) solutions in the near-continuum regime and to 3D DAC solutions in the rarefied regime for expansion plumes into a vacuum is performed to investigate the validity of the newest DAC axisymmetric implementation. This new implementation, based on the standard DSMC axisymmetric approach where the representative molecules are allowed to move in all three dimensions but are rotated back to the plane of symmetry by the end of the move step, has been fully integrated into the 3D-based DAC code and therefore retains all of DAC's features, such as being able to compute flow over complex geometries and to model chemistry. Axisymmetric DAC results for a spherically symmetric isentropic expansion are in very good agreement with a source-flow analytic solution in the continuum regime and show departure from equilibrium downstream of the estimated breakdown location. Axisymmetric density contours also compare favorably against CFD results for the R1E thruster, while temperature contours depart from equilibrium very rapidly away from the estimated breakdown surface. Finally, axisymmetric and 3D DAC results are in very good agreement over the entire plume region and, as expected, this new axisymmetric implementation shows a significant reduction in computer resources required to achieve accurate simulations for this problem over the 3D simulations.
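The rotate-back move step described above can be sketched directly: advance the particle in 3-D, then rotate it (and its velocity) back into the (r, z) symmetry plane. A minimal sketch of the standard technique, with assumed variable names:

```python
import math

def axisymmetric_move(r, z, vr, vt, vz, dt):
    """Standard DSMC axisymmetric move step.

    The particle starts in the symmetry plane at radius r, axial
    position z, with radial/azimuthal/axial velocities (vr, vt, vz).
    It moves freely in 3-D for dt, then is rotated back to the plane;
    the rotation redistributes vr and vt but conserves vr^2 + vt^2.
    """
    x = r + vr * dt            # in-plane radial advance
    y = vt * dt                # out-of-plane (azimuthal) advance
    z_new = z + vz * dt
    r_new = math.hypot(x, y)   # new radius after leaving the plane
    # Rotate velocity by the same angle so the particle lies in the plane.
    cos_t, sin_t = x / r_new, y / r_new
    vr_new = vr * cos_t + vt * sin_t
    vt_new = -vr * sin_t + vt * cos_t
    return r_new, z_new, vr_new, vt_new
```

Because each particle carries a full 3-D velocity, the axisymmetric code keeps the collision physics of the 3-D method while only tracking two position coordinates.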
Analysis of vibrational-translational energy transfer using the direct simulation Monte Carlo method
NASA Technical Reports Server (NTRS)
Boyd, Iain D.
1991-01-01
A new model is proposed for energy transfer between the vibrational and translational modes for use in the direct simulation Monte Carlo method (DSMC). The model modifies the Landau-Teller theory for a harmonic oscillator and the rate transition is related to an experimental correlation for the vibrational relaxation time. Assessment of the model is made with respect to three different computations: relaxation in a heat bath, a one-dimensional shock wave, and hypersonic flow over a two-dimensional wedge. These studies verify that the model achieves detailed balance, and excellent agreement with experimental data is obtained in the shock wave calculation. The wedge flow computation reveals that the usual phenomenological method for simulating vibrational nonequilibrium in the DSMC technique predicts much higher vibrational temperatures in the wake region.
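The heat-bath relaxation test mentioned above follows the Landau-Teller form dE_v/dt = (E_v^eq(T) - E_v)/tau. The sketch below integrates that equation for a harmonic oscillator; the characteristic temperature and relaxation time are illustrative stand-ins, not the paper's calibrated correlation, and the DSMC model itself replaces this deterministic ODE with per-collision transition probabilities.

```python
import math

def ev_eq(T, theta_v=3371.0):
    """Equilibrium vibrational energy per molecule for a harmonic oscillator,
    in temperature units (Kelvin); theta_v is a nitrogen-like value."""
    return theta_v / (math.exp(theta_v / T) - 1.0)

def relax(Ev0, T, tau=1.0e-5, dt=1.0e-7, steps=2000):
    """Explicit-Euler integration of dEv/dt = (Ev_eq(T) - Ev) / tau."""
    Ev = Ev0
    for _ in range(steps):
        Ev += dt * (ev_eq(T) - Ev) / tau
    return Ev

# Starting far from equilibrium, E_v relaxes to E_v_eq(T) over ~20 tau
Ev_final = relax(Ev0=0.0, T=5000.0)
print(Ev_final, ev_eq(5000.0))
```

A model that "achieves detailed balance", as the abstract puts it, must reproduce exactly this approach to the equilibrium value from either side without drift.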
DSMC Computations for Regions of Shock/Shock and Shock/Boundary Layer Interaction
NASA Technical Reports Server (NTRS)
Moss, James N.
2001-01-01
This paper presents the results of a numerical study of hypersonic interacting flows at flow conditions that include those for which experiments have been conducted in the Calspan-University of Buffalo Research Center (CUBRC) Large Energy National Shock (LENS) tunnel and the ONERA R5Ch low-density wind tunnel. The computations are made with the direct simulation Monte Carlo (DSMC) method of Bird. The focus is on Mach 9.3 to 11.4 flows about flared axisymmetric configurations, both hollow cylinder flares and double cones. The results presented highlight the sensitivity of the calculations to grid resolution, provide results concerning the conditions for incipient separation, and provide information concerning the flow structure and surface results for the extent of separation, heating, pressure, and skin friction.
Ocean Models and Proper Orthogonal Decomposition
NASA Astrophysics Data System (ADS)
Salas-de-Leon, D. A.
2007-05-01
Increasing computational power and a better understanding of the underlying mathematical and physical systems have produced a growing number of ocean models. Not so long ago, modelers were like a secret organization, recognizing each other through codes and languages that only a select group of people could understand. Access to computational systems was limited: on one hand, equipment and computer time were expensive and restricted; on the other, the advanced programming languages required were ones that not everybody wanted to learn. Nowadays most college freshmen own a personal computer (PC or laptop) and/or have access to more sophisticated computational systems than those available for research in the early 80's. This availability of resources has resulted in much broader access to all kinds of models. Today, computer speed, time, and algorithms no longer seem to be the problem, even though some models take days to run on small computational systems. Almost every oceanographic institution has its own model; what is more, within the same institution, neighboring offices may run different models for the same phenomena, developed by different research members. The results do not differ substantially, since the equations are the same and the solution algorithms are similar. The algorithms, and the grids constructed with them, can be found in textbooks and/or on the internet. Every year more sophisticated models are constructed. Proper Orthogonal Decomposition is a technique that reduces the number of variables to be solved while preserving the model's properties, which makes it a very useful tool for diminishing the processes that have to be solved on "small" computational systems, making sophisticated models available to a greater community.
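The reduction the abstract describes can be sketched with the snapshot form of POD: stack field snapshots as columns, take an SVD, and keep only the leading modes that capture most of the variance. The data below are synthetic (two coherent structures plus noise), purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0 * np.pi, 200)   # spatial grid
t = np.linspace(0.0, 1.0, 50)            # snapshot times
# Two coherent structures plus small noise
snapshots = (np.outer(np.sin(x), np.cos(2.0 * np.pi * t))
             + 0.5 * np.outer(np.sin(2.0 * x), np.sin(4.0 * np.pi * t))
             + 0.01 * rng.standard_normal((200, 50)))

# POD modes are the left singular vectors; singular values rank their energy
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99) + 1)   # modes needed for 99% of variance
reduced = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
err = np.linalg.norm(snapshots - reduced) / np.linalg.norm(snapshots)
print(r, err)  # a handful of modes suffice, with small reconstruction error
```

Projecting the governing equations onto those few modes is what turns a large model into one a "small" computational system can run.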
NASA Technical Reports Server (NTRS)
Johnson, Paul E.; Smith, Milton O.; Adams, John B.
1992-01-01
Algorithms were developed, based on Hapke's (1981) equations, for remote determinations of mineral abundances and particle sizes from reflectance spectra. In this method, spectra are modeled as a function of end-member abundances and illumination/viewing geometry. The method was tested on a laboratory data set. It is emphasized that, although there exist more sophisticated models, the present algorithms are particularly suited for remotely sensed data, where little opportunity exists to independently measure reflectance versus particle size and phase function.
Numerical Simulation of Transitional, Hypersonic Flows using a Hybrid Particle-Continuum Method
NASA Astrophysics Data System (ADS)
Verhoff, Ashley Marie
Analysis of hypersonic flows requires consideration of multiscale phenomena due to the range of flight regimes encountered, from rarefied conditions in the upper atmosphere to fully continuum flow at low altitudes. At transitional Knudsen numbers there are likely to be localized regions of strong thermodynamic nonequilibrium effects that invalidate the continuum assumptions of the Navier-Stokes equations. Accurate simulation of these regions, which include shock waves, boundary and shear layers, and low-density wakes, requires a kinetic theory-based approach where no prior assumptions are made regarding the molecular distribution function. Because of the nature of these types of flows, there is much to be gained in terms of both numerical efficiency and physical accuracy by developing hybrid particle-continuum simulation approaches. The focus of the present research effort is the continued development of the Modular Particle-Continuum (MPC) method, where the Navier-Stokes equations are solved numerically using computational fluid dynamics (CFD) techniques in regions of the flow field where continuum assumptions are valid, and the direct simulation Monte Carlo (DSMC) method is used where strong thermodynamic nonequilibrium effects are present. Numerical solutions of transitional, hypersonic flows are thus obtained with increased physical accuracy relative to CFD alone, and improved numerical efficiency is achieved in comparison to DSMC alone because this more computationally expensive method is restricted to those regions of the flow field where it is necessary to maintain physical accuracy. In this dissertation, a comprehensive assessment of the physical accuracy of the MPC method is performed, leading to the implementation of a non-vacuum supersonic outflow boundary condition in particle domains, and more consistent initialization of DSMC simulator particles along hybrid interfaces. 
The relative errors between MPC and full DSMC results are greatly reduced as a direct result of these improvements. Next, a new parameter for detecting rotational nonequilibrium effects is proposed and shown to offer advantages over other continuum breakdown parameters, achieving further accuracy gains. Lastly, the capabilities of the MPC method are extended to accommodate multiple chemical species in rotational nonequilibrium, each of which is allowed to equilibrate independently, enabling application of the MPC method to more realistic atmospheric flows.
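A continuum breakdown detector of the kind used in hybrid particle-continuum methods can be sketched as follows. The gradient-length-local Knudsen number with a 0.05 cutoff is a commonly quoted choice in the hybrid-method literature, not necessarily the exact parameter set of the MPC method, and the shock-like profile is synthetic.

```python
import numpy as np

def breakdown_flags(x, Q, mfp, cutoff=0.05):
    """Flag cells where Kn_GL = mfp * |dQ/dx| / Q exceeds the cutoff,
    marking them as candidates for DSMC rather than CFD."""
    dQdx = np.gradient(Q, x)
    kn_gl = mfp * np.abs(dQdx) / Q
    return kn_gl > cutoff, kn_gl

# Synthetic normal-shock-like temperature profile: steep jump near x = 0
x = np.linspace(-0.05, 0.05, 401)                      # m
T = 300.0 + 2700.0 * 0.5 * (1.0 + np.tanh(x / 0.002))  # K
flag, kn = breakdown_flags(x, T, mfp=1.0e-3)
print(int(flag.sum()), bool(flag[0]), bool(flag[-1]))  # interior flagged, far field not
```

In a hybrid solver, the flagged region (plus a buffer) would be handed to DSMC while the smooth upstream and downstream states stay with the Navier-Stokes solver.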
Conservative bin-to-bin fractional collisions
NASA Astrophysics Data System (ADS)
Martin, Robert
2016-11-01
Particle methods such as direct simulation Monte Carlo (DSMC) and particle-in-cell (PIC) are commonly used to model rarefied kinetic flows for engineering applications because of their ability to efficiently capture non-equilibrium behavior. The primary drawback of these methods is their poor convergence, a consequence of their stochastic nature, which typically requires heavy time averaging to compensate for poor signal-to-noise ratios. In standard implementations, each computational particle represents many physical particles, which further exacerbates statistical noise problems for flows with large species density variations, such as those encountered in flow expansions and chemical reactions. The stochastic weighted particle method (SWPM) introduced by Rjasanow and Wagner overcomes this difficulty by allowing the ratio of real to computational particles to vary on a per-particle basis throughout the flow. The DSMC procedure must also be slightly modified to properly sample the Boltzmann collision integral, accounting for the variable particle weights and avoiding the creation of additional particles with negative weight. In this work, the SWPM, with the modifications necessary to incorporate the variable hard sphere (VHS) collision cross-section model commonly used in engineering applications, is first incorporated into an existing engineering code, the Thermophysics Universal Research Framework. The results and computational efficiency are compared on a few simple test cases against a standard validated implementation of the DSMC method, with the adapted SWPM/VHS collisions using an octree-based conservative phase-space reconstruction. The SWPM is then further extended to combine the collision and phase-space reconstruction into a single step, which avoids creating additional computational particles only to destroy them again during the particle merge. This is particularly helpful when oversampling the collision integral relative to the standard DSMC method. It is found, however, that the more frequent phase-space reconstructions can cause added numerical thermalization at low particle-per-cell counts due to the coarseness of the octree used. Nevertheless, the methods are expected to be of much greater utility for transient expansion flows and chemical reactions in the future.
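The central difficulty with variable-weight particles shows up already in the simplest operation, merging two particles into one. The naive merge below conserves total weight and momentum but not kinetic energy; the energy defect is computed explicitly to make that limitation visible. A conservative phase-space reconstruction like the octree approach described above exists precisely to control such defects; this sketch is illustrative, not the SWPM algorithm itself.

```python
import numpy as np

def merge_pair(w1, v1, w2, v2):
    """Merge two weighted particles into one, conserving weight and momentum.
    Returns the merged weight, velocity, and the kinetic-energy defect."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    w = w1 + w2
    v = (w1 * v1 + w2 * v2) / w          # momentum-preserving mean velocity
    e_before = 0.5 * (w1 * v1 @ v1 + w2 * v2 @ v2)
    e_after = 0.5 * w * (v @ v)
    return w, v, e_before - e_after      # defect >= 0 (lost thermal energy)

w, v, defect = merge_pair(2.0, [1.0, 0.0, 0.0], 1.0, [-1.0, 0.0, 0.0])
print(w, v, defect)  # weight 3, velocity 1/3 along x, positive energy defect
```

Repeated naive merges systematically cool the gas, which is one way to understand the "numerical thermalization" the abstract reports when reconstructions are too frequent or too coarse.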
NASA Astrophysics Data System (ADS)
Gallis, M. A.; Torczynski, J. R.
2011-03-01
The ellipsoidal-statistical Bhatnagar-Gross-Krook (ES-BGK) kinetic model is investigated for steady gas-phase transport of heat, tangential momentum, and mass between parallel walls (i.e., Fourier, Couette, and Fickian flows). This investigation extends the original study of Cercignani and Tironi, who first applied the ES-BGK model to heat transport (i.e., Fourier flow) shortly after this model was proposed by Holway. The ES-BGK model is implemented in a molecular-gas-dynamics code so that results from this model can be compared directly to results from the full Boltzmann collision term, as computed by the same code with the direct simulation Monte Carlo (DSMC) algorithm of Bird. A gas of monatomic molecules is considered. These molecules collide in a pairwise fashion according to either the Maxwell or the hard-sphere interaction and reflect from the walls according to the Cercignani-Lampis-Lord model with unity accommodation coefficients. Simulations are performed at pressures from near-free-molecular to near-continuum. Unlike the BGK model, the ES-BGK model produces heat-flux and shear-stress values that both agree closely with the DSMC values at all pressures. However, for both interactions, the ES-BGK model produces molecular-velocity-distribution functions that are qualitatively similar to those determined for the Maxwell interaction from Chapman-Enskog theory for small wall temperature differences and moment-hierarchy theory for large wall temperature differences. Moreover, the ES-BGK model does not produce accurate values of the mass self-diffusion coefficient for either interaction. Nevertheless, given its reasonable accuracy for heat and tangential-momentum transport, its sound theoretical foundation (it obeys the H-theorem), and its available extension to polyatomic molecules, the ES-BGK model may be a useful method for simulating certain classes of single-species noncontinuum gas flows, as Cercignani suggested.
A Sequence of Sorting Strategies.
ERIC Educational Resources Information Center
Duncan, David R.; Litwiller, Bonnie H.
1984-01-01
Describes eight increasingly sophisticated and efficient sorting algorithms including linear insertion, binary insertion, shellsort, bubble exchange, shakersort, quick sort, straight selection, and tree selection. Provides challenges for the reader and the student to program these efficiently. (JM)
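One of the listed strategies, binary insertion, can be sketched compactly: a binary search locates each element's position, needing fewer comparisons than linear insertion, though shifting elements still costs O(n) per insert.

```python
import bisect

def binary_insertion_sort(items):
    """Insertion sort using binary search to find each insertion point."""
    result = []
    for item in items:
        bisect.insort(result, item)  # binary search, then insert
    return result

print(binary_insertion_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```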
Comparison between phenomenological and ab-initio reaction and relaxation models in DSMC
NASA Astrophysics Data System (ADS)
Sebastião, Israel B.; Kulakhmetov, Marat; Alexeenko, Alina
2016-11-01
New state-specific vibrational-translational energy exchange and dissociation models, based on ab-initio data, are implemented in the direct simulation Monte Carlo (DSMC) method and compared to the established Larsen-Borgnakke (LB) and total collision energy (TCE) phenomenological models. For consistency, both the LB and TCE models are calibrated with QCT-calculated O2+O data. The model comparison test cases include 0-D thermochemical relaxation under adiabatic conditions and 1-D normal shockwave calculations. The results show that both the ME-QCT-VT and LB models can reproduce vibrational relaxation accurately, but the TCE model is unable to reproduce nonequilibrium rates even when it is calibrated to accurate equilibrium rates. The new reaction model does capture QCT-calculated nonequilibrium rates. For all investigated cases, we discuss the prediction differences based on the new model features.
Addison, Paul S; Wang, Rui; Uribe, Alberto A; Bergese, Sergio D
2015-06-01
DPOP (∆POP or Delta-POP) is a non-invasive parameter which measures the strength of respiratory modulations present in the pulse oximetry photoplethysmogram (pleth) waveform. It has been proposed as a non-invasive surrogate parameter for pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. Many groups have reported on the DPOP parameter and its correlation with PPV using various semi-automated algorithmic implementations. The study reported here demonstrates the performance gains made by adding increasingly sophisticated signal processing components to a fully automated DPOP algorithm. A DPOP algorithm was coded and its performance systematically enhanced through a series of code module alterations and additions. Each algorithm iteration was tested on data from 20 mechanically ventilated OR patients. Correlation coefficients and ROC curve statistics were computed at each stage. For the purposes of the analysis we split the data into a manually selected 'stable' region subset of the data containing relatively noise free segments and a 'global' set incorporating the whole data record. Performance gains were measured in terms of correlation against PPV measurements in OR patients undergoing controlled mechanical ventilation. Through increasingly advanced pre-processing and post-processing enhancements to the algorithm, the correlation coefficient between DPOP and PPV improved from a baseline value of R = 0.347 to R = 0.852 for the stable data set, and, correspondingly, R = 0.225 to R = 0.728 for the more challenging global data set. Marked gains in algorithm performance are achievable for manually selected stable regions of the signals using relatively simple algorithm enhancements. Significant additional algorithm enhancements, including a correction for low perfusion values, were required before similar gains were realised for the more challenging global data set.
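The quantity at the heart of the algorithm is simple once the per-beat AC amplitudes of the pleth have been extracted: the commonly cited definition is DPOP = (AC_max - AC_min) / mean(AC_max, AC_min) over a respiratory cycle. The sketch below applies that formula to a synthetic amplitude series; the elaborate pre- and post-processing the study describes (noise rejection, low-perfusion correction) is exactly what this toy version omits.

```python
import numpy as np

def dpop(ac_amplitudes):
    """DPOP from a series of per-beat pleth AC amplitudes (one breath or more)."""
    ac = np.asarray(ac_amplitudes, dtype=float)
    ac_max, ac_min = ac.max(), ac.min()
    return (ac_max - ac_min) / (0.5 * (ac_max + ac_min))

# Synthetic per-beat AC amplitudes modulated +/-15% by respiration,
# assuming 12 beats per breath
beats = np.arange(60)
ac = 1.0 + 0.15 * np.sin(2.0 * np.pi * beats / 12.0)
print(round(dpop(ac), 3))  # 0.3, i.e. 30% respiratory modulation
```

On real OR data the hard part is producing a clean `ac` series in the first place, which is where the reported pre-processing gains come from.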
2005-01-01
more legible and to restore its unity [2]. The need to retouch the image in an unobtrusive way extended naturally from paintings to photography and...to software tools that allow a sophisticated but mostly manual process [7]. In this article we introduce a novel algorithm for automatic digital...This is done only for a didactic purpose, since our algorithm was devised for 2D, and there are other techniques (such as splines) that might yield
Quantum algorithms for topological and geometric analysis of data
Lloyd, Seth; Garnerone, Silvano; Zanardi, Paolo
2016-01-01
Extracting useful information from large data sets can be a daunting task. Topological methods for analysing data sets provide a powerful technique for extracting such information. Persistent homology is a sophisticated tool for identifying topological features and for determining how such features persist as the data is viewed at different scales. Here we present quantum machine learning algorithms for calculating Betti numbers—the numbers of connected components, holes and voids—in persistent homology, and for finding eigenvectors and eigenvalues of the combinatorial Laplacian. The algorithms provide an exponential speed-up over the best currently known classical algorithms for topological data analysis. PMID:26806491
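The Betti-number calculation the quantum algorithm accelerates has a straightforward classical counterpart: at a given scale epsilon, connect all points closer than epsilon and count connected components (Betti-0) with union-find. This sketch is the classical baseline, not the quantum algorithm itself.

```python
import itertools, math

def betti0(points, eps):
    """Number of connected components of the epsilon-neighborhood graph."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
        if math.dist(p, q) < eps:
            parent[find(i)] = find(j)      # union the two clusters
    return len({find(i) for i in range(len(points))})

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
print(betti0(pts, 0.5), betti0(pts, 10.0))  # 2 components, then 1
```

Persistence is then the record of how these counts change as eps grows; the classical cost scales badly with the number of simplices, which is the opening for the exponential quantum speed-up claimed above.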
Review of blunt body wake flows at hypersonic low density conditions
NASA Technical Reports Server (NTRS)
Moss, J. N.; Price, J. M.
1996-01-01
Recent results of experimental and computational studies concerning hypersonic flows about blunted cones including their near wake are reviewed. Attention is focused on conditions where rarefaction effects are present, particularly in the wake. The experiments have been performed for a common model configuration (70 deg spherically-blunted cone) in five hypersonic facilities that encompass a significant range of rarefaction and nonequilibrium effects. Computational studies using direct simulation Monte Carlo (DSMC) and Navier-Stokes solvers have been applied to selected experiments performed in each of the facilities. In addition, computations have been made for typical flight conditions in both Earth and Mars atmospheres, hence more energetic flows than produced in the ground-based tests. Also, comparisons of DSMC calculations and forebody measurements made for the Japanese Orbital Reentry Experiment (OREX) vehicle (a 50 deg spherically-blunted cone) are presented to bridge the spectrum of ground to flight conditions.
Radiation Modeling with Direct Simulation Monte Carlo
NASA Technical Reports Server (NTRS)
Carlson, Ann B.; Hassan, H. A.
1991-01-01
Improvements in the modeling of radiation in low density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward a more detailed modeling of the physics and more reliance on available rate data. The new method is compared with the current modeling technique and both techniques are compared with available experimental data. The differences in the results are evaluated. The test case is based on experimental measurements from the AVCO-Everett Research Laboratory electric arc-driven shock tube of a normal shock wave in air at 10 km/s and .1 Torr. The new method agrees with the available data as well as the results from the earlier scheme and is more easily extrapolated to different flow conditions.
A Fokker-Planck based kinetic model for diatomic rarefied gas flows
NASA Astrophysics Data System (ADS)
Gorji, M. Hossein; Jenny, Patrick
2013-06-01
A Fokker-Planck based kinetic model is presented here, which also accounts for internal energy modes characteristic for diatomic gas molecules. The model is based on a Fokker-Planck approximation of the Boltzmann equation for monatomic molecules, whereas phenomenological principles were employed for the derivation. It is shown that the model honors the equipartition theorem in equilibrium and fulfills the Landau-Teller relaxation equations for internal degrees of freedom. The objective behind this approximate kinetic model is accuracy at reasonably low computational cost. This can be achieved due to the fact that the resulting stochastic differential equations are continuous in time; therefore, no collisions between the simulated particles have to be calculated. Besides, because of the devised energy conserving time integration scheme, it is not required to resolve the collisional scales, i.e., the mean collision time and the mean free path of molecules. This, of course, gives rise to much more efficient simulations with respect to other particle methods, especially the conventional direct simulation Monte Carlo (DSMC), for small and moderate Knudsen numbers. To examine the new approach, first the computational cost of the model was compared with respect to DSMC, where significant speed up could be obtained for small Knudsen numbers. Second, the structure of a high Mach shock (in nitrogen) was studied, and the good performance of the model for such out of equilibrium conditions could be demonstrated. At last, a hypersonic flow of nitrogen over a wedge was studied, where good agreement with respect to DSMC (with level to level transition model) for vibrational and translational temperatures is shown.
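The claim that no collisions need to be computed rests on the particle velocities following continuous stochastic differential equations. In the simplest (linear, monatomic) limit each velocity component is an Ornstein-Uhlenbeck process, which can be integrated exactly over a step dt without resolving the mean collision time. The sketch below shows that update in nondimensional units (kT/m = 1); the actual model adds internal-energy modes and an energy-conserving scheme not reproduced here.

```python
import numpy as np

def ou_step(v, dt, tau, kT_over_m, rng):
    """Exact one-step integration of dv = -v/tau dt + sqrt(2 kT/(m tau)) dW."""
    a = np.exp(-dt / tau)
    sigma = np.sqrt(kT_over_m * (1.0 - a * a))
    return a * v + sigma * rng.standard_normal(v.shape)

rng = np.random.default_rng(42)
v = np.zeros((100000, 3))            # cold start: all particles at rest
for _ in range(50):                  # 50 steps of 0.5 tau each
    v = ou_step(v, dt=0.5, tau=1.0, kT_over_m=1.0, rng=rng)
print(round(float(v.var()), 2))      # velocity variance relaxes toward kT/m = 1
```

Note that dt here is half the relaxation time, far coarser than any collision-resolving step, yet the update remains exact for this process; that is the efficiency argument of the abstract in miniature.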
DSMC Simulations of High Mach Number Taylor-Couette Flow
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev
2017-11-01
The main focus of this work is to characterise the Taylor-Couette flow of an ideal gas between two coaxial cylinders at Mach number Ma = Uw/√(kb Tw/m) in the range 0.01
NASA Astrophysics Data System (ADS)
Christou, Chariton; Kokou Dadzie, S.; Thomas, Nicolas; Hartogh, Paul; Jorda, Laurent; Kührt, Ekkehard; Whitby, James; Wright, Ian; Zarnecki, John
2017-04-01
While ESA's Rosetta mission has formally been completed, the data analysis and interpretation continue. Here, we address the physics of the gas flow at the surface of the comet. Understanding the sublimation of ice at the surface of the nucleus provides the initial boundary condition for studying the inner coma. The gas flow at the surface of the comet 67P/Churyumov-Gerasimenko can be in the rarefaction regime, and a non-Maxwellian velocity distribution may be present. In these cases, continuum methods like the Navier-Stokes-Fourier (NSF) set of equations are rarely applicable. Discrete particle methods such as the Direct Simulation Monte Carlo (DSMC) method are usually adopted. DSMC is currently the dominant numerical method for studying rarefied gas flows and has been widely used to study cometary outflow over past years [1,2]. In the present study, we investigate numerically the gas transport near the surface of the nucleus using DSMC. We focus on the outgassing from the near-surface boundary layer into the vacuum (~20 cm above the nucleus surface). Simulations are performed using the open source code dsmcFoam on an unstructured grid. Until now, artificially generated random porous media formed by packed spheres have been used to represent the comet surface boundary layer structure [3]. In the present work, we instead used micro-computerized-tomography (micro-CT) scanned images to provide geologically realistic 3D representations of the boundary layer porous structure. The images are from earth basins. The resolution is relatively high, in the range of a few μm. Simulations from different rock samples with high porosity (comparable to that expected at 67P) are compared. Gas properties near the surface boundary layer are presented and characterized. We have identified effects of the various porous structure properties on the gas flow fields. Temperature, density and velocity profiles have also been analyzed.
[1] J.-F. Crifo, G. Loukianov, A. Rodionov and V. Zakharov, Icarus 176 (1), 192-219 (2005). [2] Y. Liao, C. Su, R. Marschall, J. Wu, M. Rubin, I. Lai, W. Ip, H. Keller, J. Knollenberg and E. Kührt, Earth, Moon, and Planets 117 (1), 41-64 (2016). [3] Y. V. Skorov, R. Van Lieshout, J. Blum and H. U. Keller, Icarus 212 (2), 867-876 (2011).
Computation of repetitions and regularities of biologically weighted sequences.
Christodoulakis, M; Iliopoulos, C; Mouchard, L; Perdikuri, K; Tsakalidis, A; Tsichlas, K
2006-01-01
Biological weighted sequences are used extensively in molecular biology as profiles for protein families, in the representation of binding sites and often for the representation of sequences produced by a shotgun sequencing strategy. In this paper, we address three fundamental problems in the area of biologically weighted sequences: (i) computation of repetitions, (ii) pattern matching, and (iii) computation of regularities. Our algorithms can be used as basic building blocks for more sophisticated algorithms applied on weighted sequences.
Fortier, Véronique; Levesque, Ives R
2018-06-01
Phase processing impacts the accuracy of quantitative susceptibility mapping (QSM). Techniques for phase unwrapping and background removal have been proposed and demonstrated mostly in brain. In this work, phase processing was evaluated in the context of large susceptibility variations (Δχ) and negligible signal, in particular for susceptibility estimation using the iterative phase replacement (IPR) algorithm. Continuous Laplacian, region-growing, and quality-guided unwrapping were evaluated. For background removal, Laplacian boundary value (LBV), projection onto dipole fields (PDF), sophisticated harmonic artifact reduction for phase data (SHARP), variable-kernel sophisticated harmonic artifact reduction for phase data (V-SHARP), regularization enabled sophisticated harmonic artifact reduction for phase data (RESHARP), and 3D quadratic polynomial field removal were studied. Each algorithm was quantitatively evaluated in simulation and qualitatively in vivo. Additionally, IPR-QSM maps were produced to evaluate the impact of phase processing on the susceptibility in the context of large Δχ with negligible signal. Quality-guided unwrapping was the most accurate technique, whereas continuous Laplacian performed poorly in this context. All background removal algorithms tested resulted in important phase inaccuracies, suggesting that techniques used for brain do not translate well to situations where large Δχ and no or low signal are expected. LBV produced the smallest errors, followed closely by PDF. Results suggest that quality-guided unwrapping should be preferred, with PDF or LBV for background removal, for QSM in regions with large Δχ and negligible signal. This reduces the susceptibility inaccuracy introduced by phase processing. Accurate background removal remains an open question. Magn Reson Med 79:3103-3113, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
Key issues of ultraviolet radiation of OH at high altitudes
NASA Astrophysics Data System (ADS)
Zhang, Yuhuai; Wan, Tian; Jiang, Jianzheng; Fan, Jing
2014-12-01
Ultraviolet (UV) emissions radiated by hydroxyl (OH) is one of the fundamental elements in the prediction of radiation signature of high-altitude and high-speed vehicle. In this work, the OH A2Σ+→ X2Π ultraviolet emission band behind the bow shock is computed under the experimental condition of the second bow-shock ultraviolet flight (BSUV-2). Four related key issues are discussed, namely, the source of hydrogen element in the high-altitude atmosphere, the formation mechanism of OH species, efficient computational algorithm of trace species in rarefied flows, and accurate calculation of OH emission spectra. Firstly, by analyzing the typical atmospheric model, the vertical distributions of the number densities of different species containing hydrogen element are given. According to the different dominating species containing hydrogen element, the atmosphere is divided into three zones, and the formation mechanism of OH species is analyzed in the different zones. The direct simulation Monte Carlo (DSMC) method and the Navier-Stokes equations are employed to compute the number densities of the different OH electronically and vibrationally excited states. Different to the previous work, the trace species separation (TSS) algorithm is applied twice in order to accurately calculate the densities of OH species and its excited states. Using a non-equilibrium radiation model, the OH ultraviolet emission spectra and intensity at different altitudes are computed, and good agreement is obtained with the flight measured data.
XML Tactical Chat (XTC): The Way Ahead for Navy Chat
2007-09-01
multicast transmissions via sophisticated pruning algorithms, while allowing multicast packets to "tunnel" through IP routers. [Macedonia, Brutzman 1994...conference was Jabber Inc. who added some great insight into the power of Jabber. • Great features including blackberry handheld connectivity and
Axisymmetric Implementation for 3D-Based DSMC Codes
NASA Technical Reports Server (NTRS)
Stewart, Benedicte; Lumpkin, F. E.; LeBeau, G. J.
2011-01-01
The primary objective in developing NASA's DSMC Analysis Code (DAC) was to provide a high fidelity modeling tool for 3D rarefied flows such as vacuum plume impingement and hypersonic re-entry flows [1]. The initial implementation has been expanded over time to offer other capabilities including a novel axisymmetric implementation. Because of the inherently 3D nature of DAC, this axisymmetric implementation uses a 3D Cartesian domain and 3D surfaces. Molecules are moved in all three dimensions but their movements are limited by physical walls to a small wedge centered on the plane of symmetry (Figure 1). Unfortunately, far from the axis of symmetry, the cell size in the direction perpendicular to the plane of symmetry (the Z-direction) may become large compared to the flow mean free path. This frequently results in inaccuracies in these regions of the domain. A new axisymmetric implementation is presented which aims to solve this issue by using Bird's approach for the molecular movement while preserving the 3D nature of the DAC software [2]. First, the computational domain is similar to that previously used such that a wedge must still be used to define the inflow surface and solid walls within the domain. As before, molecules are created inside the inflow wedge triangles but they are now rotated back to the symmetry plane. During the move step, molecules are moved in 3D but instead of interacting with the wedge walls, the molecules are rotated back to the plane of symmetry at the end of the move step. This new implementation was tested for multiple flows over axisymmetric shapes, including a sphere, a cone, a double cone and a hollow cylinder. Comparisons to previous DSMC solutions and experiments, when available, are made.
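The rotate-back step described above can be sketched geometrically: a particle is moved in full 3D, then its position is rotated about the symmetry axis (taken here as x) back onto the symmetry plane (z = 0), with the velocity rotated by the same angle so the radial/azimuthal decomposition is preserved. This is an illustrative sketch of the Bird-style move, not DAC's actual implementation.

```python
import math

def move_and_rotate(pos, vel, dt):
    """Advance a particle by dt in 3D, then rotate it back to the z=0 plane."""
    x, y, z = (pos[i] + vel[i] * dt for i in range(3))
    r = math.hypot(y, z)             # radial distance from the symmetry axis
    if r == 0.0:
        return (x, 0.0, 0.0), tuple(vel)
    c, s = y / r, z / r              # cos/sin of the rotation angle
    vx, vy, vz = vel
    # rotate (y, z) -> (r, 0); apply the same rotation to the velocity
    return (x, r, 0.0), (vx, c * vy + s * vz, -s * vy + c * vz)

# A particle on the plane moving purely out of it ends up back on the
# plane at the correct radius, with its speed unchanged
pos, vel = move_and_rotate((0.0, 1.0, 0.0), (0.0, 0.0, 2.0), 0.5)
print(pos, vel)
```

Because the rotation is rigid, radius and speed are exactly preserved, which is what makes the scheme consistent with axisymmetry without the wedge-wall interactions of the earlier implementation.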
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2014-01-01
The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. Recently introduced molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties are extended in the current work to include electronic energy level transitions and reactions involving charged particles. These extensions are shown to agree favorably with reported transition and reaction rates from the literature for near-equilibrium conditions. Also, the extensions are applied to the second flight of the Project FIRE flight experiment at 1634 seconds with a Knudsen number of 0.001 at an altitude of 76.4 km. In order to accomplish this, NASA's direct simulation Monte Carlo code DAC was rewritten to include the ability to simulate charge-neutral ionized flows, take advantage of the recently introduced chemistry model, and to include the extensions presented in this work. The 1634 second data point was chosen for comparisons to be made in order to include a CFD solution. The Knudsen number at this point in time is such that the DSMC simulations are still tractable and the CFD computations are at the edge of what is considered valid because, although near-transitional, the flow is still considered to be continuum. It is shown that the inclusion of electronic energy levels in the DSMC simulation is necessary for flows of this nature and is required for comparison to the CFD solution. The flow field solutions are also post-processed by the nonequilibrium radiation code HARA to compute the radiative portion.
Statistical Mechanics of Combinatorial Auctions
NASA Astrophysics Data System (ADS)
Galla, Tobias; Leone, Michele; Marsili, Matteo; Sellitto, Mauro; Weigt, Martin; Zecchina, Riccardo
2006-09-01
Combinatorial auctions are formulated as frustrated lattice gases on sparse random graphs, allowing the determination of the optimal revenue by methods of statistical physics. Transitions between computationally easy and hard regimes are found and interpreted in terms of the geometric structure of the space of solutions. We introduce an iterative algorithm to solve intermediate and large instances, and discuss competing states of optimal revenue and maximal number of satisfied bidders. The algorithm can be generalized to the hard phase and to more sophisticated auction protocols.
Logic via Computer Programming.
ERIC Educational Resources Information Center
Wieschenberg, Agnes A.
This paper poses the question "How do we teach logical thinking and sophisticated mathematics to unsophisticated college students?" One answer among many is through the writing of computer programs. The writing of computer algorithms is mathematical problem solving and logic in disguise, and it may attract students who would otherwise stop…
Was Euclid an Unnecessarily Sophisticated Psychologist?
ERIC Educational Resources Information Center
Arabie, Phipps
1991-01-01
The current state of multidimensional scaling using the city-block metric is reviewed, with attention to (1) substantive and theoretical issues; (2) recent algorithmic developments and their implications for analysis; (3) isometries with other metrics; (4) links to graph-theoretic models; and (5) prospects for future development. (SLD)
Particle Methods for Simulating Atomic Radiation in Hypersonic Reentry Flows
NASA Astrophysics Data System (ADS)
Ozawa, T.; Wang, A.; Levin, D. A.; Modest, M.
2008-12-01
With a fast reentry speed, the Stardust vehicle generates a strong shock region ahead of its blunt body with a temperature above 60,000 K. These extreme Mach number flows are sufficiently energetic to initiate gas ionization processes and thermal and chemical ablation processes. The nonequilibrium gaseous radiation from the shock layer is so strong that it affects the flowfield macroparameter distributions. In this work, we present the first loosely coupled direct simulation Monte Carlo (DSMC) simulations with the particle-based photon Monte Carlo (p-PMC) method to simulate high-Mach number reentry flows in the near-continuum flow regime. To efficiently capture the highly nonequilibrium effects, emission and absorption cross section databases using the Nonequilibrium Air Radiation (NEQAIR) were generated, and atomic nitrogen and oxygen radiative transport was calculated by the p-PMC method. The radiation energy change calculated by the p-PMC method has been coupled in the DSMC calculations, and the atomic radiation was found to modify the flow field and heat flux at the wall.
Simulation of thermal transpiration flow using a high-order moment method
NASA Astrophysics Data System (ADS)
Sheng, Qiang; Tang, Gui-Hua; Gu, Xiao-Jun; Emerson, David R.; Zhang, Yong-Hao
2014-04-01
Nonequilibrium thermal transpiration flow is numerically analyzed by an extended thermodynamic approach, a high-order moment method. The captured velocity profiles of temperature-driven flow in a parallel microchannel and in a micro-chamber are compared with available kinetic data or direct simulation Monte Carlo (DSMC) results. The advantages of the high-order moment method are shown as a combination of more accuracy than the Navier-Stokes-Fourier (NSF) equations and less computation cost than the DSMC method. In addition, the high-order moment method is also employed to simulate the thermal transpiration flow in complex geometries in two types of Knudsen pumps. One is based on micro-mechanized channels, where the effect of different wall temperature distributions on thermal transpiration flow is studied. The other relies on porous structures, where the variation of flow rate with a changing porosity or pore surface area ratio is investigated. These simulations can help to optimize the design of a real Knudsen pump.
DSMC Simulation of Separated Flows About Flared Bodies at Hypersonic Conditions
NASA Technical Reports Server (NTRS)
Moss, James N.
2000-01-01
This paper describes the results of a numerical study of interacting hypersonic flows at conditions that can be produced in ground-based test facilities. The computations are made with the direct simulation Monte Carlo (DSMC) method of Bird. The focus is on Mach 10 flows about flared axisymmetric configurations, both hollow cylinder flares and double cones. The flow conditions are those for which experiments have been or will be performed in the ONERA R5Ch low-density wind tunnel and the Calspan-University of Buffalo Research Center (CUBRC) Large Energy National Shock (LENS) tunnel. The range of flow conditions, model configurations, and model sizes provides a significant range of shock/shock and shock/boundary layer interactions at low Reynolds number conditions. Results presented will highlight the sensitivity of the calculations to grid resolution, contrast the differences in flow structure for hypersonic cold flows and those of more energetic but still low enthalpy flows, and compare the present results with experimental measurements for surface heating, pressure, and extent of separation.
Numerical simulation of rarefied gas flow through a slit
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Jeng, Duen-Ren; De Witt, Kenneth J.; Chung, Chan-Hong
1990-01-01
Two different approaches, the finite-difference method coupled with the discrete-ordinate method (FDDO), and the direct-simulation Monte Carlo (DSMC) method, are used in the analysis of the flow of a rarefied gas from one reservoir to another through a two-dimensional slit. The cases considered are for hard vacuum downstream pressure, finite pressure ratios, and isobaric pressure with thermal diffusion, which are not well established in spite of the simplicity of the flow field. In the FDDO analysis, by employing the discrete-ordinate method, the Boltzmann equation simplified by a model collision integral is transformed to a set of partial differential equations which are continuous in physical space but are point functions in molecular velocity space. The set of partial differential equations are solved by means of a finite-difference approximation. In the DSMC analysis, three kinds of collision sampling techniques, the time counter (TC) method, the null collision (NC) method, and the no time counter (NTC) method, are used.
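Of the collision-sampling techniques named above, the no-time-counter (NTC) scheme is the one in standard use. A minimal sketch, assuming a constant (hard-sphere) cross section `sigma` and illustrative parameter names:

```python
import random

def ntc_candidate_pairs(n_particles, f_num, sigma_cr_max, dt, cell_volume):
    """NTC candidate-pair count per cell and time step:
    (1/2) N (N-1) F_N (sigma*c_r)_max dt / V_cell."""
    return (0.5 * n_particles * (n_particles - 1)
            * f_num * sigma_cr_max * dt / cell_volume)

def select_collisions(velocities, f_num, sigma, sigma_cr_max, dt,
                      cell_volume, rng=random):
    """Pick candidate pairs at random and accept each with probability
    sigma*c_r / (sigma*c_r)_max, so the accepted pairs reproduce the
    correct collision frequency without per-collision time counters."""
    n = len(velocities)
    n_cand = int(ntc_candidate_pairs(n, f_num, sigma_cr_max, dt, cell_volume)
                 + rng.random())        # randomized rounding of the fraction
    pairs = []
    for _ in range(n_cand):
        i, j = rng.sample(range(n), 2)
        cr = sum((a - b) ** 2
                 for a, b in zip(velocities[i], velocities[j])) ** 0.5
        if rng.random() < sigma * cr / sigma_cr_max:
            pairs.append((i, j))
    return pairs
```

In a real code `sigma` depends on the relative speed (e.g., VHS/VSS models) and `sigma_cr_max` is tracked adaptively per cell; both are held fixed here for brevity.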
A Multi-Scale Settlement Matching Algorithm Based on ARG
NASA Astrophysics Data System (ADS)
Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia
2016-06-01
Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating, and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. It then ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at the large and small scales are identified. A demonstration is presented at the end of this article, and the results indicate that the proposed algorithm is capable of handling sophisticated cases.
Introduction to Autonomous Mobile Robotics Using "Lego Mindstorms" NXT
ERIC Educational Resources Information Center
Akin, H. Levent; Meriçli, Çetin; Meriçli, Tekin
2013-01-01
Teaching the fundamentals of robotics to computer science undergraduates requires designing a well-balanced curriculum that is complemented with hands-on applications on a platform that allows rapid construction of complex robots, and implementation of sophisticated algorithms. This paper describes such an elective introductory course where the…
Robotics for Computer Scientists: What's the Big Idea?
ERIC Educational Resources Information Center
Touretzky, David S.
2013-01-01
Modern robots, like today's smartphones, are complex devices with intricate software systems. Introductory robot programming courses must evolve to reflect this reality, by teaching students to make use of the sophisticated tools their robots provide rather than reimplementing basic algorithms. This paper focuses on teaching with Tekkotsu, an open…
Determining open cluster membership. A Bayesian framework for quantitative member classification
NASA Astrophysics Data System (ADS)
Stott, Jonathan J.
2018-01-01
Aims: My goal is to develop a quantitative algorithm for assessing open cluster membership probabilities. The algorithm is designed to work with single-epoch observations. In its simplest form, only one set of program images and one set of reference images are required. Methods: The algorithm is based on a two-stage joint astrometric and photometric assessment of cluster membership probabilities. The probabilities were computed within a Bayesian framework using any available prior information. Where possible, the algorithm emphasizes simplicity over mathematical sophistication. Results: The algorithm was implemented and tested against three observational fields using published survey data. M 67 and NGC 654 were selected as cluster examples while a third, cluster-free, field was used for the final test data set. The algorithm shows good quantitative agreement with the existing surveys and has a false-positive rate significantly lower than the astrometric or photometric methods used individually.
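The Bayesian combination described can be sketched as follows. This is a minimal illustration of Bayes' rule with cluster and field likelihoods; the function names are hypothetical and the paper's actual likelihood models are not reproduced here:

```python
def membership_probability(like_cluster, like_field, prior=0.5):
    """Posterior probability of cluster membership given the likelihoods of
    the observation under the cluster and field hypotheses (Bayes' rule)."""
    num = like_cluster * prior
    return num / (num + like_field * (1.0 - prior))

def joint_membership(astro_c, astro_f, photo_c, photo_f, prior=0.5):
    """Two-stage joint assessment: treat the astrometric and photometric
    likelihoods as independent and multiply them before applying the prior."""
    return membership_probability(astro_c * photo_c,
                                  astro_f * photo_f, prior)
```

Multiplying independent likelihoods is what drives the false-positive rate below that of either method alone: a star must look like a member both astrometrically and photometrically to score highly.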
Program Manager - A Bimonthly Magazine of DSMC, Volume 27, Number 2.
1998-04-01
Supercomputing resources empowering superstack with interactive and integrated systems
NASA Astrophysics Data System (ADS)
Rückemann, Claus-Peter
2012-09-01
This paper presents the results from the development and implementation of Superstack algorithms to be used dynamically with integrated systems and supercomputing resources. Processing of geophysical data, hence termed geoprocessing, is an essential part of the analysis of geoscientific data. The theory of Superstack algorithms and their practical application on modern computing architectures were inspired by developments introduced with the processing of seismic data on mainframes, leading in recent years to high-end scientific computing applications. Several stacking algorithms are known, but given the low signal-to-noise ratio of seismic data, the use of iterative algorithms like the Superstack can support analysis and interpretation. The new Superstack algorithms are in use with wave theory and optical phenomena on high-performance computing resources, for huge data sets as well as for sophisticated application scenarios in geosciences and archaeology.
Active Learning Using Hint Information.
Li, Chun-Liang; Ferng, Chun-Sung; Lin, Hsuan-Tien
2015-08-01
The abundance of real-world data and limited labeling budget calls for active learning, an important learning paradigm for reducing human labeling efforts. Many recently developed active learning algorithms consider both uncertainty and representativeness when making querying decisions. However, exploiting representativeness with uncertainty concurrently usually requires tackling sophisticated and challenging learning tasks, such as clustering. In this letter, we propose a new active learning framework, called hinted sampling, which takes both uncertainty and representativeness into account in a simpler way. We design a novel active learning algorithm within the hinted sampling framework with an extended support vector machine. Experimental results validate that the novel active learning algorithm can result in a better and more stable performance than that achieved by state-of-the-art algorithms. We also show that the hinted sampling framework allows improving another active learning algorithm designed from the transductive support vector machine.
Simple geometric algorithms to aid in clearance management for robotic mechanisms
NASA Technical Reports Server (NTRS)
Copeland, E. L.; Ray, L. D.; Peticolas, J. D.
1981-01-01
Global geometric shapes such as lines, planes, circles, spheres, cylinders, and the associated computational algorithms which provide relatively inexpensive estimates of minimum spatial clearance for safe operations were selected. The Space Shuttle, remote manipulator system, and the Power Extension Package are used as an example. Robotic mechanisms operate in quarters limited by external structures and the problem of clearance is often of considerable interest. Safe clearance management is simple and suited to real time calculation, whereas contact prediction requires more precision, sophistication, and computational overhead.
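A minimal example of the kind of inexpensive clearance estimate described, using a point-to-segment distance and a capsule-style cylinder approximation (illustrative names, not the paper's actual routines):

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the finite segment a-b (3D tuples)."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)
    # Parameter of the closest point on the infinite line, clamped to [0, 1]
    t = 0.0 if denom == 0.0 else max(
        0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(p, closest)

def cylinder_sphere_clearance(axis_a, axis_b, radius_cyl, center, radius_sph):
    """Conservative clearance between a finite cylinder (approximated as a
    capsule around its axis segment) and a sphere; negative means contact."""
    return (point_segment_distance(center, axis_a, axis_b)
            - radius_cyl - radius_sph)
```

Because the estimate is conservative (a capsule bounds the cylinder), it is safe for real-time clearance monitoring even though it slightly underpredicts the true clearance near the cylinder's flat ends.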
Weakly supervised classification in high energy physics
Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; ...
2017-05-01
As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach called weakly supervised classification, in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics - quark versus gluon tagging - we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.
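The core idea, learning from class proportions only, can be sketched as follows: fit a scorer so that the batch-mean predicted probability matches each batch's known class fraction. This is a toy illustration with a numerical gradient, not the authors' implementation:

```python
import numpy as np

def proportion_loss(scores, target_fraction):
    """Squared error between the batch-mean predicted class probability and
    the known class proportion -- the only label information used."""
    probs = 1.0 / (1.0 + np.exp(-scores))
    return (probs.mean() - target_fraction) ** 2

def train_weak(batches, fractions, lr=0.5, epochs=200):
    """Fit a linear scorer w so that per-batch predicted proportions match
    the given class fractions (central-difference gradient for brevity)."""
    w = np.zeros(batches[0].shape[1])
    for _ in range(epochs):
        for X, f in zip(batches, fractions):
            grad = np.zeros_like(w)
            for i in range(w.size):
                e = np.zeros_like(w)
                e[i] = 1e-4
                grad[i] = (proportion_loss(X @ (w + e), f)
                           - proportion_loss(X @ (w - e), f)) / 2e-4
            w -= lr * grad
    return w
```

No per-event label ever enters the loss, which is why mis-modeled per-event features in a simulation cannot bias the training: only the aggregate proportions need to be trusted.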
USDA-ARS?s Scientific Manuscript database
The temptation to include model parameters and high resolution input data together with the availability of powerful optimization and uncertainty analysis algorithms has significantly enhanced the complexity of hydrologic and water quality modeling. However, the ability to take advantage of sophist...
Using Web Speech Technology with Language Learning Applications
ERIC Educational Resources Information Center
Daniels, Paul
2015-01-01
In this article, the author presents the history of human-to-computer interaction based upon the design of sophisticated computerized speech recognition algorithms. Advancements such as the arrival of cloud-based computing and software like Google's Web Speech API allows anyone with an Internet connection and Chrome browser to take advantage of…
Use of artificial landscapes to isolate controls on burn probability
Marc-Andre Parisien; Carol Miller; Alan A. Ager; Mark A. Finney
2010-01-01
Techniques for modeling burn probability (BP) combine the stochastic components of fire regimes (ignitions and weather) with sophisticated fire growth algorithms to produce high-resolution spatial estimates of the relative likelihood of burning. Despite the numerous investigations of fire patterns from either observed or simulated sources, the specific influence of...
Key issues of ultraviolet radiation of OH at high altitudes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yuhuai; Wan, Tian; Jiang, Jianzheng
2014-12-09
Ultraviolet (UV) emission radiated by hydroxyl (OH) is one of the fundamental elements in the prediction of the radiation signature of high-altitude, high-speed vehicles. In this work, the OH A²Σ⁺ → X²Π ultraviolet emission band behind the bow shock is computed under the experimental conditions of the second bow-shock ultraviolet flight (BSUV-2). Four related key issues are discussed, namely, the source of hydrogen in the high-altitude atmosphere, the formation mechanism of OH, an efficient computational algorithm for trace species in rarefied flows, and the accurate calculation of OH emission spectra. First, by analyzing the typical atmospheric model, the vertical distributions of the number densities of the different hydrogen-containing species are given. According to the dominant hydrogen-containing species, the atmosphere is divided into three zones, and the formation mechanism of OH is analyzed in each zone. The direct simulation Monte Carlo (DSMC) method and the Navier-Stokes equations are employed to compute the number densities of the different OH electronically and vibrationally excited states. In contrast to previous work, the trace species separation (TSS) algorithm is applied twice in order to accurately calculate the densities of OH and its excited states. Using a non-equilibrium radiation model, the OH ultraviolet emission spectra and intensity at different altitudes are computed, and good agreement is obtained with the flight-measured data.
A digitally implemented preambleless demodulator for maritime and mobile data communications
NASA Astrophysics Data System (ADS)
Chalmers, Harvey; Shenoy, Ajit; Verahrami, Farhad B.
The hardware design and software algorithms for a low-bit-rate, low-cost, all-digital preambleless demodulator are described. The demodulator operates under severe high-noise conditions, fast Doppler frequency shifts, large frequency offsets, and multipath fading. Sophisticated algorithms, including a fast Fourier transform (FFT)-based burst acquisition algorithm, a cycle-slip resistant carrier phase tracker, an innovative Doppler tracker, and a fast acquisition symbol synchronizer, were developed and extensively simulated for reliable burst reception. The compact digital signal processor (DSP)-based demodulator hardware uses a unique personal computer test interface for downloading test data files. The demodulator test results demonstrate a near-ideal performance within 0.2 dB of theory.
Genetic algorithms and their use in Geophysical Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, Paul B.
1999-04-01
Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low mutation rate (about half of the inverse of the population size) is crucial for optimal results, but the choice of crossover method and rate does not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection method due to its simplicity and its autoscaling properties. However, if a proportional selection method such as roulette wheel selection is used, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca Mountain using gravity data, the second an inversion for velocity structure in the crust of the South Island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California.
The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.
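The baseline search the abstract describes (binary tournament selection, crossover, and a low mutation rate of roughly half the inverse of the population size) can be sketched for a real-coded encoding; all names and the Gaussian mutation step are illustrative choices, not the author's code:

```python
import random

def tournament_ga(fitness, n_params, pop_size=40, generations=100, seed=0):
    """Minimal real-coded GA: binary tournament selection, one-point
    crossover, and a per-gene mutation rate of ~0.5 / pop_size."""
    rng = random.Random(seed)
    mut_rate = 0.5 / pop_size            # "half the inverse of the pop size"
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                      # binary tournament: better of two
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_params) if n_params > 1 else 0
            child = p1[:cut] + p2[cut:]  # one-point crossover
            child = [g + rng.gauss(0.0, 0.1) if rng.random() < mut_rate else g
                     for g in child]     # rare Gaussian mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)
```

Tournament selection needs no fitness scaling because only the rank within each sampled pair matters, which is the "autoscaling" property noted above.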
NASA Astrophysics Data System (ADS)
Li, Zhi-Hui; Peng, Ao-Ping; Zhang, Han-Xin; Yang, Jaw-Yen
2015-04-01
This article reviews rarefied gas flow computations based on nonlinear model Boltzmann equations using deterministic high-order gas-kinetic unified algorithms (GKUA) in phase space. The nonlinear Boltzmann model equations considered include the BGK model, the Shakhov model, the Ellipsoidal Statistical model, and the Morse model. Several high-order gas-kinetic unified algorithms, which combine the discrete velocity ordinate method in velocity space and compact high-order finite-difference schemes in physical space, are developed. The parallel strategies implemented with the accompanying algorithms are of equal importance. Accurate computations of rarefied gas flow problems using various kinetic models over wide ranges of Mach numbers 1.2-20 and Knudsen numbers 0.0001-5 are reported. The effects of different high-resolution schemes on the flow resolution under the same discrete velocity ordinate method are studied. A conservative discrete velocity ordinate method to ensure the kinetic compatibility condition is also implemented. The present algorithms are tested on one-dimensional unsteady shock-tube problems with various Knudsen numbers, steady normal shock wave structures for different Mach numbers, and two-dimensional flows past a circular cylinder and a NACA 0012 airfoil, to verify the present methodology and to simulate gas transport phenomena covering various flow regimes. Illustrations of large-scale parallel computations of three-dimensional hypersonic rarefied flows over a reusable sphere-cone satellite and a re-entry spacecraft, using some of the largest computer systems available in China, are also reported. The present computed results are compared with theoretical predictions from gas dynamics, related DSMC results, slip N-S solutions, and experimental data, and good agreement is found.
The numerical experience indicates that although the direct model Boltzmann equation solver in phase space can be computationally expensive, the present GKUAs for kinetic model Boltzmann equations, in conjunction with currently available high-performance parallel computing power, can provide a vital engineering tool for analyzing rarefied gas flows covering the whole range of flow regimes in aerospace engineering applications.
Numerical Modeling of Thermal Edge Flow
NASA Astrophysics Data System (ADS)
Ibrayeva, Aizhan
A gas flow can be induced between two interdigitated arrays of thin vanes when one of the arrays is uniformly heated or cooled. Sharply curved isotherms near the vane edges lead to a momentum imbalance among incident particles, which creates a Knudsen force on the vane and a thermal edge flow in the gas. The flow is observed in a rarefied gas, when the mean free path of the molecules is comparable to the characteristic length scale of the system. In order to understand the physical mechanism of the flow and the Knudsen force, the configuration was numerically investigated by the direct simulation Monte Carlo (DSMC) method under different degrees of gas rarefaction and different temperature gradients in the system. From the simulations, the highest force value is obtained when the Knudsen number is around 0.5, and the force becomes negligible in the free-molecular and continuum regimes. The DSMC results are analyzed from a theoretical point of view and compared to experimental data; validation of the simulations is done with the RKDG method. The effect of various geometric parameters on the performance of the actuator was investigated, and suggestions were made for an improved design of the device.
AEROELASTIC SIMULATION TOOL FOR INFLATABLE BALLUTE AEROCAPTURE
NASA Technical Reports Server (NTRS)
Liever, P. A.; Sheta, E. F.; Habchi, S. D.
2006-01-01
A multidisciplinary analysis tool is under development for predicting the impact of aeroelastic effects on the functionality of inflatable ballute aeroassist vehicles in both the continuum and rarefied flow regimes. High-fidelity modules for continuum and rarefied aerodynamics, structural dynamics, heat transfer, and computational grid deformation are coupled in an integrated multi-physics, multi-disciplinary computing environment. This flexible and extensible approach allows the integration of state-of-the-art, stand-alone NASA and industry leading continuum and rarefied flow solvers and structural analysis codes into a computing environment in which the modules can run concurrently with synchronized data transfer. Coupled fluid-structure continuum flow demonstrations were conducted on a clamped ballute configuration. The feasibility of implementing a DSMC flow solver in the simulation framework was demonstrated, and loosely coupled rarefied flow aeroelastic demonstrations were performed. A NASA and industry technology survey identified CFD, DSMC and structural analysis codes capable of modeling non-linear shape and material response of thin-film inflated aeroshells. The simulation technology will find direct and immediate applications with NASA and industry in ongoing aerocapture technology development programs.
Effect of plasma distribution on propulsion performance in electrodeless plasma thrusters
NASA Astrophysics Data System (ADS)
Takao, Yoshinori; Takase, Kazuki; Takahashi, Kazunori
2016-09-01
A helicon plasma thruster, consisting of a helicon plasma source and a magnetic nozzle, is one of the candidates for long-lifetime thrusters because no electrodes are employed to generate or accelerate the plasma. A recent experiment, however, detected non-negligible axial momentum lost to the lateral wall boundary, which degrades thruster performance when the source is operated with highly ionized gases. To investigate this mechanism, we have conducted two-dimensional axisymmetric particle-in-cell (PIC) simulations with the neutral distribution obtained by the Direct Simulation Monte Carlo (DSMC) method. The numerical results indicate that axially asymmetric profiles of the plasma density and potential are obtained when strong depletion of neutrals occurs downstream in the source. This asymmetric potential profile accelerates ions toward the lateral wall, leading to a non-negligible net axial force in the direction opposite to the thrust. Hence, reducing this asymmetry by increasing the neutral density downstream and/or by confining the plasma with an external magnetic field would improve the propulsion performance. These effects are also analyzed by PIC/DSMC simulations.
Pauley, Tim; Gargaro, Judith; Chenard, Glen; Cavanagh, Helen; McKay, Sandra M
2016-01-01
This study evaluated paraprofessional-led diabetes self-management coaching (DSMC) among 94 clients with type 2 diabetes recruited from a Community Care Access Centre in Ontario, Canada. Subjects were randomized to standard care or standard care plus coaching. Measures included the Diabetes Self-Efficacy Scale (DSES), Insulin Management Diabetes Self-Efficacy Scale (IMDSES), and Hospital Anxiety and Depression Scale (HADS). Both groups showed improvement in DSES (6.6 ± 1.5 vs. 7.2 ± 1.5, p < .001) and IMDSES (113.5 ± 20.6 vs. 125.7 ± 22.3, p < .001); there were no between-groups differences. There were also no between-groups differences in anxiety or depression scores (p > .05 for all), or in anxiety or depression categories (p > .05 for all), at baseline, postintervention, or follow-up. While all subjects demonstrated significant improvements in self-efficacy measures, there is no evidence to support paraprofessional-led DSMC as an intervention that conveys additional benefits over standard care.
Evaluation of nonequilibrium boundary conditions for hypersonic rarefied gas flows
NASA Astrophysics Data System (ADS)
Le, N. T. P.; Greenshields, Ch. J.; Reese, J. M.
2012-01-01
A new Computational Fluid Dynamics (CFD) solver for high-speed viscous flows in the OpenFOAM code is validated against published experimental data and Direct Simulation Monte Carlo (DSMC) results. The laminar flat plate and circular cylinder cases are studied for Mach numbers, Ma, ranging from 6 to 12.7, and with argon and nitrogen as working gases. Simulation results for the laminar flat plate cases show that the combination of accommodation coefficient values σu = 0.7 and σT = 1.0 in the Maxwell/Smoluchowski conditions, and the coefficient values A1 = 1.5 and A2 = 1.0 in the second-order velocity slip condition, give best agreement with experimental data of surface pressure. The values σu = 0.7 and σT = 1.0 also give good agreement with DSMC data of surface pressure at the stagnation point in the circular cylinder case at Kn = 0.25. The Langmuir surface adsorption condition is also tested for the laminar flat plate case, but initial results were not as good as the Maxwell/Smoluchowski boundary conditions.
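For reference, the Maxwell slip and Smoluchowski jump conditions evaluated in this study can be sketched in a few lines. This is a minimal sketch using the coefficient values quoted above (σu = 0.7, σT = 1.0, A1 = 1.5, A2 = 1.0); the gradient arguments in the usage and the γ and Pr defaults are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the slip/jump boundary conditions discussed above.
# Coefficient defaults follow the abstract (sigma_u = 0.7, sigma_T = 1.0,
# A1 = 1.5, A2 = 1.0); gamma and Pr defaults are illustrative assumptions.

def maxwell_slip(dudn, mfp, sigma_u=0.7):
    """First-order Maxwell slip: u_s = (2 - s_u)/s_u * lambda * du/dn."""
    return (2.0 - sigma_u) / sigma_u * mfp * dudn

def second_order_slip(dudn, d2udn2, mfp, A1=1.5, A2=1.0):
    """Second-order slip: u_s = A1*lambda*du/dn - A2*lambda^2*d2u/dn2."""
    return A1 * mfp * dudn - A2 * mfp**2 * d2udn2

def smoluchowski_jump(dTdn, mfp, sigma_T=1.0, gamma=5.0 / 3.0, Pr=2.0 / 3.0):
    """Smoluchowski jump: T_g - T_w = (2-s_T)/s_T * 2g/(g+1) * lambda/Pr * dT/dn."""
    return ((2.0 - sigma_T) / sigma_T * (2.0 * gamma / (gamma + 1.0))
            * mfp / Pr * dTdn)
```

With a wall-normal velocity gradient of 1000 1/s and a mean free path of 1 mm, for example, the first-order condition gives a slip velocity of about 1.86 m/s.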
Predicting Flows of Rarefied Gases
NASA Technical Reports Server (NTRS)
LeBeau, Gerald J.; Wilmoth, Richard G.
2005-01-01
DSMC Analysis Code (DAC) is a flexible, highly automated, easy-to-use computer program for predicting flows of rarefied gases -- especially flows of upper-atmospheric, propulsion, and vented gases impinging on spacecraft surfaces. DAC implements the direct simulation Monte Carlo (DSMC) method, which is widely recognized as standard for simulating flows at densities so low that the continuum-based equations of computational fluid dynamics are invalid. DAC enables users to model complex surface shapes and boundary conditions quickly and easily. The discretization of a flow field into computational grids is automated, thereby relieving the user of a traditionally time-consuming task while ensuring (1) appropriate refinement of grids throughout the computational domain, (2) determination of optimal settings for temporal discretization and other simulation parameters, and (3) satisfaction of the fundamental constraints of the method. In so doing, DAC ensures an accurate and efficient simulation. In addition, DAC can utilize parallel processing to reduce computation time. The domain decomposition needed for parallel processing is completely automated, and the software employs a dynamic load-balancing mechanism to ensure optimal parallel efficiency throughout the simulation.
The Art of Snaring Dragons. Artificial Intelligence Memo Number 338. Revised.
ERIC Educational Resources Information Center
Cohen, Harvey A.
Several models for problem solving are discussed, and the idea of a heuristic frame is developed. This concept provides a description of the evolution of problem-solving skills in terms of the growth of the number of algorithms available and increased sophistication in their use. The heuristic frame model is applied to two sets of physical…
Data mining: sophisticated forms of managed care modeling through artificial intelligence.
Borok, L S
1997-01-01
Data mining is a recent development in computer science that combines artificial intelligence algorithms and relational databases to discover patterns automatically, without the use of traditional statistical methods. Work with data mining tools in health care is in a developmental stage that holds great promise, given the combination of demographic and diagnostic information.
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2013-01-01
The ability to compute rarefied, ionized hypersonic flows is becoming more important as missions such as Earth reentry, landing high-mass payloads on Mars, and the exploration of the outer planets and their satellites are being considered. Recently introduced molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties are extended in the current work to include electronic energy level transitions and reactions involving charged particles. These extensions are shown to agree favorably with transition and reaction rates reported in the literature for near-equilibrium conditions. The extensions are also applied to the second flight of the Project FIRE flight experiment at 1634 seconds, with a Knudsen number of 0.001 at an altitude of 76.4 km. To accomplish this, NASA's direct simulation Monte Carlo code DAC was rewritten to simulate charge-neutral ionized flows, to take advantage of the recently introduced chemistry model, and to include the extensions presented in this work. The 1634-second data point was chosen so that comparisons could be made with a CFD solution: the Knudsen number at this point in time is such that the DSMC simulations are still tractable, while the CFD computations are at the edge of what is considered valid because, although near-transitional, the flow is still considered continuum. It is shown that the inclusion of electronic energy levels in the DSMC simulation is necessary for flows of this nature and is required for comparison with the CFD solution. The flow field solutions are also post-processed by the nonequilibrium radiation code HARA to compute the radiative portion of the heating, which is then compared to the total heating measured in flight.
NASA Astrophysics Data System (ADS)
Hansen, Kenneth; Altwegg, Kathrin; Berthelier, Jean-Jacques; Bieler, Andre; Calmonte, Ursina; Combi, Michael; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, Tamas; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Lena; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu
2016-04-01
We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near-comet coma (<400 km) of comet 67P/Churyumov-Gerasimenko for its pre-equinox orbit. In this work we extend the empirical model to the post-equinox, post-perihelion time period. In addition, we extend the coma model significantly further from the comet (~100,000-1,000,000 km). The empirical model characterizes the neutral coma in a comet-centered, sun-fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time, and declination. Furthermore, we have generalized the model beyond application to 67P by replacing the heliocentric distance parameterizations and mapping them to production rates. With this method, the model becomes significantly more general and can be applied to any comet. The model is a significant improvement over simpler empirical models, such as the Haser model. For 67P, the DSMC results are, of course, a more accurate representation of the coma at any given time, but the advantage of a mean-state empirical model is its ease and speed of use. One application of the empirical model is to de-trend the spacecraft motion from the ROSINA COPS and DFMS data (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Comet Pressure Sensor, Double Focusing Mass Spectrometer). The ROSINA instrument measures the neutral coma density at a single point, and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate from the single-point measurement. In this presentation we present the coma production rate as a function of heliocentric distance both pre- and post-equinox and perihelion.
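For comparison, the Haser model cited above as the simpler baseline fits in one line; the production rate, outflow speed, and lifetime used in the example below are typical order-of-magnitude values for a water coma, assumed for illustration, not numbers from this work.

```python
import math

# Haser model for a parent species: n(r) = Q / (4*pi*v*r^2) * exp(-r/(v*tau)),
# with production rate Q [1/s], outflow speed v [m/s], photochemical lifetime
# tau [s], and cometocentric distance r [m].

def haser_density(r, Q, v, tau):
    return Q / (4.0 * math.pi * v * r * r) * math.exp(-r / (v * tau))

# Illustrative (assumed) values: Q = 1e28 1/s, v = 800 m/s, tau = 8e4 s.
# Close to the nucleus the profile is nearly 1/r^2.
n_100km = haser_density(1.0e5, 1e28, 800.0, 8.0e4)
n_200km = haser_density(2.0e5, 1e28, 800.0, 8.0e4)
```

Inside the photodissociation scale length v·τ, doubling the distance reduces the density by almost exactly a factor of four, which is the 1/r² regime the empirical model improves upon.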
Unified gas-kinetic scheme with multigrid convergence for rarefied flow study
NASA Astrophysics Data System (ADS)
Zhu, Yajun; Zhong, Chengwen; Xu, Kun
2017-09-01
The unified gas kinetic scheme (UGKS) is based on direct modeling of gas dynamics on the mesh size and time step scales. With the modeling of particle transport and collision in a time-dependent flux function in a finite volume framework, the UGKS can connect the flow physics smoothly from kinetic particle transport to hydrodynamic wave propagation. In comparison with the direct simulation Monte Carlo (DSMC) method, the equation-based UGKS can employ implicit techniques in the updates of macroscopic conservative variables and microscopic distribution functions. The implicit UGKS significantly increases the convergence speed for steady flow computations, especially in the highly rarefied and near-continuum regimes. In order to further improve the computational efficiency, for the first time, a geometric multigrid technique is introduced into the implicit UGKS, where the prediction step for the equilibrium state and the evolution step for the distribution function are both treated with multigrid acceleration. More specifically, a full approximation scheme for the nonlinear system is employed in the prediction step for fast evaluation of the equilibrium state, and a correction linear equation is solved in the evolution step for the update of the gas distribution function. As a result, the convergence speed is greatly improved in all flow regimes, from rarefied to continuum. The multigrid implicit UGKS (MIUGKS) is applied to non-equilibrium flow studies, including microflows, such as lid-driven cavity flow and the flow passing through a finite-length flat plate, and high-speed flows, such as supersonic flow over a square cylinder. The MIUGKS shows a 5-9 times efficiency increase over the previous implicit scheme. For low-speed microflow, the efficiency of the MIUGKS is several orders of magnitude higher than that of the DSMC. Even for hypersonic flow at Mach number 5 and Knudsen number 0.1, the MIUGKS is still more than 100 times faster than the DSMC method in obtaining a convergent steady state solution.
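The geometric multigrid mechanism can be illustrated on a much simpler target than the UGKS equations. The sketch below is a generic two-grid correction cycle for the 1-D Poisson problem -u'' = f (weighted-Jacobi smoothing, full-weighting restriction, linear prolongation, direct coarse solve); it shows the acceleration idea only and is not the paper's prediction/evolution scheme.

```python
import numpy as np

def smooth(u, f, h, iters=2, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing of -u'' = f (homogeneous Dirichlet ends)."""
    for _ in range(iters):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h):
    """One smooth / coarse-correct / smooth cycle on a grid of odd size."""
    u = smooth(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2   # residual
    nc = (u.size + 1) // 2
    rc = np.zeros(nc)                                           # full weighting
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    hc = 2 * h
    A = (2 * np.eye(nc - 2) - np.eye(nc - 2, k=1) - np.eye(nc - 2, k=-1)) / hc**2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])                     # coarse solve
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolong
    return smooth(u + e, f, h)
```

On a 65-point grid with f = π² sin(πx), twenty such cycles drive the iterate to the discrete solution; applying the idea recursively across more levels turns this two-grid step into a V-cycle.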
The Matrix Element Method: Past, Present, and Future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.
2013-07-12
The increasing use of multivariate methods, and in particular the Matrix Element Method (MEM), represents a revolution in experimental particle physics. With continued exponential growth in computing capabilities, the use of sophisticated multivariate methods -- already common -- will soon become ubiquitous and ultimately almost compulsory. While the existence of sophisticated algorithms for disentangling signal and background might naively suggest a diminished role for theorists, the use of the MEM, with its inherent connection to the calculation of differential cross sections, will benefit from collaboration between theorists and experimentalists. In this white paper, we will briefly describe the MEM and some of its recent uses, note some current issues and potential resolutions, and speculate about exciting future opportunities.
NASA Technical Reports Server (NTRS)
Young, William D.
1992-01-01
The application of formal methods to the analysis of computing systems promises to provide higher and higher levels of assurance as the sophistication of our tools and techniques increases. Improvements in tools and techniques come about as we pit the current state of the art against new and challenging problems. A promising area for the application of formal methods is in real-time and distributed computing. Some of the algorithms in this area are both subtle and important. In response to this challenge and as part of an ongoing attempt to verify an implementation of the Interactive Convergence Clock Synchronization Algorithm (ICCSA), we decided to undertake a proof of the correctness of the algorithm using the Boyer-Moore theorem prover. This paper describes our approach to proving the ICCSA using the Boyer-Moore prover.
Experimental research of flow servo-valve
NASA Astrophysics Data System (ADS)
Takosoglu, Jakub
Positional control of pneumatic drives is particularly important in pneumatic systems. Several methods of positioning pneumatic cylinders for changeover and tracking control are known; the choking method is the most development-oriented and has the greatest potential. An optimal and effective control method, particularly for pneumatic drives, has long been sought, and sophisticated control systems with algorithms utilizing artificial intelligence methods are designed for this purpose. Designing such a control algorithm requires knowledge of the real parameters of the servo-valves used in the control systems of electro-pneumatic servo-drives. The paper presents experimental research on a flow servo-valve.
DSMC Modeling of Flows with Recombination Reactions
2017-06-23
[Garbled report excerpt; recoverable reference fragments: Rogasinsky, "Analysis of the numerical techniques of the direct simulation Monte Carlo method in the rarefied gas dynamics," Russ. J. Numer. Anal. Math...; "...reflection in steady flows," Comput. Math. Appl. 35(1-2), 113-126 (1998); K. L. Wray, "Shock-tube study of the recombination of O atoms by Ar catalysts at..."]
1993-06-01
[Garbled front-matter excerpt from a DSMC (Defense Systems Management College) Press publication; recoverable fragments list masthead names (Greg Caruth, Typography and Design; Paula Croisetlere, Program Manager; William J. Perry, DEPSECDEF) and note that material may be submitted to the DSMC Press for publication consideration, the press serving as a link to the government and private-sector defense acquisition community.]
Applying a visual language for image processing as a graphical teaching tool in medical imaging
NASA Astrophysics Data System (ADS)
Birchman, James J.; Tanimoto, Steven L.; Rowberg, Alan H.; Choi, Hyung-Sik; Kim, Yongmin
1992-05-01
Typical user interaction in image processing is with command line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language using visual representations to express image processing algorithms. Visual representations promote a more natural and rapid understanding of image processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communications system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for image processing which has been implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled `Visualization of Vision Algorithms' (VIVA). Iconic representations of simple image processing steps are placed into a workbench screen and connected together into a dataflow path by the user.
As the user creates and edits a dataflow path, more complex algorithms can be built on the screen. Once the algorithm is built, it can be executed, its results can be reviewed, and operator parameters can be interactively adjusted until an optimized output is produced. The optimized algorithm can then be saved and added to the system as a new operator. This system has been evaluated as a graphical teaching tool for window width and window level adjustment, image enhancement using unsharp masking, and other techniques.
NASA Astrophysics Data System (ADS)
Yerdelen-Damar, Sevda; Elby, Andrew
2016-06-01
This study investigates how elite Turkish high school physics students claim to approach learning physics when they are simultaneously (i) engaged in a curriculum that led to significant gains in their epistemological sophistication and (ii) subject to a high-stakes college entrance exam. Students reported taking surface (rote) approaches to learning physics, largely driven by college entrance exam preparation and therefore focused on algorithmic problem solving at the expense of exploring concepts and real-life examples more deeply. By contrast, in recommending study strategies to "Arzu," a hypothetical student who doesn't need to take a college entrance exam and just wants to understand physics deeply, the students focused more on linking concepts and real-life examples and on making sense of the formulas and concepts—deep approaches to learning that reflect somewhat sophisticated epistemologies. These results illustrate how students can epistemically compartmentalize, consciously taking different epistemic stances—different views of what counts as knowing and learning—in different contexts even within the same discipline.
Efficient and secure outsourcing of genomic data storage.
Sousa, João Sá; Lefebvre, Cédric; Huang, Zhicong; Raisaro, Jean Louis; Aguilar-Melchor, Carlos; Killijian, Marc-Olivier; Hubaux, Jean-Pierre
2017-07-26
Cloud computing is becoming the preferred solution for efficiently dealing with the increasing amount of genomic data. Yet, outsourcing the storage and processing of sensitive information, such as genomic data, comes with important concerns related to privacy and security. This calls for new sophisticated techniques that ensure data protection from untrusted cloud providers while still enabling researchers to obtain useful information. We present a novel privacy-preserving algorithm for fully outsourcing the storage of large genomic data files to a public cloud while enabling researchers to efficiently search for variants of interest. In order to protect data and query confidentiality from possible leakage, our solution exploits optimal encoding for genomic variants and combines it with homomorphic encryption and private information retrieval. Our proposed algorithm is implemented in C++ and was evaluated on real data as part of the 2016 iDash Genome Privacy-Protection Challenge. Results show that our solution outperforms the state-of-the-art solutions and enables researchers to search over millions of encrypted variants in a few seconds. As opposed to prior beliefs that sophisticated privacy-enhancing technologies (PETs) are impractical for real operational settings, our solution demonstrates that, in the case of genomic data, PETs are very efficient enablers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szadkowski, Zbigniew; Glas, Dariusz; Pytel, Krzysztof
Neutrinos play a fundamental role in the understanding of the origin of ultra-high-energy cosmic rays. They interact through charged and neutral currents in the atmosphere, generating extensive air showers. However, the very low rate of events potentially generated by neutrinos is a significant challenge for any detection technique and requires both sophisticated algorithms and high-resolution hardware. A trigger based on an artificial neural network was implemented in the Cyclone® V E FPGA 5CEFA9F31I7, the heart of the prototype Front-End boards developed for tests of new algorithms in the Pierre Auger surface detectors. Showers initiated by muon and tau neutrinos at various altitudes, angles, and energies were simulated on the CORSIKA and Offline platforms, giving patterns of ADC traces in the Auger water Cherenkov detectors. The 3-layer 12-8-1 neural network was trained in MATLAB on the simulated ADC traces using the Levenberg-Marquardt algorithm. Results show that the probability of generating such ADC traces is very low due to the small neutrino cross-section. Nevertheless, the ADC traces that do occur for 1-10 EeV showers are relatively short and can be analyzed by a 16-point input algorithm. We optimized the coefficients from MATLAB to obtain a maximal range of potentially registered events and, for fixed-point FPGA processing, to minimize calculation errors. New sophisticated triggers implemented in Cyclone® V E FPGAs with a large number of DSP blocks and embedded memory, running at 120-160 MHz sampling, may support the discovery of neutrino events at the Pierre Auger Observatory. (authors)
On global optimization using an estimate of Lipschitz constant and simplicial partition
NASA Astrophysics Data System (ADS)
Gimbutas, Albertas; Žilinskas, Antanas
2016-10-01
A new algorithm is proposed for finding the global minimum of a multivariate black-box Lipschitz function with an unknown Lipschitz constant. The feasible region is initially partitioned into simplices; in subsequent iterations, the most suitable simplices are selected and bisected via the middle point of their longest edge. The suitability of a simplex for bisection is evaluated by minimizing a surrogate function which mimics the lower bound of the considered objective function over that simplex. The surrogate function is defined using an estimate of the Lipschitz constant and the objective function values at the vertices of the simplex. The novelty of the algorithm lies in its method of estimating the Lipschitz constant and in the method used to minimize the surrogate function. The proposed algorithm was tested on 600 random test problems of different complexity, showing competitive results against two popular advanced algorithms based on similar assumptions.
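A one-dimensional sketch conveys the selection-and-bisection loop: an interval is a 1-simplex, so "bisecting the longest edge" reduces to midpoint bisection. The sketch assumes a known valid Lipschitz constant is supplied, rather than the estimation scheme that is the paper's actual contribution, and it re-evaluates endpoint values for brevity where a real implementation would cache them.

```python
def lipschitz_minimize(f, a, b, L, n_iter=60):
    """Global minimization of f on [a, b] given a valid Lipschitz constant L."""
    def lower(iv):
        # surrogate lower bound over [x0, x1] from vertex values and L
        x0, x1 = iv
        return 0.5 * (f(x0) + f(x1)) - 0.5 * L * (x1 - x0)

    best_x, best_f = (a, f(a)) if f(a) <= f(b) else (b, f(b))
    intervals = [(a, b)]
    for _ in range(n_iter):
        intervals.sort(key=lower)
        x0, x1 = intervals.pop(0)       # most suitable (lowest bound) interval
        m = 0.5 * (x0 + x1)             # bisect via the midpoint
        fm = f(m)
        if fm < best_f:
            best_x, best_f = m, fm
        intervals += [(x0, m), (m, x1)]
    return best_x, best_f

# Toy problem: minimum of (x - 0.3)^2 on [0, 1]; L = 2 bounds |f'| <= 1.4.
x_min, f_min = lipschitz_minimize(lambda x: (x - 0.3) ** 2, 0.0, 1.0, L=2.0)
```

Intervals whose lower bound exceeds the incumbent value are effectively never revisited, so the search concentrates its bisections around the global minimizer.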
Nonlinear estimation for arrays of chemical sensors
NASA Astrophysics Data System (ADS)
Yosinski, Jason; Paffenroth, Randy
2010-04-01
Reliable detection of hazardous materials is a fundamental requirement of any national security program. Such materials can take a wide range of forms including metals, radioisotopes, volatile organic compounds, and biological contaminants. In particular, detection of hazardous materials in highly challenging conditions - such as in cluttered ambient environments, where complex collections of analytes are present, and with sensors lacking specificity for the analytes of interest - is an important part of a robust security infrastructure. Sophisticated single sensor systems provide good specificity for a limited set of analytes but often have cumbersome hardware and environmental requirements. On the other hand, simple, broadly responsive sensors are easily fabricated and efficiently deployed, but such sensors individually have neither the specificity nor the selectivity to address analyte differentiation in challenging environments. However, arrays of broadly responsive sensors can provide much of the sensitivity and selectivity of sophisticated sensors but without the substantial hardware overhead. Unfortunately, arrays of simple sensors are not without their challenges - the selectivity of such arrays can only be realized if the data is first distilled using highly advanced signal processing algorithms. In this paper we will demonstrate how the use of powerful estimation algorithms, based on those commonly used within the target tracking community, can be extended to the chemical detection arena. Herein our focus is on algorithms that not only provide accurate estimates of the mixture of analytes in a sample, but also provide robust measures of ambiguity, such as covariances.
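A stripped-down version of the estimation problem described above can be written for a linear sensor-response model y = A·c + noise; the estimate's covariance then serves as the robust ambiguity measure the abstract mentions. The response matrix, noise level, and concentrations below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def estimate_mixture(A, y, sigma):
    """Least-squares analyte estimate and its covariance (ambiguity measure)."""
    AtA = A.T @ A
    c_hat = np.linalg.solve(AtA, A.T @ y)
    cov = sigma**2 * np.linalg.inv(AtA)
    return c_hat, cov

# Illustrative response matrix: 6 broadly responsive sensors x 3 analytes.
A = np.array([
    [0.9, 0.1, 0.3],
    [0.2, 0.8, 0.4],
    [0.3, 0.2, 0.9],
    [0.7, 0.5, 0.1],
    [0.1, 0.6, 0.6],
    [0.5, 0.3, 0.2],
])
c_true = np.array([0.5, 1.2, 0.0])
sigma = 0.01
rng = np.random.default_rng(0)
y = A @ c_true + sigma * rng.normal(size=6)     # noisy array readout
c_hat, cov = estimate_mixture(A, y, sigma)
```

The diagonal of `cov` quantifies how well the array separates each analyte; strongly overlapping sensor responses inflate it even when the fit residual is small.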
Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds
NASA Astrophysics Data System (ADS)
Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni
2012-09-01
Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains to provide wireless communications services on demand. Each new user session request requires, in particular, the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.
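As a toy illustration of the kind of strategy comparison such tools support (not an algorithm from the paper), the sketch below contrasts first-fit and best-fit placement of per-session processing demands onto fixed-capacity clusters; the demands and capacities are made-up units.

```python
# Toy allocator: place each session's processing demand on a cluster.
# Demands and capacities are in arbitrary illustrative units.

def allocate(demands, capacities, best_fit=False):
    """Return per-cluster load after placing every demand, or None if any
    demand cannot be placed."""
    load = [0.0] * len(capacities)
    for d in demands:
        fits = [i for i, c in enumerate(capacities) if load[i] + d <= c]
        if not fits:
            return None                          # session request rejected
        if best_fit:                             # tightest remaining gap first
            fits.sort(key=lambda i: capacities[i] - load[i] - d)
        load[fits[0]] += d
    return load
```

Best-fit packs a small session into the small cluster and keeps room for a large one where first-fit fails, illustrating how a slightly more sophisticated policy achieves higher resource occupation.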
CellAnimation: an open source MATLAB framework for microscopy assays.
Georgescu, Walter; Wikswo, John P; Quaranta, Vito
2012-01-01
Advances in microscopy technology have led to the creation of high-throughput microscopes that are capable of generating several hundred gigabytes of images in a few days. Analyzing such a wealth of data manually is nearly impossible and requires an automated approach. There are at present a number of open-source and commercial software packages that allow the user to apply algorithms of different degrees of sophistication to the images and extract desired metrics. However, the types of metrics that can be extracted are severely limited by the specific image processing algorithms that the application implements, and by the expertise of the user. In most commercial software, code unavailability prevents implementation by the end user of newly developed algorithms better suited for a particular type of imaging assay. While it is possible to implement new algorithms in open-source software, rewiring an image processing application requires a high degree of expertise. To obviate these limitations, we have developed an open-source high-throughput application that allows implementation of different biological assays, such as cell tracking or ancestry recording, through the use of small, relatively simple image processing modules connected into sophisticated imaging pipelines. By connecting modules, non-expert users can apply the particular combination of well-established and novel algorithms developed by us and others that is best suited for each individual assay type. In addition, our data exploration and visualization modules make it easy to discover or select specific cell phenotypes from a heterogeneous population. CellAnimation is distributed under the Creative Commons Attribution-NonCommercial 3.0 Unported license (http://creativecommons.org/licenses/by-nc/3.0/). CellAnimation source code and documentation may be downloaded from www.vanderbilt.edu/viibre/software/documents/CellAnimation.zip. Sample data are available at www.vanderbilt.edu/viibre/software/documents/movies.zip. Contact: walter.georgescu@vanderbilt.edu. Supplementary data are available at Bioinformatics online.
Thermal Nonequilibrium in Hypersonic Separated Flow
2014-12-22
[Garbled report excerpt; recoverable fragments: "...flow duration and steadiness"; subject terms "Hypersonic Flowfield Measurements, Laser Diagnostics of Gas Flow, Laser Induced..."; "...extent than the NS computation. While it would be convenient to believe that the more physically realistic flow modeling of the DSMC gas-surface..."; "...index and absorption coefficient. Each of the curves was produced assuming a 0.5% concentration of lithium at the Condition A nozzle exit conditions."]
NASA Astrophysics Data System (ADS)
Akhlaghi, H.; Roohi, E.; Myong, R. S.
2012-11-01
Micro/nano geometries with specified wall heat flux are widely encountered in electronic cooling and micro-/nano-fluidic sensors. We introduce a new technique to impose a desired (positive/negative) wall heat flux boundary condition in DSMC simulations. The technique is based on iterative adjustment of the wall temperature magnitude. It is found that the proposed iterative technique performs well numerically and can impose both positive and negative wall heat flux values accurately. Using the present technique, rarefied gas flow through micro-/nanochannels under specified wall heat flux conditions is simulated, and unique behaviors are observed in the case of channels with cooling walls. For example, contrary to the heating process, it is observed that cooling of the micro/nanochannel walls results in only small variations in the density field. Upstream thermal creep effects in the cooling process decrease the velocity slip despite the increase of the Knudsen number along the channel. Similarly, the cooling process decreases the curvature of the pressure distribution below the linear incompressible distribution. Our results indicate that flow cooling increases the mass flow rate through the channel, and vice versa.
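The iterative wall-temperature idea can be sketched abstractly. Below, `sample_wall_heat_flux` stands in for the DSMC surface-sampling step and is assumed to be a monotonically decreasing function of wall temperature; the multiplicative relaxation update and its parameters are illustrative, not the paper's exact scheme.

```python
def solve_wall_temperature(q_target, sample_wall_heat_flux,
                           T0=300.0, relax=0.2, tol=1e-6, max_iter=200):
    """Adjust the wall temperature until the sampled gas-to-wall heat flux
    matches q_target (positive q means the wall is being heated by the gas)."""
    T = T0
    for _ in range(max_iter):
        q = sample_wall_heat_flux(T)
        if abs(q - q_target) <= tol * max(abs(q_target), 1.0):
            break
        # flux too high -> wall too cold -> raise T (and vice versa)
        T *= 1.0 + relax * (q - q_target) / (abs(q) + abs(q_target) + 1e-30)
    return T

# Mock sampler (assumption): gas at 400 K, flux proportional to T_gas - T_wall.
def mock_flux(T):
    return 2.0 * (400.0 - T)

T_wall = solve_wall_temperature(50.0, mock_flux)   # exact answer here: 375 K
```

In an actual DSMC run each flux evaluation would itself be a noisy time average, so the relaxation factor also serves to damp the sampling noise.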
Molecular simulation of small Knudsen number flows
NASA Astrophysics Data System (ADS)
Fei, Fei; Fan, Jing
2012-11-01
The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Like the DSMC method, however, that approach suffers from statistical noise. To solve the problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and to obtain the flow velocity and temperature through sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ~ 10^-3 to 10^-4 have been investigated. It is shown that the IP calculations are not only accurate, but also efficient because they make it possible to use a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.
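For a single velocity component, the continuous stochastic model underlying the Fokker-Planck approach reduces to an Ornstein-Uhlenbeck process, which can be integrated exactly over a time step of any size. This is a generic sketch of that update (generic symbols, not the notation of the cited papers); its per-particle cost is fixed regardless of the collision rate.

```python
import math
import random

def fp_velocity_step(v, u_mean, RT, tau, dt, rng=random):
    """Exact Ornstein-Uhlenbeck step: the particle velocity relaxes toward
    the local mean u_mean with time scale tau while its variance relaxes to
    RT = kT/m, for any dt (no mean-collision-time restriction on dt)."""
    a = math.exp(-dt / tau)
    return (u_mean + (v - u_mean) * a
            + math.sqrt(RT * (1.0 - a * a)) * rng.gauss(0.0, 1.0))
```

Evolving a cold ensemble with u_mean = 0, RT = 1, tau = 1 and the large step dt = 0.5 relaxes the sample variance toward RT even though dt is comparable to the relaxation time, which is exactly the regime where pairwise-collision DSMC would need far smaller steps.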
Study of cluster behavior in the riser of CFB by the DSMC method
NASA Astrophysics Data System (ADS)
Liu, H. P.; Liu, D. Y.; Liu, H.
2010-03-01
The flow behaviors of clusters in the riser of a two-dimensional (2D) circulating fluidized bed were numerically studied based on the Euler-Lagrangian approach. Gas turbulence was modeled by means of Large Eddy Simulation (LES). Particle collisions were modeled by means of the direct simulation Monte Carlo (DSMC) method. The clusters' hydrodynamic characteristics were obtained using a cluster identification method proposed by Sharma et al. (2000). The descending clusters near the wall region and the up- and down-flowing clusters in the core were studied separately due to their different flow behaviors. The effects of superficial gas velocity on the cluster behavior were analyzed. Simulated results showed that near-wall clusters flow downward with a descent velocity of about -45 cm/s. The occurrence frequency of up-flowing clusters is higher than that of down-flowing clusters in the core of the riser. With an increase of superficial gas velocity, the solid concentration and occurrence frequency of clusters decrease, while the cluster axial velocity increases. Simulated results were in agreement with experimental data. The stochastic method used in the present paper is feasible for predicting cluster flow behavior in CFBs.
Rarefied flow past a flat plate at incidence
NASA Technical Reports Server (NTRS)
Dogra, Virendra K.; Moss, James N.; Price, Joseph M.
1988-01-01
Results of a numerical study using the direct simulation Monte Carlo (DSMC) method are presented for the transitional flow about a flat plate at 40 deg incidence. The plate has zero thickness and a length of 1.0 m. The flow conditions simulated are those experienced by the Shuttle Orbiter during reentry at 7.5 km/s. The range of freestream conditions is such that the freestream Knudsen number values are between 0.02 and 8.4, i.e., conditions that encompass most of the transitional flow regime. The DSMC simulations show that transitional effects are evident when compared with free molecule results for all cases considered. The calculated results demonstrate clearly the necessity of having a means of identifying the effects of transitional flow when making aerodynamic flight measurements as are currently being made with the Space Shuttle Orbiter vehicles. Previous flight data analyses have relied exclusively on adjustments in the gas-surface interaction models without accounting for the transitional effect, which can be comparable in magnitude. The present calculations show that the transitional effect at 175 km would increase the Space Shuttle Orbiter lift-drag ratio by 90 percent over the free molecule value.
NASA Astrophysics Data System (ADS)
Yang, Guang; Weigand, Bernhard
2018-04-01
The pressure-driven gas transport characteristics through a porous medium consisting of arrays of discrete elements are investigated by using the direct simulation Monte Carlo (DSMC) method. Different porous structures are considered, accounting for both two- and three-dimensional arrangements of basic microscale and nanoscale elements. The pore scale flow patterns in the porous medium are obtained, and the Knudsen diffusion in the pores is studied in detail for slip and transition flow regimes. A new effective pore size of the porous medium is defined, which is a function of the porosity, the tortuosity, the contraction factor, and the intrinsic permeability of the porous medium. It is found that the Klinkenberg effect in different porous structures can be fully described by the Knudsen number characterized by the effective pore size. The accuracies of some widely used Klinkenberg correlations are evaluated against the present DSMC results. It is also found that the available correlations for apparent permeability, most of which are derived from simple pipe or channel flows, can still be applied to more complex porous media flows by using the effective pore size defined in this study.
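The chain of quantities above (effective pore size, Knudsen number, Klinkenberg-type apparent permeability) can be sketched as follows. Important caveat: the paper defines its own effective pore size from the four listed quantities, and its exact expression is not reproduced here; the capillary-bundle form and the slip coefficient below are classic textbook assumptions used purely for illustration:

```python
import math

def effective_pore_size(porosity, tortuosity, contraction, k_intrinsic):
    """Illustrative capillary-bundle form: d_eff = sqrt(32 * tau * C * k / eps).

    This is the classic Hagen-Poiseuille analogue, shown as an assumption;
    the paper's own definition may differ.
    """
    return math.sqrt(32.0 * tortuosity * contraction * k_intrinsic / porosity)

def knudsen(mean_free_path, d_eff):
    # Kn characterized by the effective pore size, as in the abstract
    return mean_free_path / d_eff

def apparent_permeability(k_intrinsic, kn, slip_coeff=4.0):
    """Klinkenberg-type correction: k_app = k * (1 + c * Kn).

    A first-order slip form; the coefficient is an assumed placeholder.
    """
    return k_intrinsic * (1.0 + slip_coeff * kn)

# Invented sample values: eps = 0.4, tau = 2.0, C = 1.2, k = 1e-15 m^2
d_eff = effective_pore_size(0.4, 2.0, 1.2, 1e-15)
k_app = apparent_permeability(1e-15, knudsen(6.8e-8, d_eff))
```

The point of the abstract's finding is that once Kn is built from such an effective pore size, a single Klinkenberg-style correlation describes very different pore geometries.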
Collisional spreading of Enceladus’ neutral cloud
NASA Astrophysics Data System (ADS)
Cassidy, T. A.; Johnson, R. E.
2010-10-01
We describe a direct simulation Monte Carlo (DSMC) model of Enceladus' neutral cloud and compare its results to observations of OH and O orbiting Saturn. The OH and O are observed far from Enceladus (at 3.95 R_S), as far out as 25 R_S for O. Previous DSMC models attributed this breadth primarily to ion/neutral scattering (including charge exchange) and molecular dissociation. However, the newly reported O observations and a reinterpretation of the OH observations (Melin, H., Shemansky, D.E., Liu, X. [2009] Planet. Space Sci., 57, 1743-1753) showed that the cloud is broader than previously thought. We conclude that the addition of neutral/neutral scattering (Farmer, A.J. [2009] Icarus, 202, 280-286), which was underestimated by previous models, brings the model results in line with the new observations. Neutral/neutral collisions primarily happen in the densest part of the cloud, near Enceladus' orbit, but contribute to the spreading by pumping up orbital eccentricity. Based on the cloud model presented here, Enceladus may be the ultimate source of oxygen for the upper atmospheres of Titan and Saturn. We also predict that large quantities of OH, O and H2O bombard Saturn's icy satellites.
An evaluation of computer assisted clinical classification algorithms.
Chute, C G; Yang, Y; Buntrock, J
1994-01-01
The Mayo Clinic has a long tradition of indexing patient records in high resolution and volume. Several algorithms have been developed which promise to help human coders in the classification process. We evaluate variations on code browsers and free text indexing systems with respect to their speed and error rates in our production environment. The more sophisticated indexing systems save measurable time in the coding process, but suffer from incompleteness which requires a back-up system or human verification. Expert Network does the best job of rank ordering clinical text, potentially enabling the creation of thresholds for the pass through of computer coded data without human review.
Automatic design of IMA systems
NASA Astrophysics Data System (ADS)
Salomon, U.; Reichel, R.
In recent years, the integrated modular avionics (IMA) design philosophy has become widely established among aircraft manufacturers, giving rise to a series of new design challenges, most notably the allocation of avionics functions to the various IMA components and the placement of this equipment in the aircraft. This paper presents a modelling approach for avionics that allows automation of some steps of the design process by applying an optimisation algorithm which searches for system configurations that fulfil the safety requirements and have low costs. The algorithm was implemented as a sophisticated software prototype; we therefore also present detailed results of its application to actual avionics systems.
Graphical Requirements for Force Level Planning. Volume 2
1991-09-01
technology review includes graphics algorithms, computer hardware, computer software, and design methodologies. The technology can either exist today or... level graphics language. 7.4 User Interface Design Tools. As user interfaces have become more sophisticated, they have become harder to develop. Xl... Stephen M. Pizer, editors. Proceedings 1986 Workshop on Interactive 3D Graphics, October 1986. 18 J. S. Dumas. Designing User Interface Software. Prentice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szadkowski, Zbigniew; Glas, Dariusz; Pytel, Krzysztof
Observations of ultra-high energy neutrinos have become a priority in experimental astroparticle physics. Up to now, the Pierre Auger Observatory has not found any candidate neutrino event. This imposes competitive limits on the diffuse flux of ultra-high energy neutrinos in the EeV range and above. The very low rate of events potentially generated by neutrinos is a significant challenge for any detection technique and requires both sophisticated algorithms and high-resolution hardware. A trigger based on an artificial neural network was implemented in the Cyclone® V E FPGA 5CEFA9F31I7. The prototype Front-End boards for Auger-Beyond-2015 with the Cyclone® V E can test the neural network algorithm in real pampas conditions in 2015. Showers initiated by muon and tau neutrinos at various altitudes, angles and energies were simulated on the CORSIKA and Offline platforms, giving patterns of ADC traces in the Auger water Cherenkov detectors. The 3-layer 12-10-1 neural network was trained in MATLAB on the simulated ADC traces using the Levenberg-Marquardt algorithm. Results show that the probability of generating such ADC traces is very low due to the small neutrino cross-section. Nevertheless, the ADC traces that do occur for 1-10 EeV showers are relatively short and can be analyzed by a 16-point input algorithm. In the 100 EeV range, traces are much longer but have significantly higher amplitudes and can be detected by standard threshold algorithms. We optimized the coefficients from MATLAB to obtain a maximal range of potentially registered events and, for fixed-point FPGA processing, to minimize calculation errors. The currently used Front-End boards, based on no-longer-produced ACEX® PLDs and obsolete Cyclone® FPGAs, allow the implementation of only relatively simple threshold trigger algorithms. The new sophisticated trigger implemented in Cyclone® V E FPGAs, with a large number of DSP blocks and embedded memory running at 120-160 MHz sampling, may help to discover neutrino events in the Pierre Auger Observatory. (authors)
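The 12-10-1 feed-forward trigger described above amounts to two small matrix products per trace. A minimal sketch follows; the random weights, the tanh/sigmoid activations, and the 0.5 firing threshold are assumptions for illustration only (the actual network was trained in MATLAB and quantized for fixed-point FPGA use):

```python
import numpy as np

rng = np.random.default_rng(0)

# 12 inputs -> 10 hidden units -> 1 output, matching the 12-10-1 topology.
# Weights are random placeholders, not the trained coefficients.
W1, b1 = rng.normal(size=(10, 12)), np.zeros(10)
W2, b2 = rng.normal(size=(1, 10)), np.zeros(1)

def trigger(x, threshold=0.5):
    """Forward pass of the 3-layer 12-10-1 network (activations assumed).

    Returns (fired, score): whether the trigger fires and the raw
    sigmoid output in [0, 1].
    """
    h = np.tanh(W1 @ x + b1)                    # hidden layer, 10 units
    y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))    # sigmoid output
    return bool(y[0] > threshold), float(y[0])

fired, score = trigger(rng.normal(size=12))
```

On the FPGA the same two matrix products would run in fixed point against DSP blocks, which is why minimizing quantization error of the coefficients matters.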
Biologically inspired binaural hearing aid algorithms: Design principles and effectiveness
NASA Astrophysics Data System (ADS)
Feng, Albert
2002-05-01
Despite rapid advances in the sophistication of hearing aid technology and microelectronics, listening in noise remains problematic for people with hearing impairment. To solve this problem, two algorithms were designed for use in binaural hearing aid systems. The signal processing strategies are based on principles in auditory physiology and psychophysics: (a) the location/extraction (L/E) binaural computational scheme determines the directions of source locations and cancels noise by applying a simple subtraction method over every frequency band; and (b) the frequency-domain minimum-variance (FMV) scheme extracts a target sound from a known direction amidst multiple interfering sound sources. Both algorithms were evaluated using standard metrics such as signal-to-noise-ratio gain and articulation index. Results were compared with those from conventional adaptive beam-forming algorithms. In free-field tests with multiple interfering sound sources our algorithms performed better than conventional algorithms. Preliminary intelligibility and speech reception results in multitalker environments showed gains for every listener with normal or impaired hearing when the signals were processed in real time with the FMV binaural hearing aid algorithm. [Work supported by NIH-NIDCD Grant No. R21DC04840 and the Beckman Institute.]
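The FMV scheme extracts a target from a known direction in each frequency band; the classic minimum-variance (MVDR) solution that this class of beamformers builds on can be sketched as follows. The two-microphone covariance matrix and steering vector are invented example values, and the diagonal loading term is an added assumption for numerical robustness, not part of the paper:

```python
import numpy as np

def fmv_weights(R, d, loading=1e-6):
    """Minimum-variance weights for one frequency band: w = R^-1 d / (d^H R^-1 d).

    Minimizes output power subject to unit gain toward the known target
    direction d (the distortionless constraint). This is the textbook MVDR
    solution; the paper's exact formulation may differ in detail.
    """
    n = R.shape[0]
    Rinv = np.linalg.inv(R + loading * np.eye(n))  # loaded for robustness
    num = Rinv @ d
    return num / (d.conj() @ num)

# Two-microphone band with a target at broadside (zero inter-mic delay):
R = np.array([[1.0, 0.3], [0.3, 1.0]], dtype=complex)  # band covariance
d = np.array([1.0, 1.0], dtype=complex)                # steering vector
w = fmv_weights(R, d)
# By construction, w^H d == 1: the target direction passes undistorted
# while correlated interference is suppressed.
```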
Ni, Yepeng; Liu, Jianbo; Liu, Shan; Bai, Yaxin
2016-01-01
With the rapid development of smartphones and wireless networks, indoor location-based services have become more and more prevalent. Due to the complicated propagation of radio signals, the Received Signal Strength Indicator (RSSI) shows significant variation during pedestrian walking, which introduces critical errors into deterministic indoor positioning. To solve this problem, we present a novel method to improve indoor pedestrian positioning accuracy by embedding a fuzzy pattern recognition algorithm into a Hidden Markov Model. The fuzzy pattern recognition algorithm follows the rule that the RSSI fading has a positive correlation with the distance between the measuring point and the AP location, even during a dynamic positioning measurement. Through this algorithm, we use the RSSI variation trend instead of the specific RSSI value to achieve fuzzy positioning. The transition probability of the Hidden Markov Model is trained by the fuzzy pattern recognition algorithm with pedestrian trajectories. Using the Viterbi algorithm with the trained model, we can obtain a set of hidden location states. In our experiments, we demonstrate that, compared with the deterministic pattern matching algorithm, our method can greatly improve the positioning accuracy and shows robust environmental adaptability. PMID:27618053
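The decoding step described above, recovering hidden location states with the Viterbi algorithm, can be sketched as follows. The transition, emission, and initial matrices below are invented placeholders; in the paper the transition probabilities come from the fuzzy-pattern-recognition training on pedestrian trajectories:

```python
import numpy as np

def viterbi(obs, A, B, pi):
    """Most likely sequence of hidden location states given observations.

    A: state transition matrix, B[s][o]: probability of observation o in
    state s (here the observations would be fuzzy RSSI-trend classes),
    pi: initial state distribution.
    """
    n_states, T = len(pi), len(obs)
    delta = np.zeros((T, n_states))          # best path probability so far
    psi = np.zeros((T, n_states), dtype=int)  # backpointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for j in range(n_states):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] * B[j, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):             # backtrack
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

A = np.array([[0.8, 0.2], [0.3, 0.7]])   # location-to-location transitions
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # RSSI-trend observation model
pi = np.array([0.5, 0.5])
states = viterbi([0, 0, 1, 1], A, B, pi)
```

For long trajectories a log-domain version would be preferred to avoid underflow; the plain-probability form above is kept for readability.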
Coma dust scattering concepts applied to the Rosetta mission
NASA Astrophysics Data System (ADS)
Fink, Uwe; Rinaldi, Giovanna
2015-09-01
This paper describes basic concepts, as well as providing a framework, for the interpretation of the light scattered by the dust in a cometary coma as observed by instruments on a spacecraft such as Rosetta. It is shown that the expected optical depths are small enough that single scattering can be applied. Each of the quantities that contribute to the scattered intensity is discussed in detail. Using optical constants of the likely coma dust constituents, olivine, pyroxene and carbon, the scattering properties of the dust are calculated. For the resulting observable scattering intensities, several particle size distributions are considered: a simple power law, power laws with a small-particle cutoff, and log-normal distributions with various parameters. Within the context of a simple outflow model, the standard definition of Afρ for a circular observing aperture is expanded to an equivalent Afρ for an annulus and for a specific line-of-sight observation. The resulting equivalence between the observed intensity and Afρ is used to predict observable intensities for 67P/Churyumov-Gerasimenko at the spacecraft encounter near 3.3 AU and near perihelion at 1.3 AU. This is done by normalizing particle production rates of various size distributions to agree with observed ground-based Afρ values. Various geometries for the column densities in a cometary coma are considered. The calculations for a simple outflow model are compared with more elaborate direct simulation Monte Carlo (DSMC) models to define the limits of applicability of the simpler analytical approach. Thus our analytical approach can be applied to the majority of the Rosetta coma observations, particularly beyond several nuclear radii where the dust is no longer in a collisional environment, without recourse to computer-intensive DSMC calculations for specific cases.
In addition to a spherically symmetric 1-dimensional approach we investigate column densities for the 2-dimensional DSMC model on the day and night side of the comet. Our calculations are also applied to estimates of the dust particle densities and flux which are useful for the in-situ experiments on Rosetta.
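The circular-aperture Afρ referred to above has a compact conventional form. The sketch below uses the standard A'Hearn-style definition as an illustration; the sample values in the usage line are invented, not taken from the paper:

```python
def afrho(flux_ratio, r_au, delta_cm, rho_cm):
    """Conventional circular-aperture Afrho, in cm.

    Afrho = (2 * r * Delta)^2 / rho * (F_comet / F_sun), with r the
    heliocentric distance in AU, Delta the observer-comet distance in cm,
    rho the aperture radius in cm, and flux_ratio the observed cometary
    flux divided by the solar flux. The paper generalizes this quantity
    to annuli and lines of sight; only the standard form is shown here.
    """
    return (2.0 * r_au * delta_cm) ** 2 / rho_cm * flux_ratio

# Invented example near the 3.3 AU encounter geometry:
a = afrho(flux_ratio=1e-13, r_au=3.3, delta_cm=1e7, rho_cm=1e5)
```

Since Afρ is linear in the measured flux, normalizing model production rates to a ground-based Afρ value, as the paper does, is a single scale factor per size distribution.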
NASA Astrophysics Data System (ADS)
Hansen, Kenneth C.; Altwegg, Kathrin; Bieler, Andre; Berthelier, Jean-Jacques; Calmonte, Ursina; Combi, Michael R.; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, T. I.; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Léna; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu; ROSINA Team
2016-10-01
We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near-comet water (H2O) coma of comet 67P/Churyumov-Gerasimenko. In this work we create additional empirical models for the coma distributions of CO2 and CO. The AMPS simulations are based on ROSINA DFMS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Double Focusing Mass Spectrometer) data taken over the entire timespan of the Rosetta mission. The empirical model is created using AMPS DSMC results which are extracted from simulations at a range of radial distances, rotation phases and heliocentric distances. The simulation results are then averaged over a comet rotation and fitted to an empirical model distribution. Model coefficients are then fitted to piecewise-linear functions of heliocentric distance. The final product is an empirical model of the coma distribution which is a function of heliocentric distance, radial distance, and sun-fixed longitude and latitude angles. The model clearly mimics the behavior of water shifting production from North to South across the inbound equinox, while the CO2 production is always in the South. The empirical model can be used to de-trend the spacecraft motion from the ROSINA COPS and DFMS data. The ROSINA instrument measures the neutral coma density at a single point, and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on single point measurements. In this presentation we will present the coma production rates as a function of heliocentric distance for the entire Rosetta mission. This work was supported by contracts JPL#1266313 and JPL#1266314 from the US Rosetta Project and NASA grant NNX14AG84G from the Planetary Atmospheres Program.
Information mining in weighted complex networks with nonlinear rating projection
NASA Astrophysics Data System (ADS)
Liao, Hao; Zeng, An; Zhou, Mingyang; Mao, Rui; Wang, Bing-Hong
2017-10-01
Weighted rating networks are commonly used by e-commerce providers nowadays. In order to generate an objective ranking of online items' quality according to users' ratings, many sophisticated algorithms have been proposed in the complex networks domain. In this paper, instead of proposing new algorithms, we focus on a more fundamental problem: the nonlinear rating projection. The basic idea is that even though the rating values given by users are linearly spaced, the real preference of users for items between the different given values is nonlinear. We thus design an approach to project the original ratings of users onto more representative values. This approach can be regarded as a data pretreatment method. Simulation in both artificial and real networks shows that the performance of the ranking algorithms can be improved when the projected ratings are used.
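The idea of projecting linearly spaced rating values onto a nonlinear preference scale can be sketched with a simple power-law mapping. The exponent `gamma` here is an assumed illustrative knob, not the paper's fitted projection:

```python
def project_ratings(ratings, gamma=2.0):
    """Map linearly spaced rating values onto a nonlinear preference scale.

    The premise above is that equally spaced rating values do not reflect
    equally spaced preferences. A power-law projection (gamma assumed)
    stretches the top of the scale: on a 1-5 scale, the projected gap
    between 4 and 5 becomes larger than between 1 and 2.
    """
    lo, hi = min(ratings), max(ratings)
    span = hi - lo or 1
    return [lo + span * ((r - lo) / span) ** gamma for r in ratings]

projected = project_ratings([1, 2, 3, 4, 5])
# Endpoints are preserved; interior gaps grow toward the top of the scale.
```

Any monotone projection of this kind can be dropped in front of an existing ranking algorithm as a pretreatment step, which is how the paper frames it.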
Performance prediction: A case study using a multi-ring KSR-1 machine
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Zhu, Jianping
1995-01-01
While computers with tens of thousands of processors have successfully delivered high performance for solving some of the so-called 'grand-challenge' applications, the notion of scalability is becoming an important metric in the evaluation of parallel machine architectures and algorithms. In this study, the prediction of scalability and its application are carefully investigated. A simple formula is presented to show the relation between scalability, single-processor computing power, and degradation of parallelism. A case study is conducted on a multi-ring KSR-1 shared virtual memory machine. Experimental and theoretical results show that the influence of topology variation of an architecture is predictable. Therefore, the performance of an algorithm on a sophisticated, hierarchical architecture can be predicted, and the best algorithm-machine combination can be selected for a given application.
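The kind of relation described above can be illustrated with an isospeed-style scalability ratio. The formula below is a common textbook form associated with this line of work, shown as a sketch rather than the paper's exact expression:

```python
def scalability(speed_small, p_small, speed_large, p_large):
    """Isospeed-style scalability: ratio of average per-processor speed
    between a small and a scaled-up configuration.

    A value of 1.0 means per-processor speed is preserved when the system
    (and problem) are scaled up; values below 1 quantify the degradation
    of parallelism on the larger configuration.
    """
    return (speed_large / p_large) / (speed_small / p_small)

# Invented example: 8 processors deliver 6.4 speed units, 64 deliver 40.
psi = scalability(6.4, 8, 40.0, 64)
```

A predictable topology influence, as the KSR-1 case study argues, means this ratio can be estimated for a new ring configuration before running on it.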
Deep frequency modulation interferometry.
Gerberding, Oliver
2015-06-01
Laser interferometry with pm/√Hz precision and multi-fringe dynamic range at low frequencies is a core technology to measure the motion of various objects (test masses) in space- and ground-based experiments for gravitational wave detection and geodesy. Even though available interferometer schemes are well understood, their construction remains complex, often involving, for example, the need to build quasi-monolithic optical benches with dozens of components. In recent years, techniques have been investigated that aim to reduce this complexity by combining phase modulation techniques with sophisticated digital readout algorithms. This article presents a new scheme that uses strong laser frequency modulations in combination with the deep phase modulation readout algorithm to construct simpler and easily scalable interferometers.
Description of the GMAO OSSE for Weather Analysis Software Package: Version 3
NASA Technical Reports Server (NTRS)
Koster, Randal D. (Editor); Errico, Ronald M.; Prive, Nikki C.; Carvalho, David; Sienkiewicz, Meta; El Akkraoui, Amal; Guo, Jing; Todling, Ricardo; McCarty, Will; Putman, William M.;
2017-01-01
The Global Modeling and Assimilation Office (GMAO) at the NASA Goddard Space Flight Center has developed software and products for conducting observing system simulation experiments (OSSEs) for weather analysis applications. Such applications include estimations of potential effects of new observing instruments or data assimilation techniques on improving weather analysis and forecasts. The GMAO software creates simulated observations from nature run (NR) data sets and adds simulated errors to those observations. The algorithms employed are much more sophisticated, adding a much greater degree of realism, compared with OSSE systems currently available elsewhere. The algorithms employed, software designs, and validation procedures are described in this document. Instructions for using the software are also provided.
[Coagulation Monitoring and Bleeding Management in Cardiac Surgery].
Bein, Berthold; Schiewe, Robert
2018-05-01
The transfusion of allogeneic blood products is associated with increased morbidity and mortality. Impaired hemostasis is frequently found in patients undergoing cardiac surgery and may in turn cause bleeding and transfusions. Goal-directed coagulation management addressing the often complex coagulation disorders requires sophisticated diagnostics. This may improve both patient outcomes and costs. Recent data suggest that coagulation management based on a rational algorithm is more effective than traditional therapy based on conventional laboratory variables such as PT and INR. Platelet inhibitors, coumarins, direct oral anticoagulants and heparin need different diagnostic and therapeutic approaches. An algorithm specifically developed for use during cardiac surgery is presented. Georg Thieme Verlag KG Stuttgart · New York.
Loeffler, Troy D; Sepehri, Aliasghar; Chen, Bin
2015-09-08
Reformulation of existing Monte Carlo algorithms used in the study of grand canonical systems has yielded massive improvements in efficiency. Here we present an energy biasing scheme designed to address targeting issues encountered in particle swap moves using sophisticated algorithms such as the Aggregation-Volume-Bias and Unbonding-Bonding methods. Specifically, this energy biasing scheme allows a particle to be inserted to (or removed from) a region that is more acceptable. As a result, this new method showed a several-fold increase in insertion/removal efficiency in addition to an accelerated rate of convergence for the thermodynamic properties of the system.
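The energy-biasing idea above can be sketched generically: candidate insertion regions are sampled with Boltzmann weights, so regions where an inserted particle is "more acceptable" are proposed more often, and the selection probability is divided out in the acceptance rule to preserve detailed balance. This is a generic biased-sampling illustration, not the authors' Aggregation-Volume-Bias or Unbonding-Bonding code:

```python
import math
import random

random.seed(1)

def pick_insertion_site(energies, beta):
    """Energy-biased site selection for a trial particle insertion.

    energies: trial interaction energy of the inserted particle in each
    candidate region; beta = 1/(kB*T). Returns the chosen region index and
    its selection probability, which the Metropolis acceptance rule must
    divide out so the biased proposal still samples the grand canonical
    distribution.
    """
    weights = [math.exp(-beta * e) for e in energies]
    total = sum(weights)
    r = random.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i, w / total
    return len(weights) - 1, weights[-1] / total

site, p_sel = pick_insertion_site([0.0, -2.0, 5.0], beta=1.0)
# Acceptance would then use something like
#   min(1, (usual GC factor) * exp(-beta * dU) / (n_regions * p_sel))
# with the exact factor depending on the ensemble and move type.
```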
CARHTA GENE: multipopulation integrated genetic and radiation hybrid mapping.
de Givry, Simon; Bouchez, Martin; Chabrier, Patrick; Milan, Denis; Schiex, Thomas
2005-04-15
CAR(H)(T)AGENE is an integrated genetic and radiation hybrid (RH) mapping tool which can deal with multiple populations, including mixtures of genetic and RH data. CAR(H)(T)AGENE performs multipoint maximum likelihood estimations with accelerated expectation-maximization algorithms for some pedigrees and has sophisticated algorithms for marker ordering. Dedicated heuristics for framework mapping are also included. CAR(H)(T)AGENE can be used as a C++ library, through a shell command and a graphical interface. XML output for companion tools is integrated. The program is available free of charge from www.inra.fr/bia/T/CarthaGene for Linux, Windows and Solaris machines (with Open Source). tschiex@toulouse.inra.fr.
NASA Astrophysics Data System (ADS)
Karakostas, Spiros
2015-05-01
The multi-objective nature of most spatial planning initiatives and the numerous constraints that are introduced in the planning process by decision makers, stakeholders, etc., synthesize a complex spatial planning context in which the concept of solid and meaningful optimization is a unique challenge. This article investigates new approaches to enhance the effectiveness of multi-objective evolutionary algorithms (MOEAs) via the adoption of a well-known metaheuristic: the non-dominated sorting genetic algorithm II (NSGA-II). In particular, the contribution of a sophisticated crossover operator coupled with an enhanced initialization heuristic is evaluated against a series of metrics measuring the effectiveness of MOEAs. Encouraging results emerge for both the convergence rate of the evolutionary optimization process and the occupation of valuable regions of the objective space by non-dominated solutions, facilitating the work of spatial planners and decision makers. Based on the promising behaviour of both heuristics, topics for further research are proposed to improve their effectiveness.
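NSGA-II's ranking of candidate plans rests on Pareto dominance. As context for the crossover and initialization heuristics evaluated above, here is a minimal sketch of computing the first non-dominated front for a minimization problem; the sample points are invented:

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if it is no worse
    in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_nondominated_front(points):
    """First non-dominated front, the core ranking step of NSGA-II.

    The paper's contribution is a crossover operator and an initialization
    heuristic layered on top of NSGA-II; this only sketches the sorting
    that defines the 'non-dominated solutions' it measures.
    """
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 2), (5, 1), (4, 4)]
front = first_nondominated_front(pts)   # (4, 4) is dominated by (2, 2)
```

The metrics mentioned in the abstract (convergence rate, occupation of valuable objective-space regions) are then computed over successive fronts of this kind.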
TinyOS-based quality of service management in wireless sensor networks
Peterson, N.; Anusuya-Rangappa, L.; Shirazi, B.A.; Huang, R.; Song, W.-Z.; Miceli, M.; McBride, D.; Hurson, A.; LaHusen, R.
2009-01-01
Previously, the cost and extremely limited capabilities of sensors prohibited Quality of Service (QoS) implementations in wireless sensor networks. With advances in technology, sensors are becoming significantly less expensive, and the increases in computational and storage capabilities are opening the door for new, sophisticated algorithms to be implemented. Newer sensor network applications require higher data rates with more stringent priority requirements. We introduce a dynamic scheduling algorithm to improve bandwidth for high priority data in sensor networks, called Tiny-DWFQ. Our Tiny-Dynamic Weighted Fair Queuing scheduling algorithm allows for dynamic QoS for prioritized communications by continually adjusting the treatment of communication packets according to their priorities and the current level of network congestion. For performance evaluation, we tested Tiny-DWFQ, Tiny-WFQ (the traditional WFQ algorithm implemented in TinyOS), and FIFO queues on an Imote2-based wireless sensor network and report their throughput and packet loss. Our results show that Tiny-DWFQ performs better in all test cases. © 2009 IEEE.
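The weighted-fair-queuing idea underlying Tiny-DWFQ can be sketched with virtual finish times. This is plain static WFQ over a batch of packets with fixed, assumed weights; the paper's contribution is adjusting those weights dynamically from priority and congestion:

```python
import heapq

def wfq_schedule(packets, weights):
    """Weighted fair queuing order via virtual finish times (a sketch).

    packets: list of (queue_id, size) in arrival order; weights: per-queue
    weight. Each packet's virtual finish time advances its queue's clock by
    size/weight, so a queue with twice the weight drains twice as fast.
    In Tiny-DWFQ the weights would additionally change at runtime.
    """
    finish = {q: 0.0 for q in weights}
    heap = []
    for seq, (q, size) in enumerate(packets):
        finish[q] += size / weights[q]       # virtual finish time
        heapq.heappush(heap, (finish[q], seq, q))
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

# High-priority queue 'hi' (weight 2) drains twice as fast as 'lo' (weight 1):
order = wfq_schedule([("hi", 1), ("lo", 1), ("hi", 1), ("hi", 1)],
                     {"hi": 2.0, "lo": 1.0})
```

Even in this static form, the low-priority queue is never starved: it is interleaved at the rate its weight entitles it to, which is the fairness property the dynamic variant preserves.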
DOGMA: A Disk-Oriented Graph Matching Algorithm for RDF Databases
NASA Astrophysics Data System (ADS)
Bröcheler, Matthias; Pugliese, Andrea; Subrahmanian, V. S.
RDF is an increasingly important paradigm for the representation of information on the Web. As RDF databases increase in size to approach tens of millions of triples, and as sophisticated graph matching queries expressible in languages like SPARQL become increasingly important, scalability becomes an issue. To date, there is no graph-based indexing method for RDF data where the index was designed in a way that makes it disk-resident. There is therefore a growing need for indexes that can operate efficiently when the index itself resides on disk. In this paper, we first propose the DOGMA index for fast subgraph matching on disk and then develop a basic algorithm to answer queries over this index. This algorithm is then significantly sped up via an optimized algorithm that uses efficient (but correct) pruning strategies when combined with two different extensions of the index. We have implemented a preliminary system and tested it against four existing RDF database systems developed by others. Our experiments show that our algorithm performs very well compared to these systems, with orders of magnitude improvements for complex graph queries.
DSMC Simulations of Blunt Body Flows for Mars Entries: Mars Pathfinder and Mars Microprobe Capsules
NASA Technical Reports Server (NTRS)
Moss, James N.; Wilmoth, Richard G.; Price, Joseph M.
1997-01-01
The hypersonic transitional flow aerodynamics of the Mars Pathfinder and Mars Microprobe capsules are simulated with the direct simulation Monte Carlo method. Calculations of axial, normal, and static pitching coefficients were obtained over an angle of attack range comparable to actual flight requirements. Comparisons are made with modified Newtonian and free-molecular-flow calculations. Aerothermal results were also obtained for zero incidence entry conditions.
Comparison of Hall Thruster Plume Expansion Model with Experimental Data
2006-05-23
focus of this study, is a hybrid particle-in-cell (PIC) model that tracks particles along an unstructured tetrahedral mesh. * Research Engineer... measurements of the ion current density profile, ion energy distributions, and ion species fraction distributions using a nude Faraday probe, retarding... Vol. 37, No. 1. 6 Oh, D. and Hastings, D., "Three Dimensional PIC-DSMC Simulations of Hall Thruster Plumes and Analysis for Realistic Spacecraft"
Comparison of DSMC Reaction Models with QCT Reaction Rates for Nitrogen
2016-07-17
The U.S. Government is joint author of the work and has the right to use, modify, reproduce, release, perform, display, or disclose the work. 13...Distribution A: Approved for Public Release, Distribution Unlimited PA #16299 Introduction • Comparison with measurements is final goal • Validation...model verification and parameter adjustment • Four chemistry models: total collision energy (TCE), quantum kinetic (QK), vibration-dissociation favoring
State-specific catalytic recombination boundary condition for DSMC methods in aerospace applications
NASA Astrophysics Data System (ADS)
Bariselli, F.; Torres, E.; Magin, T. E.
2016-11-01
Accurate characterization of the hypersonic flow around a vehicle during its atmospheric entry is important for a precise quantification of heat flux margins. In some cases, exothermic reactions promoted by the catalytic properties of the surface material can significantly contribute to the overall heat flux. In this work, the effect of catalytic recombination of atomic nitrogen is examined within the framework of a state-specific DSMC implementation. State-to-state reaction cross sections are derived from a detailed quantum-chemical database for the N2(v, J) + N system. A coarse-grain model is used to reduce the number of internal states and state-specific reactions to a manageable level. The catalytic boundary condition is based on a phenomenological approach, and the state-specific surface recombination probabilities can be imposed by the user. This can represent an important aspect in modelling catalysis, since experiments and molecular dynamics suggest that only part of the chemical energy is absorbed by the wall, with the formed molecules leaving the surface in an excited state. The implementation is verified in a simplified geometrical configuration by comparing the numerical results with an analytical solution developed for a 1D diffusion problem in a binary mixture. Then, the effect of catalysis in a hypersonic flow along the stagnation line of a blunt body is studied.
DSMC simulations of vapor transport toward development of the lithium vapor box divertor concept
NASA Astrophysics Data System (ADS)
Jagoe, Christopher; Schwartz, Jacob; Goldston, Robert
2016-10-01
The lithium vapor divertor box concept attempts to achieve volumetric dissipation of the high heat efflux from a fusion power system. The vapor extracts the heat of the incoming plasma by ionization and radiation, while remaining localized in the vapor box due to differential pumping based on rapid condensation. Preliminary calculations with lithium vapor at densities appropriate for an NSTX-U-scale machine give Knudsen numbers between 0.01 and 1, outside both the range of continuum fluid dynamics and of collisionless Monte Carlo. The direct-simulation Monte Carlo (DSMC) method, however, can simulate rarefied gas flows in this regime. Using the solver contained in the OpenFOAM package, pressure-driven flows of water vapor will be analyzed. The use of water vapor in the relevant range of Knudsen number allows for a flexible similarity experiment to verify the reliability of the code before moving to tests with lithium. The simulation geometry consists of chains of boxes on a temperature gradient, connected by slots with widths that are a representative fraction of the dimensions of the box. We expect choked flow, sonic shocks, and order-of-magnitude pressure and density drops from box to box, but this expectation will be tested in the simulation and then experiment. This work is supported by the Princeton Environmental Institute.
Hypersonic separated flows about "tick" configurations with sensitivity to model design
NASA Astrophysics Data System (ADS)
Moss, J. N.; O'Byrne, S.; Gai, S. L.
2014-12-01
This paper presents computational results obtained by applying the direct simulation Monte Carlo (DSMC) method for hypersonic nonequilibrium flow about "tick-shaped" model configurations. These test models produce a complex flow where the nonequilibrium and rarefied aspects of the flow are initially enhanced as the flow passes over an expansion surface, and then the flow encounters a compression surface that can induce flow separation. The resulting flow is such that meaningful numerical simulations must have the capability to account for a significant range of rarefaction effects; hence the application of the DSMC method in the current study, as the flow spans several flow regimes, including transitional, slip, and continuum. The current focus is to examine the sensitivity of both the model surface response (heating, friction, and pressure) and the flowfield structure to assumptions regarding surface boundary conditions and, more extensively, the impact of model design as influenced by the leading edge configuration as well as the geometrical features of the expansion and compression surfaces. Numerical results indicate a strong sensitivity to both the extent of the leading edge sharpness and the magnitude of the leading edge bevel angle. Also, the length of the expansion surface for a fixed compression surface has a significant impact on the extent of separated flow.
A diffusive information preservation method for small Knudsen number flows
NASA Astrophysics Data System (ADS)
Fei, Fei; Fan, Jing
2013-06-01
The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Like the DSMC method, however, that approach suffers from statistical noise. To solve this problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and to obtain the flow velocity and temperature by sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ~ 10^-3-10^-4 have been investigated. It is shown that the IP calculations are not only accurate, but also efficient, because they make it possible to use a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.
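The stochastic model that replaces pairwise collisions can be illustrated by the exact update of a Langevin (Ornstein-Uhlenbeck) velocity process, the simplest Fokker-Planck-consistent scheme. This is a generic sketch with illustrative parameters, not the D-IP method itself:

```python
import numpy as np

def fp_velocity_update(v, tau, dt, T, m, rng):
    """One exact Ornstein-Uhlenbeck step for the Langevin model
    dv = -(v/tau) dt + sqrt(2 k_B T / (m tau)) dW, used (in refined form)
    by Fokker-Planck particle methods in place of binary collisions."""
    k_B = 1.380649e-23
    a = np.exp(-dt / tau)
    sigma = np.sqrt(k_B * T / m * (1.0 - a**2))  # keeps the Maxwellian stationary
    return a * v + sigma * rng.standard_normal(v.shape)

rng = np.random.default_rng(0)
m, T, tau = 6.63e-26, 300.0, 1e-6     # argon-like mass; illustrative relaxation time
v = np.zeros(200000)                  # cold start, far from equilibrium
for _ in range(50):
    v = fp_velocity_update(v, tau, dt=5e-7, T=T, m=m, rng=rng)
# The velocity variance relaxes to k_B*T/m even though dt is not << tau.
print(abs(v.var() * m / (1.380649e-23 * T) - 1.0) < 0.02)
```

Because the update integrates the continuous process exactly, its cost is fixed per particle and it stays stable for time steps comparable to the relaxation time, which is the key advantage cited over pure DSMC.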
NASA Astrophysics Data System (ADS)
Takase, Kazuki; Takahashi, Kazunori; Takao, Yoshinori
2018-02-01
The effects of neutral distribution and an external magnetic field on plasma distribution and thruster performance are numerically investigated using a particle-in-cell simulation with Monte Carlo collisions (PIC-MCC) and the direct simulation Monte Carlo (DSMC) method. The modeled thruster consists of a quartz tube 1 cm in diameter and 3 cm in length, where a double-turn rf loop antenna is wound at the center of the tube and a solenoid is placed between the loop antenna and the downstream tube exit. A xenon propellant is introduced from both the upstream and downstream sides of the thruster, and the flow rates are varied while maintaining the total gas flow rate of 30 μg/s. The PIC-MCC calculations have been conducted using the neutral distribution obtained from the DSMC calculations, which were applied with different strengths of the magnetic field. The numerical results show that both the downstream gas injection and the external magnetic field with a maximum strength near the thruster exit lead to a shift of the plasma density peak from the upstream to the downstream side. Consequently, a larger total thrust is obtained when increasing the downstream gas injection and the magnetic field strength, which qualitatively agrees with a previous experiment using a helicon plasma source.
Numerical investigation of rarefaction effects in the vicinity of a sharp leading edge
NASA Astrophysics Data System (ADS)
Pan, Shaowu; Gao, Zhenxun; Lee, Chunhian
2014-12-01
This paper presents a study of rarefaction effects on hypersonic flow over a sharp leading edge. Both a continuum approach and a kinetic method are employed to simulate the transition regime at Knudsen numbers ranging from 0.005 to 0.2: a widely used commercial computational fluid dynamics Navier-Stokes-Fourier (CFD-NSF) solver, Fluent, together with a direct simulation Monte Carlo (DSMC) code developed by the authors. It is found that Fluent can predict the wall fluxes for hypersonic argon flow over the sharp leading edge at the lowest Kn considered here (Kn = 0.005), and for the other cases it also agrees well with DSMC except near the sharp leading edge. Among the wall fluxes, the pressure coefficient is found to be the most sensitive to rarefaction and the heat transfer coefficient the least. A parameter based on translational nonequilibrium, with a cut-off value of 0.34, is proposed as a continuum breakdown criterion. The structure of the entropy and velocity profiles in the boundary layer is analyzed. It is also found that the ratio of the heat transfer coefficient to the skin friction coefficient remains uniform along the surface for the four cases considered.
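A commonly used breakdown criterion of this general kind (not necessarily the exact translational-nonequilibrium parameter the authors propose) is the gradient-length-local Knudsen number. A minimal sketch on a synthetic 1-D density profile with a steep gradient:

```python
import numpy as np

def kn_gll(lam, Q, dx):
    """Gradient-length-local Knudsen number Kn_GLL = lambda * |dQ/dx| / Q
    on a 1-D grid; continuum breakdown is flagged wherever it exceeds a
    chosen cutoff. Q can be density, temperature, or velocity magnitude."""
    grad = np.gradient(Q, dx)
    return lam * np.abs(grad) / Q

x = np.linspace(0.0, 1.0, 101)
rho = 1.0 + np.exp(-((x - 0.5) / 0.02) ** 2)   # density with a sharp, shock-like bump
flags = kn_gll(lam=0.01, Q=rho, dx=x[1] - x[0]) > 0.05
print(flags.any() and not flags.all())  # breakdown only near the steep gradient
```

The cutoff (0.05 here) is illustrative; the paper's proposed criterion uses a cut-off value of 0.34 on its own nonequilibrium parameter.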
TiOx deposited by magnetron sputtering: a joint modelling and experimental study
NASA Astrophysics Data System (ADS)
Tonneau, R.; Moskovkin, P.; Pflug, A.; Lucas, S.
2018-05-01
This paper presents a 3D multiscale simulation approach to model magnetron reactive sputter deposition of TiOx⩽2 at various O2 inlets and its validation against experimental results. The simulation first involves the transport of sputtered material in a vacuum chamber by means of a three-dimensional direct simulation Monte Carlo (DSMC) technique. Second, the film growth at different positions on a 3D substrate is simulated using a kinetic Monte Carlo (kMC) method. When simulating the transport of species in the chamber, wall chemistry reactions are taken into account in order to get the proper content of the reactive species in the volume. Angular and energy distributions of particles are extracted from DSMC and used for film growth modelling by kMC. Along with the simulation, experimental deposition of TiOx coatings on silicon samples placed at different positions on a curved sample holder was performed. The experimental results are in agreement with the simulated ones. For a given coater, the plasma phase hysteresis behaviour, film composition, and film morphology are predicted. The methodology can be applied to any coater and any film, paving the way to a virtual coater that allows a user to predict the composition and morphology of films deposited in silico.
Lattice Boltzmann simulation of nonequilibrium effects in oscillatory gas flow.
Tang, G H; Gu, X J; Barber, R W; Emerson, D R; Zhang, Y H
2008-08-01
Accurate evaluation of damping in laterally oscillating microstructures is challenging due to the complex flow behavior. In addition, device fabrication techniques and surface properties will have an important effect on the flow characteristics. Although kinetic approaches such as the direct simulation Monte Carlo (DSMC) method and directly solving the Boltzmann equation can address these challenges, they are beyond the reach of current computer technology for large scale simulation. As the continuum Navier-Stokes equations become invalid for nonequilibrium flows, we take advantage of the computationally efficient lattice Boltzmann method to investigate nonequilibrium oscillating flows. We have analyzed the effects of the Stokes number, Knudsen number, and tangential momentum accommodation coefficient for oscillating Couette flow and Stokes' second problem. Our results are in excellent agreement with DSMC data for Knudsen numbers up to Kn=O(1) and show good agreement for Knudsen numbers as large as 2.5. In addition to increasing the Stokes number, we demonstrate that increasing the Knudsen number or decreasing the accommodation coefficient can also expedite the breakdown of symmetry for oscillating Couette flow. This results in an earlier transition from quasisteady to unsteady flow. Our paper also highlights the deviation in velocity slip between Stokes' second problem and the confined Couette case.
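For reference, the continuum no-slip solution of Stokes' second problem, against which rarefied and slip results are often compared, can be evaluated directly. The parameter values below are illustrative:

```python
import numpy as np

def stokes_second_problem(y, t, U0, omega, nu):
    """Continuum (no-slip) solution u(y,t) = U0 exp(-y/delta) cos(omega*t - y/delta)
    for a semi-infinite fluid above a plate oscillating at frequency omega,
    with penetration depth delta = sqrt(2*nu/omega)."""
    delta = np.sqrt(2.0 * nu / omega)
    return U0 * np.exp(-y / delta) * np.cos(omega * t - y / delta)

y = np.linspace(0.0, 5e-6, 50)   # microscale distances, illustrative
u = stokes_second_problem(y, t=0.0, U0=1.0, omega=1e6, nu=1.5e-5)
print(abs(u[0] - 1.0) < 1e-12 and abs(u[-1]) < abs(u[0]))  # decaying envelope
```

Velocity slip at the wall, the quantity the paper examines, shows up as a deviation of the DSMC or lattice Boltzmann wall velocity from the no-slip value U0 given by this profile at y = 0.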
A study of internal energy relaxation in shocks using molecular dynamics based models
NASA Astrophysics Data System (ADS)
Li, Zheng; Parsons, Neal; Levin, Deborah A.
2015-10-01
Recent potential energy surfaces (PESs) for the N2 + N and N2 + N2 systems are used in molecular dynamics (MD) to simulate rates of vibrational and rotational relaxations for conditions that occur in hypersonic flows. For both chemical systems, it is found that the rotational relaxation number increases with the translational temperature and decreases as the rotational temperature approaches the translational temperature. The vibrational relaxation number is observed to decrease with translational temperature and approaches the rotational relaxation number in the high temperature region. The rotational and vibrational relaxation numbers are generally larger in the N2 + N2 system. MD-quasi-classical trajectory (QCT) with the PESs is also used to calculate the V-T transition cross sections, the collision cross section, and the dissociation cross section for each collision pair. Direct simulation Monte Carlo (DSMC) results for hypersonic flow over a blunt body with the total collision cross section from MD/QCT simulations, Larsen-Borgnakke with new relaxation numbers, and the N2 dissociation rate from MD/QCT show a profile with a decreased translational temperature and a rotational temperature close to vibrational temperature. The results demonstrate that many of the physical models employed in DSMC should be revised as fundamental potential energy surfaces suitable for high temperature conditions become available.
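The relaxation numbers discussed above enter DSMC through a Jeans (Landau-Teller-type) relaxation equation. A minimal sketch, with purely illustrative values, of how a larger Z_rot slows rotational equilibration:

```python
def relax_rotational(E_rot, E_eq, Z_rot, tau_c, dt, steps):
    """Integrate the Jeans equation dE_rot/dt = (E_eq - E_rot) / (Z_rot * tau_c):
    rotational energy relaxes toward its equilibrium value over Z_rot mean
    collision times tau_c. Explicit Euler, adequate for dt << Z_rot * tau_c."""
    E = E_rot
    for _ in range(steps):
        E += dt * (E_eq - E) / (Z_rot * tau_c)
    return E

# Same initial nonequilibrium, two relaxation numbers (illustrative values).
fast = relax_rotational(0.0, 1.0, Z_rot=5.0,  tau_c=1e-8, dt=1e-9, steps=2000)
slow = relax_rotational(0.0, 1.0, Z_rot=50.0, tau_c=1e-8, dt=1e-9, steps=2000)
print(fast > slow)  # a larger Z_rot means slower rotational equilibration
```

The MD/QCT results in the abstract amount to replacing constant Z values in this picture with temperature-dependent ones derived from the potential energy surfaces.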
Chaotic particle swarm optimization with mutation for classification.
Assarzadeh, Zahra; Naghsh-Nilchi, Ahmad Reza
2015-01-01
In this paper, a chaotic, mutation-based classifier particle swarm optimization is proposed to classify patterns of different classes in the feature space. The introduced mutation operators and chaotic sequences allow us to overcome the problem of early convergence to a local minimum associated with particle swarm optimization algorithms: the mutation operator sharpens the convergence and tunes the best possible solution. Furthermore, to remove irrelevant data and reduce the dimensionality of medical datasets, a feature selection approach using a binary version of the proposed particle swarm optimization is introduced. To demonstrate the effectiveness of our proposed classifier, mutation-based classifier particle swarm optimization, it is evaluated on three classification datasets with different feature vector dimensions: Wisconsin diagnostic breast cancer, Wisconsin breast cancer, and heart-statlog. The proposed algorithm is compared with different classifier algorithms, including k-nearest neighbor as a conventional classifier, and particle swarm classifier, genetic algorithm, and imperialist competitive algorithm classifiers as more sophisticated ones. The performance of each classifier was evaluated by calculating the accuracy, sensitivity, specificity, and Matthews correlation coefficient. The experimental results show that the mutation-based classifier particle swarm optimization unequivocally performs better than all the compared algorithms.
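The two key ingredients, a chaotic sequence inside the velocity update and Gaussian mutation of the global best, can be sketched on a toy objective. This is a simplified generic PSO, not the paper's classifier, and all coefficients are illustrative:

```python
import numpy as np

def chaotic_pso(f, dim, n=20, iters=100, seed=1):
    """PSO with a logistic-map chaotic sequence replacing one uniform draw,
    plus Gaussian mutation of the global best to escape local minima."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g, gval = pbest[pval.argmin()].copy(), pval.min()
    z = 0.7                                       # logistic-map state
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)                   # chaotic sequence in (0, 1)
        r2 = rng.random((n, dim))
        v = 0.72 * v + 1.5 * z * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        if pval.min() < gval:
            g, gval = pbest[pval.argmin()].copy(), pval.min()
        mutant = g + rng.normal(0.0, 0.1, dim)    # mutate the global best
        if f(mutant) < gval:
            g, gval = mutant, f(mutant)
    return g, gval

g, gval = chaotic_pso(lambda x: float(np.sum(x * x)), dim=5)
print(gval < 1.0)  # far below the random-initialization objective
```

In the paper's classifier the "position" being optimized encodes class prototypes (and, in the binary variant, feature-selection masks) rather than a point on a test function.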
Acoustic intrusion detection and positioning system
NASA Astrophysics Data System (ADS)
Berman, Ohad; Zalevsky, Zeev
2002-08-01
Acoustic sensors are becoming more and more applicable as a military battlefield technology. These sensors allow detection and direction estimation with a low false alarm rate and a high probability of detection. The recent technological progress in these fields of research, together with the evolution of sophisticated algorithms, allows the successful integration of these sensors in battlefield technologies. In this paper the performance of an acoustic sensor for the detection of airborne vehicles is investigated and analyzed.
Foundations for a syntactic pattern recognition system for genomic DNA sequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Searles, D.B.
1993-03-01
The goal of the proposed work is the creation of a software system that will perform sophisticated pattern recognition and related functions at a level of abstraction and with expressive power beyond current general-purpose pattern-matching systems for biological sequences; and with a more uniform language, environment, and graphical user interface, and with greater flexibility, extensibility, embeddability, and ability to incorporate other algorithms, than current special-purpose analytic software.
NASA Technical Reports Server (NTRS)
Swanson, T. D.; Ollendorf, S.
1979-01-01
This paper addresses the potential for enhanced solar system performance through sophisticated control of the collector loop flow rate. Computer simulations utilizing the TRNSYS solar energy program were performed to study the relative effect on system performance of eight specific control algorithms. Six of these control algorithms are of the proportional type: two are concave exponentials, two are simple linear functions, and two are convex exponentials. These six functions are typical of what might be expected from future, more advanced, controllers. The other two algorithms are of the on/off type and are thus typical of existing control devices. Results of extensive computer simulations utilizing actual weather data indicate that proportional control does not significantly improve system performance. However, it is shown that thermal stratification in the liquid storage tank may significantly improve performance.
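The control-law shapes compared in the study can be sketched as follows; the thresholds and exponents are illustrative placeholders, not the values used in the TRNSYS simulations:

```python
import math

def on_off(dT, state, dT_on=8.0, dT_off=2.0):
    """On/off control with hysteresis on the collector-storage temperature
    difference dT (K): the pump is either fully on (1.0) or off (0.0)."""
    if dT > dT_on:
        return 1.0
    if dT < dT_off:
        return 0.0
    return state  # inside the deadband, keep the previous state

def proportional(dT, shape="linear", dT_max=16.0):
    """Proportional flow-rate fraction in [0, 1] versus dT, in the three
    shapes discussed: linear, concave exponential, convex exponential."""
    f = max(0.0, min(1.0, dT / dT_max))
    if shape == "concave":
        return 1.0 - math.exp(-3.0 * f)
    if shape == "convex":
        return (math.exp(3.0 * f) - 1.0) / (math.exp(3.0) - 1.0)
    return f

dT = 8.0  # K, halfway to dT_max: concave runs hottest, convex coolest
print(proportional(dT, "concave") > proportional(dT, "linear") > proportional(dT, "convex"))
```

The paper's finding is that, despite these differing shapes, proportional control yields little system-level gain over the simple on/off law.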
Multicast Routing of Hierarchical Data
NASA Technical Reports Server (NTRS)
Shacham, Nachum
1992-01-01
The issue of multicast of broadband, real-time data in a heterogeneous environment, in which the data recipients differ in their reception abilities, is considered. Traditional multicast schemes, which are designed to deliver all the source data to all recipients, offer limited performance in such an environment, since they must either force the source to overcompress its signal or restrict the destination population to those who can receive the full signal. We present an approach for resolving this issue by combining hierarchical source coding techniques, which allow recipients to trade off reception bandwidth for signal quality, and sophisticated routing algorithms that deliver to each destination the maximum possible signal quality. The field of hierarchical coding is briefly surveyed and new multicast routing algorithms are presented. The algorithms are compared in terms of network utilization efficiency, lengths of paths, and the required mechanisms for forwarding packets on the resulting paths.
Transmission over UWB channels with OFDM system using LDPC coding
NASA Astrophysics Data System (ADS)
Dziwoki, Grzegorz; Kucharczyk, Marcin; Sulek, Wojciech
2009-06-01
A hostile wireless environment requires the use of sophisticated signal processing methods. The paper concerns Ultra Wideband (UWB) transmission over Personal Area Networks (PAN), including the MB-OFDM specification of the physical layer. In the presented work, the transmission system with OFDM modulation was combined with an LDPC encoder/decoder. Additionally, the frame and bit error rates (FER and BER) of the system were decreased by using results from the LDPC decoder in a kind of turbo equalization algorithm for better channel estimation. A computational block using an evolutionary strategy, from the genetic algorithms family, was also used in the presented system. It was placed after the SPA (Sum-Product Algorithm) decoder and is conditionally turned on in the decoding process. The result is increased effectiveness of the whole system, especially a lower FER. The system was tested with two types of LDPC codes, depending on the type of parity check matrix: randomly generated, and constructed deterministically and optimized for a practical decoder architecture implemented in an FPGA device.
Ubiquitousness of link-density and link-pattern communities in real-world networks
NASA Astrophysics Data System (ADS)
Šubelj, L.; Bajec, M.
2012-01-01
Community structure appears to be an intrinsic property of many complex real-world networks. However, recent work shows that real-world networks reveal even more sophisticated modules than classical cohesive (link-density) communities. In particular, networks can also be naturally partitioned according to similar patterns of connectedness among the nodes, revealing link-pattern communities. We here propose a propagation based algorithm that can extract both link-density and link-pattern communities, without any prior knowledge of the true structure. The algorithm was first validated on different classes of synthetic benchmark networks with community structure, and also on random networks. We have further applied the algorithm to different social, information, technological and biological networks, where it indeed reveals meaningful (composites of) link-density and link-pattern communities. The results thus seem to imply that, similarly as link-density counterparts, link-pattern communities appear ubiquitous in nature and design.
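The propagation idea can be illustrated with plain label propagation for link-density communities (the cited algorithm is more general, also extracting link-pattern communities). This minimal deterministic sketch separates two cliques joined by a single edge:

```python
from collections import Counter

def propagate_labels(adj, max_iter=20):
    """Plain label propagation: each node repeatedly adopts the most frequent
    label among its neighbors; ties are broken deterministically by the
    largest label. Converges when a full pass changes nothing."""
    labels = {u: u for u in adj}
    for _ in range(max_iter):
        changed = False
        for u in sorted(adj):
            counts = Counter(labels[v] for v in adj[u])
            top = max(counts.values())
            new = max(l for l, c in counts.items() if c == top)
            if new != labels[u]:
                labels[u], changed = new, True
        if not changed:
            break
    return labels

# Two 5-cliques joined by a single edge (4-5): two communities expected.
adj = {u: {v for v in range(5) if v != u} for u in range(5)}
adj.update({u: {v for v in range(5, 10) if v != u} for u in range(5, 10)})
adj[4].add(5); adj[5].add(4)
labels = propagate_labels(adj)
print(len(set(labels.values())) == 2 and labels[0] != labels[9])
```

Link-pattern communities require propagating richer information than a single label, which is the extension the paper contributes.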
Web-accessible cervigram automatic segmentation tool
NASA Astrophysics Data System (ADS)
Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.
2010-03-01
Uterine cervix image analysis is of great importance to the study of uterine cervix cancer, which is among the leading cancers affecting women worldwide. In this paper, we describe our proof-of-concept, Web-accessible system for automated segmentation of significant tissue regions in uterine cervix images, which also demonstrates our research efforts toward promoting collaboration between engineers and physicians for medical image analysis projects. Our design and implementation unifies the merits of two commonly used languages, MATLAB and Java. It circumvents the heavy workload of recoding the sophisticated segmentation algorithms originally developed in MATLAB into Java while allowing remote users who are not experienced programmers and algorithms developers to apply those processing methods to their own cervicographic images and evaluate the algorithms. Several other practical issues of the systems are also discussed, such as the compression of images and the format of the segmentation results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mace, Gerald G.
What has made the ASR program unique is the amount of information that is available. The suite of recently deployed instruments significantly expands the scope of the program (Mather and Voyles, 2013). The breadth of this information allows us to pose sophisticated process-level questions. Our ASR project, now entering its third year, has been about developing algorithms that use this information in ways that fully exploit the new capacity of the ARM data streams. Using optimal estimation (OE) and Markov Chain Monte Carlo (MCMC) inversion techniques, we have developed methodologies that allow us to use multiple radar frequency Doppler spectra along with lidar and passive constraints, where data streams can be added or subtracted efficiently and algorithms can be reformulated for various combinations of hydrometeors by exchanging sets of empirical coefficients. These methodologies have been applied to boundary layer clouds, mixed phase snow cloud systems, and cirrus.
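As a toy illustration of the MCMC inversion idea (not the authors' retrieval), a random-walk Metropolis sampler can recover a single parameter from synthetic observations under a hypothetical linear forward model with Gaussian noise:

```python
import numpy as np

def metropolis(log_post, x0, step, n, rng):
    """Random-walk Metropolis: propose x' = x + step*N(0,1) and accept
    with probability min(1, exp(log_post(x') - log_post(x)))."""
    x, lp = x0, log_post(x0)
    chain = np.empty(n)
    for i in range(n):
        xp = x + step * rng.standard_normal()
        lpp = log_post(xp)
        if np.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        chain[i] = x
    return chain

rng = np.random.default_rng(3)
true_q = 2.0                                  # "true" parameter (e.g. a water path)
fwd = lambda q: 10.0 * q                      # stand-in linear forward model
obs = fwd(true_q) + rng.normal(0.0, 1.0, 25)  # synthetic noisy observations
log_post = lambda q: -0.5 * np.sum((obs - fwd(q)) ** 2)  # flat prior, unit noise
chain = metropolis(log_post, x0=0.0, step=0.05, n=5000, rng=rng)
print(abs(chain[2500:].mean() - true_q) < 0.1)  # posterior mean near the truth
```

In a real retrieval the forward model maps hydrometeor properties to radar Doppler spectra and lidar/passive observables, and the chain explores many parameters jointly; the machinery is the same.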
Estimation of color filter array data from JPEG images for improved demosaicking
NASA Astrophysics Data System (ADS)
Feng, Wei; Reeves, Stanley J.
2006-02-01
On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.
Tsanas, Athanasios; Zañartu, Matías; Little, Max A.; Fox, Cynthia; Ramig, Lorraine O.; Clifford, Gari D.
2014-01-01
There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F0) of speech signals. This study examines ten F0 estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F0 in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F0 estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F0 estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F0 estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F0 estimation is required. PMID:24815269
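The adaptive weighting of individual F0 estimates can be illustrated, in heavily simplified static form, by inverse-variance fusion, which is the measurement-update core of a Kalman-filter combination. The estimates and variances below are hypothetical:

```python
def fuse(estimates, variances):
    """Inverse-variance weighted fusion of several estimates of one quantity:
    the minimum-variance linear combination, weighting each estimator by
    1/variance (a quality/performance measure stands in for the variance)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * x for w, x in zip(weights, estimates)) / total
    return fused, 1.0 / total  # fused value and its (reduced) variance

# Three hypothetical F0 estimates (Hz) with quality-derived variances.
f0, var = fuse([120.0, 124.0, 140.0], [1.0, 4.0, 25.0])
print(abs(f0 - 121.4) < 0.5 and var < 1.0)
```

The full KF framework in the paper additionally tracks F0 over time, so the weights adapt frame by frame as estimator quality changes.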
Tutorial for Thermophysics Universal Research Framework
2017-07-30
DS1V are compared in Section 3.4.5. 3.4.2 Description of the Example Problem In a fluid, disturbance information is communicated within a medium at the...Universal Research Framework development (TURF-DEV) package on a case-by-case basis. Brief descriptions of the operations are provided in Tables 4.1 and...of additional experimental (E) and research (R) operations included in TURF-DEV. Module Operation Description DSMC SPDistDirectDSMCCellMergeOp (R
Study of the Transition Flow Regime using Monte Carlo Methods
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1999-01-01
This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.
Comparison of Hall Thruster Plume Expansion Model with Experimental Data (Preprint)
2006-07-01
Cartesian mesh. AQUILA, the focus of this study, is a hybrid PIC model that tracks particles along an unstructured tetrahedral mesh. COLISEUM is capable...measurements of the ion current density profile, ion energy distributions, and ion species fraction distributions using a nude Faraday probe...Spacecraft and Rockets, Vol.37 No.1. 6 Oh, D. and Hastings, D., “Three Dimensional PIC -DSMC Simulations of Hall Thruster Plumes and Analysis for
Sachetto Oliveira, Rafael; Martins Rocha, Bernardo; Burgarelli, Denise; Meira, Wagner; Constantinides, Christakis; Weber Dos Santos, Rodrigo
2018-02-01
The use of computer models as a tool for the study and understanding of the complex phenomena of cardiac electrophysiology has attained increased importance nowadays. At the same time, the increased complexity of the biophysical processes translates into complex computational and mathematical models. To speed up cardiac simulations and to allow more precise and realistic uses, two different techniques have been traditionally exploited: parallel computing and sophisticated numerical methods. In this work, we combine a modern parallel computing technique based on multicore and graphics processing units (GPUs) and a sophisticated numerical method based on a new space-time adaptive algorithm. We evaluate each technique alone and in different combinations: multicore and GPU; multicore, GPU, and space adaptivity; multicore, GPU, space adaptivity, and time adaptivity. All the techniques and combinations were evaluated under different scenarios: 3D simulations on slabs, 3D simulations on a ventricular mouse mesh (i.e., complex geometry), sinus rhythm, and arrhythmic conditions. Our results suggest that multicore and GPU accelerate the simulations by an approximate factor of 33×, whereas the speedups attained by the space-time adaptive algorithms were approximately 48×. Nevertheless, by combining all the techniques, we obtained speedups that ranged between 165× and 498×. The tested methods were able to reduce the execution time of a simulation by more than 498× for a complex cellular model in a slab geometry and by 165× in a realistic heart geometry simulating spiral waves. The proposed methods will allow faster and more realistic simulations in a feasible time with no significant loss of accuracy. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Zhou, Yaping; Kratz, David P.; Wilber, Anne C.; Gupta, Shashi K.; Cess, Robert D.
2007-01-01
Zhou and Cess [2001] developed an algorithm for retrieving surface downwelling longwave radiation (SDLW) based upon detailed studies using radiative transfer model calculations and surface radiometric measurements. Their algorithm linked clear sky SDLW with surface upwelling longwave flux and column precipitable water vapor. For cloudy sky cases, they used cloud liquid water path as an additional parameter to account for the effects of clouds. Despite the simplicity of their algorithm, it performed very well for most geographical regions except for those regions where the atmospheric conditions near the surface tend to be extremely cold and dry. Systematic errors were also found for scenes that were covered with ice clouds. An improved version of the algorithm prevents the large errors in the SDLW at low water vapor amounts by taking into account that under such conditions the SDLW and water vapor amount are nearly linear in their relationship. The new algorithm also utilizes cloud fraction and cloud liquid and ice water paths available from the Cloud and the Earth's Radiant Energy System (CERES) single scanner footprint (SSF) product to separately compute the clear and cloudy portions of the fluxes. The new algorithm has been validated against surface measurements at 29 stations around the globe for Terra and Aqua satellites. The results show significant improvement over the original version. The revised Zhou-Cess algorithm is also slightly better or comparable to more sophisticated algorithms currently implemented in the CERES processing and will be incorporated as one of the CERES empirical surface radiation algorithms.
Fast Dating Using Least-Squares Criteria and Algorithms.
To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier
2016-01-01
Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through times. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley-Fitch method, and BEAST). 
Using simulated data, we show that their estimation accuracy is similar to that of the most sophisticated methods, while their computing time is much faster. We apply these algorithms on a large data set comprising 1194 strains of Influenza virus from the pdm09 H1N1 Human pandemic. Again the results show that these algorithms provide a very fast alternative with results similar to those of other computer programs. These algorithms are implemented in the LSD software (least-squares dating), which can be downloaded from http://www.atgc-montpellier.fr/LSD/, along with all our data sets and detailed results. An Online Appendix, providing additional algorithm descriptions, tables, and figures can be found in the Supplementary Material available on Dryad at http://dx.doi.org/10.5061/dryad.968t3. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
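The linear-algebra core of the least-squares dating idea can be sketched as follows: writing x_i = r*t_i makes every branch length linear in the rate r and the scaled node dates, so a single least-squares solve recovers both. The toy tree and the interface below are illustrative, not the LSD software's API:

```python
import numpy as np

def date_tree(edges, tip_dates, node_ids):
    """Unconstrained least-squares dating sketch: each branch length
    b = x_child - x_parent, with x = r * t, where tips have known dates t
    (so x_tip = r * date) and internal nodes contribute one unknown x each.
    Column 0 of the design matrix holds the rate r."""
    cols = {n: 1 + i for i, n in enumerate(node_ids)}
    A = np.zeros((len(edges), 1 + len(node_ids)))
    b = np.zeros(len(edges))
    for row, (parent, child, blen) in enumerate(edges):
        b[row] = blen
        for node, sign in ((child, 1.0), (parent, -1.0)):
            if node in tip_dates:
                A[row, 0] += sign * tip_dates[node]  # x_tip = r * known date
            else:
                A[row, cols[node]] += sign
    sol = np.linalg.lstsq(A, b, rcond=None)[0]
    r = sol[0]
    return r, {n: sol[cols[n]] / r for n in node_ids}

# Toy rooted tree: root -> A, root -> N, N -> B, N -> C (lengths in subs/site).
edges = [("root", "A", 0.02), ("root", "N", 0.01), ("N", "B", 0.02), ("N", "C", 0.03)]
r, dates = date_tree(edges, {"A": 2000.0, "B": 2005.0, "C": 2010.0}, ["root", "N"])
print(abs(r - 0.002) < 1e-9, abs(dates["root"] - 1990.0) < 1e-6)
```

The paper's contribution is solving this system in linear time by exploiting the tree's recursive structure, and handling the temporal-precedence constraint with an active-set method; the dense lstsq call here is only for illustration.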
Fast Dating Using Least-Squares Criteria and Algorithms
To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier
2016-01-01
Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley–Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through time. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of the Langley–Fitch method, and BEAST).
Using simulated data, we show that their estimation accuracy is similar to that of the most sophisticated methods, while their computing time is much faster. We apply these algorithms to a large data set comprising 1194 strains of influenza virus from the pdm09 H1N1 human pandemic. Again, the results show that these algorithms provide a very fast alternative with results similar to those of other computer programs. These algorithms are implemented in the LSD software (least-squares dating), which can be downloaded from http://www.atgc-montpellier.fr/LSD/, along with all our data sets and detailed results. An Online Appendix, providing additional algorithm descriptions, tables, and figures can be found in the Supplementary Material available on Dryad at http://dx.doi.org/10.5061/dryad.968t3. PMID:26424727
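The unconstrained rooted case admits a compact illustration. The sketch below is a hypothetical three-tip example, not the LSD implementation: with tip dates known, substituting u_v = rate * t_v for each internal node makes the Langley-Fitch least-squares objective linear in the rate and the u_v, so a single normal-equations solve recovers the substitution rate and all ancestral dates.

```python
# Toy least-squares dating sketch (hypothetical tree, not the LSD code).
# Unknowns x = (rate, u_R, u_A), with u_v = rate * t_v for internal nodes;
# each edge contributes one linear equation "branch length = u_child - u_parent",
# where u_tip = rate * tip_date is linear in the unknown rate.

def solve(A, b):
    """Solve the normal equations A^T A x = A^T b by Gaussian elimination."""
    n, m = len(A[0]), len(A)
    M = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         + [sum(A[k][i] * b[k] for k in range(m))] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Rooted tree: root R -> internal A -> tips X (2000), Y (2005); R -> tip Z (2010).
A = [[0.0,    -1.0,  1.0],   # edge R->A: u_A - u_R       = 0.10
     [2000.0,  0.0, -1.0],   # edge A->X: rate*2000 - u_A = 0.10
     [2005.0,  0.0, -1.0],   # edge A->Y: rate*2005 - u_A = 0.15
     [2010.0, -1.0,  0.0]]   # edge R->Z: rate*2010 - u_R = 0.30
b = [0.10, 0.10, 0.15, 0.30]

rate, u_R, u_A = solve(A, b)
t_R, t_A = u_R / rate, u_A / rate   # recover ancestral dates
```

On this consistent toy input the solve is exact (rate 0.01, root date 1980). The constrained (temporal-precedence) and unrooted cases require the active-set and root-search machinery described in the abstract.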
TAIR: A transonic airfoil analysis computer code
NASA Technical Reports Server (NTRS)
Dougherty, F. C.; Holst, T. L.; Grundy, K. L.; Thomas, S. D.
1981-01-01
The operation of the TAIR (Transonic AIRfoil) computer code, which uses a fast, fully implicit algorithm to solve the conservative full-potential equation for transonic flow fields about arbitrary airfoils, is described on two levels of sophistication: simplified operation and detailed operation. The program organization and theory are elaborated to simplify modification of TAIR for new applications. Examples with input and output are given for a wide range of cases, including incompressible, subcritical compressible, and transonic calculations.
Alanna Conners and the Origins of Principled Data Analysis
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
2013-01-01
Alanna was one of the most important pioneers in the development of not just sophisticated algorithms for analyzing astronomical data but more importantly an overall viewpoint emphasizing the use of statistically sound principles in place of blind application of cook-book recipes, or black boxes. I will outline some of the threads of this viewpoint, emphasizing time series data, with a focus on the importance of these developments for the Age of Digital Astronomy that we are entering.
Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods
NASA Astrophysics Data System (ADS)
Sohn, Ilyoup
During extreme Mach number reentry into Earth's atmosphere, spacecraft experience hypersonic non-equilibrium flow conditions that dissociate molecules and ionize atoms. Such conditions occur behind a shock wave, leading to high temperatures, which have an adverse effect on the thermal protection system and radar communications. Since the electronic energy levels of gaseous species are strongly excited at high Mach numbers, the radiative contribution to the total heat load can be significant. In addition, the radiative heat source within the shock layer may affect the internal energy distribution of dissociated and weakly ionized gas species and the number density of ablative species released from the surface of vehicles. Due to radiation, the total heat load on the vehicle's heat shield may be altered beyond mission tolerances. Therefore, the effect of radiation must be considered in the design process of spacecraft, and radiation analyses coupled with flow solvers have to be implemented to improve reliability during the vehicle design stage. As a first stage toward radiation analyses coupled with gas dynamics, efficient databasing schemes for emission and absorption coefficients were developed to model radiation from hypersonic, non-equilibrium flows. For bound-bound transitions, spectral information, including the line-center wavelength and assembled parameters for efficient calculation of emission and absorption coefficients, is stored for typical air plasma species. Since the flow is in non-equilibrium, a rate-equation approach including both collisional and radiatively induced transitions was used to calculate the electronic state populations, assuming quasi-steady state (QSS). The Voigt line shape function was assumed to model the line broadening effect. The accuracy and efficiency of the databasing scheme were examined by comparing its results with those of NEQAIR for the Stardust flowfield.
Agreement within approximately 1% was achieved, with computation about three times faster than the NEQAIR code. To perform accurate and efficient analyses of interactions between chemically reacting flowfields and radiation, the direct simulation Monte Carlo (DSMC) and the photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield-radiation coupling from transitional to peak-heating freestream conditions. Non-catalytic and fully catalytic surface conditions were modeled, and good agreement in stagnation-point convective heating between the DSMC and continuum fluid dynamics (CFD) calculations was achieved under the assumption of a fully catalytic surface. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC - FV-PMC simulations are presented with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium. It is found that, except at the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both the heavy-particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC - FV-PMC coupled simulations are compared with previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must therefore be coupled simultaneously to account for non-local effects.
The QSS model is presented to predict the electronic state populations of radiating gas species taking into account non-local radiation. The definition of the escape factor which is dependent on the incoming radiative intensity from over all directions is presented. The effect of the escape factor on the distribution of electronic state populations of the atomic N and O radiating species is examined in a highly non-equilibrium flow condition using DSMC and PMC methods and the corresponding change of the radiative heat flux due to the non-local radiation is also investigated.
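For reference, the Voigt profile used above is the convolution of a Gaussian (Doppler) and a Lorentzian (collisional/natural) line shape. A common inexpensive stand-in is the pseudo-Voigt approximation; the sketch below uses the Thompson-Cox-Hastings mixing rule, an illustrative choice rather than the form necessarily used in the databasing scheme.

```python
import math

def pseudo_voigt(x, fg, fl):
    """Pseudo-Voigt approximation (Thompson-Cox-Hastings mixing rule):
    a weighted sum of an area-normalized Gaussian (FWHM fg) and an
    area-normalized Lorentzian (FWHM fl), standing in for the exact
    Gaussian-Lorentzian convolution."""
    # effective FWHM of the mixed profile
    f = (fg**5 + 2.69269*fg**4*fl + 2.42843*fg**3*fl**2
         + 4.47163*fg**2*fl**3 + 0.07842*fg*fl**4 + fl**5) ** 0.2
    r = fl / f
    eta = 1.36603*r - 0.47719*r**2 + 0.11116*r**3   # Lorentzian weight
    gauss = (2.0/f) * math.sqrt(math.log(2)/math.pi) \
        * math.exp(-4.0*math.log(2)*(x/f)**2)
    lorentz = (f/(2.0*math.pi)) / (x*x + f*f/4.0)
    return eta*lorentz + (1.0 - eta)*gauss
```

Both components are area-normalized, so the mixed profile integrates to unity up to the slowly decaying Lorentzian tails, which keeps line-by-line emission and absorption coefficients consistent when tabulated.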
NASA Astrophysics Data System (ADS)
Fougere, N.; Combi, M. R.; Tenishev, V.; Bieler, A. M.; Migliorini, A.; Bockelée-Morvan, D.; Toth, G.; Huang, Z.; Gombosi, T. I.; Hansen, K. C.; Capaccioni, F.; Filacchione, G.; Piccioni, G.; Debout, V.; Erard, S.; Leyrat, C.; Fink, U.; Rubin, M.; Altwegg, K.; Tzou, C. Y.; Le Roy, L.; Calmonte, U.; Berthelier, J. J.; Rème, H.; Hässig, M.; Fuselier, S. A.; Fiethe, B.; De Keyser, J.
2015-12-01
As it orbits comet 67P/Churyumov-Gerasimenko (CG), the Rosetta spacecraft acquires more information about its main target. The numerous observations made at various geometries and at different times enable good spatial and temporal coverage of the evolution of CG's cometary coma. However, the question of the link between the coma measurements and the nucleus activity remains relatively open, notably due to gas expansion and strong kinetic effects in the comet's rarefied atmosphere. In this work, we use coma observations made by the ROSINA-DFMS instrument to constrain the activity at the surface of the nucleus. The distribution of the H2O and CO2 outgassing is described with the use of spherical harmonics. The coordinates in the orthogonal system represented by the spherical harmonics are computed using a least-squares method, minimizing the sum of the squared residuals between an analytical coma model and the DFMS data. Then, the deduced activity distributions are used in a Direct Simulation Monte Carlo (DSMC) model to compute a full description of the H2O and CO2 coma of comet CG from the nucleus' surface up to several hundred kilometers. The DSMC outputs are used to create synthetic images, which can be directly compared with VIRTIS measurements. The good agreement between the VIRTIS observations and the DSMC model, itself constrained with ROSINA data, provides a compelling cross-validation of the measurements from these two instruments. Acknowledgements: Work at UofM was supported by contracts JPL#1266313, JPL#1266314 and NASA grant NNX09AB59G. Work at UoB was funded by the State of Bern, the Swiss National Science Foundation and by the ESA PRODEX Program. Work at Southwest Research Institute was supported by subcontract #1496541 from the JPL. Work at BIRA-IASB was supported by the Belgian Science Policy Office via PRODEX/ROSINA PEA 90020. The authors would like to thank ASI, CNES, DLR, and NASA for supporting this research.
VIRTIS was built by a consortium formed by Italy, France and Germany, under the scientific responsibility of the IAPS of INAF, which guides also the scientific operations. The consortium includes also the LESIA of the Observatoire de Paris, and the Institut für Planetenforschung of DLR. The authors wish to thank the RSGS and the RMOC for their continuous support.
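The least-squares fit of outgassing maps to spherical harmonics can be sketched compactly: for basis functions that are orthogonal over the sphere, each coefficient reduces to a weighted inner product. The toy below uses a hypothetical smooth activity map and a degree-1 real basis, not the actual ROSINA-DFMS fit, and recovers known coefficients by midpoint quadrature on a latitude-longitude grid.

```python
import math

def project(f, g, n=200):
    """Return <f, g> / <g, g> over the unit sphere by midpoint quadrature,
    i.e., the least-squares coefficient of f along an orthogonal basis
    function g (weight sin(theta) dtheta dphi)."""
    num = den = 0.0
    for i in range(n):
        th = math.pi * (i + 0.5) / n
        w_th = math.sin(th) * math.pi / n
        for j in range(n):
            ph = 2.0 * math.pi * (j + 0.5) / n
            w = w_th * 2.0 * math.pi / n
            num += f(th, ph) * g(th, ph) * w
            den += g(th, ph) ** 2 * w
    return num / den

# Hypothetical activity map: f = 2 + 0.5*cos(theta) + 0.3*sin(theta)*cos(phi).
activity = lambda th, ph: 2.0 + 0.5 * math.cos(th) + 0.3 * math.sin(th) * math.cos(ph)

# Degree-0 and degree-1 real harmonics (up to normalization), mutually orthogonal.
basis = [lambda th, ph: 1.0,
         lambda th, ph: math.cos(th),
         lambda th, ph: math.sin(th) * math.cos(ph)]
coeffs = [project(activity, g) for g in basis]
```

The projection recovers the coefficients (2.0, 0.5, 0.3) to quadrature accuracy; with noisy, irregularly sampled data the full normal-equations solve over a non-orthogonal design matrix replaces this shortcut.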
Subsurface Gas Flow and Ice Grain Acceleration within Enceladus and Europa Fissures: 2D DSMC Models
NASA Astrophysics Data System (ADS)
Tucker, O. J.; Combi, M. R.; Tenishev, V.
2014-12-01
The ejection of material from geysers is a ubiquitous occurrence on outer solar system bodies. Water vapor plumes have been observed emanating from the southern hemispheres of Enceladus and Europa (Hansen et al. 2011, Roth et al. 2014), and N2 plumes carrying ice and dark particles from Triton (Soderblom et al. 2009). The gas and ice grain distributions in the Enceladus plume depend on the subsurface gas properties and the geometry of the fissures (e.g., Schmidt et al. 2008, Ingersoll et al. 2010). The fissures can, of course, have complex geometries due to tidal stresses, melting, freezing, etc., but directly sampled and inferred gas and grain properties for the plume (source rate, bulk velocity, terminal grain velocity) provide a basis to constrain characteristic vent widths and depths. We used a two-dimensional Direct Simulation Monte Carlo (DSMC) technique to model venting from both axisymmetric canyons with widths of ~2 km and narrow jets with widths of ~15-40 m. For all of the vent geometries considered, the water vapor source rates (10^26 - 10^28 s^-1) and bulk gas velocities (~330 - 670 m/s) obtained at the surface were consistent with the values inferred from fits of the plume density data (10^26 - 10^28 s^-1, 250 - 1000 m/s), respectively. However, when using the resulting DSMC gas distributions for the canyon geometries to integrate the trajectories of ice grains, we found them insufficient to accelerate submicron ice grains to Enceladus' escape speed. On the other hand, the gas distributions in the jet-like vents accelerated grains > 10 μm significantly above Enceladus' escape speed. It has been suggested that micron-sized grains are ejected from the vents with speeds comparable to the Enceladus escape speed. Here we report on these results, including comparisons to results obtained from 1D models, and discuss the implications of our plume model results. We also show preliminary results for similar considerations applied to Europa.
References: Hansen et al., 2011. Geophys. Res. Lett. 38, L11202; Ingersoll et al., 2010. Icarus 206, 594-607; Schmidt et al., 2008. Nature 451, 685-688; Soderblom et al., 2009. Science 250, 412-415; Roth et al., 2014. Science, http://dx.doi.org/10.1126/science.1247051
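The grain-size dependence reported above can be caricatured with a one-dimensional drag calculation. The sketch below integrates free-molecular gas drag, dv/dt proportional to (u_gas - v)|u_gas - v| / r, on grains of two sizes rising through a vent of fixed depth; all gas and vent parameter values are illustrative assumptions, not the paper's DSMC fields.

```python
# Toy 1D estimate of gas drag accelerating an ice grain up a vent.
# Assumed parameters (illustrative only): uniform gas speed and density,
# drag coefficient cd ~ 2 in the free-molecular limit, ice density 900 kg/m^3.

def exit_speed(radius, u_gas=500.0, rho_gas=3e-7, rho_ice=900.0,
               depth=1000.0, cd=2.0, dt=1e-3):
    """Integrate a grain of given radius [m] from rest until it has risen
    'depth' meters; returns its speed at the vent exit."""
    x = v = 0.0
    k = 3.0 * cd * rho_gas / (8.0 * rho_ice * radius)  # drag accel. / (u-v)^2
    while x < depth:
        rel = u_gas - v
        v += k * rel * abs(rel) * dt   # explicit Euler step on the drag law
        x += v * dt
    return v

v_small = exit_speed(1e-6)   # 1 micron grain
v_large = exit_speed(1e-4)   # 100 micron grain
```

Because the drag acceleration scales as 1/r, smaller grains couple to the gas more strongly and exit faster; whether a given size reaches escape speed then hinges on the gas density and velocity structure inside the vent, which is what the 2D DSMC solutions supply.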
Bayesian approach for peak detection in two-dimensional chromatography.
Vivó-Truyols, Gabriel
2012-03-20
A new method for peak detection in two-dimensional chromatography is presented. In a first step, the method applies a conventional one-dimensional peak detection algorithm to detect modulated peaks. In a second step, a more sophisticated algorithm decides which of the individual one-dimensional peaks originate from the same compound and should therefore be merged into a two-dimensional peak. The merging algorithm is based on Bayesian inference. The user sets prior information about certain parameters (e.g., second-dimension retention time variability, first-dimension band broadening, chromatographic noise). On the basis of these priors, the algorithm calculates the probability of myriads of peak arrangements (i.e., ways of merging one-dimensional peaks), finding the one with the highest probability. Uncertainty in each parameter can be accounted for by suitably adapting its probability distribution function, which in turn may change the final decision on the most probable peak arrangement. It has been demonstrated that the Bayesian approach presented in this paper follows the chromatographer's intuition. The algorithm has been applied and tested with LC × LC and GC × GC data and takes around 1 min to process chromatograms with several thousand peaks.
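The merging decision can be sketched as a two-hypothesis Bayes factor. The toy below uses illustrative priors and a single retention-time feature, far simpler than the paper's full model over peak arrangements: it scores whether two modulated 1D peaks originate from the same compound, based on their second-dimension retention-time offset dt.

```python
import math

def merge_probability(dt, sigma=0.05, window=4.0, prior_same=0.5):
    """Posterior probability that two 1D peaks belong to one 2D peak.
    'Same compound': dt ~ Gaussian(0, sigma), sigma being the prior
    second-dimension retention-time variability. 'Different compounds':
    dt ~ Uniform over the modulation window. All values are illustrative."""
    like_same = math.exp(-0.5 * (dt / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    like_diff = 1.0 / window
    num = like_same * prior_same
    return num / (num + like_diff * (1.0 - prior_same))

p_close = merge_probability(0.02)   # nearly aligned peaks -> merge
p_far = merge_probability(0.50)     # well separated peaks -> keep apart
```

Widening the prior `sigma` on retention-time variability raises the merge probability for larger offsets, which is exactly the mechanism by which the user's priors steer the final peak arrangement.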
Gauß and beyond: the making of Easter algorithms
NASA Astrophysics Data System (ADS)
Bien, Reinhold
2004-07-01
It is amazing to see how many webpages are devoted to the art of finding the date of Easter Sunday. Just for illustration, the reader may search for terms such as Gregorian calendar, date of Easter, or Easter algorithm. Sophisticated essays as well as less enlightening contributions are presented, and many a doubt is expressed about the reliability of some results obtained with some Easter algorithms. In short, there is still great interest in these problems. Gregorian Easter algorithms have existed for two centuries (or more?), but most of their history is rather obscure. One reason may be that some important sources are written in Latin or in the German of Goethe's time, or are simply difficult to find. Without claiming completeness, the following paper is intended to shed light on how those techniques emerged and evolved. Like a microcosm, the history of Easter algorithms resembles the history of any science: it is a story of trials, errors, and successes, and, last but not least, a story of offended pride. A number of articles published before 1910 are cited in: A. Fraenkel, Die Berechnung des Osterfestes. Journal für die reine und angewandte Mathematik, Volume 138 (1910), 133-146.
Chaotic Particle Swarm Optimization with Mutation for Classification
Assarzadeh, Zahra; Naghsh-Nilchi, Ahmad Reza
2015-01-01
In this paper, a chaotic particle swarm optimization with mutation (mutation-based classifier particle swarm optimization) is proposed to classify patterns of different classes in the feature space. The introduced mutation operators and chaotic sequences allow us to overcome the problem of early convergence to a local minimum associated with particle swarm optimization algorithms. That is, the mutation operator sharpens the convergence and tunes the best possible solution. Furthermore, to remove irrelevant data and reduce the dimensionality of medical datasets, a feature selection approach using a binary version of the proposed particle swarm optimization is introduced. To demonstrate the effectiveness of the proposed classifier, it is evaluated on three classification datasets, namely Wisconsin diagnostic breast cancer, Wisconsin breast cancer, and heart-statlog, with different feature vector dimensions. The proposed algorithm is compared with different classifier algorithms, including k-nearest neighbor as a conventional classifier, and particle swarm classifier, genetic algorithm, and imperialist competitive algorithm classifier as more sophisticated ones. The performance of each classifier was evaluated by calculating the accuracy, sensitivity, specificity, and Matthews correlation coefficient. The experimental results show that the mutation-based classifier particle swarm optimization unequivocally performs better than all the compared algorithms. PMID:25709937
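A minimal sketch of the two ingredients, chaotic coefficient sequences and mutation, is given below on a toy objective (the 2-D sphere function). The logistic map, the parameter values, and the mutation rate are illustrative choices, not the paper's exact classifier formulation.

```python
import random

def sphere(p):
    """Toy objective to minimize: f(x) = sum(x_i^2)."""
    return sum(v * v for v in p)

def chaotic_pso(dim=2, n=20, iters=100, seed=1):
    rng = random.Random(seed)
    z = 0.7                                   # logistic-map state
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sphere)[:]
    first = sphere(gbest)                     # best fitness before optimizing
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                z = 4.0 * z * (1.0 - z)       # chaotic sequence in (0, 1)
                r1 = z
                z = 4.0 * z * (1.0 - z)
                r2 = z
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
                if rng.random() < 0.02:       # mutation: re-seed a coordinate
                    pos[i][d] = rng.uniform(-5, 5)
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
                if sphere(pbest[i]) < sphere(gbest):
                    gbest = pbest[i][:]
    return first, sphere(gbest)

first, final = chaotic_pso()
```

The logistic map z <- 4z(1 - z) supplies deterministic, non-repeating coefficients in (0, 1), while the occasional re-seeding of a coordinate plays the role of the mutation operator that counteracts premature convergence.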
NASA Astrophysics Data System (ADS)
Yu, Xu; Shao, Quanqin; Zhu, Yunhai; Deng, Yuejin; Yang, Haijun
2006-10-01
With the development of informatization and the separation between data management departments and application departments, spatial data sharing has become one of the most important objectives of spatial information infrastructure construction; spatial metadata management, data transmission security, and data compression are the key technologies for realizing it. This paper discusses the key technologies for metadata-based data interoperability and examines data compression algorithms such as the adaptive Huffman, LZ77, and LZ78 algorithms. It studies the application of digital signature techniques to spatial data, which can not only identify the transmitter of the data but also promptly detect whether the data have been tampered with during network transmission. Based on an analysis of the symmetric encryption algorithms 3DES and AES, the asymmetric encryption algorithm RSA, and hash algorithms, it presents an improved hybrid encryption method for spatial data. Digital signature and digital watermarking technologies are also discussed. Finally, a new solution for spatial data network distribution is put forward, which adopts a three-layer architecture. Based on this framework, we present a spatial data network distribution system that is efficient and secure, and we demonstrate the feasibility and validity of the proposed solution.
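As a flavor of the dictionary-based compression family mentioned above, here is a minimal LZ78 encoder/decoder (a textbook sketch, unrelated to the paper's specific implementation): each output token is a (dictionary index, next character) pair, and the decoder rebuilds the phrase table on the fly.

```python
def lz78_encode(text):
    """Encode text as (phrase-index, next-char) pairs; index 0 = empty phrase."""
    table, out, w = {}, [], ""
    for ch in text:
        if w + ch in table:
            w += ch                          # extend the current phrase
        else:
            out.append((table.get(w, 0), ch))
            table[w + ch] = len(table) + 1   # register the new phrase
            w = ""
    if w:                                    # leftover phrase at end of input
        out.append((table[w], ""))
    return out

def lz78_decode(pairs):
    """Rebuild the phrase table while decoding; inverse of lz78_encode."""
    phrases, out = [""], []
    for idx, ch in pairs:
        phrase = phrases[idx] + ch
        out.append(phrase)
        phrases.append(phrase)
    return "".join(out)
```

Repetitive inputs compress because long repeated phrases collapse to single indices; an adaptive Huffman stage could then shorten the emitted tokens further.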
Merging Digital Medicine and Economics: Two Moving Averages Unlock Biosignals for Better Health.
Elgendi, Mohamed
2018-01-06
Algorithm development in digital medicine necessitates ongoing updating of knowledge and skills to match the current demands and constant progression of the field. In today's chaotic world there is an increasing trend to seek simple solutions for complex problems that can increase efficiency, reduce resource consumption, and improve scalability. This desire has spilled over into the world of science and research, where many disciplines have taken to investigating and applying more simplistic approaches. Interestingly, a review of current literature and research efforts suggests that the learning and teaching principles in digital medicine continue to push towards the development of sophisticated algorithms with a limited scope, and have not fully embraced or encouraged a shift towards simpler solutions that yield equal or better results. This short note aims to demonstrate that, within the world of digital medicine and engineering, simpler algorithms can offer effective and efficient solutions where traditionally more complex algorithms have been used. Moreover, the note demonstrates that bridging different research disciplines is very beneficial and yields valuable insights and results.
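In the spirit of the two-moving-averages idea, the sketch below flags "blocks of interest" wherever a short, event-scale moving average rises above a long, cycle-scale one. Window lengths and the synthetic three-pulse signal are illustrative assumptions, not the published algorithm's tuned values.

```python
def moving_average(x, w):
    """Centered moving average with window w, clamped at the edges."""
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1])
            / len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def count_events(signal, w_event=3, w_cycle=11):
    """Count contiguous blocks where the event-scale average exceeds
    the cycle-scale average (the two-moving-averages decision rule)."""
    ma_e = moving_average(signal, w_event)
    ma_c = moving_average(signal, w_cycle)
    blocks, inside = 0, False
    for e, c in zip(ma_e, ma_c):
        if e > c and not inside:
            blocks += 1                      # rising edge of a new block
        inside = e > c
    return blocks

# Synthetic biosignal: three unit pulses on a flat baseline.
signal = [0.0] * 60
for start in (10, 30, 50):
    for k in range(3):
        signal[start + k] = 1.0
events = count_events(signal)
```

On this synthetic signal the detector counts exactly the three pulses; on real biosignals the same two-line comparison, plus a small offset threshold, is the kind of simple machinery the note advocates over heavier alternatives.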
Optics measurement algorithms and error analysis for the proton energy frontier
NASA Astrophysics Data System (ADS)
Langner, A.; Tomás, R.
2015-03-01
Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and to determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements when deriving the optical parameters, and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed; owing to the improved algorithms, the derived optical parameters have significantly higher precision, with average error bars decreased by a factor of three to four. This allowed the calculation of β* values and proved fundamental to understanding the emittance evolution during the energy ramp.
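Combining many beam-position-monitor measurements that carry both error types can be sketched as inverse-variance weighting. This is a generic statistical sketch with made-up numbers; the actual LHC procedure propagates correlated systematics in a more sophisticated way.

```python
import math

def combine(measurements):
    """Inverse-variance weighted mean of (value, sigma_stat, sigma_syst)
    tuples, with total variance sigma_stat^2 + sigma_syst^2 per point."""
    weights = [1.0 / (s * s + y * y) for _, s, y in measurements]
    mean = sum(w * v for w, (v, _, _) in zip(weights, measurements)) / sum(weights)
    sigma = math.sqrt(1.0 / sum(weights))    # error bar of the combination
    return mean, sigma

# Hypothetical beta-function estimates from two monitor combinations.
beta, beta_err = combine([(1.0, 0.1, 0.05), (1.2, 0.2, 0.05)])
```

Adding more (independent) monitor combinations shrinks the combined error bar as the weights accumulate, which is the mechanism behind the factor of three to four improvement quoted above.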
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumuluru, Jaya Shankar; McCulloch, Richard Chet James
In this work a new hybrid genetic algorithm was developed which combines a rudimentary adaptive steepest ascent hill climbing algorithm with a sophisticated evolutionary algorithm in order to optimize complex multivariate design problems. By combining a highly stochastic algorithm (evolutionary) with a simple deterministic optimization algorithm (adaptive steepest ascent), computational resources are conserved and the solution converges rapidly when compared to either algorithm alone. In genetic algorithms, natural selection is mimicked by random events such as breeding and mutation. In the adaptive steepest ascent algorithm, each variable is perturbed by a small amount and the variable that caused the most improvement is incremented by a small step. If the direction of most benefit is exactly opposite to the previous direction of most benefit, then the step size is reduced by a factor of 2; thus the step size adapts to the terrain. A graphical user interface was created in MATLAB to provide an interface between the hybrid genetic algorithm and the user. Additional features such as bounding the solution space and weighting the objective functions individually are also built into the interface. The algorithm was tested by optimizing the functions developed for a wood pelleting process. Using process variables (such as feedstock moisture content, die speed, and preheating temperature), pellet properties were appropriately optimized. Specifically, variables were found which maximized unit density, bulk density, tapped density, and durability while minimizing pellet moisture content and specific energy consumption. The time and computational resources required for the optimization were dramatically decreased using the hybrid genetic algorithm when compared to MATLAB's native evolutionary optimization tool.
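The adaptive steepest-ascent step rule described above can be sketched as follows. The toy objective and constants are illustrative, and the hybrid's evolutionary half and MATLAB interface are omitted: perturb each variable by the current step, take the most beneficial move, and halve the step when the best direction reverses (or when no move helps).

```python
def hill_climb(g, x, step=1.0, iters=300):
    """Adaptive steepest-ascent sketch: maximize g by coordinate steps,
    halving the step size when the best direction flips on a variable
    (or when no perturbation improves g)."""
    prev = None
    for _ in range(iters):
        gx = g(x)
        best_gain, best_move = None, None
        for i in range(len(x)):
            for sign in (1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                gain = g(trial) - gx
                if best_gain is None or gain > best_gain:
                    best_gain, best_move = gain, (i, sign)
        reversed_dir = (prev is not None and best_move[0] == prev[0]
                        and best_move[1] != prev[1])
        if best_gain <= 0 or reversed_dir:
            step *= 0.5                      # the step adapts to the terrain
        if best_gain > 0:
            x[best_move[0]] += best_move[1] * step
        prev = best_move
    return x

# Illustrative smooth objective with optimum at (1.3, -0.7).
target = (1.3, -0.7)
g = lambda p: -(p[0] - target[0]) ** 2 - (p[1] - target[1]) ** 2
opt = hill_climb(g, [0.0, 0.0])
```

Each halving lets the search settle into ever finer terrain, which is what makes the deterministic half of the hybrid cheap once the evolutionary half has located the right basin.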
NASA Astrophysics Data System (ADS)
Peng, Ao-Ping; Li, Zhi-Hui; Wu, Jun-Lin; Jiang, Xin-Yu
2016-12-01
Based on previous research on the Gas-Kinetic Unified Algorithm (GKUA) for flows ranging from highly rarefied free-molecule and transition regimes to the continuum, a new implicit scheme of the cell-centered finite volume method is presented for directly solving the unified Boltzmann model equation covering various flow regimes. In view of the difficulty of generating a high-quality single-block grid system for complex irregular bodies, a multi-block docking grid generation method is designed on the basis of data transmission between blocks, and the data structure is constructed for processing arbitrary connection relations between blocks with high efficiency and reliability. As a result, the gas-kinetic unified algorithm with the implicit scheme and multi-block docking grid has been established for the first time and used to solve reentry flow problems around multiple bodies covering all flow regimes, with Knudsen numbers ranging from 10 to 3.7E-6. The implicit and explicit schemes are applied to computing and analyzing the supersonic flows in the near-continuum and continuum regimes around a circular cylinder, with careful comparison between the two. It is shown that the present algorithm and modelling possess much higher computational efficiency and faster convergence. Flow problems involving two and three side-by-side cylinders are simulated from highly rarefied to near-continuum flow regimes, and the computed results are found to be in good agreement with related DSMC simulations and theoretical analysis solutions, which verifies the accuracy and reliability of the present method. It is observed that as the spacing between the bodies decreases, the throat obstruction between the cylinders increases, the flow field around each single body becomes more clearly asymmetric, and the normal force coefficient grows.
In the near-continuum transitional flow regime of near-space flight conditions, once the spacing between the bodies increases to six times the diameter of a single body, the interference effects of the multiple bodies become negligible. Computing practice has confirmed that the present method is feasible for computing the aerodynamics and revealing the flow mechanisms around complex multi-body vehicles covering all flow regimes, from the gas-kinetic point of view of solving the unified Boltzmann model velocity distribution function equation.
Demonstration of Hybrid DSMC-CFD Capability for Nonequilibrium Reacting Flow
2018-02-09
Lens-XX facility. This flow was chosen since a recent blind-code validation exercise revealed differences in CFD predictions and experimental data that could be due to rarefied flow effects. The CFD solutions (using the US3D code) were run with no-slip boundary conditions and with ... excellent agreement with that predicted by CFD. This implies that the difference between CFD predictions and experimental data is not due to rarefied ...
Integrated Logistics Guide. Second Edition
1994-06-14
Heat transfer in nonequilibrium boundary layer flow over a partly catalytic wall
NASA Astrophysics Data System (ADS)
Wang, Zhi-Hui
2016-11-01
Surface catalysis has a strong influence on the aeroheating performance of hypersonic vehicles. For the reentry flow problem of a traditional blunt vehicle, it is reasonable to assume a frozen boundary layer surrounding the vehicle's nose, and the catalytic heating can be decoupled from the heat conduction. However, for a hypersonic cruise vehicle flying in medium-density near space, the boundary layer flow around its sharp leading edge is likely to be in nonequilibrium rather than frozen, due to rarefied gas effects. As a result, there is a competition between heat conduction and catalytic heating. In this paper, theoretical modeling and the direct simulation Monte Carlo (DSMC) method are employed to study the corresponding rarefied nonequilibrium flow and heat transfer phenomena near the leading edge of near-space hypersonic vehicles. It is found that even at the same degree of rarefaction, the nonequilibrium degree of the flow and the corresponding heat transfer performance of a sharp leading edge can differ from those of a large blunt nose. A generalized model is preliminarily proposed to describe and evaluate the competition between the homogeneous recombination of atoms inside the nonequilibrium boundary layer and the heterogeneous recombination of atoms on the catalytic wall surface. The introduced nonequilibrium criterion and the analytical formula are validated and calibrated against the DSMC results, and the physical mechanism is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Tong, E-mail: tongzhu2@illinois.edu; Levin, Deborah A., E-mail: deblevin@illinois.edu; Li, Zheng, E-mail: zul107@psu.edu
2016-08-14
A high-fidelity internal energy relaxation model for N2-N suitable for use in direct simulation Monte Carlo (DSMC) modeling of chemically reacting flows is proposed. A novel two-dimensional binning approach with variable bin energy resolutions in the rotational and vibrational modes is developed for treating the internal mode of N2. Both bin-to-bin and state-specific relaxation cross sections are obtained using the molecular dynamics/quasi-classical trajectory (MD/QCT) method with two potential energy surfaces as well as the state-specific database of Jaffe et al. The MD/QCT simulations of inelastic energy exchange between N2 and N show that there is a strong forward-preferential scattering behavior at high collision velocities. The 99-bin model is used in homogeneous DSMC relaxation simulations and is found to be able to recover the state-specific master equation results of Panesi et al. when the Jaffe state-specific cross sections are used. Rotational relaxation energy profiles and relaxation times obtained using the ReaxFF and Jaffe potential energy surfaces (PESs) are in general agreement, but there are larger differences between the vibrational relaxation times. These differences become smaller as the translational temperature increases, because the difference in the PES energy barrier becomes less important.
A generalized form of the Bernoulli Trial collision scheme in DSMC: Derivation and evaluation
NASA Astrophysics Data System (ADS)
Roohi, Ehsan; Stefanov, Stefan; Shoja-Sani, Ahmad; Ejraei, Hossein
2018-02-01
The impetus of this research is to present a generalized Bernoulli Trial collision scheme in the context of the direct simulation Monte Carlo (DSMC) method. Previously, a succession of collision schemes has been put forward, mathematically based on the Kac stochastic model. These include the Bernoulli Trial (BT), Ballot Box (BB), Simplified Bernoulli Trial (SBT) and Intelligent Simplified Bernoulli Trial (ISBT) schemes. The number of pairs considered for a possible collision in the above-mentioned schemes varies between N(l)(N(l) - 1)/2 in BT, 1 in BB, and (N(l) - 1) in SBT or ISBT, where N(l) is the instantaneous number of particles in the lth cell. Here, we derive a generalized form of the Bernoulli Trial collision scheme (GBT) where the number of selected pairs is any desired value smaller than (N(l) - 1), i.e., Nsel < (N(l) - 1), keeping the collision frequency and the accuracy of the solution the same as in the original SBT and BT models. We derive two distinct formulas for the GBT scheme, where both formulas recover the BB and SBT limits when Nsel is set to 1 and N(l) - 1, respectively, and provide accurate solutions for a wide set of test cases. The present generalization further improves the computational efficiency of the BT-based collision models compared to the standard no time counter (NTC) and nearest neighbor (NN) collision models.
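The pair-budget idea can be caricatured with a constant collision kernel. This is a statistical sketch only, not the GBT derivation itself, which obtains per-pair probabilities from the Kac master equation: testing Nsel candidate pairs with an acceptance probability rescaled by (N - 1)/Nsel leaves the expected collision number per cell unchanged.

```python
import random

def gbt_collisions(n_part, n_sel, p_coll, rng):
    """Collisions accepted in one cell for a constant kernel: n_sel candidate
    pairs are tested, with the per-pair acceptance probability rescaled by
    (n_part - 1) / n_sel so the expectation matches the SBT limit."""
    count = 0
    for _ in range(n_sel):
        # with a constant kernel only the acceptance probability matters
        if rng.random() < p_coll * (n_part - 1) / n_sel:
            count += 1
    return count

rng = random.Random(0)
trials = 20000
# BB-like limit (one pair per cell) vs SBT-like limit (N-1 pairs per cell)
mean_bb = sum(gbt_collisions(10, 1, 0.05, rng) for _ in range(trials)) / trials
mean_sbt = sum(gbt_collisions(10, 9, 0.05, rng) for _ in range(trials)) / trials
```

Both limits, Nsel = 1 (Ballot Box) and Nsel = N - 1 (SBT), reproduce the same mean collision number per cell, here (N - 1) * p = 0.45, which is the invariance the generalized scheme preserves for arbitrary intermediate Nsel.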
Parsons, Neal; Levin, Deborah A; van Duin, Adri C T; Zhu, Tong
2014-12-21
The Direct Simulation Monte Carlo (DSMC) method typically used for simulating hypersonic Earth re-entry flows requires accurate total collision cross sections and reaction probabilities. However, total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, so their reliability is unknown for the high temperatures observed in hypersonic flows. Existing DSMC reaction models accurately reproduce experimental equilibrium reaction rates, but the applicability of these rates to the strong thermal nonequilibrium observed in hypersonic shocks is unknown. For hypersonic flows, these modeling issues are particularly relevant for nitrogen, the dominant species of air. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method is used to accurately compute collision and reaction cross sections for the N2(1Σg+)-N2(1Σg+) collision pair for conditions expected in hypersonic shocks, using a new potential energy surface developed from a ReaxFF fit to recent advanced ab initio calculations. The MD/QCT-computed reaction probabilities were found to exhibit better physical behavior and predict less dissociation than the baseline total collision energy reaction model for strong nonequilibrium conditions expected in a shock. The MD/QCT reaction model compared well with computed equilibrium reaction rates and shock-tube data. In addition, the MD/QCT-computed total cross sections were found to agree well with established variable hard sphere total cross sections.
2017-01-01
A space propulsion system is important for the normal mission operations of a spacecraft, adjusting its attitude and performing maneuvers. Generally, mono- and bipropellant thrusters have mainly been used for low-thrust liquid rocket engines. However, as the plume gas expelled from these small thrusters diffuses freely in all directions in the vacuum of space, unwanted plume collisions with spacecraft surfaces can dramatically degrade the function and performance of the spacecraft. Thus, the aim of the present study is to investigate and quantitatively compare the major plume impingement effects of small mono- and bipropellant thrusters using computational fluid dynamics (CFD). For efficiency of the numerical calculations, the whole calculation domain is divided into two different flow regimes depending on the flow characteristics, and the Navier-Stokes equations and a parallelized Direct Simulation Monte Carlo (DSMC) method are adopted for each regime. From the present analysis, the thermal and mass influences of plume impingement on the spacecraft were analyzed for the mono- and bipropellant thrusters. As a result, it is concluded that a careful understanding of plume impingement effects, which depend on the chemical characteristics of the different propellants, is necessary for efficient spacecraft design. PMID:28636625
NASA Astrophysics Data System (ADS)
Gicquel, Adeline; Vincent, Jean-Baptiste; Sierks, Holger; Rose, Martin; Agarwal, Jessica; Deller, Jakob; Guettler, Carsten; Hoefner, Sebastian; Hofmann, Marc; Hu, Xuanyu; Kovacs, Gabor; Oklay Vincent, Nilda; Shi, Xian; Tubiana, Cecilia; Barbieri, Cesare; Lamy, Phylippe; Rodrigo, Rafael; Koschny, Detlef; Rickman, Hans; OSIRIS Team
2016-10-01
Images of the nucleus and the coma (gas and dust) of comet 67P/Churyumov-Gerasimenko have been acquired by the OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) camera system since March 2014 using both the wide angle camera (WAC) and the narrow angle camera (NAC). We are using the NAC camera to study the bright outburst observed on July 29th, 2015 in the southern hemisphere. The NAC covers wavelengths between 250 and 1000 nm with a combination of 12 filters. Its high spatial resolution is needed to localize the source point of the outburst on the surface of the nucleus. At the time of the observations, the heliocentric distance was 1.25 AU and the distance between the spacecraft and the comet was 126 km. We aim to understand the physics leading to such outgassing: Is the jet associated with the outburst controlled by the micro-topography? Or by suddenly exposed ice? We are using the Direct Simulation Monte Carlo (DSMC) method to study the gas flow close to the nucleus. The goal of the DSMC code is to reproduce the opening angle of the jet and to constrain the outgassing ratio between the outburst source and the local region. The results of this model will be compared to the images obtained with the NAC camera.
Neurofeedback Training for BCI Control
NASA Astrophysics Data System (ADS)
Neuper, Christa; Pfurtscheller, Gert
Brain-computer interface (BCI) systems detect changes in brain signals that reflect human intention, then translate these signals to control monitors or external devices (for a comprehensive review, see [1]). BCIs typically measure electrical signals resulting from neural firing (i.e., neuronal action potentials, the electrocorticogram (ECoG), or the electroencephalogram (EEG)). Sophisticated pattern recognition and classification algorithms convert neural activity into the required control signals. BCI research has focused heavily on developing powerful signal processing and machine learning techniques to accurately classify neural activity [2-4].
2017-01-01
2016 KIVA-hpFE Development: A Robust and Accurate Engine Modeling Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrington, David Bradley; Waters, Jiajia
Los Alamos National Laboratory and its collaborators are facilitating engine modeling by improving the accuracy and robustness of the modeling and of the software, and we continue to improve the physical modeling methods. We are developing and implementing new mathematical algorithms that represent the physics within an engine. We provide software that others may use directly or extend with various models, e.g., sophisticated chemical kinetics, different turbulence closure methods, or other fuel injection and spray systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Searles, D.B.
1993-03-01
The goal of the proposed work is the creation of a software system that will perform sophisticated pattern recognition and related functions for biological sequences at a level of abstraction and with expressive power beyond current general-purpose pattern-matching systems, with a more uniform language, environment, and graphical user interface, and with greater flexibility, extensibility, embeddability, and ability to incorporate other algorithms than current special-purpose analytic software.
Numerical realization of the variational method for generating self-trapped beams
NASA Astrophysics Data System (ADS)
Duque, Erick I.; Lopez-Aguayo, Servando; Malomed, Boris A.
2018-03-01
We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc
2014-05-01
Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and in the determination of functional parameters such as the left ventricular volume. Since the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance for low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan, and the simulated data were processed the same way as the real mouse data sets. Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy.
For lower dose levels, which were simulated for the real mouse data sets, the HDTV algorithm shows the best performance. At 50 mGy, the deviation from the reference obtained at 500 mGy was less than 4%. The LDPC algorithm also provides reasonable results, with deviations of less than 10% at 50 mGy, while the PCF and MKB reconstructions show larger deviations even at higher dose levels. LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes illustrate that cardiac parameters can be accurately estimated at the lowest dose levels if sophisticated algorithms are used. This allows dose to be reduced by a factor of 10 compared to today's gold standard and opens new options for longitudinal studies of the heart.
NASA Technical Reports Server (NTRS)
Holden, Michael S.; Harvey, John K.; Boyd, Iain D.; George, Jyothish; Horvath, Thomas J.
1997-01-01
This paper summarizes the results of a series of experimental studies in the LENS shock tunnel and computations with DSMC and Navier-Stokes codes which have been made to examine the aerothermal and flowfield characteristics of the flow over a sting-supported planetary probe configuration in hypervelocity air and nitrogen flows. The experimental program was conducted in the LENS hypervelocity shock tunnel at total enthalpies of 5 and 10 MJ/kg for a range of reservoir pressure conditions from 70 to 500 bars. Heat transfer and pressure measurements were made on the front and rear face of the probe and along the supporting sting. High-speed and single-shot schlieren photography were also employed to examine the flow over the model and the time to establish the flow in the base recirculation region. Predictions of the flowfield characteristics and the distributions of heat transfer and pressure were made with DSMC codes for rarefied flow conditions and with Navier-Stokes solvers for the higher pressure conditions, where the flows were assumed to be laminar. Analysis of the time-history records from the heat transfer and pressure instrumentation on the face of the probe and in the base region indicated that the base flow was fully established in under 4 milliseconds from flow initiation, or between 35 and 50 flow lengths based on base height. The measurements made in three different tunnel entries with two models of identical geometries but different instrumentation packages, one prepared by NASA Langley and the second by CUBRC, demonstrated good agreement between heat transfer measurements made with two different types of thin-film and coaxial gage instrumentation. The measurements of heat transfer and pressure on the front face of the probe were in good agreement with theoretical predictions from both the DSMC and Navier-Stokes codes.
For the measurements made in low-density flows, computations with the DSMC code were found to compare well with the pressure and heat transfer measurements on the sting, although the computed heat transfer rates in the recirculation region did not exhibit the same characteristics as the measurements. For the 10 MJ/kg and 500 bar reservoir match-point condition, the measurements of heat transfer along the sting from the first group of studies were in agreement with the Navier-Stokes solutions for laminar conditions. A similar set of measurements made in later tests, where the model was moved to a slightly different position in the test section, indicated that the boundary layer in the reattachment compression region was close to transition, or transitional, where small changes in the test environment can result in larger-than-laminar heating rates. The maximum heating coefficients on the sting observed in the present studies were a small fraction of similar measurements obtained at nominally the same conditions in the HEG shock tunnel, where it is possible for transition to occur in the base flow, and in the low-enthalpy studies conducted in the NASA Langley high Reynolds number Mach 10 tunnel, where the base flow was shown to be turbulent. While the hybrid Navier-Stokes/DSMC calculations by Gochberg et al. (Reference 1) suggested that employing the Navier-Stokes calculations for the entire flowfield could be seriously in error in the base region for the 10 MJ/kg, 500 bar test case, similar calculations performed by Cornell, presented here, do not.
Khosravi, H R; Nodehi, Mr Golrokh; Asnaashari, Kh; Mahdavi, S R; Shirazi, A R; Gholami, S
2012-07-01
The aim of this study was to evaluate and analytically compare the different calculation algorithms applied in radiotherapy centers in our country, based on the methodology developed by the IAEA for treatment planning system (TPS) commissioning (IAEA TECDOC 1583). A thorax anthropomorphic phantom (CIRS 002LFC) was used to perform 7 tests that simulate the whole chain of external-beam TPS planning. Doses were measured with ion chambers, and the deviation between the measured and TPS-calculated doses was reported. This methodology, which employs the same phantom and the same setup test cases, was tested in 4 different hospitals using 5 different algorithms/inhomogeneity correction methods implemented in different TPSs. The algorithms in this study were divided into two groups: correction-based and model-based algorithms. A total of 84 clinical test case datasets for different energies and calculation algorithms were produced; the differences at inhomogeneity points with low density (lung) and high density (bone) decreased meaningfully with more advanced algorithms. The number of deviations outside the agreement criteria increased with beam energy and decreased with the advancement of the TPS calculation algorithm. Large deviations were seen for some correction-based algorithms, so sophisticated algorithms would be preferred in clinical practice, especially for calculations in inhomogeneous media. Use of model-based algorithms with lateral transport calculation is recommended. Some systematic errors revealed during this study show the necessity of performing periodic audits of TPSs in radiotherapy centers. © 2012 American Association of Physicists in Medicine.
Liang, Tengfei; Li, Qi; Ye, Wenjing
2013-07-01
A systematic study of the performance of two empirical gas-wall interaction models, the Maxwell model and the Cercignani-Lampis (CL) model, in the entire Knudsen range is conducted. The models are evaluated by examining the accuracy of key macroscopic quantities such as temperature, density, and pressure in three benchmark thermal problems, namely the Fourier thermal problem, the Knudsen force problem, and the thermal transpiration problem. The reference solutions are obtained from a validated hybrid DSMC-MD algorithm developed in-house. It has been found that while both models predict temperature and density reasonably well in the Fourier thermal problem, the pressure profile obtained from the Maxwell model exhibits a trend that opposes that of the reference solution. As a consequence, the Maxwell model is unable to predict the orientation change of the Knudsen force acting on a cold cylinder embedded in a hot cylindrical enclosure at a certain Knudsen number. In the simulation of the thermal transpiration coefficient, although both models overestimate the coefficient, the coefficient obtained from the CL model is the closest to the reference solution; the Maxwell model performs the worst. The cause of the overestimated coefficient is investigated, and its link to the overly constrained correlation between the tangential momentum accommodation coefficient and the tangential energy accommodation coefficient inherent in the models is pointed out. Directions for further improvement of the models are suggested.
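For reference, the Maxwell model itself is simple to state: with probability equal to the accommodation coefficient, the molecule is re-emitted diffusely from a Maxwellian at the wall temperature; otherwise it is reflected specularly. A hedged sketch follows (wall normal assumed along +z; this is an illustration of the model, not the in-house hybrid DSMC-MD code):

```python
import numpy as np

def maxwell_reflect(v, T_wall, alpha, m, kB=1.380649e-23, rng=None):
    """Maxwell gas-wall model: with probability alpha, diffuse re-emission
    from a wall Maxwellian at temperature T_wall; otherwise specular
    reflection. Wall normal is +z and the incoming molecule has v[2] < 0."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < alpha:
        s = np.sqrt(kB * T_wall / m)                         # thermal speed scale
        vx, vy = rng.normal(0.0, s, size=2)                  # tangential: Gaussian
        vz = s * np.sqrt(-2.0 * np.log(1.0 - rng.random()))  # normal: flux-weighted (Rayleigh)
        return np.array([vx, vy, vz])
    v_out = np.array(v, dtype=float)
    v_out[2] = -v_out[2]                                     # specular: flip normal component
    return v_out
```

The normal component of a diffusely emitted molecule follows the flux-weighted (Rayleigh) distribution rather than a half-Gaussian, which is the standard choice for wall re-emission.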
Machine Learning for Discriminating Quantum Measurement Trajectories and Improving Readout.
Magesan, Easwar; Gambetta, Jay M; Córcoles, A D; Chow, Jerry M
2015-05-22
Current methods for classifying measurement trajectories in superconducting qubit systems produce fidelities systematically lower than those predicted by experimental parameters. Here, we place current classification methods within the framework of machine learning (ML) algorithms and improve on them by investigating more sophisticated ML approaches. We find that nonlinear algorithms and clustering methods produce significantly higher assignment fidelities that help close the gap to the fidelity possible under ideal noise conditions. Clustering methods group trajectories into natural subsets within the data, which allows for the diagnosis of systematic errors. We find large clusters in the data associated with T1 processes and show these are the main source of discrepancy between our experimental and ideal fidelities. These error diagnosis techniques help provide a path forward to improve qubit measurements.
Commissioning of the FTS-2 Data Reduction Pipeline
NASA Astrophysics Data System (ADS)
Sherwood, M.; Naylor, D.; Gom, B.; Bell, G.; Friberg, P.; Bintley, D.
2015-09-01
FTS-2 is the intermediate resolution Fourier Transform Spectrometer coupled to the SCUBA-2 facility bolometer camera at the James Clerk Maxwell Telescope in Hawaii. Although in principle FTS instruments have the advantage of relatively simple optics compared to other spectrometers, they require more sophisticated data processing to compute spectra from the recorded interferogram signal. In the case of FTS-2, the complicated optical design required to interface with the existing telescope optics introduces performance compromises that complicate spectral and spatial calibration, and the response of the SCUBA-2 arrays introduce interferogram distortions that are a challenge for data reduction algorithms. We present an overview of the pipeline and discuss new algorithms that have been written to correct the noise introduced by unexpected behavior of the SCUBA-2 arrays.
Design Principles of Regulatory Networks: Searching for the Molecular Algorithms of the Cell
Lim, Wendell A.; Lee, Connie M.; Tang, Chao
2013-01-01
A challenge in biology is to understand how complex molecular networks in the cell execute sophisticated regulatory functions. Here we explore the idea that there are common and general principles that link network structures to biological functions, principles that constrain the design solutions that evolution can converge upon for accomplishing a given cellular task. We describe approaches for classifying networks based on abstract architectures and functions, rather than on the specific molecular components of the networks. For any common regulatory task, can we define the space of all possible molecular solutions? Such inverse approaches might ultimately allow the assembly of a design table of core molecular algorithms that could serve as a guide for building synthetic networks and modulating disease networks. PMID:23352241
NASA Astrophysics Data System (ADS)
Lin, Chien-Liang; Su, Yu-Zheng; Hung, Min-Wei; Huang, Kuo-Cheng
2010-08-01
In recent years, Augmented Reality (AR) [1][2][3] has become very popular in universities and research organizations. AR technology has been widely used in Virtual Reality (VR) fields such as sophisticated weapons, flight vehicle development, data model visualization, virtual training, entertainment, and the arts. AR can enhance the display output of a real environment with specific user-interactive functions or specific object recognition. It can be used in medical treatment, anatomy training, precision instrument casting, warplane guidance, engineering, and remote robot control. AR has many advantages over VR. The system developed here combines sensors, software, and imaging algorithms to make users feel that what they see is real and present. The imaging algorithms include a gray-level method, an image binarization method, and a white balance method, in order to achieve accurate image recognition and overcome the effects of lighting.
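Two of the imaging steps named above can be sketched in a few lines; the gray-world assumption for white balance and a fixed threshold for binarization are illustrative simplifications, not necessarily the methods used in this system.

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Gray-world white balance: scale each channel so its mean equals
    the global mean intensity."""
    rgb = np.asarray(rgb, dtype=float)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means
    return np.clip(rgb * gain, 0, 255)

def binarize(gray, threshold=128):
    """Fixed-threshold binarization to a 0/1 mask."""
    return (np.asarray(gray) >= threshold).astype(np.uint8)
```

A color cast (e.g., one channel systematically brighter) is removed by the per-channel gains, after which thresholding yields a clean mask for marker recognition.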
Fast optimization algorithms and the cosmological constant
NASA Astrophysics Data System (ADS)
Bao, Ning; Bousso, Raphael; Jordan, Stephen; Lackey, Brad
2017-11-01
Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of a problem that is hard for the complexity class NP (Nondeterministic Polynomial-time). The number of elementary operations (quantum gates) needed to solve this problem by brute force search exceeds the estimated computational capacity of the observable Universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order 10^-120 in a randomly generated 10^9-dimensional Arkani-Hamed-Dimopoulos-Kachru landscape.
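To give a flavor of how heuristics can beat brute force on such combinatorial search problems: finding a tiny residual among exponentially many sign/flux combinations is reminiscent of number partitioning, where the polynomial-time Karmarkar-Karp differencing heuristic finds far smaller residuals than naive search. The analogy and the example below are illustrative only, not the algorithms analyzed by the authors.

```python
import heapq

def karmarkar_karp(values):
    """Largest-differencing heuristic for two-way number partitioning:
    repeatedly replace the two largest numbers by their difference.
    Returns the final residual |sum(A) - sum(B)|, in polynomial time."""
    heap = [-v for v in values]  # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)  # largest
        b = -heapq.heappop(heap)  # second largest
        heapq.heappush(heap, -(a - b))
    return -heap[0]
```

Committing the two largest numbers to opposite sides at each step shrinks the residual roughly exponentially with problem size, whereas uniform random assignments leave a residual that only shrinks polynomially.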
Tools for Atmospheric Radiative Transfer: Streamer and FluxNet. Revised
NASA Technical Reports Server (NTRS)
Key, Jeffrey R.; Schweiger, Axel J.
1998-01-01
Two tools for the solution of radiative transfer problems are presented. Streamer is a highly flexible medium spectral resolution radiative transfer model based on the plane-parallel theory of radiative transfer. Capable of computing either fluxes or radiances, it is suitable for studying radiative processes at the surface or within the atmosphere and for the development of remote-sensing algorithms. FluxNet is a fast neural network-based implementation of Streamer for computing surface fluxes. It allows for a sophisticated treatment of radiative processes in the analysis of large data sets and potential integration into geophysical models where computational efficiency is an issue. Documentation and tools for the development of alternative versions of Fluxnet are available. Collectively, Streamer and FluxNet solve a wide variety of problems related to radiative transfer: Streamer provides the detail and sophistication needed to perform basic research on most aspects of complex radiative processes while the efficiency and simplicity of FluxNet make it ideal for operational use.
Software algorithms for false alarm reduction in LWIR hyperspectral chemical agent detection
NASA Astrophysics Data System (ADS)
Manolakis, D.; Model, J.; Rossacci, M.; Zhang, D.; Ontiveros, E.; Pieper, M.; Seeley, J.; Weitz, D.
2008-04-01
The long-wave infrared (LWIR) hyperspectral sensing modality is one that is often used for the detection and identification of chemical warfare agents (CWA), a problem that applies to both military and civilian situations. The inherent nature and complexity of background clutter dictate a need for sophisticated and robust statistical models, which are then used in the design of optimum signal processing algorithms that provide the best exploitation of hyperspectral data to ultimately make decisions on the absence or presence of potentially harmful CWAs. This paper describes the basic elements of an automated signal processing pipeline developed at MIT Lincoln Laboratory. In addition to describing this signal processing architecture in detail, we briefly describe the key signal models that form the foundation of these algorithms as well as some spatial processing techniques used for false alarm mitigation. Finally, we apply this processing pipeline to real data measured by the Telops FIRST hyperspectral sensor to demonstrate its practical utility for the user community.
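At the core of such detection pipelines is typically a spectral matched filter, which scores each pixel spectrum against a target signature whitened by the background covariance. A generic sketch (not Lincoln Laboratory's implementation) is:

```python
import numpy as np

def matched_filter_scores(X, target, mean, cov):
    """Generic spectral matched filter: score each row-spectrum x of X by
    (s - mu)^T C^-1 (x - mu), normalized so a pixel equal to the target
    signature scores 1 and the background mean scores 0."""
    s = np.asarray(target, dtype=float) - mean
    c_inv_s = np.linalg.solve(cov, s)          # C^-1 (s - mu)
    return (X - mean) @ c_inv_s / (s @ c_inv_s)
```

Thresholding these scores gives the raw detections that downstream spatial processing then screens for false alarms.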
Automated parameterization of intermolecular pair potentials using global optimization techniques
NASA Astrophysics Data System (ADS)
Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk
2014-12-01
In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
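The simplest baseline among the compared techniques, pure random search, fits in a few lines. It is shown here on the Rosenbrock test function rather than on force-field simulations, which are far too costly to call this way; the function and bounds are illustrative assumptions.

```python
import numpy as np

def rosenbrock(x):
    """Classic banana-valley test function, minimum 0 at x = (1, ..., 1)."""
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))

def pure_random_search(f, bounds, n_iter, rng):
    """Pure random search: sample uniformly in the box, keep the best point."""
    lo, hi = bounds
    best_x, best_f = None, np.inf
    for _ in range(n_iter):
        x = rng.uniform(lo, hi)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

More sophisticated methods (recursive random search, CMA-ES, differential evolution) improve on this baseline mainly by reusing information from earlier evaluations, which matters most when each evaluation is an expensive simulation.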
NASA Astrophysics Data System (ADS)
Leclerc, Arnaud; Thomas, Phillip S.; Carrington, Tucker
2017-08-01
Vibrational spectra and wavefunctions of polyatomic molecules can be calculated at low memory cost using low-rank sum-of-product (SOP) decompositions to represent basis functions generated using an iterative eigensolver. Using a SOP tensor format does not determine the iterative eigensolver. The choice of the interative eigensolver is limited by the need to restrict the rank of the SOP basis functions at every stage of the calculation. We have adapted, implemented and compared different reduced-rank algorithms based on standard iterative methods (block-Davidson algorithm, Chebyshev iteration) to calculate vibrational energy levels and wavefunctions of the 12-dimensional acetonitrile molecule. The effect of using low-rank SOP basis functions on the different methods is analysed and the numerical results are compared with those obtained with the reduced rank block power method. Relative merits of the different algorithms are presented, showing that the advantage of using a more sophisticated method, although mitigated by the use of reduced-rank SOP functions, is noticeable in terms of CPU time.
Assessing semantic similarity of texts - Methods and algorithms
NASA Astrophysics Data System (ADS)
Rozeva, Anna; Zerkova, Silvia
2017-12-01
Assessing the semantic similarity of texts is an important part of many text-related applications such as educational systems, information retrieval, and text summarization. This task is performed by sophisticated analysis that implements text-mining techniques. Text mining involves several pre-processing steps, which provide a structured, representative model of the documents in a corpus by extracting and selecting the features that characterize their content. Generally, the model is vector-based and enables further analysis with knowledge discovery approaches. Algorithms and measures are used for assessing texts at the syntactic and semantic levels. An important text-mining method and similarity measure is latent semantic analysis (LSA), which reduces the dimensionality of the document vector space and better captures the text semantics. The mathematical background of LSA, deriving the meaning of words in a given text by exploring their co-occurrence, is examined. The algorithm for obtaining the vector representation of words and their corresponding latent concepts in a reduced multidimensional space, as well as the similarity calculation, is presented.
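The core LSA step, a truncated SVD of the term-document matrix followed by cosine similarity in the reduced space, can be sketched as follows (a minimal illustration, not a full text-mining pipeline):

```python
import numpy as np

def lsa_doc_vectors(term_doc, k):
    """Latent semantic analysis: truncated SVD of the term-document matrix
    A ~ U_k S_k V_k^T; returns the k-dimensional document coordinates
    S_k V_k^T (one column per document)."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    return s[:k, None] * Vt[:k]

def cosine_sim(a, b):
    """Cosine similarity between two document vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Documents sharing vocabulary end up close in the reduced space even when their raw term vectors differ, which is what makes LSA a useful similarity measure.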
Moving Object Detection Using a Parallax Shift Vector Algorithm
NASA Astrophysics Data System (ADS)
Gural, Peter S.; Otto, Paul R.; Tedesco, Edward F.
2018-07-01
There are various algorithms currently in use to detect asteroids from ground-based observatories, but they are generally restricted to linear or mildly curved movement of the target object across the field of view. Space-based sensors in high inclination, low Earth orbits can induce significant parallax in a collected sequence of images, especially for objects at the typical distances of asteroids in the inner solar system. This results in a highly nonlinear motion pattern of the asteroid across the sensor, which requires a more sophisticated search pattern for detection processing. Both the classical pattern matching used in ground-based asteroid search and the more sensitive matched filtering and synthetic tracking techniques, can be adapted to account for highly complex parallax motion. A new shift vector generation methodology is discussed along with its impacts on commonly used detection algorithms, processing load, and responsiveness to asteroid track reporting. The matched filter, template generator, and pattern matcher source code for the software described herein are available via GitHub.
Faster PET reconstruction with a stochastic primal-dual hybrid gradient method
NASA Astrophysics Data System (ADS)
Ehrhardt, Matthias J.; Markiewicz, Pawel; Chambolle, Antonin; Richtárik, Peter; Schott, Jonathan; Schönlieb, Carola-Bibiane
2017-08-01
Image reconstruction in positron emission tomography (PET) is computationally challenging due to Poisson noise, constraints and potentially non-smooth priors, let alone the sheer size of the problem. An algorithm that copes well with the first three of these challenges is the primal-dual hybrid gradient algorithm (PDHG) studied by Chambolle and Pock in 2011. However, PDHG updates all variables in parallel and is therefore computationally demanding for the large problem sizes encountered with modern PET scanners, where the number of dual variables easily exceeds 100 million. In this work, we numerically study the use of SPDHG, a stochastic extension of PDHG that is still guaranteed to converge to a solution of the deterministic optimization problem, with rates similar to PDHG. Numerical results on a clinical data set show that by introducing randomization into PDHG, results similar to the deterministic algorithm can be achieved using only around 10% of the operator evaluations, making significant progress towards the feasibility of sophisticated mathematical models in a clinical setting.
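For orientation, the deterministic PDHG iteration alternates a dual proximal step, a primal proximal step, and an extrapolation; SPDHG replaces the full dual update with an update of a random subset of dual variables. A toy PDHG sketch for min_x 0.5||Kx - b||^2 subject to x >= 0 (the problem, step sizes, and names are illustrative; this is not the paper's PET model):

```python
def pdhg_nonneg_lsq(K, b, sigma=0.5, tau=0.5, iters=500):
    """Chambolle-Pock PDHG for min_x 0.5*||K x - b||^2 + indicator(x >= 0)."""
    m, n = len(K), len(K[0])
    x, xbar, y = [0.0] * n, [0.0] * n, [0.0] * m
    for _ in range(iters):
        # dual prox of F(y) = 0.5*||y - b||^2: y <- (y + sigma*(K xbar - b)) / (1 + sigma)
        for i in range(m):
            Kx = sum(K[i][j] * xbar[j] for j in range(n))
            y[i] = (y[i] + sigma * (Kx - b[i])) / (1.0 + sigma)
        # primal prox: gradient-like step, then projection onto x >= 0
        x_new = [max(0.0, x[j] - tau * sum(K[i][j] * y[i] for i in range(m)))
                 for j in range(n)]
        # extrapolation step
        xbar = [2.0 * xj - xo for xj, xo in zip(x_new, x)]
        x = x_new
    return x
```

SPDHG's modification would be to update only a random block of `y` per iteration (with re-weighted step sizes), which is where the ~10x saving in operator evaluations comes from.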
Single snapshot DOA estimation
NASA Astrophysics Data System (ADS)
Häcker, P.; Yang, B.
2010-10-01
In array signal processing, direction of arrival (DOA) estimation has been studied for decades. Many algorithms have been proposed and their performance has been studied thoroughly. Yet, most of these works focus on the asymptotic case of a large number of snapshots. In automotive radar applications like driver assistance systems, however, only a small number of snapshots of the radar sensor array or, in the worst case, a single snapshot is available for DOA estimation. In this paper, we investigate and compare different DOA estimators with respect to their single snapshot performance. The main focus is on the estimation accuracy and the angular resolution in multi-target scenarios, including difficult situations like correlated targets and large target power differences. We show that some algorithms lose their ability to resolve targets or do not work properly at all, while other sophisticated algorithms do not show the superior performance one might expect. It turns out that the deterministic maximum likelihood estimator is a good choice under these hard conditions.
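With one snapshot y and a single source, the deterministic ML estimate reduces to maximizing the beamformer output |a(theta)^H y|^2 / ||a(theta)||^2 over a grid of angles. A sketch for a half-wavelength uniform linear array (the array size and grid are illustrative assumptions):

```python
import cmath, math

def steering(theta_deg, m_sensors):
    """Steering vector of a half-wavelength-spaced uniform linear array."""
    s = math.sin(math.radians(theta_deg))
    return [cmath.exp(-1j * math.pi * m * s) for m in range(m_sensors)]

def dml_single_snapshot(y, m_sensors, grid):
    """Grid search for the single-source deterministic ML DOA estimate."""
    best, best_theta = -1.0, None
    for theta in grid:
        a = steering(theta, m_sensors)
        # projection of the snapshot onto the steering direction
        corr = abs(sum(ai.conjugate() * yi for ai, yi in zip(a, y))) ** 2 / m_sensors
        if corr > best:
            best, best_theta = corr, theta
    return best_theta
```

Multi-target DML would instead search jointly over several angles, which is where the resolution differences discussed in the paper appear.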
Establishing a Department of Defense Program Management Body of Knowledge
1991-09-01
systems included, "...thousands of jet fighters, bombers and transport aircraft; one hundred new combat and support vessels; and thousands of tanks and...cannon-carrying troop transports and strategic and tactical missiles" (12:9). Such systems were designed to achieve goals and performance levels never...a 20-week Program Management Course at DSMC before taking command of a major program. A Major Defense Acquisition (Category I) Program in the
2009-03-27
ones like the Lennard-Jones potential with established parameters for each gas (e.g. N2 and O2), and for inelastic collisions the DSMC method employs...solution of the collision integral. The Lennard-Jones potential with two free parameters is used to obtain the elastic cross-section of the gas molecules...and the so-called "combinatory relations" are used to obtain parameters of the Lennard-Jones potential for an interaction of molecule A with molecule B
1995-02-01
Collie J. Johnson; Art Director Greg Caruth; Typography and Design Paula Croisetiere; Jeanne Elmore, Program Manager (ISSN 0199-...)...and is especially helpful in...small-purchase categories..."The message here is to all of us - from program directors, to pro..."...Facilitation Center...barriers in meetings due to emotions, rank and personality; the facility uses GROUPWARE to enable parallel processing, as all partici
Molecular gas dynamics applied to low-thrust propulsion
NASA Astrophysics Data System (ADS)
Zelesnik, Donna; Penko, Paul F.; Boyd, Iain D.
1993-11-01
The Direct Simulation Monte Carlo method is currently being applied to study flowfields of small thrusters, including both the internal nozzle and the external plume flow. The DSMC method is employed because of its inherent ability to capture nonequilibrium effects and proper boundary physics in low-density flow that are not readily obtained by continuum methods. Accurate prediction of both the internal and external nozzle flow is important in determining plume expansion which, in turn, bears directly on impingement and contamination effects.
2008-07-02
In order to cover a range of molecular species, argon, nitrogen, and methane were used as test gases. The polarizability-to-mass ratio of these gases...Japan, 21-25 July 2008. The Direct Simulation Monte Carlo (DSMC) method was used to investigate the interaction between argon...reducing the maximum temperature. The optimal intervening time was found to be 0.7, 1.0 and 0.25 ns for argon, nitrogen, and methane at one atmosphere
Molecular gas dynamics applied to low-thrust propulsion
NASA Technical Reports Server (NTRS)
Zelesnik, Donna; Penko, Paul F.; Boyd, Iain D.
1993-01-01
The Direct Simulation Monte Carlo method is currently being applied to study flowfields of small thrusters, including both the internal nozzle and the external plume flow. The DSMC method is employed because of its inherent ability to capture nonequilibrium effects and proper boundary physics in low-density flow that are not readily obtained by continuum methods. Accurate prediction of both the internal and external nozzle flow is important in determining plume expansion which, in turn, bears directly on impingement and contamination effects.
1990-09-01
decrease in average consumer prices, to think of Europe 1992 as a starting date or a point of departure for what some have called the largest overall decrease in consumer prices. The European Community's four executive institutions -- Commission, Parliament, Council of Ministers and Court...In 1985, the...of the draft, but also for the extra effort he put forth to ensure that Chapter Two's discussion on parallel...may want to skim Chapter One and go to
The Role and Nature of Anti-Tamper Techniques in U.S. Defense Acquisition
1999-01-01
sales to an ally, accidental loss, or capture during a conflict by an enemy. Because U.S. military hardware and software have a high technical content...that provides a qualitative edge, protection of this technological superiority is a high priority. Program managers can mitigate such risks with a...dealing with technical and military topics. He is a graduate of DSMC's APMC 97-3 and the USAF Test Pilot School. He has an M.S. degree in aerospace
Modeling shock waves in an ideal gas: combining the Burnett approximation and Holian's conjecture.
He, Yi-Guang; Tang, Xiu-Zhang; Pu, Yi-Kang
2008-07-01
We model a shock wave in an ideal gas by combining the Burnett approximation and Holian's conjecture. We use the temperature in the direction of shock propagation rather than the average temperature in the Burnett transport coefficients. The shock wave profiles and shock thickness are compared with other theories. The results are found to agree better with the nonequilibrium molecular dynamics (NEMD) and direct simulation Monte Carlo (DSMC) data than the Burnett equations and the modified Navier-Stokes theory.
Zhang, Yiyan; Xin, Yi; Li, Qin; Ma, Jianshe; Li, Shuai; Lv, Xiaodan; Lv, Weiqi
2017-11-02
Various kinds of data mining algorithms continue to emerge with the development of related disciplines, and these algorithms differ in applicable scope and performance. Hence, finding a suitable algorithm for a dataset is becoming an important concern for biomedical researchers seeking to solve practical problems promptly. In this paper, seven sophisticated, actively used algorithms, namely, C4.5, support vector machine, AdaBoost, k-nearest neighbor, naïve Bayes, random forest, and logistic regression, were selected as the research objects. The seven algorithms were applied to the 12 top-click UCI public datasets with the task of classification, and their performances were compared through induction and analysis. The sample size, number of attributes, number of missing values, sample size of each class, correlation coefficients between variables, class entropy of the task variable, and the ratio of the sample size of the largest class to that of the smallest class were calculated to characterize the 12 research datasets. The two ensemble algorithms reach high classification accuracy on most datasets. Moreover, random forest performs better than AdaBoost on the unbalanced dataset of the multi-class task. Simple algorithms, such as naïve Bayes and the logistic regression model, are suitable for a small dataset with high correlation between the task and other non-task attribute variables. The k-nearest neighbor and C4.5 decision tree algorithms perform well on binary- and multi-class task datasets. Support vector machine is more adept on the balanced small dataset of the binary-class task. No algorithm maintains the best performance on all datasets. The applicability of the seven data mining algorithms to datasets with different characteristics was summarized to provide a reference for biomedical researchers or beginners in different fields.
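Of the seven algorithms compared, k-nearest neighbor is the simplest to state: classify a query point by majority vote among its k closest training points. A self-contained sketch (the toy data and choice of k are illustrative):

```python
import math
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points
    (Euclidean distance)."""
    nearest = sorted(range(len(train_x)),
                     key=lambda i: math.dist(train_x[i], query))[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]
```

This lazy, instance-based behavior is why k-NN works well on the balanced datasets in the comparison but scales poorly to very large training sets.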
Hu, Guohong; Wang, Hui-Yun; Greenawalt, Danielle M.; Azaro, Marco A.; Luo, Minjie; Tereshchenko, Irina V.; Cui, Xiangfeng; Yang, Qifeng; Gao, Richeng; Shen, Li; Li, Honghua
2006-01-01
Microarray-based analysis of single nucleotide polymorphisms (SNPs) has many applications in large-scale genetic studies. To minimize the influence of experimental variation, microarray data usually need to be processed in several respects, including background subtraction, normalization and low-signal filtering, before genotype determination. Although many sophisticated algorithms exist for these purposes, biases are still present. In the present paper, new algorithms for SNP microarray data analysis, and the software AccuTyping developed on the basis of these algorithms, are described. The algorithms take advantage of the large number of SNPs included in each assay and the fact that the top and bottom 20% of SNPs can be safely treated as homozygous after sorting by their signal-intensity ratios. These SNPs are then used as controls for color-channel normalization and background subtraction. Genotype calls are made based on the logarithms of the signal-intensity ratios using two cutoff values, which were determined after training the program with a dataset of ∼160 000 genotypes validated by non-microarray methods. AccuTyping was used to determine >300 000 genotypes of DNA and sperm samples. The accuracy was shown to be >99%. AccuTyping can be downloaded from . PMID:16982644
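The described pipeline (sort SNPs by intensity ratio, treat the top and bottom 20% as homozygous controls for normalization, then call genotypes with two log-ratio cutoffs) can be sketched as follows. The cutoff values, genotype labels, and the simple mean-offset normalization are illustrative assumptions, not AccuTyping's actual code:

```python
import math

def call_genotypes(intensities, low_cut=-0.5, high_cut=0.5):
    """intensities: list of (channel1, channel2) signal pairs, one per SNP."""
    ratios = [math.log2(a / b) for a, b in intensities]
    order = sorted(range(len(ratios)), key=lambda i: ratios[i])
    n20 = max(1, len(ratios) // 5)
    # top and bottom 20% after sorting are treated as homozygous controls;
    # their mean log-ratio estimates the color-channel imbalance
    controls = order[:n20] + order[-n20:]
    offset = sum(ratios[i] for i in controls) / len(controls)
    calls = []
    for r in ratios:
        adj = r - offset  # channel-normalized log ratio
        calls.append("AA" if adj > high_cut else ("BB" if adj < low_cut else "AB"))
    return calls
```

The key idea carried over from the abstract is that the extreme-ratio SNPs serve as internal controls, so no external calibration of the two color channels is needed.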
Gatos, Ilias; Tsantis, Stavros; Spiliopoulos, Stavros; Skouroliakou, Aikaterini; Theotokas, Ioannis; Zoumpoulis, Pavlos; Hazle, John D; Kagadis, George C
2015-07-01
Detect and classify focal liver lesions (FLLs) from contrast-enhanced ultrasound (CEUS) imaging by means of an automated quantification algorithm. The proposed algorithm employs a sophisticated segmentation method to detect and contour focal lesions from 52 CEUS video sequences (30 benign and 22 malignant). Lesion detection involves wavelet transform zero crossings utilization as an initialization step to the Markov random field model toward the lesion contour extraction. After FLL detection across frames, time intensity curve (TIC) is computed which provides the contrast agents' behavior at all vascular phases with respect to adjacent parenchyma for each patient. From each TIC, eight features were automatically calculated and employed into the support vector machines (SVMs) classification algorithm in the design of the image analysis model. With regard to FLLs detection accuracy, all lesions detected had an average overlap value of 0.89 ± 0.16 with manual segmentations for all CEUS frame-subsets included in the study. Highest classification accuracy from the SVM model was 90.3%, misdiagnosing three benign and two malignant FLLs with sensitivity and specificity values of 93.1% and 86.9%, respectively. The proposed quantification system that employs FLLs detection and classification algorithms may be of value to physicians as a second opinion tool for avoiding unnecessary invasive procedures.
DSMC simulation of two-phase plume flow with UV radiation
NASA Astrophysics Data System (ADS)
Li, Jie; Liu, Ying; Wang, Ning; Jin, Ling
2014-12-01
A rarefied gas-particle two-phase plume, in which the particle phase is liquid or solid, flows from the solid-propellant rocket of a hypersonic vehicle flying at high altitude. The aluminum oxide particulates not only affect the rarefied gas flow properties but also strongly influence the plume radiation signature, so radiation prediction for the rarefied gas-particle two-phase plume flow is very important for space-based detection of hypersonic vehicles. Accordingly, this project aims to study the rarefied gas-particle two-phase flow and its ultraviolet (UV) radiation characteristics. Considering two-way interphase coupling of momentum and energy, the direct simulation Monte Carlo (DSMC) method is developed for particle phase change and particle flow, including particulate collision, coalescence and separation, and a Monte Carlo ray-trace model is implemented for the particulate UV radiation. A program for the numerical simulation of the gas-particle two-phase flow and radiation, in which the gas flow nonequilibrium is strong, is implemented as well. Ultraviolet radiation characteristics of the particle phase are studied based on the flow-field calculation coupled with the radiation calculation; the radiation model for different particle sizes is analyzed, focusing on the effects of particle emission, absorption and scattering, as well as the searchlight emission of the nozzle. A new approach may thus be proposed to describe the rarefied gas-particle two-phase plume flow and its radiation transfer characteristics.
DSMC simulations of leading edge flat-plate boundary layer flows at high Mach number
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev, Dr.
2017-04-01
The flow over a 2D leading-edge flat plate is studied at Mach number Ma = (Uinf / √{kBTinf / m}) in the range
DSMC simulations of leading edge flat-plate boundary layer flows at high Mach number
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev, Dr.
2016-11-01
The flow over a 2D leading-edge flat plate is studied at Mach number Ma = (Uinf /√{kBTinf / m }) in the range
DSMC simulations of leading edge flat-plate boundary layer flows at high Mach number
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev, Dr.
2017-01-01
The flow over a 2D leading-edge flat plate is studied at Mach number Ma = (Uinf /√{kBTinf / m }) in the range
DSMC simulations of leading edge flat-plate boundary layer flows at high Mach number
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev
2016-10-01
The flow over a 2D leading-edge flat plate is studied at Mach number Ma = (Uinf / √{kBTinf / m}) in the range
DSMC simulations of leading edge flat-plate boundary layer flows at high Mach number
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev, Dr.
The flow over a 2D leading-edge flat plate is studied at Mach number Ma = (Uinf / √{kBTinf / m}) in the range
NASA Astrophysics Data System (ADS)
Dickson, S.; Gausa, M. A.; Robertson, S. H.; Sternovsky, Z.
2012-12-01
We demonstrate that a channel electron multiplier (CEM) can be operated on a sounding rocket in the pulse-counting mode from 120 km to 75 km altitude without the cryogenic evacuation used in the past. Evacuation of the CEM is provided only by aerodynamic flow around the rocket. This demonstration is motivated by the need for additional flights of mass spectrometers to clarify the fate of metallic compounds and ions ablated from micrometeorites and their possible role in the nucleation of noctilucent clouds. The CEMs were flown as guest instruments on the two sounding rockets of the CHAMPS (CHarge And mass of Meteoritic smoke ParticleS) rocket campaign which were launched into the mesosphere in October 2011 from Andøya Rocket Range, Norway. Modeling of the aerodynamic flow around the payload with Direct Simulation Monte-Carlo (DSMC) code showed that the pressure is reduced below ambient in the void beneath an aft-facing surface. An enclosure containing the CEM was placed above an aft-facing deck and a valve was opened on the downleg to expose the CEM to the aerodynamically evacuated region below. The CEM operated successfully from apogee down to ~75 km. A Pirani gauge confirmed pressures reduced to as low as 20% of ambient with the extent of reduction dependent upon altitude and velocity. Additional DSMC simulations indicate that there are alternate payload designs with improved aerodynamic pumping for forward mounted instruments such as mass spectrometers.
A paradigm for modeling and computation of gas dynamics
NASA Astrophysics Data System (ADS)
Xu, Kun; Liu, Chang
2017-02-01
In the continuum flow regime, the Navier-Stokes (NS) equations are usually used for the description of gas dynamics, while the Boltzmann equation is applied to rarefied flow. These two equations are based on distinguishable modeling scales for flow physics. Fortunately, due to the scale separation, i.e., the hydrodynamic and kinetic ones, both the Navier-Stokes equations and the Boltzmann equation are applicable in their respective domains. However, real science and engineering applications may not have such a distinctive scale separation. For example, around a hypersonic flying vehicle, the flow physics in different regions may correspond to different regimes, where the local Knudsen number can vary by several orders of magnitude. With such a variation of flow physics, a governing equation that ranges continuously from the kinetic Boltzmann modeling to the hydrodynamic Navier-Stokes dynamics should in principle be used for an efficient description. However, due to the difficulty of directly modeling flow physics at scales between the kinetic and hydrodynamic ones, there is basically no reliable theory or valid governing equation covering the whole transition regime, except for methods that always resolve flow physics down to the mean-free-path scale, such as direct Boltzmann solvers and the Direct Simulation Monte Carlo (DSMC) method. In fact, the exact scale at which the NS equations remain valid is an unresolved problem, especially at small Reynolds numbers. Computational fluid dynamics (CFD) is usually based on the numerical solution of partial differential equations (PDEs), and it targets recovery of the exact solution of the PDEs as mesh size and time step converge to zero. This methodology can hardly be applied to solve the multiple-scale problem efficiently, because there is no complete PDE for flow physics across a continuous variation of scales.
For the study of non-equilibrium flow, direct modeling methods such as DSMC, particle-in-cell, and smoothed particle hydrodynamics play a dominant role by incorporating the flow physics directly into the algorithm construction. It is fully legitimate to combine modeling and computation without going through the process of constructing PDEs. In other words, CFD research is not only about obtaining numerical solutions of governing equations but about modeling flow dynamics as well. This methodology leads to the unified gas-kinetic scheme (UGKS) for flow simulation in all flow regimes. Based on UGKS, the boundary of validity of the Navier-Stokes equations can be quantitatively evaluated. The combination of modeling and computation provides a paradigm for the description of multiscale transport processes.
Interband coding extension of the new lossless JPEG standard
NASA Astrophysics Data System (ADS)
Memon, Nasir D.; Wu, Xiaolin; Sippy, V.; Miller, G.
1997-01-01
Due to the perceived inadequacy of current standards for lossless image compression, the JPEG committee of the International Standards Organization (ISO) has been developing a new standard. A baseline algorithm, called JPEG-LS, has already been completed and is awaiting approval by national bodies. The JPEG-LS baseline algorithm, despite being simple, is surprisingly efficient, and provides compression performance that is within a few percent of the best and more sophisticated techniques reported in the literature. Extensive experimentation by the authors indicates that an overall improvement of more than 10 percent in compression performance will be difficult to obtain even at the cost of great complexity, at least not with traditional approaches to lossless image compression. However, if we allow inter-band decorrelation and modeling in the baseline algorithm, nearly 30 percent improvement in compression gains becomes possible for specific images in the test set, at a modest computational cost. In this paper we propose and investigate a few techniques for exploiting inter-band correlations in multi-band images. These techniques have been designed within the framework of the baseline algorithm, and require minimal changes to its basic architecture, retaining its essential simplicity.
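For context, the baseline JPEG-LS predicts each pixel from its causal neighbors with the median edge detector (MED); an inter-band extension would instead draw context from a co-located pixel in a reference band. The MED predictor itself is compact:

```python
def med_predict(a, b, c):
    """JPEG-LS median edge detector predictor.

    a = left neighbor, b = above neighbor, c = above-left neighbor.
    """
    if c >= max(a, b):
        return min(a, b)   # likely edge: pick the smaller neighbor
    if c <= min(a, b):
        return max(a, b)   # likely edge: pick the larger neighbor
    return a + b - c       # smooth region: planar prediction
```

The encoder then codes the prediction residual with context modeling and Golomb codes; the inter-band techniques discussed here replace or augment this intra-band predictor.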
A structure adapted multipole method for electrostatic interactions in protein dynamics
NASA Astrophysics Data System (ADS)
Niedermeier, Christoph; Tavan, Paul
1994-07-01
We present an algorithm for rapid approximate evaluation of electrostatic interactions in molecular dynamics simulations of proteins. Traditional algorithms require computational work of order O(N^2) for a system of N particles. Truncation methods that try to avoid this effort entail intolerably large errors in forces, energies and other observables. Hierarchical multipole expansion algorithms, which can account for the electrostatics to numerical accuracy, scale with O(N log N) or even with O(N) if augmented with a sophisticated scheme for summing up forces. To further reduce the computational effort we propose an algorithm that also uses a hierarchical multipole scheme but considers only the first two multipole moments (i.e., charges and dipoles). Our strategy is based on the consideration that numerical accuracy may not be necessary to reproduce protein dynamics with sufficient correctness. As opposed to previous methods, our scheme for hierarchical decomposition is adapted to structural and dynamical features of the particular protein considered, rather than chosen rigidly as a cubic grid. Compared to truncation methods, we reduce the errors in the computed electrostatic forces by a factor of 10 with only marginal additional effort.
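Keeping only the first two moments means a distant cluster is summarized by its total charge and its dipole moment about the cluster center. A sketch of that far-field evaluation (unit system and names are illustrative; the actual algorithm applies this hierarchically per cluster):

```python
import math

def monopole_dipole_potential(charges, positions, r):
    """Approximate sum_i q_i / |r - r_i| by Q/|d| + (p . d)/|d|^3,
    where Q and p are the total charge and dipole moment about the
    cluster centroid, and d = r - centroid."""
    n = len(charges)
    c = [sum(pt[k] for pt in positions) / n for k in range(3)]
    Q = sum(charges)
    p = [sum(q * (pos[k] - c[k]) for q, pos in zip(charges, positions))
         for k in range(3)]
    d = [r[k] - c[k] for k in range(3)]
    dist = math.sqrt(sum(x * x for x in d))
    return Q / dist + sum(p[k] * d[k] for k in range(3)) / dist ** 3
```

For well-separated clusters the truncation error falls off with the next (quadrupole) term, which is exactly the accuracy/effort trade-off the paper exploits.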
Semantics of directly manipulating spatializations.
Hu, Xinran; Bradel, Lauren; Maiti, Dipayan; House, Leanna; North, Chris; Leman, Scotland
2013-12-01
When high-dimensional data is visualized in a 2D plane by using parametric projection algorithms, users may wish to manipulate the layout of the data points to better reflect their domain knowledge or to explore alternative structures. However, few users are well-versed in the algorithms behind the visualizations, making parameter tweaking more of a guessing game than a series of decisive interactions. Translating user interactions into algorithmic input is a key component of Visual to Parametric Interaction (V2PI) [13]. Instead of adjusting parameters, users directly move data points on the screen, which then updates the underlying statistical model. However, we have found that some data points that are not moved by the user are just as important in the interactions as the data points that are moved. Users frequently move some data points with respect to some other 'unmoved' data points that they consider as spatially contextual. However, in current V2PI interactions, these points are not explicitly identified when directly manipulating the moved points. We design a richer set of interactions that makes this context more explicit, and a new algorithm and sophisticated weighting scheme that incorporates the importance of these unmoved data points into V2PI.
Using adaptive grid in modeling rocket nozzle flow
NASA Technical Reports Server (NTRS)
Chow, Alan S.; Jin, Kang-Ren
1992-01-01
The mechanical behavior of a rocket motor internal flow field is governed by a system of nonlinear partial differential equations that cannot be solved analytically. However, this system of equations, the Navier-Stokes equations, can be solved numerically. The accuracy and convergence of the solution depend largely on how precisely the sharp gradients in the domain of interest can be resolved. With advances in computer technology, more sophisticated algorithms are available to improve the accuracy and convergence of the solutions. Adaptive grid generation is one scheme that can be incorporated into the algorithm to enhance the capability of numerical modeling. It is equivalent to putting intelligence into the algorithm to optimize the use of computer memory. With this scheme, the finite difference domain of the flow field, called the grid, need be neither very fine nor strategically placed at the locations of sharp gradients. The grid is self-adapting as the solution evolves. This scheme significantly improves the methodology of solving flow problems in rocket nozzles by taking the refinement part of grid generation out of the hands of computational fluid dynamics (CFD) specialists and placing it in the computer algorithm itself.
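In one dimension the idea reduces to: wherever the solution jumps sharply between neighboring nodes, insert a midpoint. A toy refinement pass (the threshold, test function, and single-pass strategy are illustrative, not the paper's scheme):

```python
def refine_grid(xs, f, tol):
    """Insert a midpoint in every interval where |f(b) - f(a)| exceeds tol."""
    out = [xs[0]]
    for a, b in zip(xs, xs[1:]):
        if abs(f(b) - f(a)) > tol:
            out.append((a + b) / 2.0)  # refine where the gradient is sharp
        out.append(b)
    return out
```

In a real adaptive solver this pass is repeated as the solution evolves, so the grid follows moving shocks and shear layers automatically.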
NASA Technical Reports Server (NTRS)
Anderson, W. W.; Will, R. W.; Grantham, C.
1972-01-01
A concept for automating the control of air traffic in the terminal area in which the primary man-machine interface is the cockpit is described. The ground and airborne inputs required for implementing this concept are discussed. Digital data link requirements of 10,000 bits per second are explained. A particular implementation of this concept including a sequencing and separation algorithm which generates flight paths and implements a natural order landing sequence is presented. Onboard computer/display avionics utilizing a traffic situation display is described. A preliminary simulation of this concept has been developed which includes a simple, efficient sequencing algorithm and a complete aircraft dynamics model. This simulated jet transport was flown through automated terminal-area traffic situations by pilots using relatively sophisticated displays, and pilot performance and observations are discussed.
Implementation of Multipattern String Matching Accelerated with GPU for Intrusion Detection System
NASA Astrophysics Data System (ADS)
Nehemia, Rangga; Lim, Charles; Galinium, Maulahikmah; Rinaldi Widianto, Ahmad
2017-04-01
As Internet-related security threats continue to increase in volume and sophistication, existing intrusion detection systems (IDS) are challenged to keep pace with current Internet development. A multi-pattern string matching algorithm accelerated with a graphics processing unit (GPU) is utilized to improve the packet scanning performance of the IDS. This paper implements a multi-pattern string matching algorithm, Parallel Failureless Aho-Corasick, accelerated with a GPU to improve IDS performance. The OpenCL library is used to allow the IDS to support various GPUs, including the popular NVIDIA and AMD GPUs used in our research. The experimental results show that multi-pattern string matching on a GPU-accelerated platform provides a speedup of up to 141% in terms of throughput compared to the previous research.
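The Aho-Corasick automaton at the heart of this approach builds a trie with failure links so every input character is consumed once regardless of the number of patterns (the "failureless" GPU variant instead launches one link-free traversal per input position). A compact CPU sketch:

```python
from collections import deque

def build_ac(patterns):
    """Build the goto/fail/output tables of the Aho-Corasick automaton."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    queue = deque(goto[0].values())   # depth-1 states fail to the root
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]    # inherit matches from the fail state
    return goto, fail, out

def ac_search(text, goto, fail, out):
    """Return (start_index, pattern) for every match in text."""
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            hits.append((i - len(pat) + 1, pat))
    return hits
```

On a GPU, the failureless variant drops the `fail` table entirely and assigns one thread per starting offset, trading redundant work for massive parallelism.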
Type-2 fuzzy logic control of a 2-DOF helicopter (TRMS system)
NASA Astrophysics Data System (ADS)
Zeghlache, Samir; Kara, Kamel; Saigaa, Djamel
2014-09-01
The helicopter dynamics include nonlinearities and parametric uncertainties and are subject to unknown external disturbances. Such complicated dynamics call for sophisticated control algorithms that can deal with these difficulties. In this paper, a type-2 fuzzy logic PID controller is proposed for the TRMS (twin-rotor MIMO system) control problem. Using triangular membership functions and based on a human operator's experience, two controllers are designed to control the position of the yaw and pitch angles of the TRMS. Simulation results are given to illustrate the effectiveness of the proposed control scheme.
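As a baseline for comparison, the conventional PID law that the fuzzy controller generalizes can be written in a few lines; here it stabilizes a toy first-order plant (the gains, time step, and plant are illustrative assumptions, not the TRMS model):

```python
def pid_step(state, error, dt, kp, ki, kd):
    """One discrete PID update; state carries (integral, previous error)."""
    integ, prev = state
    integ += error * dt
    deriv = (error - prev) / dt
    u = kp * error + ki * integ + kd * deriv
    return (integ, error), u

def simulate(setpoint=1.0, dt=0.05, steps=400):
    """Drive the toy plant x' = -x + u to the setpoint with PID control."""
    x, state = 0.0, (0.0, 0.0)
    for _ in range(steps):
        state, u = pid_step(state, setpoint - x, dt, kp=2.0, ki=1.0, kd=0.1)
        x += dt * (-x + u)
    return x
```

A type-2 fuzzy PID effectively schedules these three gains through interval membership functions, which is what gives it robustness to the uncertainties the abstract mentions.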
NASA Technical Reports Server (NTRS)
Vlahopoulos, Nickolas; Lyle, Karen H.; Burley, Casey L.
1998-01-01
An algorithm for generating appropriate velocity boundary conditions for an acoustic boundary element analysis from the kinematics of an operating propeller is presented. It constitutes the initial phase of integrating sophisticated rotorcraft models into a conventional boundary element analysis. Currently, the pressure field is computed by a linear approximation. An initial validation of the developed process was performed by comparing numerical results to test data for the external acoustic pressure on the surface of a tilt-rotor aircraft for one flight condition.
Simulating the Rayleigh-Taylor instability with the Ising model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ball, Justin R.; Elliott, James B.
2011-08-26
The Ising model, implemented with the Metropolis algorithm and Kawasaki dynamics, makes a system with its own physics, distinct from the real world. These physics are sophisticated enough to model behavior similar to the Rayleigh-Taylor instability, and by better understanding them, we can learn how to modify the system to better reflect reality. For example, we could add a v_x and a v_y to each spin and modify the exchange rules to incorporate them, possibly using two-body scattering laws to construct a more realistic system.
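A minimal Metropolis kernel for the standard (unmodified) Ising system shows the structure the authors start from; the proposed extension would attach velocity components to each spin and alter these update rules. The lattice size, temperature, and seed below are arbitrary:

```python
import math, random

def energy(spins):
    """Nearest-neighbor Ising energy with periodic boundaries, J = 1
    (each bond counted once via the right and down neighbors)."""
    n = len(spins)
    return -sum(spins[i][j] * (spins[(i + 1) % n][j] + spins[i][(j + 1) % n])
                for i in range(n) for j in range(n))

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep: n*n single-spin-flip attempts."""
    n = len(spins)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
              + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
        dE = 2 * spins[i][j] * nb          # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] *= -1
```

Kawasaki dynamics, used in the abstract, would instead exchange neighboring unlike spins so that the total magnetization (a stand-in for conserved density) is preserved.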
Advanced fingerprint verification software
NASA Astrophysics Data System (ADS)
Baradarani, A.; Taylor, J. R. B.; Severin, F.; Maev, R. Gr.
2016-05-01
We have developed a fingerprint software package that can be used in a wide range of applications, from law enforcement to public and private security systems, and to personal devices such as laptops, vehicles, and door locks. The software and processing units are a unique implementation of new and sophisticated algorithms that compete with the current best systems in the world. Development of the software package has been in line with the third generation of our ultrasonic fingerprinting machine. Solid and robust performance is achieved in the presence of misplaced and low-quality fingerprints.
Numerical realization of the variational method for generating self-trapped beams.
Duque, Erick I; Lopez-Aguayo, Servando; Malomed, Boris A
2018-03-19
We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation
Ruotsalainen, Laura; Kirkko-Jaakkola, Martti; Rantanen, Jesperi; Mäkelä, Maija
2018-01-01
The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy. Therefore, sophisticated error modelling and implementation of integration algorithms are key for providing a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither assumption is correct for tactical applications, especially for dismounted soldiers or rescue personnel. Our approach is to use particle filtering (PF), a sophisticated option for integrating measurements arising from pedestrian motion with non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and vision-based heading and translation measurements, so that the correct error probability density functions (pdfs) are included in the particle filter implementation. Model fitting is then used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, a particle filtering method is developed to fuse all this information, with the weight of each particle computed from the specific models derived.
The performance of the developed method is tested via two experiments, one at a university’s premises and another in realistic tactical conditions. The results show significant improvement in horizontal localization when the measurement errors are carefully modelled and their inclusion in the particle filtering implementation is correctly realized. PMID:29443918
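The core PF step the abstract describes, weighting particles by a measurement's likelihood under a non-Gaussian error pdf, can be sketched as follows. This is a minimal illustration: the Laplace error model and all numerical values are invented here, not the paper's fitted pdfs.

```python
import numpy as np

rng = np.random.default_rng(0)

def update_weights(particles, weights, measurement, error_pdf):
    # Re-weight each particle by the likelihood of the measurement
    # under the (possibly non-Gaussian) measurement-error pdf,
    # then renormalize so the weights sum to one.
    w = weights * error_pdf(measurement - particles)
    return w / w.sum()

def laplace_pdf(e, b=0.1):
    # Hypothetical heavy-tailed error model for a vision-based
    # heading measurement; real pdfs would come from model fitting.
    return np.exp(-np.abs(e) / b) / (2.0 * b)

particles = rng.normal(0.0, 0.5, size=1000)    # heading hypotheses (rad)
weights = np.full(1000, 1.0 / 1000)            # uniform prior weights
weights = update_weights(particles, weights, 0.2, laplace_pdf)
estimate = float(np.sum(weights * particles))  # posterior-mean heading
```

Swapping `laplace_pdf` for any fitted density (e.g. a mixture) changes only the likelihood call, which is the flexibility the authors cite for preferring PF over Kalman filtering.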
2014-11-21
[DTIC report-form fragment] Report on Rarefied Gas Dynamics Research Status; final report, dates covered 19 May 2014 – 18 Oct 2014. The recoverable text describes a hybrid NS/DSMC investigation of the geometry in which gas expands all the way around the nozzle exit in the vacuum of space, and a briefing for the Air Force on the current status of research in rarefied gas dynamics and related fields, primarily via the 29th International Symposium on Rarefied Gas Dynamics.
Rarefaction and Non-equilibrium Effects in Hypersonic Flows about Leading Edges of Small Bluntness
NASA Astrophysics Data System (ADS)
Ivanov, Mikhail; Khotyanovsky, Dmitry; Kudryavtsev, Alexey; Shershnev, Anton; Bondar, Yevgeniy; Yonemura, Shigeru
2011-05-01
A hypersonic flow about a cylindrically blunted thick plate at a zero angle of attack is numerically studied with the kinetic (DSMC) and continuum (Navier-Stokes equations) approaches. The Navier-Stokes equations with velocity slip and temperature jump boundary conditions correctly predict the flow fields and surface parameters for values of the Knudsen number (based on the radius of leading edge curvature) smaller than 0.1. The results of computations demonstrate significant effects of the entropy layer on the boundary layer characteristics.
Nonequilibrium diffusive gas dynamics: Poiseuille microflow
NASA Astrophysics Data System (ADS)
Abramov, Rafail V.; Otto, Jasmine T.
2018-05-01
We test the recently developed hierarchy of diffusive moment closures for gas dynamics together with the near-wall viscosity scaling on the Poiseuille flow of argon and nitrogen in a one micrometer wide channel, and compare it against the corresponding Direct Simulation Monte Carlo computations. We find that the diffusive regularized Grad equations with viscosity scaling provide the most accurate approximation to the benchmark DSMC results. At the same time, the conventional Navier-Stokes equations without the near-wall viscosity scaling are found to be the least accurate among the tested closures.
The Program Manager’s Support System (PMSS). An Executive Overview and System Description,
1987-01-01
The PMSS tool will, when completed, support the program management process in all stages of program management. One module, developed as a template on LOTUS 1-2-3, is an application of the Constructive Cost Model (COCOMO) developed by B. Boehm. The DSMC SWCE module was developed for a specific program office but can be modified for use by others. It is a "template" system designed to operate on a Zenith Z-150 using Lotus 1-2-3.
1992-05-01
Program Manager: Journal of the Defense Systems Management College, Vol. XXI, No. 3 (May–June 1992), DSMC. Only the masthead is recoverable from this garbled scan.
DSMC modeling of flows with recombination reactions
NASA Astrophysics Data System (ADS)
Gimelshein, Sergey; Wysong, Ingrid
2017-06-01
An empirical microscopic recombination model is developed for the direct simulation Monte Carlo method that complements the extended weak vibrational bias model of dissociation. The model maintains the correct equilibrium reaction constant over a wide range of temperatures by using collision theory to enforce the number of recombination events. It also strictly follows the detailed balance requirement for an equilibrium gas. The model and its implementation are verified with oxygen and nitrogen heat bath relaxation and compared with available experimental data on atomic oxygen recombination in argon and molecular nitrogen.
Route towards cylindrical cloaking at visible frequencies using an optimization algorithm
NASA Astrophysics Data System (ADS)
Rottler, Andreas; Krüger, Benjamin; Heitmann, Detlef; Pfannkuche, Daniela; Mendach, Stefan
2012-12-01
We derive a model based on the Maxwell-Garnett effective-medium theory that describes a cylindrical cloaking shell composed of metal rods which are radially aligned in a dielectric host medium. We propose and demonstrate a minimization algorithm that calculates, for given material parameters, the optimal geometrical parameters of the cloaking shell such that its effective optical parameters best fit the required permittivity distribution for cylindrical cloaking. By means of sophisticated full-wave simulations we find that a cylindrical cloak with good performance using silver as the metal can be designed with our algorithm for wavelengths in the red part of the visible spectrum (623 nm < λ < 773 nm). We also present a full-wave simulation of such a cloak at an exemplary wavelength of λ = 729 nm (ℏω = 1.7 eV), which indicates that our model is useful for finding design rules for cloaks with good cloaking performance. Our calculations investigate a structure that is easy to fabricate using standard preparation techniques and therefore pave the way to a realization of guiding light around an object at visible frequencies, thus rendering it invisible.
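The Maxwell-Garnett mixing rule underlying the shell model can be sketched as below. The permittivity values are illustrative placeholders (a silver-like metal in a glass-like host), not the paper's fitted parameters.

```python
def maxwell_garnett(eps_incl, eps_host, f):
    # Maxwell-Garnett effective permittivity of inclusions with
    # permittivity eps_incl at volume fill fraction f in a host of
    # permittivity eps_host (standard spherical-inclusion form).
    num = eps_incl * (1 + 2 * f) + 2 * eps_host * (1 - f)
    den = eps_incl * (1 - f) + eps_host * (2 + f)
    return eps_host * num / den

# Illustrative values only: lossy metal inclusions in a dielectric host
eps_eff = maxwell_garnett(complex(-20.0, 1.0), 2.25, 0.1)
```

Limiting cases sanity-check the rule: f = 0 recovers the host permittivity and f = 1 recovers the inclusion permittivity.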
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amestoy, Patrick R.; Duff, Iain S.; L'Excellent, Jean-Yves
2001-10-10
We examine the mechanics of the send and receive mechanism of MPI and, in particular, how we can implement message passing in a robust way so that our performance is not significantly affected by changes to the MPI system. This leads us to use the Isend/Irecv protocol, which sometimes entails significant algorithmic changes. We discuss this within the context of two different algorithms for sparse Gaussian elimination that we have parallelized. One is a multifrontal solver called MUMPS, the other a supernodal solver called SuperLU. Both algorithms are difficult to parallelize on distributed memory machines. Our initial strategies were based on simple MPI point-to-point communication primitives. With such approaches, the parallel performance of both codes is very sensitive to the MPI implementation, in particular the way MPI internal buffers are used. We then modified our codes to use more sophisticated nonblocking versions of MPI communication. This significantly improved the performance robustness (independent of the MPI buffering mechanism) and scalability, but at the cost of increased code complexity.
Can we predict failure in couple therapy early enough to enhance outcome?
Pepping, Christopher A; Halford, W Kim; Doss, Brian D
2015-02-01
Feedback to therapists based on systematic monitoring of individual therapy progress reliably enhances therapy outcome. An implicit assumption of therapy progress feedback is that clients unlikely to benefit from therapy can be detected early enough in the course of therapy for corrective action to be taken. To explore the possibility of using feedback of therapy progress to enhance couple therapy outcome, the current study tested whether weekly therapy progress could detect off-track clients early in couple therapy. In an effectiveness trial of couple therapy, 136 couples were monitored weekly on relationship satisfaction and an expert derived algorithm was used to attempt to predict eventual therapy outcome. As expected, the algorithm detected a significant proportion of couples who did not benefit from couple therapy at Session 3, but prediction was substantially improved at Session 4 so that eventual outcome was accurately predicted for 70% of couples, with little improvement of prediction thereafter. More sophisticated algorithms might enhance prediction accuracy, and a trial of the effects of therapy progress feedback on couple therapy outcome is needed. Copyright © 2015 Elsevier Ltd. All rights reserved.
Peak picking NMR spectral data using non-negative matrix factorization.
Tikole, Suhas; Jaravine, Victor; Rogov, Vladimir; Dötsch, Volker; Güntert, Peter
2014-02-11
Simple peak-picking algorithms, such as those based on lineshape fitting, perform well when peaks are completely resolved in multidimensional NMR spectra, but often produce wrong intensities and frequencies for overlapping peak clusters. For example, NOESY-type spectra have considerable overlaps leading to significant peak-picking intensity errors, which can result in erroneous structural restraints. Precise frequencies are critical for unambiguous resonance assignments. To alleviate this problem, a more sophisticated peak decomposition algorithm, based on non-negative matrix factorization (NMF), was developed. We produce peak shapes from Fourier-transformed NMR spectra. Apart from its main goal of deriving components from spectra and producing peak lists automatically, the NMF approach can also be applied if the positions of some peaks are known a priori, e.g. from consistently referenced spectral dimensions of other experiments. Application of the NMF algorithm to a three-dimensional peak list of the 23 kDa bi-domain section of the RcsD protein (RcsD-ABL-HPt, residues 688-890), as well as to synthetic HSQC data, shows that peaks can be picked accurately also in spectral regions with strong overlap.
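The NMF idea of decomposing overlapping peaks into nonnegative components can be sketched with generic Lee-Seung multiplicative updates. This is a textbook NMF sketch on synthetic overlapping Gaussian peaks, not the authors' spectral decomposition code.

```python
import numpy as np

rng = np.random.default_rng(1)

def nmf(V, r, iters=1000, eps=1e-9):
    # Lee-Seung multiplicative updates minimizing ||V - W H||_F^2,
    # keeping both factors elementwise nonnegative throughout.
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Two strongly overlapping Gaussian "peaks" mixed into five traces
x = np.linspace(0.0, 1.0, 200)
shapes = np.vstack([np.exp(-(x - c) ** 2 / 0.005) for c in (0.45, 0.55)])
V = rng.random((5, 2)) @ shapes          # nonnegative mixtures
W, H = nmf(V, 2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the synthetic data is exactly rank 2 and nonnegative, the factorization recovers the mixture closely; the rows of H play the role of the shared peak shapes.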
Theory of Remote Image Formation
NASA Astrophysics Data System (ADS)
Blahut, Richard E.
2004-11-01
In many applications, images, such as ultrasonic or X-ray signals, are recorded and then analyzed with digital or optical processors in order to extract information. Such processing requires the development of algorithms of great precision and sophistication. This book presents a unified treatment of the mathematical methods that underpin the various algorithms used in remote image formation. The author begins with a review of transform and filter theory. He then discusses two- and three-dimensional Fourier transform theory, the ambiguity function, image construction and reconstruction, tomography, baseband surveillance systems, and passive systems (where the signal source might be an earthquake or a galaxy). Information-theoretic methods in image formation are also covered, as are phase errors and phase noise. Throughout the book, practical applications illustrate theoretical concepts, and there are many homework problems. The book is aimed at graduate students of electrical engineering and computer science, and practitioners in industry. It presents a unified treatment of the mathematical methods that underpin the algorithms used in remote image formation, illustrates theoretical concepts with reference to practical applications, and provides insights into the design parameters of real systems.
Classifier dependent feature preprocessing methods
NASA Astrophysics Data System (ADS)
Rodriguez, Benjamin M., II; Peterson, Gilbert L.
2008-04-01
In mobile applications, computational complexity is an issue that limits sophisticated algorithms from being implemented on these devices. This paper provides an initial solution to applying pattern recognition systems on mobile devices by combining existing preprocessing algorithms for recognition. In pattern recognition systems, it is essential to properly apply feature preprocessing tools prior to training classification models in an attempt to reduce computational complexity and improve the overall classification accuracy. The feature preprocessing tools extended for the mobile environment are feature ranking, feature extraction, data preparation and outlier removal. Most desktop systems today are capable of processing a majority of the available classification algorithms without concern for processing time, while the same is not true on mobile platforms. As an application of pattern recognition for mobile devices, the recognition system targets the problem of steganalysis, determining if an image contains hidden information. The measure of performance shows that feature preprocessing increases the overall steganalysis classification accuracy by an average of 22%. The methods in this paper are tested on a workstation and a Nokia 6620 (Symbian operating system) camera phone with similar results.
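One of the preprocessing tools named above, feature ranking, can be sketched as a cheap filter-style ranking by correlation with the label, the kind of low-cost step suited to a resource-limited device. The data and the correlation criterion are illustrative assumptions, not the paper's specific method.

```python
import numpy as np

def rank_features(X, y):
    # Rank columns of X by absolute Pearson correlation with the
    # label vector y; cheap enough for mobile-class hardware.
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12
    corr = (Xc * yc[:, None]).sum(axis=0) / denom
    return np.argsort(-np.abs(corr))       # most informative first

rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=200).astype(float)   # binary labels
X = rng.normal(size=(200, 5))                    # four noise features
X[:, 3] = y + 0.1 * rng.normal(size=200)         # feature 3 carries the label
order = rank_features(X, y)
```

Keeping only the top-ranked features before training is one way such preprocessing trades a little accuracy for a large cut in classifier cost.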
A Hybrid Procedural/Deductive Executive for Autonomous Spacecraft
NASA Technical Reports Server (NTRS)
Pell, Barney; Gamble, Edward B.; Gat, Erann; Kessing, Ron; Kurien, James; Millar, William; Nayak, P. Pandurang; Plaunt, Christian; Williams, Brian C.; Lau, Sonie (Technical Monitor)
1998-01-01
The New Millennium Remote Agent (NMRA) will be the first AI system to control an actual spacecraft. The spacecraft domain places a strong premium on autonomy and requires dynamic recoveries and robust concurrent execution, all in the presence of tight real-time deadlines, changing goals, scarce resource constraints, and a wide variety of possible failures. To achieve this level of execution robustness, we have integrated a procedural executive based on generic procedures with a deductive model-based executive. A procedural executive provides sophisticated control constructs such as loops, parallel activity, locks, and synchronization which are used for robust schedule execution, hierarchical task decomposition, and routine configuration management. A deductive executive provides algorithms for sophisticated state inference and optimal failure recovery planning. The integrated executive enables designers to code knowledge via a combination of procedures and declarative models, yielding a rich modeling capability suitable to the challenges of real spacecraft control. The interface between the two executives ensures both that recovery sequences are smoothly merged into high-level schedule execution and that a high degree of reactivity is retained to effectively handle additional failures during recovery.
Steady flow model user's guide
NASA Astrophysics Data System (ADS)
Doughty, C.; Hellstrom, G.; Tsang, C. F.; Claesson, J.
1984-07-01
Sophisticated numerical models that solve the coupled mass and energy transport equations for nonisothermal fluid flow in a porous medium were used to match analytical results and field data for aquifer thermal energy storage (ATES) systems. As an alternative for the ATES problem, the Steady Flow Model (SFM), a simplified but fast numerical model, was developed. A steady, purely radial flow field is prescribed in the aquifer and incorporated into the heat transport equation, which is then solved numerically. While the radial flow assumption limits the range of ATES systems that can be studied using the SFM, it greatly simplifies use of this code. The preparation of input is quite simple compared to that for a sophisticated coupled mass and energy model, and the cost of running the SFM is far lower. The simple flow field allows use of a special calculational mesh that eliminates the numerical dispersion usually associated with the numerical solution of convection problems. The problem is defined, the algorithms used to solve it are outlined, and the input and output for the SFM are described.
Autoreject: Automated artifact rejection for MEG and EEG data.
Jas, Mainak; Engemann, Denis A; Bekhti, Yousra; Raimondo, Federico; Gramfort, Alexandre
2017-10-01
We present an automated algorithm for unified rejection and repair of bad trials in magnetoencephalography (MEG) and electroencephalography (EEG) signals. Our method capitalizes on cross-validation in conjunction with a robust evaluation metric to estimate the optimal peak-to-peak threshold - a quantity commonly used for identifying bad trials in M/EEG. This approach is then extended to a more sophisticated algorithm which estimates this threshold for each sensor yielding trial-wise bad sensors. Depending on the number of bad sensors, the trial is then repaired by interpolation or by excluding it from subsequent analysis. All steps of the algorithm are fully automated thus lending itself to the name Autoreject. In order to assess the practical significance of the algorithm, we conducted extensive validation and comparisons with state-of-the-art methods on four public datasets containing MEG and EEG recordings from more than 200 subjects. The comparisons include purely qualitative efforts as well as quantitatively benchmarking against human supervised and semi-automated preprocessing pipelines. The algorithm allowed us to automate the preprocessing of MEG data from the Human Connectome Project (HCP) going up to the computation of the evoked responses. The automated nature of our method minimizes the burden of human inspection, hence supporting scalability and reliability demanded by data analysis in modern neuroscience. Copyright © 2017 Elsevier Inc. All rights reserved.
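The rejection criterion that Autoreject tunes, a peak-to-peak amplitude threshold applied per trial, can be sketched as follows. The array shapes, threshold value, and spike injection are invented for illustration; in the actual method the threshold is chosen by cross-validation, not fixed by hand.

```python
import numpy as np

def reject_by_ptp(epochs, threshold):
    # Drop trials whose peak-to-peak amplitude on any channel exceeds
    # the threshold -- the quantity whose optimal value Autoreject
    # estimates via cross-validation instead of manual tuning.
    ptp = epochs.max(axis=2) - epochs.min(axis=2)   # (trials, channels)
    keep = (ptp < threshold).all(axis=1)
    return epochs[keep], keep

rng = np.random.default_rng(4)
epochs = rng.normal(0.0, 1e-6, size=(50, 4, 100))   # trials x sensors x time
epochs[7, 2, 10] = 5e-4                             # one artifactual spike
clean, keep = reject_by_ptp(epochs, threshold=1e-5)
```

The sensor-wise extension described in the abstract applies the same test per channel and repairs, rather than drops, trials with only a few bad sensors.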
Load Balancing Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pearce, Olga Tkachyshyn
2014-12-01
The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate behavior of physical systems that evolve over time, and their workloads change over time. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model for comparison of load balance algorithms for a specific state of the simulation that enables the selection of a balancing algorithm that will minimize overall runtime.
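The when-to-rebalance decision described above can be sketched as a cost-model comparison: rebalance only when the predicted saving over the remaining steps exceeds the balancer's own cost. This is a toy version of the idea; the function, inputs, and numbers are invented for illustration.

```python
def should_rebalance(loads, steps_left, rebalance_cost):
    # In an SPMD code the step time is set by the slowest rank, so the
    # achievable saving per step is (current max load - mean load).
    t_now = max(loads)                    # current per-step time
    t_ideal = sum(loads) / len(loads)     # perfectly balanced step time
    return (t_now - t_ideal) * steps_left > rebalance_cost

heavy = should_rebalance([1.0, 1.0, 4.0], steps_left=100, rebalance_cost=10.0)
even = should_rebalance([2.0, 2.0, 2.0], steps_left=100, rebalance_cost=10.0)
```

A real cost model would also weigh data-migration volume and the accuracy of the load forecast, which is where the choice between a cheap local balancer and an expensive global one enters.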
Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing
Liu, Jiayin; Tang, Zhenmin; Cui, Ying; Wu, Guoxing
2017-01-01
Remote sensing technologies have been widely applied in urban environments’ monitoring, synthesis and modeling. Incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the “salt and pepper” phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is more and more popularly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism leads to energy terms locality and relativity, and thus, the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC could achieve consistent performance in different image regions. In addition, the Probability Density Function (PDF), which is estimated by Kernel Density Estimation (KDE) with the Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to only handle boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm with the other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while the computation time-efficiency is still competitive. PMID:28604641
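The KDE color-distribution measure mentioned above can be sketched directly: the pdf at a query color is the average of Gaussian kernels centred on a superpixel's member colors. The bandwidth and sample values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def kde_pdf(samples, query, h=0.1):
    # Gaussian kernel density estimate over d-dimensional color
    # samples (e.g. the member pixels of one superpixel).
    d = (query[None, :] - samples) / h             # (n, dims), scaled
    k = np.exp(-0.5 * (d ** 2).sum(axis=1))        # isotropic kernels
    norm = (np.sqrt(2.0 * np.pi) * h) ** samples.shape[1]
    return k.mean() / norm

colors = np.full((10, 3), 0.5)                     # a uniform gray superpixel
p_near = kde_pdf(colors, np.array([0.5, 0.5, 0.5]))
p_far = kde_pdf(colors, np.array([0.9, 0.1, 0.9]))
```

In the segmentation energy, a pixel's likelihood under each neighboring superpixel's KDE is what drives the local label competition.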
Introductory review on `Flying Triangulation': a motion-robust optical 3D measurement principle
NASA Astrophysics Data System (ADS)
Ettl, Svenja
2015-04-01
'Flying Triangulation' (FlyTri) is a recently developed principle which allows for a motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles generated are aligned to one another by algorithms, without relying on any external tracking device. It delivers real-time feedback of the measurement process which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.
An ordinal classification approach for CTG categorization.
Georgoulas, George; Karvelis, Petros; Gavrilis, Dimitris; Stylios, Chrysostomos D; Nikolakopoulos, George
2017-07-01
Evaluation of the cardiotocogram (CTG) is a standard approach employed during pregnancy and delivery, but its interpretation requires high-level expertise to decide whether the recording is Normal, Suspicious or Pathological. Therefore, a number of attempts have been carried out over the past three decades to develop automated, sophisticated systems. These systems are usually (multiclass) classification systems that assign a category to the respective CTG. However, most of these systems do not take into consideration the natural ordering of the categories associated with CTG recordings. In this work, an algorithm that explicitly takes into consideration the ordering of CTG categories, based on a binary decomposition method, is investigated. Results achieved using the C4.5 decision tree as the base classifier show that the ordinal classification approach is marginally better than the traditional multiclass classification approach, which utilizes the standard C4.5 algorithm, for several performance criteria.
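A standard way to realize the binary decomposition for ordered categories is the Frank-Hall scheme: K ordered classes become K-1 binary questions "is the class worse than k?", and per-class probabilities are recovered by differencing. The sketch below assumes this scheme and invented probabilities; the abstract does not specify which decomposition the authors used.

```python
import numpy as np

def ordinal_probs(p_gt):
    # Frank-Hall recovery: given p_gt[k] = P(class > k) for
    # k = 0..K-2 (assumed monotone non-increasing), return the
    # K per-class probabilities by differencing the chain.
    K = len(p_gt) + 1
    p = np.empty(K)
    p[0] = 1.0 - p_gt[0]
    for k in range(1, K - 1):
        p[k] = p_gt[k - 1] - p_gt[k]
    p[-1] = p_gt[-1]
    return p

# Normal / Suspicious / Pathological from two binary classifier outputs:
# P(worse than Normal) = 0.4, P(worse than Suspicious) = 0.1
probs = ordinal_probs(np.array([0.4, 0.1]))
```

Each binary sub-problem can be learned by any base classifier (C4.5 in the paper); only the label encoding changes relative to plain multiclass training.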
Real-time generation of infrared ocean scene based on GPU
NASA Astrophysics Data System (ADS)
Jiang, Zhaoyi; Wang, Xun; Lin, Yun; Jin, Jianqiu
2007-12-01
Infrared (IR) image synthesis for ocean scenes has become more and more important nowadays, especially for remote sensing and military applications. Although a number of works present ready-to-use simulations, those techniques cover only a few possible ways of water interacting with the environment, and the detailed calculation of ocean temperature is rarely considered by previous investigators. With the advance of programmable features of graphics cards, many algorithms previously limited to offline processing have become feasible for real-time usage. In this paper, we propose an efficient algorithm for real-time rendering of infrared ocean scenes using the newest features of programmable graphics processors (GPUs). It differs from previous works in three aspects: adaptive GPU-based ocean surface tessellation, a sophisticated thermal balance equation for the ocean surface, and GPU-based rendering of the infrared ocean scene. Finally, some resulting infrared images are shown, which are in good accordance with real images.
Bunyak, Filiz; Palaniappan, Kannappan; Chagin, Vadim; Cardoso, M
2009-01-01
Fluorescently tagged proteins such as GFP-PCNA produce rich, dynamically varying textural patterns of foci distributed in the nucleus. This enables the behavioral study of sub-cellular structures during different phases of the cell cycle. The varying punctate patterns of fluorescence, drastic changes in SNR, shape and position during mitosis, and the abundance of touching cells, however, require more sophisticated algorithms for reliable automatic cell segmentation and lineage analysis. Since the cell nuclei are non-uniform in appearance, a distribution-based modeling of foreground classes is essential. The recently proposed graph partitioning active contours (GPAC) algorithm supports region descriptors and flexible distance metrics. We extend GPAC for fluorescence-based cell segmentation using regional density functions and dramatically improve its efficiency for segmentation from O(N⁴) to O(N²), for an image with N² pixels, making it practical and scalable for high-throughput microscopy imaging studies.
A self-tuning automatic voltage regulator designed for an industrial environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flynn, D.; Hogg, B.W.; Swidenbank, E.
Examination of the performance of fixed parameter controllers has resulted in the development of self-tuning strategies for excitation control of turbogenerator systems. In conjunction with the advanced control algorithms, sophisticated measurement techniques have previously been adopted on micromachine systems to provide generator terminal quantities. In power stations, however, a minimalist hardware arrangement would be selected, leading to relatively simple measurement techniques. The performance of a range of self-tuning schemes is investigated on an industrial test-bed, employing a typical industrial hardware measurement system. Individual controllers are implemented on a standard digital automatic voltage regulator, as installed in power stations. This employs a VME platform, and the self-tuning algorithms are introduced by linking to a transputer network. The AVR includes all normal features, such as field forcing, VAR limiting and overflux protection. Self-tuning controller performance is compared with that of a fixed gain digital AVR.
Analysis of high-speed rotating flow inside gas centrifuge casing
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev
2017-10-01
The generalized analytical model for the radial boundary layer inside the gas centrifuge casing, in which the inner cylinder is rotating at a constant angular velocity Ω_i while the outer one is stationary, is formulated for studying the secondary gas flow field due to wall thermal forcing, inflow/outflow of light gas along the boundaries, as well as the combination of the above two external forcings. The analytical model includes the sixth-order differential equation for the radial boundary layer at the cylindrical curved surface in terms of the master potential (χ), which is derived from the equations of motion in an axisymmetric (r - z) plane. The linearization approximation is used, where the equations of motion are truncated at linear order in the velocity and pressure disturbances to the base flow, which is a solid-body rotation. Additional approximations in the analytical model include constant temperature in the base state (isothermal compressible Couette flow), high aspect ratio (length is large compared to the annular gap), and high Reynolds number, but there is no limitation on the Mach number. The discrete eigenvalues and eigenfunctions of the linear operators (sixth-order in the radial direction for the generalized analytical equation) are obtained. The solutions for the secondary flow are determined in terms of these eigenvalues and eigenfunctions. These solutions are compared with direct simulation Monte Carlo (DSMC) simulations, and excellent agreement (with a difference of less than 15%) is found between the predictions of the analytical model and the DSMC simulations, provided the boundary conditions in the analytical model are accurately specified.
Analysis of high-speed rotating flow inside gas centrifuge casing
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev, , Dr.
2017-09-01
The generalized analytical model for the radial boundary layer inside the gas centrifuge casing in which the inner cylinder is rotating at a constant angular velocity Ωi while the outer one is stationary, is formulated for studying the secondary gas flow field due to wall thermal forcing, inflow/outflow of light gas along the boundaries, as well as due to the combination of the above two external forcing. The analytical model includes the sixth order differential equation for the radial boundary layer at the cylindrical curved surface in terms of master potential (χ) , which is derived from the equations of motion in an axisymmetric (r - z) plane. The linearization approximation is used, where the equations of motion are truncated at linear order in the velocity and pressure disturbances to the base flow, which is a solid-body rotation. Additional approximations in the analytical model include constant temperature in the base state (isothermal compressible Couette flow), high aspect ratio (length is large compared to the annular gap), high Reynolds number, but there is no limitation on the Mach number. The discrete eigenvalues and eigenfunctions of the linear operators (sixth-order in the radial direction for the generalized analytical equation) are obtained. The solutions for the secondary flow is determined in terms of these eigenvalues and eigenfunctions. These solutions are compared with direct simulation Monte Carlo (DSMC) simulations and found excellent agreement (with a difference of less than 15%) between the predictions of the analytical model and the DSMC simulations, provided the boundary conditions in the analytical model are accurately specified.
Full System Model of Magnetron Sputter Chamber - Proof-of-Principle Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walton, C; Gilmer, G; Zepeda-Ruiz, L
2007-05-04
The lack of detailed knowledge of internal process conditions remains a key challenge in magnetron sputtering, both for chamber design and for process development. Fundamental information, such as the pressure and temperature distribution of the sputter gas and the energies and arrival angles of the sputtered atoms and other energetic species, is often missing, or is only estimated from general formulas. However, open-source or low-cost tools are available for modeling most steps of the sputter process, which can give more accurate and complete data than textbook estimates, using only desktop computations. To get a better understanding of magnetron sputtering, we have collected existing models for the 5 major process steps: the input and distribution of the neutral background gas using Direct Simulation Monte Carlo (DSMC), dynamics of the plasma using Particle In Cell-Monte Carlo Collision (PIC-MCC), impact of ions on the target using molecular dynamics (MD), transport of sputtered atoms to the substrate using DSMC, and growth of the film using hybrid Kinetic Monte Carlo (KMC) and MD methods. The models have been tested against experimental measurements. For example, gas rarefaction as observed by Rossnagel and others has been reproduced; it is associated with a local pressure increase of approximately 50%, which may strongly influence film properties such as stress. Results on the energies and arrival angles of sputtered atoms and reflected gas neutrals are applied to the Kinetic Monte Carlo simulation of film growth. Model results and applications to the growth of dense Cu and Be films are presented.
Thin film deposition using rarefied gas jet
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev
2017-01-01
The rarefied gas jet of aluminium is studied at Mach number Ma = U_j/√(k_B T_j/m) in the range 0.01
Aerodynamic characterization of the jet of an arc wind tunnel
NASA Astrophysics Data System (ADS)
Zuppardi, Gennaro; Esposito, Antonio
2016-11-01
It is well known that, due to the very aggressive environment and the rather high rarefaction level of an arc wind tunnel jet, the measurement of fluid-dynamic parameters is difficult. For this reason, the aerodynamic characterization of the jet also relies on computer codes simulating the operation of the tunnel. The present authors have already used this kind of computational procedure successfully for tests in the arc wind tunnel (SPES) in Naples, Italy. In the present work an improved procedure is proposed. Like the former, the present procedure relies on two codes working in tandem: 1) a one-dimensional code simulating the inviscid, thermally non-conducting flow field in the torch, in the mixing chamber, and in the nozzle up to the position along the nozzle axis of continuum breakdown; 2) a Direct Simulation Monte Carlo (DSMC) code simulating the flow field in the remaining part of the nozzle. In the present procedure, the DSMC computation covers both the nozzle and the test chamber. An interesting problem, addressed in this paper by means of the present procedure, is the simulation of the flow field around a Pitot tube and of the related measurement of stagnation pressure. Under rarefied conditions, the measured stagnation pressure may be as much as four times the theoretical value; therefore a substantial correction has to be applied to the measured pressure. In the present paper a correction factor for the stagnation pressure measured in SPES is proposed. The analysis relies on twelve tests made in SPES.
DSMC simulations of leading edge flat-plate boundary layer flows at high Mach number
NASA Astrophysics Data System (ADS)
Pradhan, Sahadev
2016-09-01
The flow over a 2D leading-edge flat plate is studied at Mach number Ma = U_∞/√(k_B T_∞/m) in the range
A new JPEG-based steganographic algorithm for mobile devices
NASA Astrophysics Data System (ADS)
Agaian, Sos S.; Cherukuri, Ravindranath C.; Schneider, Erik C.; White, Gregory B.
2006-05-01
Currently, cellular phones constitute a significant portion of the global telecommunications market. Modern cellular phones offer sophisticated features such as Internet access, on-board cameras, and expandable memory, which provide these devices with excellent multimedia capabilities. Because of the high volume of cellular traffic, as well as the ability of these devices to transmit nearly all forms of data, the need for an increased level of security in wireless communications is a growing concern. Steganography could provide a solution to this important problem. In this article, we present a new algorithm for JPEG-compressed images which is applicable to mobile platforms. This algorithm embeds sensitive information into quantized discrete cosine transform coefficients obtained from the cover JPEG. These coefficients are rearranged based on certain statistical properties and the inherent processing and memory constraints of mobile devices. Based on the energy variation and block characteristics of the cover image, the sensitive data are hidden by a switching embedding technique proposed in this article. The proposed system offers high capacity while simultaneously withstanding visual and statistical attacks. Based on simulation results, the proposed method demonstrates improved retention of first-order statistics when compared to existing JPEG-based steganographic algorithms, while maintaining a capacity comparable to F5 for certain cover images.
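The coefficient-domain embedding described above can be illustrated with a minimal LSB sketch. This is not the paper's switching embedding technique: it simply writes payload bits into the least significant bits of quantized AC coefficients of magnitude at least 2 (so no coefficient is ever shrunk to zero), and the function names and the candidate rule are illustrative assumptions, not the authors' scheme.

```python
import numpy as np

def candidates(flat):
    # usable coefficients: AC terms (skip index 0, the DC term) with
    # magnitude >= 2, so embedding can never shrink a coefficient to zero
    return [i for i in range(1, flat.size) if abs(flat[i]) >= 2]

def embed_bits(qcoeffs, bits):
    """Write payload bits into the LSBs of candidate quantized coefficients,
    preserving each coefficient's sign."""
    out = qcoeffs.ravel().copy()
    idx = candidates(out)
    if len(bits) > len(idx):
        raise ValueError("cover too small for payload")
    for bit, i in zip(bits, idx):
        mag = (abs(int(out[i])) & ~1) | bit
        out[i] = mag if out[i] > 0 else -mag
    return out.reshape(qcoeffs.shape)

def extract_bits(qcoeffs, n):
    """Read back the first n payload bits using the same candidate rule."""
    flat = qcoeffs.ravel()
    return [abs(int(flat[i])) & 1 for i in candidates(flat)[:n]]

cover = np.array([-26, 3, 1, 0, 2, -5, 0, 1, 9])   # toy "quantized DCT" block
stego = embed_bits(cover, [1, 0, 1, 1])
recovered = extract_bits(stego, 4)
```

Because only the LSB of each carrier coefficient changes, no value moves by more than one quantization step, which is what keeps the visual impact small.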
NASA Astrophysics Data System (ADS)
Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.
2015-02-01
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high energy density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced in an ignition capsule. The VISAR system utilizes three streak cameras; these cameras are inherently nonlinear and require warp corrections to remove the resulting distortions. A detailed calibration procedure was developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, degrading the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
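The TPS machinery itself is standard and can be sketched from scratch. The following is a generic Bookstein-style thin-plate-spline fit (kernel U(r) = r² log r plus an affine part), not the NIF production code; the control points would come from detected comb-pulse positions, and the toy points below are assumptions.

```python
import numpy as np

def _phi(d):
    # TPS radial kernel U(r) = r^2 log r, with U(0) = 0
    out = np.zeros_like(d)
    m = d > 0
    out[m] = d[m]**2 * np.log(d[m])
    return out

def tps_fit(src, dst):
    """Solve the standard TPS interpolation system [[K, P], [P.T, 0]]
    for warp weights and an affine part (Bookstein's formulation)."""
    n = src.shape[0]
    K = _phi(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2))
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]                  # warp weights w, affine part a

def tps_apply(src, w, a, pts):
    """Evaluate the fitted warp at arbitrary image points."""
    U = _phi(np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2))
    return a[0] + pts @ a[1:] + U @ w

# toy calibration: six comb "control points" and their distorted images
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.2], [0.3, 0.8]], float)
dst = src + np.array([[0.02, -0.01], [0.0, 0.03], [-0.01, 0.0],
                      [0.01, 0.01], [0.03, -0.02], [0.0, 0.02]])
w, a = tps_fit(src, dst)
mapped = tps_apply(src, w, a, src)
```

A TPS is an interpolating spline, so it reproduces the control-point correspondences exactly while bending smoothly in between, which is what makes it suitable for modeling slowly drifting camera distortions.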
EDNA: Expert fault digraph analysis using CLIPS
NASA Technical Reports Server (NTRS)
Dixit, Vishweshwar V.
1990-01-01
Traditionally, fault models are represented by trees. Recently, digraph models have been proposed (Sack). Digraph models closely imitate the real system dependencies and hence are easy to develop, validate, and maintain. However, they can also contain directed cycles, and analysis algorithms are hard to find; available algorithms tend to be complicated and slow. Tree analysis (VGRH, Tayl), on the other hand, is well understood and rooted in a vast body of research and analytical techniques, and tree analysis algorithms are sophisticated and orders of magnitude faster. Transformation of a (cyclic) digraph into trees (CLP, LP) is therefore a viable approach to blend the advantages of the two representations. Neither digraphs nor trees, however, provide the ability to handle heuristic knowledge; an expert system is essential to capture the engineering knowledge. We propose such an approach here, namely expert network analysis, which combines the digraph representation with tree algorithms. The models are augmented by probabilistic and heuristic knowledge. CLIPS, an expert system shell from NASA-JSC, will be used to develop a tool. The technique provides the ability to handle probabilities and heuristic knowledge, and mixed analysis, with only some nodes carrying probabilities, is possible. The tool provides a graphics interface for input, query, and update. With this combined approach, it is expected to be a valuable tool in the design process as well as in the capture of final design knowledge.
Connecting Numerical Relativity and Data Analysis of Gravitational Wave Detectors
NASA Astrophysics Data System (ADS)
Shoemaker, Deirdre; Jani, Karan; London, Lionel; Pekowsky, Larne
Gravitational waves deliver information in exquisite detail about astrophysical phenomena, among them the collision of two black holes, a system completely invisible to the eyes of electromagnetic telescopes. Models that predict gravitational wave signals from likely sources are crucial for the success of this endeavor. Modeling binary black hole sources of gravitational radiation requires solving the Einstein equations of General Relativity using powerful computer hardware and sophisticated numerical algorithms. This proceeding presents where we are in understanding ground-based gravitational waves resulting from the merger of black holes and the implications of these sources for the advent of gravitational-wave astronomy.
Natural three-qubit interactions in one-way quantum computing
NASA Astrophysics Data System (ADS)
Tame, M. S.; Paternostro, M.; Kim, M. S.; Vedral, V.
2006-02-01
We address the effects of natural three-qubit interactions on the computational power of one-way quantum computation. A benefit of using more sophisticated entanglement structures is the ability to construct compact and economic simulations of quantum algorithms with limited resources. We show that the features of our study are embodied by suitably prepared optical lattices, where effective three-spin interactions have been theoretically demonstrated. We use this to provide a compact construction for the Toffoli gate. Information flow and two-qubit interactions are also outlined, together with a brief analysis of relevant sources of imperfection.
Introduction to autonomous mobile robotics using Lego Mindstorms NXT
NASA Astrophysics Data System (ADS)
Akın, H. Levent; Meriçli, Çetin; Meriçli, Tekin
2013-12-01
Teaching the fundamentals of robotics to computer science undergraduates requires designing a well-balanced curriculum that is complemented with hands-on applications on a platform that allows rapid construction of complex robots, and implementation of sophisticated algorithms. This paper describes such an elective introductory course where the Lego Mindstorms NXT kits are used as the robot platform. The aims, scope and contents of the course are presented, and the design of the laboratory sessions as well as the term projects, which address several core problems of robotics and artificial intelligence simultaneously, are explained in detail.
Methanococcus jannaschii genome: revisited
NASA Technical Reports Server (NTRS)
Kyrpides, N. C.; Olsen, G. J.; Klenk, H. P.; White, O.; Woese, C. R.
1996-01-01
Analysis of genomic sequences is necessarily an ongoing process. Initial gene assignments tend (wisely) to be on the conservative side (Venter, 1996). The analysis of the genome then grows in an iterative fashion as additional data and more sophisticated algorithms are brought to bear on the data. The present report is an emendation of the original gene list of Methanococcus jannaschii (Bult et al., 1996). By using a somewhat more updated database and more relaxed (and operator-intensive) pattern matching methods, we were able to add significantly to, and in a few cases amend, the gene identification table originally published by Bult et al. (1996).
NASA Technical Reports Server (NTRS)
Zhou, Yaping; Kratz, David P.; Wilber, Anne C.; Gupta, Shashi K.; Cess, Robert D.
2006-01-01
Retrieving surface longwave radiation from space has been a difficult task, since the surface downwelling longwave radiation (SDLW) is an integration of radiation emitted by the entire atmosphere, and radiation emitted from the upper atmosphere is absorbed before reaching the surface. It is particularly problematic when thick clouds are present, since thick clouds virtually block all longwave radiation from above, while satellites observe atmospheric emission mostly from above the clouds. Zhou and Cess developed an algorithm for retrieving SDLW based upon detailed studies using radiative transfer model calculations and surface radiometric measurements. Their algorithm linked clear-sky SDLW with the surface upwelling longwave flux and column precipitable water vapor. For cloudy-sky cases, they used cloud liquid water path as an additional parameter to account for the effects of clouds. Despite its simplicity, their algorithm performed very well for most geographical regions, except for regions where the atmospheric conditions near the surface tend to be extremely cold and dry. Systematic errors were also found for areas covered with ice clouds. An improved version of the algorithm was developed that prevents large errors in the SDLW at low water vapor amounts. The new algorithm also utilizes the cloud fraction and the cloud liquid and ice water paths measured by the Clouds and the Earth's Radiant Energy System (CERES) satellites to compute the clear and cloudy portions of the fluxes separately. The new algorithm has been validated against surface measurements at 29 stations around the globe for the Terra and Aqua satellites. The results show significant improvement over the original version. The revised Zhou-Cess algorithm is also slightly better than or comparable to more sophisticated algorithms currently implemented in the CERES processing. It will be incorporated in the CERES project as one of the empirical surface radiation algorithms.
NASA Astrophysics Data System (ADS)
Lai, Ian-Lin; Su, Cheng-Chin; Ip, Wing-Huen; Wei, Chen-En; Wu, Jong-Shinn; Lo, Ming-Chung; Liao, Ying; Thomas, Nicolas
2016-03-01
With a combination of Direct Simulation Monte Carlo (DSMC) calculation and test-particle computation, the ballistic transport of the hydroxyl radicals and oxygen atoms produced by photodissociation of water molecules in the coma of comet 67P/Churyumov-Gerasimenko is modelled. We discuss the key elements and essential features of such simulations, whose results can be compared with the remote-sensing and in situ measurements of the cometary gas coma from the Rosetta mission at different orbital phases of the comet.
Accuracy Analysis of DSMC Chemistry Models Applied to a Normal Shock Wave
2012-06-20
The rate coefficient from [4] is assumed to be 2×10−19 m3/s at 5000 K and 7×10−18 m3/s at 10,000 K; the QK prediction using the present VHS collision parameters is 9×10−20 m3/s at 5000 K and 2×10−18 m3/s at 10,000 K. Note that QK for the present work was modified for use with AHO energy levels for consistency
Neuroprosthetic Decoder Training as Imitation Learning.
Merel, Josh; Carlson, David; Paninski, Liam; Cunningham, John P
2016-05-01
Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.
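The DAgger meta-algorithm the authors adapt can be illustrated on a toy closed-loop problem: roll out the current decoder, label the states it actually visits with expert (oracle) actions, aggregate everything seen so far, and refit. The 1-D cursor task, the halfway-to-target oracle, and the linear least-squares "decoder" below are illustrative simplifications, not the paper's 26-degree-of-freedom arm setup.

```python
import numpy as np

def expert(state):
    # stand-in oracle: step halfway from the cursor toward the target
    pos, target = state
    return 0.5 * (target - pos)

def rollout(theta, steps=20):
    # run the current linear "decoder" from pos = 0 toward target = 1,
    # recording the states it visits along the way
    states, pos, target = [], 0.0, 1.0
    for _ in range(steps):
        s = np.array([pos, target])
        states.append(s)
        pos += float(s @ theta)
    return states

# DAgger loop: label the learner's own visited states with expert
# actions, aggregate across iterations, and refit by least squares
X, y, theta = [], [], np.zeros(2)
for _ in range(5):
    for s in rollout(theta):
        X.append(s)
        y.append(expert(s))
    theta, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
```

The key DAgger ingredient is that training states come from the learner's own rollouts rather than from expert demonstrations, which is what controls the compounding-error regret the paper analyzes.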
Sola, J; Braun, F; Muntane, E; Verjus, C; Bertschi, M; Hugon, F; Manzano, S; Benissa, M; Gervaix, A
2016-08-01
Pneumonia remains the worldwide leading cause of mortality in children under the age of five, causing 1.4 million deaths every year. Unfortunately, in low-resource settings, very limited diagnostic support aids are available to point-of-care practitioners. The current UNICEF/WHO case management algorithm relies on the use of a chronometer to manually count breath rates of pediatric patients: there is thus a major need for more sophisticated tools to diagnose pneumonia that increase the sensitivity and specificity of breath-rate-based algorithms. These tools should be low cost and adapted to practitioners with limited training. In this work, a novel concept of an unsupervised tool for the diagnosis of childhood pneumonia is presented. The concept relies on the automated analysis of respiratory sounds as recorded by a point-of-care electronic stethoscope. By identifying the presence of auscultation sounds at different chest locations, this diagnostic tool is intended to estimate a pneumonia likelihood score. After presenting the overall architecture of an algorithm to estimate pneumonia scores, the importance of a robust unsupervised method to identify the inspiratory and expiratory phases of a respiratory cycle is highlighted. Based on data from an ongoing study involving pediatric pneumonia patients, a first algorithm to segment respiratory sounds is suggested. The unsupervised algorithm relies on a Mel-frequency filter bank, a two-step Gaussian Mixture Model (GMM) description of the data, and a final Hidden Markov Model (HMM) interpretation of inspiratory-expiratory sequences. Finally, illustrative results on the first recruited patients are provided. The presented algorithm opens the door to a new family of unsupervised respiratory sound analyzers that could improve future versions of case management algorithms for the diagnosis of pneumonia in low-resource settings.
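As a minimal stand-in for the GMM step mentioned above, the sketch below fits a two-component 1-D Gaussian mixture to frame "energies" with plain EM. The synthetic data and the scalar energy feature are assumptions for illustration; the actual system works on Mel-filter-bank features and adds an HMM stage on top.

```python
import numpy as np

def gmm2_em(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture, initialized at the
    data extremes; a minimal stand-in for the two-step GMM the paper
    fits to Mel-filter-bank features."""
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each frame
        p = pi * np.exp(-0.5 * (x[:, None] - mu)**2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        pi = nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk + 1e-6
    return mu, var, pi

# synthetic frame "energies": quiet pauses vs. louder breath phases
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.2, 0.05, 300), rng.normal(1.0, 0.1, 300)])
mu, var, pi = gmm2_em(x)
```

Frames can then be labeled by the higher-responsibility component, giving the unsupervised phase segmentation an HMM can smooth into inspiratory-expiratory sequences.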
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iwai, P; Lins, L Nadler
Purpose: There is a lack of studies with significant cohort data about patients using a pacemaker (PM), implanted cardioverter defibrillator (ICD), or cardiac resynchronization therapy (CRT) device undergoing radiotherapy. There is no literature comparing the cumulative doses delivered to those cardiac implanted electronic devices (CIED) as calculated by different algorithms, nor studies comparing doses with and without heterogeneity correction. The aim of this study was to evaluate the influence of the algorithms Pencil Beam Convolution (PBC), Analytical Anisotropic Algorithm (AAA), and Acuros XB (AXB), as well as of heterogeneity correction, on the risk categorization of patients. Methods: A retrospective analysis of 19 3DCRT or IMRT plans of 17 patients was conducted, calculating the dose delivered to the CIED using the three different calculation algorithms. Doses were evaluated with and without heterogeneity correction for comparison. Risk categorization of the patients was based on their CIED dependency and the cumulative dose in the devices. Results: Total estimated doses at the CIED calculated by AAA or AXB were higher than those calculated by PBC in 56% of the cases. On average, the doses at the CIED calculated by AAA and AXB were higher than those calculated by PBC (by 29% and 4%, respectively). The maximum difference between doses calculated by each algorithm was about 1 Gy, whether heterogeneity correction was used or not. Values of maximum dose calculated with heterogeneity correction showed that the dose at the CIED was at least equal or higher in 84% of the cases with PBC, 77% with AAA, and 67% with AXB than the dose obtained with no heterogeneity correction. Conclusion: The dose calculation algorithm and heterogeneity correction did not change the risk categorization. Since higher estimated doses delivered to the CIED do not compromise the treatment precautions to be taken, it is recommended that the most sophisticated algorithm available be used to predict the dose at the CIED, using heterogeneity correction.
Koblavi-Dème, Stéphania; Maurice, Chantal; Yavo, Daniel; Sibailly, Toussaint S.; N′guessan, Kabran; Kamelan-Tano, Yvonne; Wiktor, Stefan Z.; Roels, Thierry H.; Chorba, Terence; Nkengasong, John N.
2001-01-01
To evaluate serologic testing algorithms for human immunodeficiency virus (HIV) based on a combination of rapid assays among persons with HIV-1 (non-B subtypes) infection, HIV-2 infection, and HIV-1–HIV-2 dual infections in Abidjan, Ivory Coast, a total of 1,216 sera with known HIV serologic status were used to evaluate the sensitivity and specificity of four rapid assays: Determine HIV-1/2, Capillus HIV-1/HIV-2, HIV-SPOT, and Genie II HIV-1/HIV-2. Two serum panels obtained from patients recently infected with HIV-1 subtypes B and non-B were also included. Based on sensitivity and specificity, three of the four rapid assays were evaluated prospectively in parallel (serum samples tested by two simultaneous rapid assays) and serial (serum samples tested by two consecutive rapid assays) testing algorithms. All assays were 100% sensitive, and specificities ranged from 99.4 to 100%. In the prospective evaluation, both the parallel and serial algorithms were 100% sensitive and specific. Our results suggest that rapid assays have high sensitivity and specificity and, when used in parallel or serial testing algorithms, yield results similar to those of enzyme-linked immunosorbent assay-based testing strategies. HIV serodiagnosis based on rapid assays may be a valuable alternative in implementing HIV prevention and surveillance programs in areas where sophisticated laboratories are difficult to establish. PMID:11325995
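Under a (strong) independence assumption on assay errors, the combined performance of two-assay algorithms follows from elementary probability. The sketch below uses one common decision rule for each algorithm (serial: positive only if both assays are positive; parallel: positive if either is), which may differ in detail from the study's rules, and the per-assay figures are hypothetical values in the reported 99.4-100% range.

```python
def serial_performance(se1, sp1, se2, sp2):
    """Serial algorithm: the second assay is run only on first-assay
    positives; the final result is positive only if both assays are
    positive (independence of assay errors is assumed)."""
    return se1 * se2, 1.0 - (1.0 - sp1) * (1.0 - sp2)

def parallel_performance(se1, sp1, se2, sp2):
    """Parallel algorithm: both assays are run on every sample; the
    result is positive if either assay is positive."""
    return 1.0 - (1.0 - se1) * (1.0 - se2), sp1 * sp2

# hypothetical per-assay figures within the reported range (99.4-100%)
se_s, sp_s = serial_performance(1.0, 0.994, 1.0, 0.998)
se_p, sp_p = parallel_performance(0.99, 0.994, 0.99, 0.998)
```

The arithmetic makes the trade-off explicit: serial testing can only raise specificity (both assays must err to produce a false positive), while parallel testing can only raise sensitivity, which is why two assays that are each 100% sensitive keep the combined algorithms at 100% sensitivity in both designs.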
The systems biology simulation core algorithm
2013-01-01
Background With the increasing availability of high dimensional time course data for metabolites, genes, and fluxes, the mathematical description of dynamical systems has become an essential aspect of research in systems biology. Models are often encoded in formats such as SBML, whose structure is very complex and difficult to evaluate due to many special cases. Results This article describes an efficient algorithm to solve SBML models that are interpreted in terms of ordinary differential equations. We begin our consideration with a formal representation of the mathematical form of the models and explain all parts of the algorithm in detail, including several preprocessing steps. We provide a flexible reference implementation as part of the Systems Biology Simulation Core Library, a community-driven project providing a large collection of numerical solvers and a sophisticated interface hierarchy for the definition of custom differential equation systems. To demonstrate the capabilities of the new algorithm, it has been tested with the entire SBML Test Suite and all models of BioModels Database. Conclusions The formal description of the mathematics behind the SBML format facilitates the implementation of the algorithm within specifically tailored programs. The reference implementation can be used as a simulation backend for Java™-based programs. Source code, binaries, and documentation can be freely obtained under the terms of the LGPL version 3 from http://simulation-core.sourceforge.net. Feature requests, bug reports, contributions, or any further discussion can be directed to the mailing list simulation-core-development@lists.sourceforge.net. PMID:23826941
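The core task, interpreting a model as an ODE system and handing it to a numerical solver, can be sketched generically. The fixed-step RK4 integrator and the toy one-reaction model below are illustrative stand-ins for the library's solver collection and for the right-hand side an SBML interpreter would assemble from kineticLaw elements; none of the names come from the library's API.

```python
import numpy as np

def rk4(f, y0, t0, t1, n):
    """Classic fixed-step Runge-Kutta-4 integrator, a stand-in for the
    library's pluggable ODE solvers."""
    t, y, h = t0, np.asarray(y0, dtype=float), (t1 - t0) / n
    out = [y.copy()]
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        out.append(y.copy())
    return np.array(out)

# toy "model": one reaction S1 -> S2 with mass-action rate k*[S1],
# i.e. the kind of right-hand side an SBML interpreter would assemble
k = 0.5
rhs = lambda t, y: np.array([-k * y[0], k * y[0]])
traj = rk4(rhs, [1.0, 0.0], 0.0, 4.0, 400)
```

For this linear model the exact solution is [S1](t) = e^{-kt}, so the integrator's accuracy is easy to check; mass is conserved because each reaction only moves material between species.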
Denoising of gravitational wave signals via dictionary learning algorithms
NASA Astrophysics Data System (ADS)
Torres-Forné, Alejandro; Marquina, Antonio; Font, José A.; Ibáñez, José M.
2016-12-01
Gravitational wave astronomy has become a reality after the historical detections accomplished during the first observing run of the two advanced LIGO detectors. In the following years, the number of detections is expected to increase significantly with the full commissioning of the advanced LIGO, advanced Virgo and KAGRA detectors. The development of sophisticated data analysis techniques to improve the opportunities of detection for low signal-to-noise-ratio events is, hence, a most crucial effort. In this paper, we present one such technique, dictionary-learning algorithms, which have been extensively developed in the last few years and successfully applied mostly in the context of image processing. However, to the best of our knowledge, such algorithms have not yet been employed to denoise gravitational wave signals. By building dictionaries from numerical relativity templates of both binary black holes mergers and bursts of rotational core collapse, we show how machine-learning algorithms based on dictionaries can also be successfully applied for gravitational wave denoising. We use a subset of signals from both catalogs, embedded in nonwhite Gaussian noise, to assess our techniques with a large sample of tests and to find the best model parameters. The application of our method to the actual signal GW150914 shows promising results. Dictionary-learning algorithms could be a complementary addition to the gravitational wave data analysis toolkit. They may be used to extract signals from noise and to infer physical parameters if the data are in good enough agreement with the morphology of the dictionary atoms.
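The denoising idea can be sketched with a greedy sparse-coding step against a fixed template bank. The chirp-like "templates" and the plain matching-pursuit routine below are illustrative stand-ins for the learned dictionaries and sparse solvers used in the paper, not its actual atoms or algorithm.

```python
import numpy as np

def matching_pursuit(D, x, n_atoms):
    """Greedy sparse coding of x against a column-normalized dictionary D;
    a simple stand-in for the sparse solvers paired with learned
    dictionaries in the paper."""
    r, coef = x.copy(), np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ r                      # correlate residual with atoms
        j = int(np.argmax(np.abs(corr)))    # best-matching atom
        coef[j] += corr[j]
        r = r - corr[j] * D[:, j]           # peel it off the residual
    return coef

t = np.linspace(0.0, 1.0, 256)
# toy "template bank": chirp-like waveforms as unit-norm dictionary columns
D = np.stack([np.sin(2 * np.pi * f * t**2) for f in (3, 5, 8, 13)], axis=1)
D /= np.linalg.norm(D, axis=0)
rng = np.random.default_rng(1)
clean = 2.0 * D[:, 2]
noisy = clean + 0.1 * rng.standard_normal(t.size)
denoised = D @ matching_pursuit(D, noisy, 2)
```

Because the reconstruction is forced to live in the span of a few dictionary atoms, broadband noise that correlates poorly with every template is discarded, which is the mechanism behind dictionary-based denoising.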
Estimating rare events in biochemical systems using conditional sampling.
Sundar, V S
2017-01-28
The paper focuses on development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, important sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most of the problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
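The method can be sketched on a toy rare event, estimating P(X > 4) for a standard normal X, where the exact answer (about 3.2×10⁻⁵) is known. The adaptive-level construction and the Metropolis move restricted to the current intermediate failure region follow the standard subset simulation recipe; the sample sizes and level probability p0 are illustrative choices, not the paper's settings.

```python
import numpy as np

def subset_simulation(g, thresh, n=2000, p0=0.1, seed=0):
    """Adaptive subset simulation in standard-normal space with a
    Metropolis move confined to the current failure region; 1-D toy."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    prob = 1.0
    for _ in range(50):                      # safety cap on levels
        y = g(x)
        b = np.quantile(y, 1.0 - p0)         # adaptive intermediate level
        if b >= thresh:                      # final level reaches the target
            return prob * np.mean(y >= thresh)
        prob *= np.mean(y >= b)              # conditional probability factor
        seeds = x[y >= b]
        new, per = [], int(np.ceil(n / seeds.size))
        for s in seeds:                      # grow one Markov chain per seed
            cur = s
            for _ in range(per):
                cand = cur + rng.normal()
                # standard-normal Metropolis ratio, then reject moves that
                # leave the current intermediate failure region
                if rng.random() < np.exp(0.5 * (cur**2 - cand**2)) and g(cand) >= b:
                    cur = cand
                new.append(cur)
        x = np.array(new[:n])
    return prob

# toy rare event: P(X > 4) for X ~ N(0, 1), exact value ~3.17e-5
p = subset_simulation(lambda x: x, 4.0)
```

Each level only has to estimate a conditional probability near p0 = 0.1, so a few thousand samples per level suffice where brute-force Monte Carlo would need tens of millions.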
Data Analytics for Smart Parking Applications.
Piovesan, Nicola; Turi, Leo; Toigo, Enrico; Martinez, Borja; Rossi, Michele
2016-09-23
We consider real-life smart parking systems where parking lot occupancy data are collected from field sensor devices and sent to backend servers for further processing and usage for applications. Our objective is to make these data useful to end users, such as parking managers, and, ultimately, to citizens. To this end, we concoct and validate an automated classification algorithm having two objectives: (1) outlier detection: to detect sensors with anomalous behavioral patterns, i.e., outliers; and (2) clustering: to group the parking sensors exhibiting similar patterns into distinct clusters. We first analyze the statistics of real parking data, obtaining suitable simulation models for parking traces. We then consider a simple classification algorithm based on the empirical complementary distribution function of occupancy times and show its limitations. Hence, we design a more sophisticated algorithm exploiting unsupervised learning techniques (self-organizing maps). These are tuned following a supervised approach using our trace generator and are compared against other clustering schemes, namely expectation maximization, k-means clustering and DBSCAN, considering six months of data from a real sensor deployment. Our approach is found to be superior in terms of classification accuracy, while also being capable of identifying all of the outliers in the dataset.
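Of the clustering baselines mentioned, k-means is the simplest to sketch. The toy below groups synthetic daily occupancy-fraction profiles with plain Lloyd iterations and a deterministic farthest-pair initialization; the two-regime data model is an assumption for illustration, not the paper's sensor traces, whose preferred classifier is a self-organizing map.

```python
import numpy as np

def kmeans(X, k=2, iters=20):
    """Plain Lloyd's k-means with a deterministic farthest-pair
    initialization (adequate for this k = 2 toy)."""
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    centers = X[[i, j]].astype(float)
    for _ in range(iters):
        # assign each profile to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None, :] - centers[None, :, :])**2).sum(-1),
                           axis=1)
        centers = np.array([X[labels == c].mean(axis=0) for c in range(k)])
    return labels

# synthetic daily occupancy-fraction profiles: "busy" vs "quiet" sensors
rng = np.random.default_rng(2)
busy = np.clip(0.8 + 0.05 * rng.standard_normal((20, 24)), 0, 1)
quiet = np.clip(0.2 + 0.05 * rng.standard_normal((20, 24)), 0, 1)
labels = kmeans(np.vstack([busy, quiet]))
```

Sensors whose profiles sit far from every cluster center would then be flagged as outlier candidates, which is the second objective the paper pursues.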
Electric machine differential for vehicle traction control and stability control
NASA Astrophysics Data System (ADS)
Kuruppu, Sandun Shivantha
Evolving requirements in energy efficiency and tightening regulations for reliable electric drivetrains drive the advancement of hybrid electric vehicle (HEV) and full electric vehicle (EV) technology. Different configurations of EV and HEV architectures are evaluated for their performance. Future technology is trending toward exploiting the distinctive properties of electric machines not only to improve efficiency but also to realize advanced road-adhesion and vehicle stability controls. The electric machine differential (EMD) is one such concept under investigation for near-future applications. Reliability of a powertrain is critical; therefore, sophisticated fault detection schemes are essential to guarantee reliable operation of a complex system such as an EMD. The research presented here emphasizes the implementation of a 4 kW electric machine differential, a novel single open phase (SPO) fault diagnostic scheme, a real-time slip optimization algorithm, and an EMD-based yaw stability improvement study. The proposed d-q current signature based SPO fault diagnostic algorithm detects the fault within one electrical cycle. The EMD-based extremum-seeking slip optimization algorithm reduces stopping distance by 30% compared to hydraulic-braking-based ABS.
Peak picking NMR spectral data using non-negative matrix factorization
2014-01-01
Background: Simple peak-picking algorithms, such as those based on lineshape fitting, perform well when peaks are completely resolved in multidimensional NMR spectra, but often produce wrong intensities and frequencies for overlapping peak clusters. For example, NOESY-type spectra have considerable overlaps leading to significant peak-picking intensity errors, which can result in erroneous structural restraints. Precise frequencies are critical for unambiguous resonance assignments. Results: To alleviate this problem, a more sophisticated peak decomposition algorithm, based on non-negative matrix factorization (NMF), was developed. We produce peak shapes from Fourier-transformed NMR spectra. Apart from its main goal of deriving components from spectra and producing peak lists automatically, the NMF approach can also be applied if the positions of some peaks are known a priori, e.g. from consistently referenced spectral dimensions of other experiments. Conclusions: Application of the NMF algorithm to a three-dimensional peak list of the 23 kDa bi-domain section of the RcsD protein (RcsD-ABL-HPt, residues 688-890) as well as to synthetic HSQC data shows that peaks can be picked accurately also in spectral regions with strong overlap. PMID:24511909
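A minimal illustration of decomposing overlapping peaks with NMF: the multiplicative updates below are the standard Lee-Seung iterations, not necessarily the exact variant used in the paper, and the triangular peak shapes are synthetic stand-ins for NMR lineshapes:

```python
import random

def nmf(V, k, iters=500, eps=1e-9):
    """Lee-Seung multiplicative updates: factor a non-negative matrix V (m x n)
    into W (m x k) and H (k x n) so that V ~ W H, minimizing the Frobenius
    reconstruction error. Rows of H play the role of component peak shapes."""
    rng = random.Random(0)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(k)]

    def matmul(A, B):
        return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def transpose(A):
        return [list(r) for r in zip(*A)]

    for _ in range(iters):
        WH = matmul(W, H)
        num, den = matmul(transpose(W), V), matmul(transpose(W), WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps)      # H update
              for j in range(n)] for i in range(k)]
        WH = matmul(W, H)
        num, den = matmul(V, transpose(H)), matmul(WH, transpose(H))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps)      # W update
              for j in range(k)] for i in range(m)]
    return W, H
```

Because every factor stays non-negative, the recovered components are additive shapes rather than cancelling oscillations, which is what makes NMF natural for intensity spectra.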
On Applying the Prognostic Performance Metrics
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2009-01-01
Prognostics performance evaluation has gained significant attention in the past few years. As prognostics technology matures and more sophisticated methods for prognostic uncertainty management are developed, a standardized methodology for performance evaluation becomes extremely important to guide improvement efforts in a constructive manner. This paper continues previous efforts in which several new evaluation metrics tailored for prognostics were introduced and shown to evaluate various algorithms more effectively than conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. Several shortcomings identified while applying these metrics to a variety of real applications are also summarized, along with discussions that attempt to alleviate these problems. Further, these metrics have been enhanced to incorporate probability distribution information from prognostic algorithms, as opposed to evaluation based on point estimates only. Several methods have been suggested, and guidelines have been provided to help choose one method over another based on probability distribution characteristics. These approaches also offer a convenient and intuitive visualization of algorithm performance with respect to new metrics such as prognostic horizon and alpha-lambda performance, and they quantify the corresponding performance while incorporating the uncertainty information.
Scalable splitting algorithms for big-data interferometric imaging in the SKA era
NASA Astrophysics Data System (ADS)
Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves
2016-11-01
In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big data, they employ parallel and distributed computations to achieve scalability in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.
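The forward-backward iteration underlying such proximal splitting solvers can be sketched for the simple ℓ1-regularized problem as follows. This is a generic ISTA loop, not the paper's distributed algorithms, and the measurement operator is a plain matrix rather than a radio-interferometric one:

```python
import numpy as np

def ista(y, Phi, lam, iters=1000):
    """Forward-backward (ISTA) iterations for
        min_x 0.5 * ||y - Phi x||^2 + lam * ||x||_1:
    a gradient (forward) step on the data-fidelity term followed by the
    soft-thresholding proximal (backward) step on the l1 prior."""
    L = np.linalg.norm(Phi, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        grad = Phi.T @ (Phi @ x - y)          # forward step
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # backward step
    return x
```

The paper's contribution is to split this basic scheme across many data blocks and prior terms so each proximal step runs in parallel; the per-block update retains the same forward-backward shape.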
Motion adaptive Kalman filter for super-resolution
NASA Astrophysics Data System (ADS)
Richter, Martin; Nasse, Fabian; Schröder, Hartmut
2011-01-01
Super-resolution is a sophisticated strategy to enhance the image quality of both low- and high-resolution video, performing tasks like artifact reduction, scaling, and sharpness enhancement in one algorithm, all of which reconstruct high-frequency components (above the Nyquist frequency) in some way. Recursive super-resolution algorithms in particular can fulfill high quality demands because they control the video output using a feedback loop and adapt the result in the next iteration. In addition to excellent output quality, temporal recursive methods are very hardware efficient and therefore attractive even for real-time video processing. A very promising approach is the utilization of Kalman filters, as proposed by Farsiu et al. Reliable motion estimation is crucial for the performance of super-resolution. Robust global motion models are therefore mainly used, but this also limits the applicability of super-resolution algorithms; handling sequences with complex object motion is essential for a wider field of application. Hence, this paper proposes improvements that extend the Kalman filter approach using motion-adaptive variance estimation and segmentation techniques. Experiments confirm the potential of our proposal for ideal and real video sequences with complex motion, and further compare its performance to state-of-the-art methods like trainable filters.
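A toy version of temporal recursive filtering with motion-adaptive variance: each pixel carries a scalar Kalman state whose process noise is inflated wherever the frame difference suggests motion. All noise parameters and the motion threshold below are illustrative, and the sketch omits the motion compensation, segmentation, and upscaling of an actual super-resolution pipeline:

```python
import numpy as np

def temporal_kalman(frames, meas_var=4.0, q_still=0.01, q_move=5.0, thresh=8.0):
    """Pixel-wise temporal Kalman filter with motion-adaptive process noise:
    where a new frame differs strongly from the running estimate (assumed
    motion), the state variance is inflated so the filter follows the
    measurement; static regions are smoothed heavily."""
    est = frames[0].astype(float)
    var = np.full(est.shape, meas_var)
    out = [est.copy()]
    for f in frames[1:]:
        f = f.astype(float)
        q = np.where(np.abs(f - est) > thresh, q_move, q_still)
        var_pred = var + q                       # predict
        gain = var_pred / (var_pred + meas_var)  # Kalman gain
        est = est + gain * (f - est)             # update
        var = (1.0 - gain) * var_pred
        out.append(est.copy())
    return out
```

The adaptive process noise is what resolves the classic trade-off: a fixed small q smears moving edges, a fixed large q forfeits noise reduction in static areas.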
Visualization of Sources in the Universe
NASA Astrophysics Data System (ADS)
Kafatos, M.; Cebral, J. R.
1993-12-01
We have begun to develop a series of visualization tools of importance to the display of astronomical data and have applied them to the visualization of cosmological sources in the recently formed Institute for Computational Sciences and Informatics at GMU. For three-dimensional data one can use a three-dimensional perspective plot of the density surface; in this case the iso-level contours are three-dimensional surfaces. Sophisticated rendering algorithms combined with multiple-source lighting allow us to look carefully at such density contours and to see fine structure on their surfaces. Stereoscopic and transparent rendering can give an even more sophisticated view, with multi-layered surfaces providing information at different levels. We have applied these methods to density surfaces of 3-D data such as 100 clusters of galaxies and 2500 galaxies in the CfA redshift survey. The plots presented are based on three variables: right ascension, declination, and redshift. We have also obtained density structures in 2-D for the distribution of gamma-ray bursts (where distances are unknown) and for the distribution of a variety of sources such as clusters of galaxies. Our techniques allow correlations to be assessed visually.
Undecidability and Irreducibility Conditions for Open-Ended Evolution and Emergence.
Hernández-Orozco, Santiago; Hernández-Quiroz, Francisco; Zenil, Hector
2018-01-01
Is undecidability a requirement for open-ended evolution (OEE)? Using methods derived from algorithmic complexity theory, we propose robust computational definitions of open-ended evolution and the adaptability of computable dynamical systems. Within this framework, we show that decidability imposes absolute limits on the stable growth of complexity in computable dynamical systems. Conversely, systems that exhibit (strong) open-ended evolution must be undecidable, establishing undecidability as a requirement for such systems. Complexity is assessed in terms of three measures: sophistication, coarse sophistication, and busy beaver logical depth. These three complexity measures assign low complexity values to random (incompressible) objects. As time grows, the stated complexity measures allow for the existence of complex states during the evolution of a computable dynamical system. We show, however, that finding these states involves undecidable computations. We conjecture that for similar complexity measures that assign low complexity values, decidability imposes comparable limits on the stable growth of complexity, and that such behavior is necessary for nontrivial evolutionary systems. We show that the undecidability of adapted states imposes novel and unpredictable behavior on the individuals or populations being modeled. Such behavior is irreducible. Finally, we offer an example of a system, first proposed by Chaitin, that exhibits strong OEE.
DSMC Simulations of Irregular Source Geometries for Io's Pele Plume
NASA Astrophysics Data System (ADS)
McDoniel, William; Goldstein, D. B.; Varghese, P. L.; Trafton, L. M.; Buchta, D. A.; Freund, J.; Kieffer, S. W.
2010-10-01
Volcanic plumes on Io represent a complex rarefied flow into a near-vacuum in the presence of gravity. A 3D rarefied gas dynamics method (DSMC) is used to investigate the gas dynamics of such plumes, with a focus on the effects of source geometry on far-field deposition patterns. These deposition patterns, such as the deposition ring's shape and orientation, as well as the presence and shape of ash deposits around the vent, are linked to the shape of the vent from which the plume material arises. We will present three-dimensional simulations for a variety of possible vent geometries for Pele based on observations of the volcano's caldera. One is a curved line source corresponding to a Galileo IR image of a particularly hot region in the volcano's caldera and the other is a large area source corresponding to the entire lava lake at the center of the plume. The curvature of the former is seen to be sufficient to produce the features seen in observations of Pele's deposition pattern, but the particular orientation of the source is found to be such that it cannot match the orientation of these features on Io's surface. The latter corrects the error in orientation while losing some of the structure, suggesting that the actual source may correspond well with part of the shore of the lava lake. In addition, we are collaborating with a group at the University of Illinois at Urbana-Champaign to develop a hybrid method to link the continuum flow beneath Io's surface and very close to the vent to the more rarefied flow in the large volcanic plumes. This work was funded by NASA-PATM grant NNX08AE72G.
In Depth Analysis of AVCOAT TPS Response to a Reentry Flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Titov, E. V.; Kumar, Rakesh; Levin, D. A.
2011-05-20
Modeling of the high-altitude portion of reentry vehicle trajectories with DSMC or statistical BGK solvers requires accurate evaluation of the boundary conditions at the ablating TPS surface. Presented in this article is a model which takes into account the complex ablation physics, including the production of pyrolysis gases and chemistry at the TPS surface. Since the ablation process is time dependent, the modeling of the material response to the high-energy reentry flow starts with the solution of the rarefied flow over the vehicle, which is then loosely coupled with the material response. The objective of the present work is to carry out conjugate thermal analysis by weakly coupling a flow solver to a material thermal response model. The latter model solves the one-dimensional heat conduction equation, accounting for the pyrolysis process that takes place in the reaction zone of an ablative thermal protection system (TPS) material. An estimate of the temperature range within which the pyrolysis reaction (decomposition and volatilization) takes place is obtained from Ref. [1]. The pyrolysis reaction results in the formation of char and the release of gases through the porous charred material. These gases remove an additional amount of heat as they pass through the material, thus cooling it (a process known as transpiration cooling). In the present work, we incorporate the transpiration cooling model in the material thermal response code in addition to the pyrolysis model. The flow in the boundary layer and in the vicinity of the TPS material is in the transitional flow regime. Therefore, we use a previously validated statistical BGK method to model the flow physics in the vicinity of the micro-cracks, since the BGK method allows simulations of flow at pressures higher than can be computed using DSMC.
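The in-depth conduction part of such a material response model reduces, in its simplest form, to the 1-D heat equation with a heated front face and an insulated back face. The sketch below uses placeholder material constants and omits the pyrolysis and transpiration cooling terms the paper adds on top:

```python
import numpy as np

def heat_conduction_1d(n=50, dx=1e-3, alpha=1e-6, q_surf=5e5, k=0.5,
                       t_end=5.0, T0=300.0):
    """Explicit finite-difference solution of dT/dt = alpha * d2T/dx2 with a
    constant surface heat flux q_surf (W/m^2) at x = 0 (-k dT/dx = q_surf)
    and an adiabatic back face: the skeleton of an in-depth TPS thermal
    response solver. Material constants here are placeholders."""
    dt = 0.4 * dx * dx / alpha               # stable explicit time step (r = 0.4)
    T = np.full(n, T0)
    for _ in range(int(t_end / dt)):
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        Tn[0] = Tn[1] + q_surf * dx / k      # flux boundary at the heated surface
        Tn[-1] = Tn[-2]                      # insulated back face
        T = Tn
    return T
```

A production solver would add temperature-dependent properties, an Arrhenius decomposition source term, and the pyrolysis-gas mass flux for transpiration cooling, but the update structure stays the same.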
Heavy particle transport in sputtering systems
NASA Astrophysics Data System (ADS)
Trieschmann, Jan
2015-09-01
This contribution aims to discuss the theoretical background of heavy particle transport in plasma sputtering systems such as direct current magnetron sputtering (dcMS), high power impulse magnetron sputtering (HiPIMS), or multi-frequency capacitively coupled plasmas (MFCCP). Due to inherently low process pressures below one Pa, only kinetic simulation models are suitable. In this work a model appropriate for the description of the transport of film-forming particles sputtered off a target material has been devised within the frame of the OpenFOAM software (specifically dsmcFoam). The three-dimensional model comprises the ejection of sputtered particles into the reactor chamber, their collisional transport through the volume, and their deposition onto the surrounding surfaces (i.e., substrates and walls). An angular-dependent Thompson energy distribution fitted to results from Monte Carlo simulations is assumed initially. Binary collisions are treated via the M1 collision model, a modified variable hard sphere (VHS) model. The dynamics of sputtered and background gas species can be resolved self-consistently following the direct simulation Monte Carlo (DSMC) approach or, whenever possible, simplified based on the test particle method (TPM) with the assumption of a constant, non-stationary background at a given temperature. Using the example of an MFCCP research reactor, the transport of sputtered aluminum is specifically discussed. For this particular configuration and under typical process conditions with argon as the process gas, the transport of aluminum sputtered off a circular target is shown to be governed by a one-dimensional interaction of the imposed and backscattered particle fluxes. The results are analyzed and discussed on the basis of the obtained velocity distribution functions (VDF). This work is supported by the German Research Foundation (DFG) in the frame of the Collaborative Research Centre TRR 87.
Radiolytic Model for Chemical Composition of Europa's Atmosphere and Surface
NASA Technical Reports Server (NTRS)
Cooper, John F.
2004-01-01
The overall objective of the present effort is to produce models for major and selected minor components of Europa's neutral atmosphere in 1-D versus altitude and in 2-D versus altitude and longitude or latitude. A 3-D model versus all three coordinates (altitude, longitude, latitude) will be studied, but development on this is at present limited by the computing facilities available to the investigation team. In this first year we have focused on 1-D modeling with Co-I Valery Shematovich's Direct Simulation Monte Carlo (DSMC) code for water group species (H2O, O2, O, OH) and on 2-D with Co-I Mau Wong's version of a similar code for O2, O, CO, CO2, and Na. Surface source rates of H2O and O2 from sputtering and radiolysis are used in the 1-D model, while observations of CO2 at the Europa surface and of Na detected in a neutral cloud ejected from Europa are used, along with the O2 sputtering rate, to constrain source rates in the 2-D version. With these separate approaches we are investigating a range of processes important to eventual implementation of a comprehensive 3-D atmospheric model, which could be used to understand present observations and develop science requirements for future observations, e.g., from Earth and in Europa orbit. Within the second year we expect to merge the full water group calculations into the 2-D version of the DSMC code, which can then be extended to 3-D, pending availability of computing resources. Another important goal for the second year is the inclusion of sulfur and its more volatile oxides (SO, SO2).
NASA Astrophysics Data System (ADS)
Finklenburg, S.; Thomas, N.; Su, C. C.; Wu, J.-S.
2014-07-01
The near nucleus coma of Comet 9P/Tempel 1 has been simulated with the 3D Direct Simulation Monte Carlo (DSMC) code PDSC++ (Su, C.-C. [2013]. Parallel Direct Simulation Monte Carlo (DSMC) Methods for Modeling Rarefied Gas Dynamics. PhD Thesis, National Chiao Tung University, Taiwan), and the derived column densities have been compared to observations of the water vapour distribution obtained with the infrared imaging spectrometer on the Deep Impact spacecraft (Feaga, L.M., A’Hearn, M.F., Sunshine, J.M., Groussin, O., Farnham, T.L. [2007]. Icarus 191(2), 134-145. http://dx.doi.org/10.1016/j.icarus.2007.04.038). Modelled total production rates are also compared to various observations made at the time of the Deep Impact encounter. Three different models were tested. For all models, the shape model constructed from the Deep Impact observations by Thomas et al. (Thomas, P.C., Veverka, J., Belton, M.J.S., Hidy, A., A’Hearn, M.F., Farnham, T.L., et al. [2007]. Icarus, 187(1), 4-15. http://dx.doi.org/10.1016/j.icarus.2006.12.013) was used. Outgassing depending only on the cosine of the solar insolation angle on each shape-model facet is shown to provide an unsatisfactory model. Models constructed on the basis of active areas suggested by Kossacki and Szutowicz (Kossacki, K., Szutowicz, S. [2008]. Icarus, 195(2), 705-724. http://dx.doi.org/10.1016/j.icarus.2007.12.014) are shown to be superior. The Kossacki and Szutowicz model, however, also shows deficits, which we have sought to improve upon. For the best model we investigate the properties of the outflow.
Aerodynamic characteristics of the upper stages of a launch vehicle in low-density regime
NASA Astrophysics Data System (ADS)
Oh, Bum Seok; Lee, Joon Ho
2016-11-01
Aerodynamic characteristics of the orbital block (the configuration remaining after separation of the nose fairing and the 1st and 2nd stages) and the upper 2-3 stage (the configuration after separation of the 1st stage) of the three-stage KSLV-II (Korea Space Launch Vehicle) at high altitudes in the low-density regime are analyzed with the SMILE code, which is based on the DSMC (Direct Simulation Monte Carlo) method. To validate the SMILE code, axial and normal force coefficients of the Apollo capsule are also calculated, and the results agree very well with data predicted by others. For additional validation and application of the DSMC code, aerodynamic results for simple plate and wedge shapes in the low-density regime are also presented. Generally, aerodynamic characteristics in the low-density regime differ from those in the continuum regime. To understand these differences, aerodynamic coefficients of the upper stages (including the upper 2-3 stage and the orbital block) of the launch vehicle in the low-density regime are analyzed as functions of Mach number and altitude. The predicted axial force coefficients of the upper stages are very high compared to those in the continuum regime. In the case of the orbital block, which flies at very high altitude (above 250 km), all aerodynamic coefficients depend more on velocity variations than on altitude variations. In the case of the upper 2-3 stage, which flies at high altitude (80-150 km), the axial force coefficients and the locations of the center of pressure change little with Knudsen number (altitude), while the normal force and pitching moment coefficients are more strongly affected.
Single image super-resolution reconstruction algorithm based on edge selection
NASA Astrophysics Data System (ADS)
Zhang, Yaolan; Liu, Yijun
2017-05-01
Super-resolution (SR) has become increasingly important because it can generate high-quality high-resolution (HR) images from low-resolution (LR) input images. At present, much work concentrates on developing sophisticated image priors to improve image quality, while paying much less attention to estimating and incorporating the blur model, which can also impact the reconstruction results. We present a new reconstruction method based on edge selection. This method takes full account of the factors that affect blur kernel estimation and accurately estimates the blur process. Compared with state-of-the-art methods, our method has comparable performance.
A guidance and control assessment of three vertical landing options for RLV
NASA Technical Reports Server (NTRS)
Gallaher, M.; Coughlin, D.; Krupp, D.
1995-01-01
The National Aeronautics and Space Administration is considering a vertical lander as a candidate concept for a single-stage-to-orbit reusable launch vehicle (RLV). Three strategies for guiding and controlling the inversion of a reentering RLV from a nose-first attitude to a vertical landing attitude are suggested. Each option is simulated from a common reentry state to touchdown, using a common guidance algorithm and different controllers. Results demonstrate the characteristics that typify and distinguish each concept and help to identify peculiar problems, level of guidance and control sophistication required, feasibility concerns, and areas in which stringent subsystem requirements will be imposed by guidance and control.
Aspen: A microsimulation model of the economy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basu, N.; Pryor, R.J.; Quint, T.
1996-10-01
This report presents Aspen, a new agent-based microeconomic simulation model of the U.S. economy under development at Sandia National Laboratories. The model is notable because it allows a large number of individual economic agents to be modeled at a high level of detail and with a great degree of freedom. Features of Aspen include (a) a sophisticated message-passing system that allows individual pairs of agents to communicate, (b) the use of genetic algorithms to simulate the learning of certain agents, and (c) a detailed financial sector that includes a banking system and a bond market. Results from runs of the model are also presented.
The CMS electron and photon trigger for the LHC Run 2
NASA Astrophysics Data System (ADS)
Dezoort, Gage; Xia, Fan
2017-01-01
The CMS experiment implements a sophisticated two-level triggering system composed of the Level-1 trigger, instrumented by custom-designed hardware boards, and a software High-Level Trigger. A new Level-1 trigger architecture with improved performance is now being used to maintain the thresholds that were used in LHC Run 1 under the more challenging luminosity conditions experienced during Run 2. The upgrades to the calorimetry trigger will be described along with performance data. The algorithms for the selection of final states with electrons and photons, both for precision measurements and for searches for new physics beyond the Standard Model, will be described in detail.
Microbiome Tools for Forensic Science.
Metcalf, Jessica L; Xu, Zhenjiang Z; Bouslimani, Amina; Dorrestein, Pieter; Carter, David O; Knight, Rob
2017-09-01
Microbes are present at every crime scene and have been used as physical evidence for over a century. Advances in DNA sequencing and computational approaches have led to recent breakthroughs in the use of microbiome approaches for forensic science, particularly in the areas of estimating postmortem intervals (PMIs), locating clandestine graves, and obtaining soil and skin trace evidence. Low-cost, high-throughput technologies allow us to accumulate molecular data quickly and to apply sophisticated machine-learning algorithms, building generalizable predictive models that will be useful in the criminal justice system. In particular, integrating microbiome and metabolomic data has excellent potential to advance microbial forensics.
The clinical value of large neuroimaging data sets in Alzheimer's disease.
Toga, Arthur W
2012-02-01
Rapid advances in neuroimaging and cyberinfrastructure technologies have brought explosive growth in the Web-based warehousing, availability, and accessibility of imaging data on a variety of neurodegenerative and neuropsychiatric disorders and conditions. There has been a prolific development and emergence of complex computational infrastructures that serve as repositories of databases and provide critical functionalities such as sophisticated image analysis algorithm pipelines and powerful three-dimensional visualization and statistical tools. The statistical and operational advantages of collaborative, distributed team science in the form of multisite consortia push this approach in a diverse range of population-based investigations.
NASA Astrophysics Data System (ADS)
Iyer, Sridhar
2016-12-01
The ever-increasing global Internet traffic will inevitably lead to a serious upgrade of current optical networks' capacity. The legacy infrastructure can be enhanced not only by increasing capacity but also by adopting advanced modulation formats with increased spectral efficiency at higher data rates. In a transparent mixed-line-rate (MLR) optical network, different line rates, on different wavelengths, can coexist on the same fiber. Migration to data rates higher than 10 Gbps requires the implementation of phase modulation schemes. However, the co-existing on-off keying (OOK) channels cause critical physical layer impairments (PLIs) to the phase-modulated channels, mainly due to cross-phase modulation (XPM), which in turn limits the network's performance. In order to mitigate this effect, a more sophisticated PLI-aware Routing and Wavelength Assignment (PLI-RWA) scheme needs to be adopted. In this paper, we investigate the critical impairment for each data rate and the way it affects the quality of transmission (QoT). In view of the aforementioned, we present a novel dynamic PLI-RWA algorithm for MLR optical networks. The proposed algorithm is compared through simulations with the shortest path and minimum hop routing schemes. The simulation results show that the performance of the proposed algorithm is better than that of the existing schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.
2015-01-12
The National Ignition Facility (NIF) at Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced in an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove the nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
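A thin-plate spline warp model of this kind interpolates control-point displacements exactly while remaining smooth in between. Below is a minimal 2-D fit/apply pair using the generic TPS kernel U(r) = r² log r; the comb-image control points and any production details are not reproduced here:

```python
import numpy as np

def _tps_kernel(r):
    """Thin-plate spline radial basis U(r) = r^2 log r, with U(0) = 0."""
    return np.where(r == 0.0, 0.0, r ** 2 * np.log(r + 1e-20))

def tps_fit(src, dst):
    """Solve for TPS coefficients mapping control points src -> dst, both of
    shape (n, 2); the resulting warp interpolates the controls exactly."""
    n = src.shape[0]
    K = _tps_kernel(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2))
    P = np.hstack([np.ones((n, 1)), src])     # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)

def tps_apply(coef, src, pts):
    """Evaluate the fitted warp at arbitrary points pts of shape (m, 2)."""
    K = _tps_kernel(np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2))
    P = np.hstack([np.ones((pts.shape[0], 1)), pts])
    return K @ coef[:src.shape[0]] + P @ coef[src.shape[0]:]
```

Because the TPS minimizes a bending-energy functional, it extrapolates a smooth distortion field between the comb fiducials rather than introducing spurious high-frequency warping.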
Self-adaptive MOEA feature selection for classification of bankruptcy prediction data.
Gaspar-Cunha, A; Recio, G; Costa, L; Estébanez, C
2014-01-01
Bankruptcy prediction is a vast area of finance and accounting whose importance lies in its relevance for creditors and investors when evaluating the likelihood that a company will go bankrupt. As companies become more complex, they develop sophisticated schemes to hide their real situation, and estimating the credit risk associated with counterparts or predicting bankruptcy becomes harder. Evolutionary algorithms have been shown to be an excellent tool for dealing with complex problems in finance and economics where a large number of irrelevant features are involved. This paper provides a methodology for feature selection in the classification of bankruptcy data sets using an evolutionary multiobjective approach that simultaneously minimises the number of features and maximises the classifier quality measure (e.g., accuracy). The proposed methodology makes use of self-adaptation by applying the feature selection algorithm while simultaneously optimising the parameters of the classifier used. The methodology was applied to four different sets of data. The results obtained show the utility of self-adapting the classifier.
Image segmentation using hidden Markov Gauss mixture models.
Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M
2007-07-01
Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.
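The Gauss-mixture fitting and EM estimation this method relies on can be illustrated with a deliberately tiny version: a two-component 1-D Gaussian mixture fit by plain expectation-maximization. This is a generic sketch, not the authors' HMGMM code, which additionally couples the mixtures through a hidden Markov field:

```python
import math

def em_gmm_1d(data, iters=200):
    """Fit a two-component 1-D Gaussian mixture by expectation-maximization."""
    mu = [min(data), max(data)]          # crude but adequate initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2.0 * var[k]))
                 / math.sqrt(2.0 * math.pi * var[k]) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: responsibility-weighted moment updates
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return mu, var, pi
```

On well-separated data the responsibilities become nearly hard assignments and the component means converge to the per-cluster sample means.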
Optimizing LX-17 Thermal Decomposition Model Parameters with Evolutionary Algorithms
NASA Astrophysics Data System (ADS)
Moore, Jason; McClelland, Matthew; Tarver, Craig; Hsu, Peter; Springer, H. Keo
2017-06-01
We investigate and model the cook-off behavior of LX-17 because this knowledge is critical to understanding system response in abnormal thermal environments. Thermal decomposition of LX-17 has been explored in conventional ODTX (One-Dimensional Time-to-eXplosion), PODTX (ODTX with pressure-measurement), TGA (thermogravimetric analysis), and DSC (differential scanning calorimetry) experiments using varied temperature profiles. These experimental data are the basis for developing multiple reaction schemes with coupled mechanics in LLNL's multi-physics hydrocode, ALE3D (Arbitrary Lagrangian-Eulerian code in 2D and 3D). We employ evolutionary algorithms to optimize reaction rate parameters on high performance computing clusters. Once experimentally validated, this model will be scalable to a number of applications involving LX-17 and can be used to develop more sophisticated experimental methods. Furthermore, the optimization methodology developed herein should be applicable to other high explosive materials. This work was performed under the auspices of the U.S. DOE by LLNL under contract DE-AC52-07NA27344. LLNS, LLC.
pyblocxs: Bayesian Low-Counts X-ray Spectral Analysis in Sherpa
NASA Astrophysics Data System (ADS)
Siemiginowska, A.; Kashyap, V.; Refsdal, B.; van Dyk, D.; Connors, A.; Park, T.
2011-07-01
Typical X-ray spectra have low counts and should be modeled using the Poisson distribution. However, the χ2 statistic is often applied instead, with the data assumed to follow a Gaussian distribution. A variety of weightings of the statistic or binnings of the data are used to overcome the low-count issues. However, such modifications introduce biases and/or a loss of information. Standard modeling packages such as XSPEC and Sherpa provide the Poisson likelihood and allow computation of rudimentary MCMC chains, but so far do not allow setting up a full Bayesian model. We have implemented a sophisticated Bayesian MCMC-based algorithm to carry out spectral fitting of low-count sources in the Sherpa environment. The code is a Python extension to Sherpa and allows fitting a predefined Sherpa model to high-energy X-ray spectral data and other generic data. We present the algorithm and discuss several issues related to the implementation, including flexible definition of priors and allowing for variations in the calibration information.
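The kind of Poisson-likelihood MCMC described here can be reduced to a minimal sketch: a single amplitude parameter with a flat prior, sampled by Metropolis-Hastings. The model shape, step size, and log-normal proposal below are illustrative assumptions, not the pyblocxs implementation:

```python
import math
import random

def log_poisson_like(counts, model):
    """Poisson log-likelihood, dropping the constant log(c!) term."""
    return sum(c * math.log(m) - m for c, m in zip(counts, model))

def mh_sample(counts, shape, n_steps=5000, step=0.05, seed=1):
    """Metropolis-Hastings chain for the amplitude A in model_i = A * shape_i,
    with a flat prior on A > 0 and a log-normal random-walk proposal."""
    rng = random.Random(seed)
    a = 1.0
    lp = log_poisson_like(counts, [a * s for s in shape])
    chain = []
    for _ in range(n_steps):
        cand = a * math.exp(step * rng.gauss(0.0, 1.0))
        lp_c = log_poisson_like(counts, [cand * s for s in shape])
        # the log(cand/a) term is the Hastings correction for the
        # asymmetric (multiplicative) proposal
        if rng.random() < math.exp(min(0.0, lp_c - lp + math.log(cand / a))):
            a, lp = cand, lp_c
        chain.append(a)
    return chain
```

With 20 bins of 30 counts each over a flat shape of 10, the posterior concentrates near A = 3, and the chain mean after burn-in recovers it; a real spectral fit would sample many parameters and fold in the instrument response.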
Efficient discovery of overlapping communities in massive networks
Gopalan, Prem K.; Blei, David M.
2013-01-01
Detecting overlapping communities is essential to analyzing and exploring natural networks such as social networks, biological networks, and citation networks. However, most existing approaches do not scale to the size of networks that we regularly observe in the real world. In this paper, we develop a scalable approach to community detection that discovers overlapping communities in massive real-world networks. Our approach is based on a Bayesian model of networks that allows nodes to participate in multiple communities, and a corresponding algorithm that naturally interleaves subsampling from the network and updating an estimate of its communities. We demonstrate how we can discover the hidden community structure of several real-world networks, including 3.7 million US patents, 575,000 physics articles from the arXiv preprint server, and 875,000 connected Web pages from the Internet. Furthermore, we demonstrate on large simulated networks that our algorithm accurately discovers the true community structure. This paper opens the door to using sophisticated statistical models to analyze massive networks. PMID:23950224
A review of machine learning in obesity.
DeGregory, K W; Kuiper, P; DeSilvio, T; Pleuss, J D; Miller, R; Roginski, J W; Fisher, C B; Harness, D; Viswanath, S; Heymsfield, S B; Dungan, I; Thomas, D M
2018-05-01
Rich sources of obesity-related data arising from sensors, smartphone apps, electronic medical health records and insurance data can bring new insights for understanding, preventing and treating obesity. For such large datasets, machine learning provides sophisticated and elegant tools to describe, classify and predict obesity-related risks and outcomes. Here, we review machine learning methods that predict and/or classify, such as linear and logistic regression, artificial neural networks, deep learning and decision tree analysis. We also review methods that describe and characterize data, such as cluster analysis, principal component analysis, network science and topological data analysis. We introduce each method with a high-level overview followed by examples of successful applications. The algorithms were then applied to National Health and Nutrition Examination Survey data to demonstrate methodology, utility and outcomes. The strengths and limitations of each method were also evaluated. This summary of machine learning algorithms provides a unique overview of the state of data analysis applied specifically to obesity. © 2018 World Obesity Federation.
Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs
NASA Technical Reports Server (NTRS)
Howell, Lauren R.; Allen, B. Danette
2016-01-01
A greater need for sophisticated autonomous piloting systems has arisen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions in which humans are typically incapable of entering, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in autopilots' on-board trajectory planning. Spline interpolation allows for the connection of three-dimensional Euclidean space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
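A Bezier segment of the kind used in such trajectory planners can be evaluated with de Casteljau's algorithm, and C1 continuity between consecutive segments follows from mirroring the last interior control point about the shared waypoint. A small sketch under those assumptions (illustrative, not the flight code):

```python
def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation
    of its control points."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [tuple((1.0 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

def c1_mirror(p2, p3):
    """First interior control point of the next cubic segment for C1 continuity:
    the mirror image of P2 about the shared endpoint P3, so the tangent
    direction and magnitude carry across the waypoint."""
    return tuple(2 * c3 - c2 for c2, c3 in zip(p2, p3))
```

For the cubic with control points (0,0), (0,1), (1,1), (1,0), the midpoint t = 0.5 evaluates to (1/8)P0 + (3/8)P1 + (3/8)P2 + (1/8)P3 = (0.5, 0.75), which the de Casteljau recursion reproduces without ever forming the Bernstein polynomials.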
(3+1)D hydrodynamic simulation of relativistic heavy-ion collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schenke, Bjoern; Jeon, Sangyong; Gale, Charles
2010-07-15
We present MUSIC, an implementation of the Kurganov-Tadmor algorithm for relativistic 3+1 dimensional fluid dynamics in heavy-ion collision scenarios. This Riemann-solver-free, second-order, high-resolution scheme is characterized by a very small numerical viscosity and its ability to treat shocks and discontinuities very well. We also incorporate a sophisticated algorithm for the determination of the freeze-out surface using a three-dimensional triangulation of the hypersurface. Implementing a recent lattice-based equation of state, we compute p_T spectra and pseudorapidity distributions for Au+Au collisions at √s = 200 GeV and present results for the anisotropic flow coefficients v_2 and v_4 as a function of both p_T and pseudorapidity η. We were able to determine v_4 with high numerical precision, finding that it does not strongly depend on the choice of initial condition or equation of state.
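The structure of the Kurganov-Tadmor scheme is easiest to see in one dimension. The sketch below applies a semi-discrete KT step with minmod-limited reconstruction to the inviscid Burgers equation, a scalar stand-in for the relativistic hydrodynamic equations MUSIC solves (forward Euler in time for brevity; a production code would use a higher-order time integrator):

```python
def minmod(a, b):
    """Slope limiter: the smaller-magnitude argument, or 0 at extrema,
    which suppresses spurious oscillations near shocks."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def kt_step(u, dt, dx):
    """One forward-Euler step of the semi-discrete Kurganov-Tadmor scheme for
    Burgers' equation u_t + (u^2/2)_x = 0 on a periodic 1-D grid."""
    n = len(u)
    f = lambda v: 0.5 * v * v
    # minmod-limited slope in each cell
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # numerical flux at each interface i+1/2
    flux = []
    for i in range(n):
        j = (i + 1) % n
        ul = u[i] + 0.5 * s[i]       # left state, reconstructed from cell i
        ur = u[j] - 0.5 * s[j]       # right state, reconstructed from cell i+1
        a = max(abs(ul), abs(ur))    # local wave speed, since f'(u) = u
        flux.append(0.5 * (f(ul) + f(ur)) - 0.5 * a * (ur - ul))
    # conservative update; flux[i-1] wraps periodically for i = 0
    return [u[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]
```

Because the update is in conservation form, the cell sum is preserved to rounding error even as the square pulse steepens into a shock, which is exactly the property that lets the scheme capture discontinuities without a Riemann solver.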
Track finding in ATLAS using GPUs
NASA Astrophysics Data System (ADS)
Mattmann, J.; Schmitt, C.
2012-12-01
The reconstruction and simulation of collision events is a major task in modern HEP experiments, involving several tens of thousands of standard CPUs. On the other hand, graphics processors (GPUs) have become much more powerful and by far outperform standard CPUs in terms of floating point operations due to their massively parallel approach. The usage of these GPUs could therefore significantly reduce the overall reconstruction time per event or allow for the usage of more sophisticated algorithms. In this paper, track finding in the ATLAS experiment is used as an example of how GPUs can be used in this context: the implementation on the GPU requires a change in the algorithmic flow to allow the code to work in the rather limited GPU environment in terms of memory, cache, and transfer speed from and to the GPU, and to make use of the massively parallel computation. Both the specific implementation of parts of the ATLAS track reconstruction chain and the performance improvements obtained will be discussed.
Nawaz, Tabassam; Mehmood, Zahid; Rashid, Muhammad; Habib, Hafiz Adnan
2018-01-01
Recent research on speech segregation and music fingerprinting has led to improvements in speech segregation and music identification algorithms. Speech and music segregation generally involves the identification of music followed by speech segregation. However, music segregation becomes a challenging task in the presence of noise. This paper proposes a novel method of speech segregation for unlabelled stationary noisy audio signals using the deep belief network (DBN) model. The proposed method successfully segregates a music signal from noisy audio streams. A recurrent neural network (RNN)-based hidden layer segregation model is applied to remove stationary noise. Dictionary-based Fisher algorithms are employed for speech classification. The proposed method is tested on three datasets (TIMIT, MIR-1K, and MusicBrainz), and the results indicate the robustness of the proposed method for speech segregation. The qualitative and quantitative analyses carried out on the three datasets demonstrate the efficiency of the proposed method compared to state-of-the-art speech segregation and classification-based methods. PMID:29558485
The HMMER Web Server for Protein Sequence Similarity Search.
Prakash, Ananth; Jeffryes, Matt; Bateman, Alex; Finn, Robert D
2017-12-08
Protein sequence similarity search is one of the most commonly used bioinformatics methods for identifying evolutionarily related proteins. In general, sequences that are evolutionarily related share some degree of similarity, and sequence-search algorithms use this principle to identify homologs. The requirement for a fast and sensitive sequence search method led to the development of the HMMER software, which in the latest version (v3.1) uses a combination of sophisticated acceleration heuristics and mathematical and computational optimizations to enable the use of profile hidden Markov models (HMMs) for sequence analysis. The HMMER web server provides a common platform by linking the HMMER algorithms to databases, thereby enabling the search for homologs, as well as providing sequence and functional annotation by linking to external databases. This unit describes three basic protocols and two alternate protocols that explain how to use the HMMER web server with various input formats and user-defined parameters. Copyright © 2017 John Wiley & Sons, Inc.
Dosimetric verification of radiotherapy treatment planning systems in Serbia: national audit
2012-01-01
Background: Independent external audits play an important role in the quality assurance programme in radiation oncology. The audit supported by the IAEA in Serbia was designed to review the whole chain of activities in the 3D conformal radiotherapy (3D-CRT) workflow, from patient data acquisition to treatment planning and dose delivery. The audit was based on the IAEA recommendations and focused on the dosimetry part of the treatment planning and delivery processes. Methods: The audit was conducted in three radiotherapy departments of Serbia. An anthropomorphic phantom was scanned with a computed tomography (CT) unit, and treatment plans for eight different test cases involving various beam configurations suggested by the IAEA were prepared on local treatment planning systems (TPSs). The phantom was irradiated following the treatment plans for these test cases, and doses at specific points were measured with an ionization chamber. The differences between the measured and calculated doses were reported. Results: The measurements were conducted for different photon beam energies and TPS calculation algorithms. The deviations between the measured and calculated values for all test cases made with advanced algorithms were within the agreement criteria, while larger deviations were observed for simpler algorithms. The number of measurements with results outside the agreement criteria increased with the beam energy and decreased with TPS calculation algorithm sophistication. Also, a few errors in the basic dosimetry data in the TPSs were detected and corrected. Conclusions: The audit helped the users to better understand the operational features and limitations of their TPSs and resulted in increased confidence in dose calculation accuracy using TPSs. The audit results indicated the shortcomings of simpler algorithms for the test cases performed, and therefore the transition to more advanced algorithms is highly desirable. PMID:22971539
Dosimetric verification of radiotherapy treatment planning systems in Serbia: national audit.
Rutonjski, Laza; Petrović, Borislava; Baucal, Milutin; Teodorović, Milan; Cudić, Ozren; Gershkevitsh, Eduard; Izewska, Joanna
2012-09-12
Independent external audits play an important role in quality assurance programme in radiation oncology. The audit supported by the IAEA in Serbia was designed to review the whole chain of activities in 3D conformal radiotherapy (3D-CRT) workflow, from patient data acquisition to treatment planning and dose delivery. The audit was based on the IAEA recommendations and focused on dosimetry part of the treatment planning and delivery processes. The audit was conducted in three radiotherapy departments of Serbia. An anthropomorphic phantom was scanned with a computed tomography unit (CT) and treatment plans for eight different test cases involving various beam configurations suggested by the IAEA were prepared on local treatment planning systems (TPSs). The phantom was irradiated following the treatment plans for these test cases and doses in specific points were measured with an ionization chamber. The differences between the measured and calculated doses were reported. The measurements were conducted for different photon beam energies and TPS calculation algorithms. The deviation between the measured and calculated values for all test cases made with advanced algorithms were within the agreement criteria, while the larger deviations were observed for simpler algorithms. The number of measurements with results outside the agreement criteria increased with the increase of the beam energy and decreased with TPS calculation algorithm sophistication. Also, a few errors in the basic dosimetry data in TPS were detected and corrected. The audit helped the users to better understand the operational features and limitations of their TPSs and resulted in increased confidence in dose calculation accuracy using TPSs. The audit results indicated the shortcomings of simpler algorithms for the test cases performed and, therefore the transition to more advanced algorithms is highly desirable.
Neuroprosthetic Decoder Training as Imitation Learning
Merel, Josh; Paninski, Liam; Cunningham, John P.
2016-01-01
Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user’s intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user’s intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector. PMID:27191387
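The DAgger meta-algorithm described above reduces to a short loop: roll out the current decoder, label every visited state with the oracle's action, and refit on the aggregated dataset. A toy 1-D sketch under invented assumptions (the plant, the proportional oracle, and the linear fit below are illustrative stand-ins for a neural decoder and intention surrogate):

```python
import random

def env_step(s, a):
    """Toy 1-D plant: the state drifts by a fraction of the commanded action."""
    return max(-2.0, min(2.0, s + 0.1 * a))

def expert(s):
    """Oracle policy standing in for the user's intended movement:
    proportional control toward the origin."""
    return -2.0 * s

def fit_linear(data):
    """Least-squares gain through the origin for action = k * state."""
    den = sum(s * s for s, _ in data)
    k = sum(s * a for s, a in data) / den if den else 0.0
    return lambda s: k * s

def dagger(n_iters=5, horizon=20, seed=0):
    """Dataset Aggregation: roll out the current policy, label the states it
    actually visits with the expert's action, refit on the growing dataset."""
    rng = random.Random(seed)
    data = []                      # aggregated (state, expert action) pairs
    policy = lambda s: 0.0         # trivial initial policy
    for _ in range(n_iters):
        s = rng.uniform(-1.0, 1.0)
        for _ in range(horizon):
            data.append((s, expert(s)))    # expert labels the visited state
            s = env_step(s, policy(s))     # but the learner's policy drives
        policy = fit_linear(data)
    return policy
```

The key DAgger property is visible even in this toy: states are generated by the learner's own policy, so the training distribution matches what the decoder will actually encounter, while the labels always come from the oracle.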
A novel constructive-optimizer neural network for the traveling salesman problem.
Saadatmand-Tarzjan, Mahdi; Khademi, Morteza; Akbarzadeh-T, Mohammad-R; Moghaddam, Hamid Abrishami
2007-08-01
In this paper, a novel constructive-optimizer neural network (CONN) is proposed for the traveling salesman problem (TSP). CONN uses a feedback structure similar to Hopfield-type neural networks and a competitive training algorithm similar to Kohonen-type self-organizing maps (K-SOMs). Consequently, CONN is composed of a constructive part, which grows the tour, and an optimizer part, which optimizes it. In the training algorithm, an initial tour is created first and introduced to CONN. Then, CONN is trained in the constructive phase, adding a number of cities to the tour. Next, the training algorithm switches to the optimizer phase, optimizing the current tour by displacing the tour cities. After convergence in this phase, the training algorithm switches back to the constructive phase, and the process continues until all cities have been added to the tour. Furthermore, we investigate a relationship between the number of TSP cities and the number of cities to be added in each constructive phase. CONN was tested on nine sets of benchmark TSPs from TSPLIB to demonstrate its performance and efficiency. It performed better than several typical neural networks (NNs), including KNIES_TSP_Local, KNIES_TSP_Global, Budinich's SOM, Co-Adaptive Net, and the multivalued Hopfield network, as well as computationally comparable variants of the simulated annealing algorithm, in terms of both CPU time and accuracy. Furthermore, CONN converged considerably faster than expanding SOM and evolved integrated SOM, and generated shorter tours compared to KNIES_DECOMPOSE. Although CONN is not yet comparable in terms of accuracy with some sophisticated, computationally intensive algorithms, it converges significantly faster than they do. Generally speaking, CONN provides the best compromise between CPU time and accuracy among currently reported NNs for the TSP.
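The constructive/optimizer alternation in CONN can be mimicked with much simpler classical ingredients: nearest-neighbour insertion for the growth phase and 2-opt segment reversal for the optimization phase. The sketch below is that simplified analogue, not CONN itself:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(tour, pts):
    """Length of the closed tour visiting pts in the order given by tour."""
    return sum(dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def construct_nearest(pts):
    """Constructive phase: grow the tour by repeatedly appending the
    nearest unvisited city."""
    tour = [0]
    rest = set(range(1, len(pts)))
    while rest:
        last = tour[-1]
        nxt = min(rest, key=lambda j: dist(pts[last], pts[j]))
        tour.append(nxt)
        rest.remove(nxt)
    return tour

def two_opt(tour, pts):
    """Optimizer phase: reverse tour segments as long as doing so shortens
    the tour (first-improvement 2-opt)."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour
```

For four cities on a unit square the construction already finds the optimal perimeter of 4, and 2-opt leaves it unchanged; on harder instances the alternation between growing and repairing is what CONN automates with its competitive training rule.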
DSMC Simulations in Support of the Columbia Shuttle Orbiter Accident Investigation
NASA Technical Reports Server (NTRS)
Boyles, Katie; LeBeau, Gerald J.; Gallis, Michael A.
2004-01-01
Three-dimensional Direct Simulation Monte Carlo simulations of Columbia Shuttle Orbiter flight STS-107 are presented. The aim of this work is to determine the aerodynamic and heating behavior of the Orbiter during aerobraking maneuvers and to provide piecewise integration of key scenario events to assess the plausibility of the candidate failure scenarios. The flight of the Orbiter is examined at two altitudes: 350 kft and 300 kft. The flowfield around the Orbiter and the heat transfer to it are calculated for the undamaged configuration. The flow inside the wing for assumed damage to the leading edge in the form of a 10-inch hole is studied.
Direct simulation with vibration-dissociation coupling
NASA Technical Reports Server (NTRS)
Hash, David B.; Hassan, H. A.
1992-01-01
The majority of implementations of the Direct Simulation Monte Carlo (DSMC) method of Bird do not account for vibration-dissociation coupling. Haas and Boyd have proposed the vibrationally-favored dissociation model to accomplish this task. This model requires measurements of induction distance to determine model constants. A more general expression has been derived that does not require any experimental input. The model is used to calculate one-dimensional shock waves in nitrogen and the flow past a lunar transfer vehicle (LTV). For the conditions considered in the simulation, the influence of vibration-dissociation coupling on heat transfer in the stagnation region of the LTV can be significant.
N2 Temperature of Vibration instrument for sounding rocket observation in the lower thermosphere
NASA Astrophysics Data System (ADS)
Kurihara, J.; Iwagami, N.; Oyama, K.-I.
2013-11-01
The N2 Temperature of Vibration (NTV) instrument was developed to study the energetics and structure of the lower thermosphere by applying the Electron Beam Fluorescence (EBF) technique to measurements of the vibrational temperature, rotational temperature and number density of atmospheric N2. Sounding rocket experiments using this instrument have been conducted four times, including one failure of the electron gun. Aerodynamic effects on the measurement caused by the supersonic motion of the rocket were analyzed quantitatively using a three-dimensional Direct Simulation Monte Carlo (DSMC) simulation, and the absolute density profile was obtained through correction of the spin modulation.
Neo-Sophistic Rhetorical Theory: Sophistic Precedents for Contemporary Epistemic Rhetoric.
ERIC Educational Resources Information Center
McComiskey, Bruce
Interest in the sophists has recently intensified among rhetorical theorists, culminating in the notion that rhetoric is epistemic. Epistemic rhetoric has its first and deepest roots in sophistic epistemological and rhetorical traditions, so that the view of rhetoric as epistemic is now being dubbed "neo-sophistic." In epistemic…
NASA Technical Reports Server (NTRS)
Swayze, Gregg A.; Clark, Roger N.
1995-01-01
The rapid development of sophisticated imaging spectrometers and the resulting flood of imaging spectrometry data have prompted a rapid parallel development of spectral-information extraction technology. Even though these extraction techniques have evolved along different lines (band-shape fitting, endmember unmixing, near-infrared analysis, neural-network fitting, and expert systems, to name a few), all are limited by the spectrometer's signal-to-noise ratio (S/N) and spectral resolution in producing useful information. This study grew from a need to quantitatively determine what effects these parameters have on our ability to differentiate between mineral absorption features using a band-shape fitting algorithm. We chose to evaluate the AVIRIS, HYDICE, MIVIS, GERIS, VIMS, NIMS, and ASTER instruments because they collect data over wide S/N and spectral-resolution ranges. The study evaluates the performance of the Tricorder algorithm in differentiating between mineral spectra in the 0.4-2.5 micrometer spectral region. The strength of the Tricorder algorithm is in its ability to produce an easily understood comparison of band shape that can concentrate on small relevant portions of the spectra, giving it an advantage over most unmixing schemes, and in that it need not spend large amounts of time reoptimizing each time a new mineral component is added to its reference library, as is the case with neural-network schemes. We believe the flexibility of the Tricorder algorithm is unparalleled among spectral-extraction techniques and that the results from this study, although dealing with minerals, will have direct applications to spectral identification in other disciplines.
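A band-shape comparison in the spirit described above can be sketched as continuum removal followed by a least-squares depth fit of each reference band shape, with the best match chosen by smallest residual. This is a generic analogue, not the actual Tricorder algorithm, and the mineral names are placeholders:

```python
def continuum_removed(spectrum):
    """Divide out the straight-line continuum between the band endpoints."""
    n = len(spectrum)
    line = [spectrum[0] + (spectrum[-1] - spectrum[0]) * i / (n - 1)
            for i in range(n)]
    return [s / l for s, l in zip(spectrum, line)]

def band_fit(obs, ref):
    """Least-squares depth scaling of a reference band shape:
    fit obs ≈ 1 + k*(ref - 1); return (k, rms residual)."""
    o = [x - 1.0 for x in continuum_removed(obs)]
    r = [x - 1.0 for x in continuum_removed(ref)]
    denom = sum(x * x for x in r)
    k = sum(a * b for a, b in zip(o, r)) / denom if denom else 0.0
    resid = [a - k * b for a, b in zip(o, r)]
    rms = (sum(x * x for x in resid) / len(resid)) ** 0.5
    return k, rms

def best_library_match(obs, library):
    """Pick the library spectrum whose scaled band shape leaves the
    smallest fit residual against the observation."""
    return min(library, key=lambda name: band_fit(obs, library[name])[1])
```

Because the residual measures band *shape* rather than overall brightness, a spectrum that is simply a shallower version of a library band still matches it perfectly, while a band at the wrong wavelength position leaves a large residual; noise and coarser sampling degrade exactly this discrimination, which is what the study quantifies.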
Sorokine, Alexandre; Schlicher, Bob G.; Ward, Richard C.; ...
2015-05-22
This paper describes an original approach to generating scenarios for the purpose of testing the algorithms used to detect special nuclear materials (SNM), one that incorporates the use of ontologies. Separating the signal of SNM from the background requires sophisticated algorithms. To assist in developing such algorithms, there is a need for scenarios that capture a very wide range of variables affecting the detection process, depending on the type of detector being used. To provide such a capability, we developed an ontology-driven information system (ODIS) for generating scenarios that can be used in testing algorithms for SNM detection. The ontology-driven scenario generator (ODSG) is an ODIS based on information supplied by subject matter experts and other documentation. The details of the creation of the ontology, the development of the ontology-driven information system, and the design of the web user interface (UI) are presented, along with specific examples of scenarios generated using the ODSG. We demonstrate that the paradigm behind the ODSG is capable of addressing the problem of semantic complexity at both the user and developer levels. Compared to traditional approaches, an ODIS provides benefits such as faithful representation of the users' domain conceptualization, simplified management of very large and semantically diverse datasets, and the ability to handle frequent changes to the application and the UI. Furthermore, the approach makes possible the generation of a much larger number of specific scenarios based on limited user-supplied information.
NASA Technical Reports Server (NTRS)
Challa, M.; Natanson, G.
1998-01-01
Two different algorithms - a deterministic magnetic-field-only algorithm and a Kalman filter for gyroless spacecraft - are used to estimate the attitude and rates of the Rossi X-Ray Timing Explorer (RXTE) using only measurements from a three-axis magnetometer. The performance of these algorithms is examined using in-flight data from various scenarios. In particular, significant enhancements in accuracy are observed when the telemetered magnetometer data are accurately calibrated using a recently developed calibration algorithm. Interesting features observed in these studies of the inertial-pointing RXTE include a remarkable sensitivity of the filter to the numerical values of the noise parameters and relatively long convergence time spans. Similarly, the accuracy of the deterministic scheme is noticeably lower as a result of reduced rates of change of the body-fixed geomagnetic field. Preliminary results show per-axis filter attitude accuracies ranging between 0.1 and 0.5 deg and rate accuracies between 0.001 deg/sec and 0.005 deg/sec, whereas the deterministic method needs more sophisticated techniques for smoothing time derivatives of the measured geomagnetic field to clearly distinguish both attitude and rate solutions from the numerical noise. Also included is a new theoretical development in the deterministic algorithm: the transformation of a transcendental equation in the original theory into an 8th-order polynomial equation. It is shown that this 8th-order polynomial reduces to quadratic equations in the two limiting cases (infinitely high wheel momentum and constant rates) discussed in previous publications.
NASA Astrophysics Data System (ADS)
Al-Hallaq, H. A.; Reft, C. S.; Roeske, J. C.
2006-03-01
The dosimetric effects of bone and air heterogeneities in head and neck IMRT treatments were quantified. An anthropomorphic RANDO phantom was CT-scanned with 16 thermoluminescent dosimeter (TLD) chips placed in and around the target volume. A standard IMRT plan generated with CORVUS was used to irradiate the phantom five times. On average, measured dose was 5.1% higher than calculated dose. Measurements were higher by 7.1% near the heterogeneities and by 2.6% in tissue. The dose difference between measurement and calculation was outside the 95% measurement confidence interval for six TLDs. Using CORVUS' heterogeneity correction algorithm, the average difference between measured and calculated doses decreased by 1.8% near the heterogeneities and by 0.7% in tissue. Furthermore, dose differences lying outside the 95% confidence interval were eliminated for five of the six TLDs. TLD doses recalculated by Pinnacle3's convolution/superposition algorithm were consistently higher than CORVUS doses, a trend that matched our measured results. These results indicate that the dosimetric effects of air cavities are larger than those of bone heterogeneities, thereby leading to a higher delivered dose compared to CORVUS calculations. More sophisticated algorithms such as convolution/superposition or Monte Carlo should be used for accurate tailoring of IMRT dose in head and neck tumours.
The successively temporal error concealment algorithm using error-adaptive block matching principle
NASA Astrophysics Data System (ADS)
Lee, Yu-Hsuan; Wu, Tsai-Hsing; Chen, Chao-Chyun
2014-09-01
Generally, temporal error concealment (TEC) adopts the blocks around the corrupted block (CB) as the search pattern to find the best-match block in the previous frame. Once the CB is recovered, it is referred to as the recovered block (RB). Although the RB can serve as the search pattern to find the best-match block of another CB, the RB is not identical to its original block (OB). The error between the RB and its OB limits the performance of TEC. The successively temporal error concealment (STEC) algorithm is proposed to alleviate this error. The STEC procedure consists of tier-1 and tier-2. Tier-1 divides a corrupted macroblock into four corrupted 8 × 8 blocks and generates a recovering order for them. The corrupted 8 × 8 block ranked first in the recovering order is recovered in tier-1, and the remaining 8 × 8 CBs are recovered in tier-2 along the recovering order. In tier-2, the error-adaptive block matching principle (EA-BMP) is proposed for using the RB as the search pattern to recover the remaining corrupted 8 × 8 blocks. The proposed STEC outperforms sophisticated TEC algorithms by at least 0.3 dB in average PSNR at a packet error rate of 20%.
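The best-match search underlying block-matching TEC can be illustrated with a sum-of-absolute-differences (SAD) scan over a search window in the previous frame. The sketch below is a generic simplification under stated assumptions (exhaustive scan, SAD metric, hypothetical search radius); it is not the EA-BMP of the paper:

```python
import numpy as np

def best_match(prev_frame, pattern, top_left, search_radius=8):
    """Find the displacement minimizing the sum of absolute
    differences (SAD) between the search pattern and candidate
    blocks in the previous frame (simplified TEC matching)."""
    h, w = pattern.shape
    y0, x0 = top_left
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = y0 + dy, x0 + dx
            # skip candidates that fall outside the frame
            if y < 0 or x < 0 or y + h > prev_frame.shape[0] or x + w > prev_frame.shape[1]:
                continue
            cand = prev_frame[y:y + h, x:x + w]
            sad = np.abs(cand.astype(int) - pattern.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

The recovered block then simply copies the best-match block; STEC's contribution is to reuse already-recovered blocks as search patterns while adapting to their error.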
Reliable fusion of control and sensing in intelligent machines. Thesis
NASA Technical Reports Server (NTRS)
Mcinroy, John E.
1991-01-01
Although robotics research has produced a wealth of sophisticated control and sensing algorithms, very little research has been aimed at reliably combining these control and sensing strategies so that a specific task can be executed. To improve the reliability of robotic systems, analytic techniques are developed for calculating the probability that a particular combination of control and sensing algorithms will satisfy the required specifications. The probability can then be used to assess the reliability of the design. An entropy formulation is first used to quickly eliminate designs not capable of meeting the specifications. Next, a framework for analyzing reliability based on the first-order second-moment methods of structural engineering is proposed. To ensure performance over an interval of time, lower bounds on the reliability of meeting a set of quadratic specifications with a Gaussian discrete time-invariant control system are derived. A case study analyzing visual positioning in a robotic system is considered. The reliability of meeting timing and positioning specifications in the presence of camera pixel truncation, forward and inverse kinematic errors, and Gaussian joint measurement noise is determined. This information is used to select a visual sensing strategy, a kinematic algorithm, and a discrete compensator capable of accomplishing the desired task. Simulation results using PUMA 560 kinematic and dynamic characteristics are presented.
Trends in data processing of comprehensive two-dimensional chromatography: state of the art.
Matos, João T V; Duarte, Regina M B O; Duarte, Armando C
2012-12-01
The operation of advanced chromatographic systems, namely comprehensive two-dimensional (2D) chromatography coupled to multidimensional detectors, generates a great deal of data that require careful processing in order to characterize and quantify the analytes under study as completely as possible. The aim of this review is to identify the main trends, research needs and gaps in the techniques for data processing of multidimensional data sets obtained from comprehensive 2D chromatography. The following topics have been identified as the most promising for new developments in the near future: data acquisition and handling, peak detection and quantification, measurement of overlapping of 2D peaks, and data analysis software for 2D chromatography. The rationale supporting most of the data processing techniques is based on the generalization of one-dimensional (1D) chromatography, although some algorithms, such as the inverted watershed algorithm, use the 2D chromatographic data as such. However, for processing more complex N-way data there is a need for more sophisticated techniques. Apart from adopting other concepts from 1D chromatography, which have not yet been tested for 2D chromatography, there is still room for improvements and developments in algorithms and software for dealing with comprehensive 2D chromatographic data.
An efficient self-organizing map designed by genetic algorithms for the traveling salesman problem.
Jin, Hui-Dong; Leung, Kwong-Sak; Wong, Man-Leung; Xu, Z B
2003-01-01
As a typical combinatorial optimization problem, the traveling salesman problem (TSP) has attracted extensive research interest. In this paper, we develop a self-organizing map (SOM) with a novel learning rule. It is called the integrated SOM (ISOM) since its learning rule integrates the three learning mechanisms in the SOM literature. Within a single learning step, the excited neuron is first dragged toward the input city, then pushed toward the convex hull of the TSP, and finally drawn toward the midpoint of its two neighboring neurons. A genetic algorithm is used to determine the coordination among the three learning mechanisms as well as a suitable parameter setting. The evolved ISOM (eISOM) is examined on three sets of TSP instances to demonstrate its power and efficiency. The computational complexity of the eISOM is quadratic, which is comparable to other SOM-like neural networks. Moreover, the eISOM can generate more accurate solutions than several typical approaches to the TSP, including the SOM developed by Budinich, the expanding SOM, the convex elastic net, and the FLEXMAP algorithm. Though its solution accuracy is not yet comparable to some sophisticated heuristics, the eISOM is one of the most accurate neural networks for the TSP.
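The three-part learning step described above can be sketched as follows. The learning rates `mu1`..`mu3` and the optional hull point are hypothetical placeholders; the paper tunes the actual coordination with a genetic algorithm:

```python
import numpy as np

def isom_step(neurons, city, winner, mu1=0.6, mu2=0.1, mu3=0.2, hull_point=None):
    """One illustrative ISOM-style learning step: (1) drag the
    winning neuron toward the input city, (2) push it toward a
    point on the convex hull, (3) draw it toward the midpoint of
    its two ring neighbours. neurons is an (n, 2) ring of city-
    space coordinates; mu1..mu3 are assumed learning rates."""
    n = len(neurons)
    w = neurons[winner]
    w = w + mu1 * (city - w)                      # (1) toward the city
    if hull_point is not None:
        w = w + mu2 * (hull_point - w)            # (2) toward the convex hull
    mid = 0.5 * (neurons[(winner - 1) % n] + neurons[(winner + 1) % n])
    w = w + mu3 * (mid - w)                       # (3) toward neighbour midpoint
    neurons[winner] = w
    return neurons
```

After training, reading the cities off in ring order of their winning neurons yields a tour, as in other SOM approaches to the TSP.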
Variables selection methods in near-infrared spectroscopy.
Xiaobo, Zou; Jiewen, Zhao; Povey, Malcolm J W; Holmes, Mel; Hanpin, Mao
2010-05-14
Near-infrared (NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, such as the petrochemical, pharmaceutical, environmental, clinical, agricultural, food and biomedical sectors, during the past 15 years. A NIR spectrum of a sample is typically measured by modern scanning instruments at hundreds of equally spaced wavelengths. The large number of spectral variables in most data sets encountered in NIR spectral chemometrics often renders the prediction of a dependent variable unreliable. Recently, considerable effort has been directed towards developing and evaluating different procedures that objectively identify variables which contribute useful information and/or eliminate variables containing mostly noise. This review focuses on variable selection methods in NIR spectroscopy. These include classical approaches, such as the manual approach (knowledge-based selection) and "Univariate" and "Sequential" selection methods; sophisticated methods such as the successive projections algorithm (SPA) and uninformative variable elimination (UVE); elaborate search-based strategies such as simulated annealing (SA), artificial neural networks (ANN) and genetic algorithms (GAs); and interval-based algorithms such as interval partial least squares (iPLS), window PLS and iterative PLS. Wavelength selection with B-splines, Kalman filtering, Fisher's weights and Bayesian methods is also mentioned. Finally, the websites of some variable selection software and toolboxes for non-commercial use are given.
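As an illustration of the simplest class above, a univariate step can rank wavelengths by their absolute correlation with the response and keep the strongest ones. This is a generic sketch of the idea, not any specific method from the review:

```python
import numpy as np

def univariate_select(X, y, n_keep):
    """Rank spectral variables (columns of X) by absolute Pearson
    correlation with the response y and return the indices of the
    n_keep most correlated wavelengths."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12)
    return np.argsort(-np.abs(corr))[:n_keep]
```

Methods such as UVE, SPA or iPLS replace this single-variable criterion with multivariate, model-based ones.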
Fast human pose estimation using 3D Zernike descriptors
NASA Astrophysics Data System (ADS)
Berjón, Daniel; Morán, Francisco
2012-03-01
Markerless video-based human pose estimation algorithms face a high-dimensional problem that is frequently broken down into several lower-dimensional ones by estimating the pose of each limb separately. However, in order to do so they need to reliably locate the torso, for which they typically rely on time coherence and tracking algorithms. Loss of tracking usually results in catastrophic failure of the process, requiring human intervention and thus precluding usage in real-time applications. We propose a very fast rough pose estimation scheme based on global shape descriptors built on 3D Zernike moments. Using an articulated model that we configure in many poses, a large database of descriptor/pose pairs can be computed off-line. Thus, the only steps that must be done on-line are the extraction of the descriptors for each input volume and a search against the database to get the most likely poses. While the result of such a process is not a fine pose estimation, it can be useful to help more sophisticated algorithms to regain track or make more educated guesses when creating new particles in particle-filter-based tracking schemes. We have achieved a performance of about ten fps on a single computer using a database of about one million entries.
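The off-line database / on-line lookup scheme described above amounts to a nearest-neighbour search in descriptor space. In the sketch below, `descriptor_fn` stands in for the 3D Zernike moment computation, which is not shown; the brute-force distance scan is an illustrative simplification:

```python
import numpy as np

def build_database(poses, descriptor_fn):
    """Offline: compute a global shape descriptor for each candidate pose."""
    return np.stack([descriptor_fn(p) for p in poses])

def query(db, poses, d, k=3):
    """Online: return the k poses whose descriptors are closest
    (Euclidean distance) to the query descriptor d."""
    dist = np.linalg.norm(db - d, axis=1)
    idx = np.argsort(dist)[:k]
    return [poses[i] for i in idx], dist[idx]
```

At the million-entry scale reported in the paper, the linear scan would in practice be replaced by an approximate nearest-neighbour index.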
Development of Nanotechnology for X-Ray Astronomy Instrumentation
NASA Technical Reports Server (NTRS)
Schattenburg, Mark L.
2004-01-01
This Research Grant provides support for development of nanotechnology for x-ray astronomy instrumentation. MIT has made significant progress in several development areas. In the last year we have made considerable progress in demonstrating the high-fidelity patterning and replication of x-ray reflection gratings. We developed a process for fabricating blazed gratings in silicon with extremely smooth and sharp sawtooth profiles, and developed a nanoimprint process for replication. We also developed sophisticated new fixturing for holding thin optics during metrology without causing distortion. We developed a new image processing algorithm for our Shack-Hartmann tool that uses Zernike polynomials. This has resulted in much more accurate and repeatable measurements on thin optics.
NASA Astrophysics Data System (ADS)
Emter, Thomas; Petereit, Janko
2014-05-01
An integrated multi-sensor fusion framework for localization and mapping for autonomous navigation in unstructured outdoor environments, based on extended Kalman filters (EKF), is presented. The sensors for localization include an inertial measurement unit, a GPS, a fiber optic gyroscope, and wheel odometry. Additionally, a 3D LIDAR is used for simultaneous localization and mapping (SLAM). A 3D map is built while concurrently a localization in the 2D map established so far is estimated from the current scan of the LIDAR. Despite the longer run-time of the SLAM algorithm compared to the EKF update, a high update rate is still guaranteed by carefully joining and synchronizing two parallel localization estimators.
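The EKF at the core of such a fusion framework alternates prediction with per-sensor measurement updates. The following is a generic linear-Gaussian sketch; the state, models and noise values are illustrative assumptions, not those of the paper:

```python
import numpy as np

def ekf_predict(x, P, F, Q):
    """Prediction step: propagate state x and covariance P through
    the (linearized) motion model F with process noise Q."""
    return F @ x, F @ P @ F.T + Q

def ekf_update(x, P, z, H, R):
    """Measurement update fusing one sensor reading z with
    (linearized) measurement model H and noise covariance R."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Fusing several sensors simply means calling `ekf_update` once per sensor with its own `H` and `R`, in timestamp order, between prediction steps.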
NASA Astrophysics Data System (ADS)
Tsalamengas, John L.
2018-07-01
We study plane-wave electromagnetic scattering by radially and strongly inhomogeneous dielectric cylinders at oblique incidence. The method of analysis relies on an exact reformulation of the underlying field equations as a first-order 4 × 4 system of differential equations and on the ability to restate the associated initial-value problem in the form of a system of coupled linear Volterra integral equations of the second kind. The integral equations so derived are discretized via a sophisticated variant of the Nyström method. The proposed method yields results accurate up to machine precision without relying on approximations. Numerical results and case studies clearly demonstrate the efficiency and high accuracy of the algorithms.
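To illustrate the Nyström idea in its simplest form, the sketch below discretizes a scalar linear Volterra equation of the second kind, u(t) = f(t) + ∫₀ᵗ K(t, s) u(s) ds, with the trapezoidal rule; the paper's actual scheme is a far more sophisticated variant of this principle:

```python
import numpy as np

def volterra2_trapezoid(f, K, t_end, n):
    """Nystrom (trapezoidal) solution of the linear Volterra
    equation of the second kind u(t) = f(t) + int_0^t K(t,s)u(s)ds
    on [0, t_end] with n uniform steps. Marches forward in t,
    solving for u at each new node."""
    t = np.linspace(0.0, t_end, n + 1)
    h = t_end / n
    u = np.empty(n + 1)
    u[0] = f(t[0])
    for i in range(1, n + 1):
        s = 0.5 * K(t[i], t[0]) * u[0]                       # left endpoint weight
        s += sum(K(t[i], t[j]) * u[j] for j in range(1, i))  # interior nodes
        # the diagonal term K(t_i, t_i) u_i is moved to the left-hand side
        u[i] = (f(t[i]) + h * s) / (1.0 - 0.5 * h * K(t[i], t[i]))
    return t, u
```

With f ≡ 1 and K ≡ 1 the exact solution is u(t) = eᵗ, which gives a quick accuracy check.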
Smartphone based monitoring system for long-term sleep assessment.
Domingues, Alexandre
2015-01-01
The diagnosis of sleep disorders, which are highly prevalent in Western countries, typically involves sophisticated procedures and equipment that are highly intrusive to the patient. The high processing capabilities and storage capacity of current portable devices, together with a wide range of available sensors, many of them with wireless capabilities, create new opportunities and change the paradigms of sleep studies. In this work, a smartphone based sleep monitoring system is presented along with the details of the hardware, software and algorithm implementation. The aim of this system is to provide a way for subjects with no pre-diagnosed sleep disorders to monitor their sleep habits and to support initial screening of abnormal sleep patterns.
Using genetic information while protecting the privacy of the soul.
Moor, J H
1999-01-01
Computing plays an important role in genetics (and vice versa). Theoretically, computing provides a conceptual model for the function and malfunction of our genetic machinery. Practically, contemporary computers and robots equipped with advanced algorithms make the revelation of the complete human genome imminent: computers are about to reveal our genetic souls for the first time. Ethically, computers help protect privacy by restricting access to genetic information in sophisticated ways. But the inexorable fact that computers will increasingly collect, analyze, and disseminate abundant amounts of genetic information made available through the genetic revolution, not to mention that inexpensive computing devices will make genetic information gathering easier, underscores the need for strong and immediate privacy legislation.
Melanoma detection using smartphone and multimode hyperspectral imaging
NASA Astrophysics Data System (ADS)
MacKinnon, Nicholas; Vasefi, Fartash; Booth, Nicholas; Farkas, Daniel L.
2016-04-01
This project's goal is to determine how to effectively implement a technology continuum from a low cost, remotely deployable imaging device to a more sophisticated multimode imaging system within a standard clinical practice. In this work a smartphone is used in conjunction with an optical attachment to capture cross-polarized and collinear color images of a nevus that are analyzed to quantify chromophore distribution. The nevus is also imaged by a multimode hyperspectral system, our proprietary SkinSpect™ device. The relative accuracy and biological plausibility of the two systems' algorithms are compared to assess the feasibility of in-home or primary care practitioner smartphone screening prior to rigorous clinical analysis via the SkinSpect.
Aslam, Tariq Mehmood; Shakir, Savana; Wong, James; Au, Leon; Ashworth, Jane
2012-12-01
Mucopolysaccharidoses (MPS) can cause corneal opacification that is currently difficult to objectively quantify. With newer treatments for MPS comes an increased need for a more objective, valid and reliable index of disease severity for clinical and research use. Clinical evaluation by slit lamp is very subjective and techniques based on colour photography are difficult to standardise. In this article the authors present evidence for the utility of dedicated image analysis algorithms applied to images obtained by a highly sophisticated iris recognition camera that is small, manoeuvrable and adapted to achieve rapid, reliable and standardised objective imaging in a wide variety of patients while minimising artefactual interference in image quality.
Chemotaxis can provide biological organisms with good solutions to the travelling salesman problem.
Reynolds, A M
2011-05-01
The ability to find good solutions to the traveling salesman problem can benefit some biological organisms. Bacterial infection would, for instance, be eradicated most promptly if cells of the immune system minimized the total distance they traveled when moving between bacteria. Similarly, foragers would maximize their net energy gain if the distance that they traveled between multiple dispersed prey items was minimized. The traveling salesman problem is one of the most intensively studied problems in combinatorial optimization. There are no known efficient algorithms for even solving the problem approximately (within a guaranteed constant factor from the optimum) because the problem is NP-complete. The best approximate algorithms can typically find solutions within 1%-2% of the optimal, but these are computationally intensive and cannot be implemented by biological organisms. Biological organisms could, in principle, implement the less efficient greedy nearest-neighbor algorithm, i.e., always move to the nearest surviving target. Implementation of this strategy does, however, require quite sophisticated cognitive abilities and prior knowledge of the target locations. Here, with the aid of numerical simulations, it is shown that biological organisms can simply use chemotaxis to solve, or at worst provide good solutions (comparable to those found by the greedy algorithm) to, the traveling salesman problem when the targets are sources of a chemoattractant and are modest in number (n < 10). This applies to neutrophils and macrophages in microbial defense and to some predators.
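The greedy nearest-neighbour strategy that the simulations are benchmarked against is simple to state in code; this sketch uses Euclidean distances in the plane:

```python
import math

def greedy_nearest_neighbour(points, start=0):
    """Greedy TSP heuristic: always move to the nearest unvisited
    target. Returns the visiting order and the total travel length."""
    unvisited = set(range(len(points))) - {start}
    tour, cur, length = [start], start, 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(points[cur], points[j]))
        length += math.dist(points[cur], points[nxt])
        unvisited.remove(nxt)
        tour.append(nxt)
        cur = nxt
    return tour, length
```

The paper's point is that chemotaxis can match tours of roughly this quality for small target counts without requiring the global knowledge this function assumes.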
Short superstrings and the structure of overlapping strings.
Armen, C; Stein, C
1995-01-01
Given a collection of strings S = {s1, ..., sn} over an alphabet Σ, a superstring α of S is a string containing each si as a substring; that is, for each i, 1 ≤ i ≤ n, α contains a block of |si| consecutive characters that match si exactly. The shortest superstring problem is the problem of finding a superstring α of minimum length. The shortest superstring problem has applications in both computational biology and data compression. The shortest superstring problem is NP-hard (Gallant et al., 1980); in fact, it was recently shown to be MAX SNP-hard (Blum et al., 1994). Given the importance of the applications, several heuristics and approximation algorithms have been proposed. Constant factor approximation algorithms have been given in Blum et al. (1994) (factor of 3), Teng and Yao (1993) (factor of 2 8/9), Czumaj et al. (1994) (factor of 2 5/6), and Kosaraju et al. (1994) (factor of 2 50/63). Informally, the key to any algorithm for the shortest superstring problem is to identify sets of strings with large amounts of similarity, or overlap. Although the previous algorithms and their analyses have grown increasingly sophisticated, they reveal remarkably little about the structure of strings with large amounts of overlap. In this sense, they are solving a more general problem than the one at hand. In this paper, we study the structure of strings with large amounts of overlap and use our understanding to give an algorithm that finds a superstring whose length is no more than 2 3/4 times that of the optimal superstring. Our algorithm runs in O(|S| + n³) time, which matches that of previous algorithms. We prove several interesting properties about short periodic strings, allowing us to answer questions of the following form: given a string with some periodic structure, characterize all the possible periodic strings that can have a large amount of overlap with the first string.
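The notion of overlap, and the classic greedy merging heuristic built on it, can be sketched as below. This quadratic greedy sketch is for illustration only; it is not the 2 3/4-approximation algorithm of the paper:

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(strings):
    """Repeatedly merge the ordered pair with the largest overlap
    until one string remains. A common heuristic baseline for the
    shortest superstring problem."""
    strings = list(strings)
    while len(strings) > 1:
        best = (-1, None, None, None)
        for i, a in enumerate(strings):
            for j, b in enumerate(strings):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j, a + b[k:])
        k, i, j, merged = best
        strings = [s for idx, s in enumerate(strings) if idx not in (i, j)] + [merged]
    return strings[0]
```

The approximation algorithms cited above refine exactly this step: how overlapping pairs are chosen and how periodic structure in highly overlapping strings is exploited.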
NASA Astrophysics Data System (ADS)
Safari, A.; Sharifi, M. A.; Amjadiparvar, B.
2010-05-01
The GRACE mission has substantiated the low-low satellite-to-satellite tracking (LL-SST) concept. The LL-SST configuration can be combined with the high-low SST concept previously realized in the CHAMP mission to provide a much higher accuracy. The line-of-sight (LOS) acceleration difference between the GRACE satellite pair is the most commonly used observable for mapping the global gravity field of the Earth in terms of spherical harmonic coefficients. In this paper, mathematical formulae for LOS acceleration difference observations have been derived and the corresponding linear system of equations has been set up for spherical harmonics up to degree and order 120. The total number of unknowns is 14641. Such a linear equation system can be solved with iterative solvers or direct solvers. However, the runtime of direct methods, or that of iterative solvers without a suitable preconditioner, increases tremendously, which is why a more sophisticated method is needed to solve linear systems with a large number of unknowns. The multiplicative variant of the Schwarz alternating algorithm is a domain decomposition method which allows splitting the normal matrix of the system into several smaller overlapping submatrices. In each iteration step, the multiplicative variant of the Schwarz alternating algorithm successively solves the linear systems with the matrices obtained from the splitting. This reduces both runtime and memory requirements drastically. In this paper we propose the Multiplicative Schwarz Alternating Algorithm (MSAA) for solving the large linear system of gravity field recovery. The proposed algorithm has been tested on the International Association of Geodesy (IAG)-simulated data of the GRACE mission. The achieved results indicate the validity and efficiency of the proposed algorithm in solving the linear system of equations in terms of both accuracy and runtime.
Keywords: Gravity field recovery, Multiplicative Schwarz Alternating Algorithm, Low-Low Satellite-to-Satellite Tracking
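The multiplicative Schwarz idea can be illustrated on a small dense system: within one sweep, each overlapping subdomain correction uses the residual already updated by the previous subdomains. This toy version (dense solves, index-set subdomains) only sketches the splitting principle; the MSAA operates on the far larger normal matrix of the gravity recovery problem:

```python
import numpy as np

def multiplicative_schwarz(A, b, blocks, iters=50):
    """Multiplicative Schwarz iteration for A x = b over overlapping
    index blocks. Each subdomain solve corrects x using the current
    residual, so later subdomains see earlier corrections."""
    x = np.zeros_like(b)
    for _ in range(iters):
        for idx in blocks:
            r = b - A @ x                       # residual with latest x
            x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return x
```

For symmetric positive definite matrices (such as least-squares normal matrices) and overlapping blocks, this iteration converges, and only the small diagonal blocks ever need to be factorized, which is the source of the memory savings noted above.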
IPRT polarized radiative transfer model intercomparison project - Phase A
NASA Astrophysics Data System (ADS)
Emde, Claudia; Barlakas, Vasileios; Cornet, Céline; Evans, Frank; Korkin, Sergey; Ota, Yoshifumi; Labonnote, Laurent C.; Lyapustin, Alexei; Macke, Andreas; Mayer, Bernhard; Wendisch, Manfred
2015-10-01
The polarization state of electromagnetic radiation scattered by atmospheric particles such as aerosols, cloud droplets, or ice crystals contains much more information about the optical and microphysical properties than the total intensity alone. For this reason an increasing number of polarimetric observations are performed from space, from the ground and from aircraft. Polarized radiative transfer models are required to interpret and analyse these measurements and to develop retrieval algorithms exploiting polarimetric observations. In recent years a large number of new codes have been developed, mostly for specific applications. Benchmark results are available for specific cases, but not for more sophisticated scenarios including polarized surface reflection and multi-layer atmospheres. The International Polarized Radiative Transfer (IPRT) working group of the International Radiation Commission (IRC) has initiated a model intercomparison project in order to fill this gap. This paper presents the results of the first phase (phase A) of the IPRT project, which includes ten test cases, from simple setups with only one layer and Rayleigh scattering to rather sophisticated setups with a cloud embedded in a standard atmosphere above an ocean surface. All scenarios in phase A of the intercomparison project use a one-dimensional plane-parallel model geometry. The commonly established benchmark results are available at the IPRT website.
Modeling of the Human - Operator in a Complex System Functioning Under Extreme Conditions
NASA Astrophysics Data System (ADS)
Getzov, Peter; Hubenova, Zoia; Yordanov, Dimitar; Popov, Wiliam
2013-12-01
Problems related to the operation of sophisticated control systems for objects functioning under extreme conditions are examined, along with the impact of the operator's effectiveness on the system as a whole. The necessity of creating complex simulation models reflecting the operator's activity is discussed. The organizational and technical system of an unmanned aviation complex is described as a sophisticated ergatic system. A computer realization of the main subsystems of the algorithmic model of the human as a controlling system is implemented, and specialized software for data processing and analysis is developed. An original computer model of the human as a tracking system has been implemented. A model of the unmanned complex for operator training and for the formation of a mental model in emergency situations, implemented in the MATLAB/Simulink environment, has been synthesized. As a unit of the control loop, the pilot (operator) is viewed in simplified form as an automatic control system consisting of three main interconnected subsystems: sensory organs (perception sensors); the central nervous system; and executive organs (muscles of the arms, legs, and back). A theoretical data model for predicting the level of the operator's information load in ergatic systems is proposed, which allows the assessment and prediction of the effectiveness of a real working operator. A simulation model of the operator's activity during takeoff, based on Petri nets, has been synthesized.
An extended CFD model to predict the pumping curve in low pressure plasma etch chamber
NASA Astrophysics Data System (ADS)
Zhou, Ning; Wu, Yuanhao; Han, Wenbin; Pan, Shaowu
2014-12-01
A continuum-based CFD model is extended with a slip-wall approximation and a rarefaction correction to the viscosity, in an attempt to predict the pumping flow characteristics in low pressure plasma etch chambers. The flow regime inside the chamber ranges from slip flow (Kn ≈ 0.01) up to free molecular flow (Kn = 10). The momentum accommodation coefficient and the parameters of the Kn-modified viscosity are first calibrated against one set of measured pumping curves. The validity of this calibrated CFD model is then demonstrated by comparison with additional pumping curves measured in chambers of different geometry configurations. A more detailed comparison against a DSMC model for flow conductance over slits with contraction and expansion sections is also discussed.
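The quantities involved can be illustrated with a short sketch: the Knudsen number from the hard-sphere mean free path, and a generic rarefaction correction of the form μ_eff = μ / (1 + a·Kn). Both the default molecular diameter and the coefficient `a` are illustrative assumptions; the paper calibrates its own slip and viscosity parameters against a measured pumping curve:

```python
import math

def knudsen(pressure_pa, temp_k, length_m, d_m=3.7e-10):
    """Knudsen number Kn = mean free path / characteristic length,
    using the hard-sphere mean free path. d_m is the molecular
    diameter (default roughly N2)."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    mfp = k_b * temp_k / (math.sqrt(2.0) * math.pi * d_m ** 2 * pressure_pa)
    return mfp / length_m

def effective_viscosity(mu, kn, a=2.0):
    """Hypothetical rarefaction-corrected viscosity of the common
    mu_eff = mu / (1 + a*Kn) form; a is a fitted coefficient."""
    return mu / (1.0 + a * kn)
```

Corrections of this family reduce the effective viscosity as Kn grows, mimicking the reduced momentum transfer of rarefied flow within a continuum solver.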
1988-11-01
Air Force Tech Order Management System - Final Report, library. DLA CALS 1988 Implementation Plan, library. [Intervening text garbled in extraction.] ... as well as wider application. The Air Force AFTOMS Automation Plan, a copy of which is in the library, has excellent discussions of the expected ...
Assessment of predictive capabilities for aerodynamic heating in hypersonic flow
NASA Astrophysics Data System (ADS)
Knight, Doyle; Chazot, Olivier; Austin, Joanna; Badr, Mohammad Ali; Candler, Graham; Celik, Bayram; Rosa, Donato de; Donelli, Raffaele; Komives, Jeffrey; Lani, Andrea; Levin, Deborah; Nompelis, Ioannis; Panesi, Marco; Pezzella, Giuseppe; Reimann, Bodo; Tumuklu, Ozgur; Yuceil, Kemal
2017-04-01
The capability for CFD prediction of hypersonic shock wave laminar boundary layer interaction was assessed for a double wedge model at Mach 7.1 in air and nitrogen at 2.1 MJ/kg and 8 MJ/kg. Simulations were performed by seven research organizations encompassing both Navier-Stokes and Direct Simulation Monte Carlo (DSMC) methods as part of the NATO STO AVT Task Group 205 activity. Comparison of the CFD simulations with experimental heat transfer and schlieren visualization suggests the need for accurate modeling of the tunnel startup process in short-duration hypersonic test facilities, and the importance of fully 3-D simulations of nominally 2-D (i.e., non-axisymmetric) experimental geometries.
Open Source Software Openfoam as a New Aerodynamical Simulation Tool for Rocket-Borne Measurements
NASA Astrophysics Data System (ADS)
Staszak, T.; Brede, M.; Strelnikov, B.
2015-09-01
The only way to do in-situ measurements, which are very important experimental studies for atmospheric science, in the mesosphere/lower thermosphere (MLT) is to use sounding rockets. The drawback of using rockets is the shock wave appearing because of the very high speed of the rocket motion (typically about 1000 m/s). This shock wave disturbs the density, the temperature and the velocity fields in the vicinity of the rocket relative to the undisturbed values of the atmosphere. This effect, however, can be quantified, and the measured data have to be corrected not just for precision but to make them usable at all. The commonly accepted and widely used tool for these calculations is the Direct Simulation Monte Carlo (DSMC) technique developed by G. A. Bird, which is available as a stand-alone program limited to a single processor. Apart from the complications of simulating flows around bodies in the different flow regimes of the MLT altitude range, which arise because the density changes exponentially by several orders of magnitude, a particular hardware configuration introduces significant difficulty for aerodynamical calculations: the grid sizes must both satisfy the demands of an adequate DSMC and resolve geometries with large scale differences. This either makes the calculation time unreasonably long or even prevents the calculation algorithm from converging. In this paper we apply the free open source software OpenFOAM (licensed under the GNU GPL) to a three-dimensional CFD simulation of the flow around a sounding rocket instrument. An advantage of this software package, among other things, is that it can run on high performance clusters, which are easily scalable. We present the first results and discuss the potential of the new tool in applications for sounding rockets.
Yi, S.; Li, N.; Xiang, B.; Wang, X.; Ye, B.; McGuire, A.D.
2013-01-01
Soil surface temperature is a critical boundary condition for the simulation of soil temperature by environmental models. It is influenced by atmospheric and soil conditions and by vegetation cover. In sophisticated land surface models, it is simulated iteratively by solving surface energy budget equations. In ecosystem, permafrost, and hydrology models, the consideration of soil surface temperature is generally simple. In this study, we developed a methodology for representing the effects of vegetation cover and atmospheric factors on the estimation of soil surface temperature for alpine grassland ecosystems on the Qinghai-Tibetan Plateau. Our approach integrated measurements from meteorological stations with simulations from a sophisticated land surface model to develop an equation set for estimating soil surface temperature. After implementing this equation set into an ecosystem model and evaluating the performance of the ecosystem model in simulating soil temperature at different depths in the soil profile, we applied the model to simulate interactions among vegetation cover, freeze-thaw cycles, and soil erosion to demonstrate potential applications made possible through the implementation of the methodology developed in this study. Results showed that (1) to properly estimate daily soil surface temperature, algorithms should use air temperature, downward solar radiation, and vegetation cover as independent variables; (2) the equation set developed in this study performed better than soil surface temperature algorithms used in other models; and (3) the ecosystem model performed well in simulating soil temperature throughout the soil profile using the equation set developed in this study. 
Our application of the model indicates that the representation in ecosystem models of the effects of vegetation cover on the simulation of soil thermal dynamics has the potential to substantially improve our understanding of the vulnerability of alpine grassland ecosystems to changes in climate and grazing regimes.
NASA Astrophysics Data System (ADS)
Yi, S.; Li, N.; Xiang, B.; Wang, X.; Ye, B.; McGuire, A. D.
2013-07-01
surface temperature is a critical boundary condition for the simulation of soil temperature by environmental models. It is influenced by atmospheric and soil conditions and by vegetation cover. In sophisticated land surface models, it is simulated iteratively by solving surface energy budget equations. In ecosystem, permafrost, and hydrology models, the consideration of soil surface temperature is generally simple. In this study, we developed a methodology for representing the effects of vegetation cover and atmospheric factors on the estimation of soil surface temperature for alpine grassland ecosystems on the Qinghai-Tibetan Plateau. Our approach integrated measurements from meteorological stations with simulations from a sophisticated land surface model to develop an equation set for estimating soil surface temperature. After implementing this equation set into an ecosystem model and evaluating the performance of the ecosystem model in simulating soil temperature at different depths in the soil profile, we applied the model to simulate interactions among vegetation cover, freeze-thaw cycles, and soil erosion to demonstrate potential applications made possible through the implementation of the methodology developed in this study. Results showed that (1) to properly estimate daily soil surface temperature, algorithms should use air temperature, downward solar radiation, and vegetation cover as independent variables; (2) the equation set developed in this study performed better than soil surface temperature algorithms used in other models; and (3) the ecosystem model performed well in simulating soil temperature throughout the soil profile using the equation set developed in this study. 
Our application of the model indicates that the representation in ecosystem models of the effects of vegetation cover on the simulation of soil thermal dynamics has the potential to substantially improve our understanding of the vulnerability of alpine grassland ecosystems to changes in climate and grazing regimes.
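A hedged sketch of the kind of empirical fit described in result (1): soil surface temperature regressed on air temperature, downward solar radiation, and vegetation cover. The linear form, variable ranges, and coefficient values below are illustrative assumptions, not the paper's equation set.

```python
import numpy as np

def fit_surface_temp_model(t_air, s_rad, veg, t_surf):
    """Fit T_surf ~ a*T_air + b*S_rad + c*veg + d by ordinary least squares.

    The linear form is an assumption for illustration; the paper derives
    its own equation set from station data and land-surface-model output.
    """
    X = np.column_stack([t_air, s_rad, veg, np.ones_like(t_air)])
    coef, *_ = np.linalg.lstsq(X, t_surf, rcond=None)
    return coef  # [a, b, c, d]

# Synthetic demonstration data generated from a known rule.
rng = np.random.default_rng(0)
t_air = rng.uniform(-10, 20, 200)    # daily air temperature (deg C)
s_rad = rng.uniform(50, 300, 200)    # downward solar radiation (W/m^2)
veg = rng.uniform(0.0, 1.0, 200)     # fractional vegetation cover
t_surf = 0.9 * t_air + 0.02 * s_rad - 3.0 * veg + 1.0

coef = fit_surface_temp_model(t_air, s_rad, veg, t_surf)
print(np.round(coef, 3))  # recovers [0.9, 0.02, -3.0, 1.0] on noise-free data
```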
Toward a molecular programming language for algorithmic self-assembly
NASA Astrophysics Data System (ADS)
Patitz, Matthew John
Self-assembly is the process whereby relatively simple components autonomously combine to form more complex objects. Nature exhibits self-assembly to form everything from microscopic crystals to living cells to galaxies. With a desire to both form increasingly sophisticated products and to understand the basic components of living systems, scientists have developed and studied artificial self-assembling systems. One such framework is the Tile Assembly Model introduced by Erik Winfree in 1998. In this model, simple two-dimensional square 'tiles' are designed so that they self-assemble into desired shapes. The work in this thesis consists of a series of results which build toward the future goal of designing an abstracted, high-level programming language for designing the molecular components of self-assembling systems which can perform powerful computations and form into intricate structures. The first two sets of results demonstrate self-assembling systems which perform infinite series of computations that characterize computably enumerable and decidable languages, and exhibit tools for algorithmically generating the necessary sets of tiles. In the next chapter, methods for generating tile sets which self-assemble into complicated shapes, namely a class of discrete self-similar fractal structures, are presented. Next, a software package for graphically designing tile sets, simulating their self-assembly, and debugging designed systems is discussed. Finally, a high-level programming language which abstracts much of the complexity and tedium of designing such systems, while preventing many of the common errors, is presented. The summation of this body of work presents a broad coverage of the spectrum of desired outputs from artificial self-assembling systems and a progression in the sophistication of tools used to design them. 
By creating a broader and deeper set of modular tools for designing self-assembling systems, we hope to increase the complexity which is attainable. These tools provide a solid foundation for future work in both the Tile Assembly Model and explorations into more advanced models.
Schimpl, Michaela; Lederer, Christian; Daumer, Martin
2011-01-01
Walking speed is a fundamental indicator for human well-being. In a clinical setting, walking speed is typically measured by means of walking tests using different protocols. However, walking speed obtained in this way is unlikely to be representative of the conditions in a free-living environment. Recently, mobile accelerometry has opened up the possibility to extract walking speed from long-time observations in free-living individuals, but the validity of these measurements needs to be determined. In this investigation, we have developed algorithms for walking speed prediction based on 3D accelerometry data (actibelt®) and created a framework using a standardized data set with gold standard annotations to facilitate the validation and comparison of these algorithms. For this purpose 17 healthy subjects operated a newly developed mobile gold standard while walking/running on an indoor track. Subsequently, the validity of 12 candidate algorithms for walking speed prediction ranging from well-known simple approaches like combining step length with frequency to more sophisticated algorithms such as linear and non-linear models was assessed using statistical measures. As a result, a novel algorithm employing support vector regression was found to perform best with a concordance correlation coefficient of 0.93 (95%CI 0.92–0.94) and a coverage probability CP1 of 0.46 (95%CI 0.12–0.70) for a deviation of 0.1 m/s (CP2 0.78, CP3 0.94) when compared to the mobile gold standard while walking indoors. A smaller outdoor experiment confirmed those results with even better coverage probability. We conclude that walking speed thus obtained has the potential to help establish walking speed in free-living environments as a patient-oriented outcome measure. PMID:21850254
2012-01-01
Background Chaos Game Representation (CGR) is an iterated function that bijectively maps discrete sequences into a continuous domain. As a result, discrete sequences can be the object of statistical and topological analyses otherwise reserved to numerical systems. Characteristically, CGR coordinates of substrings sharing an L-long suffix will be located within 2^-L distance of each other. In the two decades since its original proposal, CGR has been generalized beyond its original focus on genomic sequences and has been successfully applied to a wide range of problems in bioinformatics. This report explores the possibility that it can be further extended to approach algorithms that rely on discrete, graph-based representations. Results The exploratory analysis described here consisted of selecting foundational string problems and refactoring them using CGR-based algorithms. We found that CGR can take the role of suffix trees and emulate sophisticated string algorithms, efficiently solving exact and approximate string matching problems such as finding all palindromes and tandem repeats, and matching with mismatches. The common feature of these problems is that they use longest common extension (LCE) queries as subtasks of their procedures, which we show to have a constant time solution with CGR. Additionally, we show that CGR can be used as a rolling hash function within the Rabin-Karp algorithm. Conclusions The analysis of biological sequences relies on algorithmic foundations facing mounting challenges, both logistic (performance) and analytical (lack of a unifying mathematical framework). CGR is found to provide the latter and to promise the former: graph-based data structures for sequence analysis operations are entailed by numerical-based data structures produced by CGR maps, providing a unifying analytical framework for a diversity of pattern matching problems. PMID:22551152
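The suffix property quoted in the abstract — substrings sharing an L-long suffix map within 2^-L of each other — can be demonstrated with a minimal CGR sketch. The corner assignment below is the common DNA convention and may differ from the paper's.

```python
def cgr_point(seq):
    """Map a DNA string to its Chaos Game Representation coordinate.

    Each step moves halfway toward the corner of the current base, so the
    final point is determined, to within 2**-L, by the last L characters.
    """
    corners = {'A': (0.0, 0.0), 'C': (0.0, 1.0), 'G': (1.0, 1.0), 'T': (1.0, 0.0)}
    x, y = 0.5, 0.5
    for base in seq:
        cx, cy = corners[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
    return x, y

# Strings sharing an L-long suffix land within 2**-L of each other:
p = cgr_point("AAACGT")
q = cgr_point("TTTCGT")   # shares the 3-long suffix "CGT"
d = max(abs(p[0] - q[0]), abs(p[1] - q[1]))
print(d <= 2 ** -3)  # True
```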
Fitting ordinary differential equations to short time course data.
Brewer, Daniel; Barenco, Martino; Callard, Robin; Hubank, Michael; Stark, Jaroslav
2008-02-28
Ordinary differential equations (ODEs) are widely used to model many systems in physics, chemistry, engineering and biology. Often one wants to compare such equations with observed time course data, and use this to estimate parameters. Surprisingly, practical algorithms for doing this are relatively poorly developed, particularly in comparison with the sophistication of numerical methods for solving both initial and boundary value problems for differential equations, and for locating and analysing bifurcations. A lack of good numerical fitting methods is particularly problematic in the context of systems biology where only a handful of time points may be available. In this paper, we present a survey of existing algorithms and describe the main approaches. We also introduce and evaluate a new efficient technique for estimating ODEs linear in parameters particularly suited to situations where noise levels are high and the number of data points is low. It employs a spline-based collocation scheme and alternates linear least squares minimization steps with repeated estimates of the noise-free values of the variables. This is reminiscent of expectation-maximization methods widely used for problems with nuisance parameters or missing data.
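A minimal sketch of the "linear in parameters" idea the paper builds on: approximate the derivative from the time course, then estimate the parameters with one linear least-squares solve. The paper's actual method uses a spline-based collocation scheme and alternates least-squares steps with re-estimates of the noise-free states; this toy, with an assumed model dx/dt = a - b*x, omits both refinements.

```python
import numpy as np

def fit_linear_ode(t, x):
    """Estimate (a, b) in the assumed model dx/dt = a - b*x.

    Sketch only: derivative by finite differences, then one linear
    least-squares solve over the model's parameter-linear form.
    """
    dxdt = np.gradient(x, t, edge_order=2)       # crude derivative estimate
    A = np.column_stack([np.ones_like(x), -x])   # dx/dt = a*1 + b*(-x)
    (a, b), *_ = np.linalg.lstsq(A, dxdt, rcond=None)
    return a, b

# Short synthetic time course from dx/dt = 2 - 0.5*x, x(0) = 0.
t = np.linspace(0.0, 8.0, 9)               # nine time points only
x = 4.0 * (1.0 - np.exp(-0.5 * t))         # exact solution of the ODE
a, b = fit_linear_ode(t, x)
print(a, b)  # close to the true (2, 0.5); the finite differences bias the fit slightly
```

The residual bias comes entirely from the derivative approximation, which is exactly the error source the paper's collocation-plus-iteration scheme is designed to suppress.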
Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange.
Hula, Andreas; Montague, P Read; Dayan, Peter
2015-06-01
Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent's preference for equity with their partner, beliefs about the partner's appetite for equity, beliefs about the partner's model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference.
The Hico Image Processing System: A Web-Accessible Hyperspectral Remote Sensing Toolbox
NASA Astrophysics Data System (ADS)
Harris, A. T., III; Goodman, J.; Justice, B.
2014-12-01
As the quantity of Earth-observation data increases, the use-case for hosting analytical tools in geospatial data centers becomes increasingly attractive. To address this need, HySpeed Computing and Exelis VIS have developed the HICO Image Processing System, a prototype cloud computing system that provides online, on-demand, scalable remote sensing image processing capabilities. The system provides a mechanism for delivering sophisticated image processing analytics and data visualization tools into the hands of a global user community, who will only need a browser and internet connection to perform analysis. Functionality of the HICO Image Processing System is demonstrated using imagery from the Hyperspectral Imager for the Coastal Ocean (HICO), an imaging spectrometer located on the International Space Station (ISS) that is optimized for acquisition of aquatic targets. Example applications include a collection of coastal remote sensing algorithms that are directed at deriving critical information on water and habitat characteristics of our vulnerable coastal environment. The project leverages the ENVI Services Engine as the framework for all image processing tasks, and can readily accommodate the rapid integration of new algorithms, datasets and processing tools.
NASA Technical Reports Server (NTRS)
Pratt, D. T.; Radhakrishnan, K.
1986-01-01
The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
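The report's argument for exponential fitting can be seen on a single stiff linear decay: an exponential-fitted Euler step is exact for dy/dt = J*y, while the classical explicit Euler step is unstable at the same step size. This comparison is a sketch, not code from the report.

```python
import math

def explicit_euler(f, y, h, n):
    # Classical polynomial-interpolant step: y += h*f(y).
    for _ in range(n):
        y = y + h * f(y)
    return y

def exp_fitted_euler(f, dfdy, y, h, n):
    # Exponential-fitted step: advance along the local exponential
    # y += f(y) * (exp(J*h) - 1) / J; exact when f is linear in y.
    for _ in range(n):
        J = dfdy(y)
        y = y + f(y) * (math.expm1(J * h) / J)
    return y

# Stiff decay dy/dt = -50*y, y(0) = 1, integrated to t = 1 with h = 0.1.
f = lambda y: -50.0 * y
dfdy = lambda y: -50.0
y_exact = math.exp(-50.0)
y_exp = exp_fitted_euler(f, dfdy, 1.0, 0.1, 10)   # matches exp(-50)
y_cls = explicit_euler(f, 1.0, 0.1, 10)           # (-4)**10: blows up
print(y_exp, y_cls)
```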
Multiple confidence estimates as indices of eyewitness memory.
Sauer, James D; Brewer, Neil; Weber, Nathan
2008-08-01
Eyewitness identification decisions are vulnerable to various influences on witnesses' decision criteria that contribute to false identifications of innocent suspects and failures to choose perpetrators. An alternative procedure using confidence estimates to assess the degree of match between novel and previously viewed faces was investigated. Classification algorithms were applied to participants' confidence data to determine when a confidence value or pattern of confidence values indicated a positive response. Experiment 1 compared confidence group classification accuracy with a binary decision control group's accuracy on a standard old-new face recognition task and found superior accuracy for the confidence group for target-absent trials but not for target-present trials. Experiment 2 used a face mini-lineup task and found reduced target-present accuracy offset by large gains in target-absent accuracy. Using a standard lineup paradigm, Experiments 3 and 4 also found improved classification accuracy for target-absent lineups and, with a more sophisticated algorithm, for target-present lineups. This demonstrates the accessibility of evidence for recognition memory decisions and points to a more sensitive index of memory quality than is afforded by binary decisions.
Matrix algorithms for solving (in)homogeneous bound state equations
Blank, M.; Krassnigg, A.
2011-01-01
In the functional approach to quantum chromodynamics, the properties of hadronic bound states are accessible via covariant integral equations, e.g. the Bethe–Salpeter equation for mesons. In particular, one has to deal with linear, homogeneous integral equations which, in sophisticated model setups, use numerical representations of the solutions of other integral equations as part of their input. Analogously, inhomogeneous equations can be constructed to obtain off-shell information in addition to bound-state masses and other properties obtained from the covariant analogue to a wave function of the bound state. These can be solved very efficiently using well-known matrix algorithms for eigenvalues (in the homogeneous case) and the solution of linear systems (in the inhomogeneous case). We demonstrate this by solving the homogeneous and inhomogeneous Bethe–Salpeter equations and find, e.g. that for the calculation of the mass spectrum it is as efficient or even advantageous to use the inhomogeneous equation as compared to the homogeneous. This is valuable insight, in particular for the study of baryons in a three-quark setup and more involved systems. PMID:21760640
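The homogeneous-versus-inhomogeneous distinction can be sketched with a small matrix toy. The kernel below is fabricated purely for illustration (its linear mass dependence is an assumption); real Bethe-Salpeter kernels come from the model interaction and numerically represented propagators.

```python
import numpy as np

def kernel(M, n=6):
    """Toy stand-in for a discretized bound-state kernel K(M)."""
    rng = np.random.default_rng(1)
    A = rng.standard_normal((n, n))
    K0 = 0.1 * (A + A.T)              # fixed symmetric part
    return K0 + 0.5 * M * np.eye(n)   # eigenvalues grow linearly with M

def leading_eigenvalue(M):
    # Homogeneous equation psi = lambda(M) K(M) psi: an eigenvalue problem.
    # A bound state sits at the mass M where the largest eigenvalue equals 1.
    return float(np.max(np.linalg.eigvalsh(kernel(M))))

def solve_inhomogeneous(M, G0):
    # Inhomogeneous equation G = G0 + K(M) G: a linear system (I - K) G = G0.
    return np.linalg.solve(np.eye(len(G0)) - kernel(M), G0)

# Locate the "bound-state mass" by bisection on the leading eigenvalue.
lo, hi = -10.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if leading_eigenvalue(mid) < 1.0:
        lo = mid
    else:
        hi = mid
M_star = 0.5 * (lo + hi)
print(M_star)  # mass at which the homogeneous equation has a solution
```

Near M_star the matrix I - K(M) becomes singular, so the inhomogeneous solution develops a pole there — which is how the inhomogeneous equation exposes the same spectrum.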
Visualising higher order Brillouin zones with applications
NASA Astrophysics Data System (ADS)
Andrew, R. C.; Salagaram, T.; Chetty, N.
2017-05-01
A key concept in material science is the relationship between the Bravais lattice, the reciprocal lattice and the resulting Brillouin zones (BZ). These zones are often complicated shapes that are hard to construct and visualise without the use of sophisticated software, even by professional scientists. We have used a simple sorting algorithm to construct BZ of any order for a chosen Bravais lattice that is easy to implement in any scientific programming language. The resulting zones can then be visualised using freely available plotting software. This method has pedagogical value for upper-level undergraduate students since, along with other computational methods, it can be used to illustrate how constant-energy surfaces combine with these zones to create van Hove singularities in the density of states. In this paper we apply our algorithm along with the empirical pseudopotential method and the 2D equivalent of the tetrahedron method to show how they can be used in a simple software project to investigate this interaction for a 2D crystal. This project not only enhances students’ fundamental understanding of the principles involved but also improves transferable coding skills.
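A sorting algorithm of the kind described can be built on the standard characterization that a point lies in the n-th Brillouin zone exactly when the origin is its n-th nearest reciprocal lattice point. The implementation below is a sketch under that assumption (boundary ties are ignored), not the authors' code.

```python
import itertools

def bz_order(k, b1, b2, shells=5):
    """Brillouin-zone order of 2D point k for reciprocal basis b1, b2.

    Rank the origin among all reciprocal lattice points by distance to k:
    if exactly n-1 lattice points are strictly closer than the origin,
    k lies in the n-th Brillouin zone.
    """
    d0 = k[0] ** 2 + k[1] ** 2     # squared distance to Gamma
    rank = 1
    for i, j in itertools.product(range(-shells, shells + 1), repeat=2):
        if i == 0 and j == 0:
            continue
        gx = i * b1[0] + j * b2[0]
        gy = i * b1[1] + j * b2[1]
        if (k[0] - gx) ** 2 + (k[1] - gy) ** 2 < d0:
            rank += 1
    return rank

# Square lattice with unit reciprocal vectors:
b1, b2 = (1.0, 0.0), (0.0, 1.0)
print(bz_order((0.1, 0.0), b1, b2))   # 1: inside the first zone
print(bz_order((0.6, 0.0), b1, b2))   # 2: past the Bragg plane at kx = 0.5
print(bz_order((0.7, 0.4), b1, b2))   # 3: two lattice points closer than Gamma
```

Evaluating `bz_order` on a fine k-grid and plotting points colored by zone order reproduces the familiar nested-zone pictures with any standard plotting package.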
Construction of CASCI-type wave functions for very large active spaces.
Boguslawski, Katharina; Marti, Konrad H; Reiher, Markus
2011-06-14
We present a procedure to construct a configuration-interaction expansion containing arbitrary excitations from an underlying full-configuration-interaction-type wave function defined for a very large active space. Our procedure is based on the density-matrix renormalization group (DMRG) algorithm that provides the necessary information in terms of the eigenstates of the reduced density matrices to calculate the coefficient of any basis state in the many-particle Hilbert space. Since the dimension of the Hilbert space scales binomially with the size of the active space, a sophisticated Monte Carlo sampling routine is employed. This sampling algorithm can also construct such configuration-interaction-type wave functions from any other type of tensor network states. The configuration-interaction information obtained serves several purposes. It yields a qualitatively correct description of the molecule's electronic structure, it allows us to analyze DMRG wave functions converged for the same molecular system but with different parameter sets (e.g., different numbers of active-system (block) states), and it can be considered a balanced reference for the application of a subsequent standard multi-reference configuration-interaction method.
Conceptual Design of the ITER Plasma Control System
NASA Astrophysics Data System (ADS)
Snipes, J. A.
2013-10-01
The conceptual design of the ITER Plasma Control System (PCS) has been approved and the preliminary design has begun for the 1st plasma PCS. This is a collaboration of many plasma control experts from existing devices to design and test plasma control techniques applicable to ITER on existing machines. The conceptual design considered all phases of plasma operation, ranging from non-active H/He plasmas through high fusion gain inductive DT plasmas to fully non-inductive steady-state operation, to ensure that the PCS control functionality and architecture can satisfy the demands of the ITER Research Plan. The PCS will control plasma equilibrium and density, plasma heat exhaust, a range of MHD instabilities (including disruption mitigation), and the non-inductive current profile required to maintain stable steady-state scenarios. The PCS architecture requires sophisticated shared actuator management and event handling systems to prioritize control goals, algorithms, and actuators according to dynamic control needs and monitor plasma and plant system events to trigger automatic changes in the control algorithms or operational scenario, depending on real-time operating limits and conditions.
NASA Astrophysics Data System (ADS)
Yepes-Calderon, Fernando; Brun, Caroline; Sant, Nishita; Thompson, Paul; Lepore, Natasha
2015-01-01
Tensor-Based Morphometry (TBM) is an increasingly popular method for group analysis of brain MRI data. The main steps in the analysis consist of a nonlinear registration to align each individual scan to a common space, and a subsequent statistical analysis to determine morphometric differences, or differences in fiber structure between groups. Recently, we implemented the Statistically-Assisted Fluid Registration Algorithm (SAFIRA), which is designed for tracking morphometric differences among populations. To this end, SAFIRA allows the inclusion of statistical priors extracted from the populations being studied as regularizers in the registration. This flexibility and degree of sophistication limit the tool to expert use, even more so considering that SAFIRA was initially implemented in command line mode. Here, we introduce a new, intuitive, easy-to-use, Matlab-based graphical user interface for SAFIRA's multivariate TBM. The interface also generates different choices for the TBM statistics, including both the traditional univariate statistics on the Jacobian matrix and comparison of the full deformation tensors. This software will be freely disseminated to the neuroimaging research community.
Markov Chain Monte Carlo in the Analysis of Single-Molecule Experimental Data
NASA Astrophysics Data System (ADS)
Kou, S. C.; Xie, X. Sunney; Liu, Jun S.
2003-11-01
This article provides a Bayesian analysis of the single-molecule fluorescence lifetime experiment designed to probe the conformational dynamics of a single DNA hairpin molecule. The DNA hairpin's conformational change is initially modeled as a two-state Markov chain, which is not observable and has to be indirectly inferred. The Brownian diffusion of the single molecule, in addition to the hidden Markov structure, further complicates the matter. We show that the analytical form of the likelihood function can be obtained in the simplest case and a Metropolis-Hastings algorithm can be designed to sample from the posterior distribution of the parameters of interest and to compute desired estimates. To cope with the molecular diffusion process and the potentially oscillating energy barrier between the two states of the DNA hairpin, we introduce a data augmentation technique to handle both the Brownian diffusion and the hidden Ornstein-Uhlenbeck process associated with the fluctuating energy barrier, and design a more sophisticated Metropolis-type algorithm. Our method not only increases the estimating resolution several-fold but also proves to be successful for model discrimination.
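As a sketch of the sampler family involved (not the authors' data-augmented, Ornstein-Uhlenbeck-aware algorithm), here is a random-walk Metropolis-Hastings kernel targeting a toy one-dimensional posterior; the target, step size, and burn-in length are all illustrative assumptions.

```python
import math, random

def metropolis_hastings(log_target, x0, n_samples, step=0.5, seed=7):
    """Random-walk Metropolis-Hastings on an unnormalized log-density."""
    rng = random.Random(seed)
    x, lx, samples = x0, log_target(x0), []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp = log_target(prop)
        # Accept with probability min(1, target(prop) / target(x)).
        if rng.random() < math.exp(min(0.0, lp - lx)):
            x, lx = prop, lp
        samples.append(x)
    return samples

# Toy unnormalized posterior: a Gaussian centred on a "transition rate" of 2.
log_post = lambda x: -0.5 * ((x - 2.0) / 0.5) ** 2
draws = metropolis_hastings(log_post, x0=0.0, n_samples=20000)
post = draws[2000:]                  # discard burn-in
mean = sum(post) / len(post)
print(mean)  # close to the target mean of 2.0
```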
Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange
Hula, Andreas; Montague, P. Read; Dayan, Peter
2015-01-01
Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent’s preference for equity with their partner, beliefs about the partner’s appetite for equity, beliefs about the partner’s model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference. PMID:26053429
Ganapathiraju, Madhavi K; Orii, Naoki
2013-08-30
Advances in biotechnology have created "big-data" situations in molecular and cellular biology. Several sophisticated algorithms have been developed that process big data to generate hundreds of biomedical hypotheses (or predictions). The bottleneck to translating this large number of biological hypotheses is that each of them needs to be studied by experimentation for interpreting its functional significance. Even when the predictions are estimated to be very accurate, from a biologist's perspective, the choice of which of these predictions is to be studied further is made based on factors like availability of reagents and resources and the possibility of formulating some reasonable hypothesis about its biological relevance. When viewed from a global perspective, say from that of a federal funding agency, ideally the choice of which prediction should be studied would be made based on which of them can make the most translational impact. We propose that algorithms be developed to identify which of the computationally generated hypotheses have potential for high translational impact; this way, funding agencies and scientific community can invest resources and drive the research based on a global view of biomedical impact without being deterred by local view of feasibility. In short, data-analytic algorithms analyze big-data and generate hypotheses; in contrast, the proposed inference-analytic algorithms analyze these hypotheses and rank them by predicted biological impact. We demonstrate this through the development of an algorithm to predict biomedical impact of protein-protein interactions (PPIs) which is estimated by the number of future publications that cite the paper which originally reported the PPI. This position paper describes a new computational problem that is relevant in the era of big-data and discusses the challenges that exist in studying this problem, highlighting the need for the scientific community to engage in this line of research. 
The proposed class of algorithms, namely inference-analytic algorithms, is necessary to ensure that resources are invested in translating those computational outcomes that promise maximum biological impact. Application of this concept to predict biomedical impact of PPIs illustrates not only the concept, but also the challenges in designing these algorithms.
Plume-Free Stream Interaction Heating Effects During Orion Crew Module Reentry
NASA Technical Reports Server (NTRS)
Marichalar, J.; Lumpkin, F.; Boyles, K.
2012-01-01
During reentry of the Orion Crew Module (CM), vehicle attitude control will be performed by firing reaction control system (RCS) thrusters. Simulation of RCS plumes and their interaction with the oncoming flow has been difficult for the analysis community due to the large scarf angles of the RCS thrusters and the unsteady nature of the Orion capsule backshell environments. The model for the aerothermal database has thus relied on wind tunnel test data to capture the heating effects of thruster plume interactions with the freestream. These data are only valid for the continuum flow regime of the reentry trajectory. A Direct Simulation Monte Carlo (DSMC) analysis was performed to study the vehicle heating effects that result from the RCS thruster plume interaction with the oncoming freestream flow at high altitudes during Orion CM reentry. The study was performed with the DSMC Analysis Code (DAC). The inflow boundary conditions for the jets were obtained from Data Parallel Line Relaxation (DPLR) computational fluid dynamics (CFD) solutions. Simulations were performed for the roll, yaw, pitch-up and pitch-down jets at altitudes of 105 km, 125 km and 160 km as well as vacuum conditions. For comparison purposes (see Figure 1), the freestream conditions were based on previous DAC simulations performed without active RCS to populate the aerodynamic database for the Orion CM. Other inputs to the analysis included a constant Orbital reentry velocity of 7.5 km/s and angle of attack of 160 degrees. The results of the study showed that the interaction effects decrease quickly with increasing altitude. Also, jets with highly scarfed nozzles cause more severe heating compared to the nozzles with lower scarf angles. The difficulty of performing these simulations was based on the maximum number density and the ratio of number densities between the freestream and the plume for each simulation. 
The lowest altitude solutions required a substantial amount of computational resources (up to 1800 processors) to simulate approximately 2 billion molecules for the refined (adapted) solutions.
Aero-thermo-dynamic analysis of the Spaceliner-7.1 vehicle in high altitude flight
NASA Astrophysics Data System (ADS)
Zuppardi, Gennaro; Morsa, Luigi; Sippel, Martin; Schwanekamp, Tobias
2014-12-01
SpaceLiner, designed by DLR, is a visionary, extremely fast passenger transportation concept. It consists of two stages: a winged booster and a passenger vehicle. After separation of the two stages, the booster makes a controlled re-entry and returns to the launch site. According to the current project, version 7.1 of SpaceLiner (SpaceLiner-7.1), the vehicle should be brought to an altitude of 75 km and then released, undertaking the descent path. In the perspective that the vehicle of SpaceLiner-7.1 could be brought to altitudes higher than 75 km, e.g. 100 km or above, and also for a speculative purpose, in this paper the aerodynamic parameters of the SpaceLiner-7.1 vehicle are calculated in the whole transition regime, from continuum low density to free molecular flows. Computer simulations have been carried out by three codes: two DSMC codes, DS3V in the altitude interval 100-250 km for the evaluation of the global aerodynamic coefficients and DS2V at the altitude of 60 km for the evaluation of the heat flux and pressure distributions along the vehicle nose, and the DLR HOTSOSE code for the evaluation of the global aerodynamic coefficients in continuum, hypersonic flow at the altitude of 44.6 km. The effectiveness of the flaps with a deflection angle of -35 deg. was evaluated in the above-mentioned altitude interval. The vehicle showed longitudinal stability in the whole altitude interval even with no flap. The global bridging formulae proved to be appropriate for the evaluation of the aerodynamic coefficients in the altitude interval 80-100 km, where the computations cannot be fulfilled either by CFD, because of the failure of the classical equations computing the transport coefficients, or by DSMC, because of the requirement of very high computer resources, both in terms of core storage (a high number of simulated molecules is needed) and of very long processing time.
Sophistry, the Sophists and modern medical education.
Macsuibhne, S P
2010-01-01
The term 'sophist' has become a term of intellectual abuse in both general discourse and that of educational theory. However the actual thought of the fifth century BC Athenian-based philosophers who were the original Sophists was very different from the caricature. In this essay, I draw parallels between trends in modern medical educational practice and the thought of the Sophists. Specific areas discussed are the professionalisation of medical education, the teaching of higher-order characterological attributes such as personal development skills, and evidence-based medical education. Using the specific example of the Sophist Protagoras, it is argued that the Sophists were precursors of philosophical approaches and practices of enquiry underlying modern medical education.
NASA Astrophysics Data System (ADS)
Dufaux, Frederic
2011-06-01
The issue of privacy in video surveillance has drawn a lot of interest lately. However, thorough performance analysis and validation is still lacking, especially regarding the fulfillment of privacy-related requirements. In this paper, we first review recent Privacy Enabling Technologies (PET). Next, we discuss pertinent evaluation criteria for effective privacy protection. We then put forward a framework to assess the capacity of PET solutions to hide distinguishing facial information and to conceal identity. We conduct comprehensive and rigorous experiments to evaluate the performance of face recognition algorithms applied to images altered by PET. Results show the ineffectiveness of naïve PET such as pixelization and blur. Conversely, they demonstrate the effectiveness of more sophisticated scrambling techniques to foil face recognition.
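For concreteness, here is a sketch of the naïve pixelization PET that the experiments show to be ineffective against face recognition; the grayscale representation, region coordinates, and block size are illustrative assumptions.

```python
def pixelate(image, x0, y0, w, h, block=8):
    """Naive pixelization PET: replace each block of a region by its mean.

    `image` is a list of rows of grayscale ints. This simple obfuscation
    preserves coarse facial structure, which is why recognition survives it.
    """
    out = [row[:] for row in image]
    for by in range(y0, y0 + h, block):
        for bx in range(x0, x0 + w, block):
            ys = range(by, min(by + block, y0 + h))
            xs = range(bx, min(bx + block, x0 + w))
            vals = [image[y][x] for y in ys for x in xs]
            mean = sum(vals) // len(vals)
            for y in ys:
                for x in xs:
                    out[y][x] = mean
    return out

# 4x4 image, pixelate the whole frame with 2x2 blocks:
img = [[0, 2, 10, 12],
       [4, 6, 14, 16],
       [20, 22, 30, 32],
       [24, 26, 34, 36]]
blurred = pixelate(img, 0, 0, 4, 4, block=2)
print(blurred[0])  # [3, 3, 13, 13]
```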
3-d finite element model development for biomechanics: a software demonstration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollerbach, K.; Hollister, A.M.; Ashby, E.
1997-03-01
Finite element analysis is becoming an increasingly important part of biomechanics and orthopedic research, as computational resources become more powerful, and data handling algorithms become more sophisticated. Until recently, tools with sufficient power did not exist or were not accessible to adequately model complicated, three-dimensional, nonlinear biomechanical systems. In the past, finite element analyses in biomechanics have often been limited to two-dimensional approaches, linear analyses, or simulations of single tissue types. Today, we have the resources to model fully three-dimensional, nonlinear, multi-tissue, and even multi-joint systems. The authors will present the process of developing these kinds of finite element models, using human hand and knee examples, and will demonstrate their software tools.
Aerosol Polarimetry Sensor (APS): Design Summary, Performance and Potential Modifications
NASA Technical Reports Server (NTRS)
Cairns, Brian
2014-01-01
APS is a mature design that has already been built and has a TRL of 7. Algorithmic and retrieval capabilities continue to improve and make better and more sophisticated use of the data. Adjoint solutions, in both one and three dimensions, are computationally efficient and should be the preferred implementation for the calculation of Jacobians (one-dimensional) or cost-function gradients (three-dimensional). Adjoint solutions necessarily provide resolution of internal fields and simplify incorporation of active measurements in retrievals, which will be necessary for a future ACE mission. It is best to test these capabilities when you know the answer: OSSEs that are well constrained observationally provide the best place to test future multi-instrument platform capabilities and ensure capabilities will meet scientific needs.
Beyond the proteome: Mass Spectrometry Special Interest Group (MS-SIG) at ISMB/ECCB 2013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryu, Soyoung; Payne, Samuel H.; Schaab, Christoph
2014-07-02
The Mass Spectrometry Special Interest Group (MS-SIG) aims to bring together experts from the global research community to discuss highlights and challenges in the field of mass spectrometry (MS)-based proteomics and computational biology. The rapid technological developments in MS-based proteomics have enabled the generation of a large amount of meaningful information on hundreds to thousands of proteins simultaneously from a biological sample; however, the complexity of the MS data requires sophisticated computational algorithms and software for data analysis and interpretation. This year's MS-SIG meeting theme was 'Beyond the Proteome', with major focuses on improving protein identification/quantification and using proteomics data to solve interesting problems in systems biology and clinical research.
DYNA3D: A computer code for crashworthiness engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallquist, J.O.; Benson, D.J.
1986-09-01
A finite element program with crashworthiness applications has been developed at LLNL. DYNA3D, an explicit, fully vectorized, finite deformation structural dynamics program, has four capabilities that are critical for the efficient and realistic modeling of crash phenomena: (1) fully optimized nonlinear solid, shell, and beam elements for representing a structure; (2) a broad range of constitutive models for simulating material behavior; (3) sophisticated contact algorithms for impact interactions; (4) a rigid body capability to represent the bodies away from the impact region at a greatly reduced cost without sacrificing accuracy in the momentum calculations. Basic methodologies of the program are briefly presented along with several crashworthiness calculations. Efficiencies of the Hughes-Liu and Belytschko-Tsay shell formulations are considered.
Using mobile location data in biomedical research while preserving privacy.
Goldenholz, Daniel M; Goldenholz, Shira R; Krishnamurthy, Kaarkuzhali B; Halamka, John; Karp, Barbara; Tyburski, Matthew; Wendler, David; Moss, Robert; Preston, Kenzie L; Theodore, William
2018-06-07
Location data are becoming easier to obtain and are now bundled with other metadata in a variety of biomedical research applications. At the same time, the level of sophistication required to protect patient privacy is also increasing. In this article, we provide guidance for institutional review boards (IRBs) to make informed decisions about privacy protections in protocols involving location data. We provide an overview of some of the major categories of technical algorithms and medical-legal tools at the disposal of investigators, as well as the shortcomings of each. Although there is no "one size fits all" approach to privacy protection, this article attempts to describe a set of practical considerations that can be used by investigators, journal editors, and IRBs.
A Formal Basis for Safety Case Patterns
NASA Technical Reports Server (NTRS)
Denney, Ewen; Pai, Ganesh
2013-01-01
By capturing common structures of successful arguments, safety case patterns provide an approach for reusing strategies for reasoning about safety. In the current state of the practice, patterns exist as descriptive specifications with informal semantics, which not only offer little opportunity for more sophisticated usage such as automated instantiation, composition and manipulation, but also impede standardization efforts and tool interoperability. To address these concerns, this paper gives (i) a formal definition for safety case patterns, clarifying both restrictions on the usage of multiplicity and well-founded recursion in structural abstraction, (ii) formal semantics to patterns, and (iii) a generic data model and algorithm for pattern instantiation. We illustrate our contributions by application to a new pattern, the requirements breakdown pattern, which builds upon our previous work.
Meteorological data fields 'in perspective'
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Pierce, H.; Morris, K. R.; Dodge, J.
1985-01-01
Perspective display techniques can be applied to meteorological data sets to aid in their interpretation. Examples of a perspective display procedure applied to satellite and aircraft visible and infrared image pairs and to stereo cloud-top height analyses are presented. The procedure uses a sophisticated shading algorithm that produces perspective images with greatly improved comprehensibility when compared with the wire-frame perspective displays that have been used in the past. By changing the 'eye-point' and 'view-point' inputs to the program in a systematic way, movie loops that give the impression of flying over or through the data field have been made. This paper gives examples that show how several kinds of meteorological data fields are more effectively illustrated using the perspective technique.
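At its core, any such display projects each 3-D data point through the chosen eye-point onto an image plane; a minimal pinhole-projection sketch (the paper's shading algorithm itself is far more sophisticated):

```python
def project(point, eye, d=1.0):
    """Project a 3-D point onto an image plane at distance d in front of
    the eye-point, looking down the +z axis (toy pinhole model)."""
    x, y, z = (p - e for p, e in zip(point, eye))
    if z <= 0:
        raise ValueError("point is behind the eye-point")
    return (d * x / z, d * y / z)
```

Sweeping `eye` along a path and re-projecting every frame is what produces the fly-over and fly-through movie loops described above.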
Fast neutron flux analyzer with real-time digital pulse shape discrimination
NASA Astrophysics Data System (ADS)
Ivanova, A. A.; Zubarev, P. V.; Ivanenko, S. V.; Khilchenko, A. D.; Kotelnikov, A. I.; Polosatkin, S. V.; Puryga, E. A.; Shvyrev, V. G.; Sulyaev, Yu. S.
2016-08-01
Investigation of subthermonuclear plasma confinement and heating in magnetic fusion devices such as GOL-3 and GDT at the Budker Institute (Novosibirsk, Russia) requires sophisticated equipment for neutron and gamma diagnostics, and upgrading of data acquisition systems with online data processing. Measurement of fast neutron flux with stilbene scintillation detectors raised the problem of discriminating neutrons (n) from background cosmic particles (muons) and neutron-induced gamma rays (γ). This paper describes a fast neutron flux analyzer with a real-time digital pulse-shape discrimination (DPSD) algorithm implemented in an FPGA for the GOL-3 and GDT devices. The analyzer was tested and calibrated with the help of 137Cs and 252Cf radiation sources. The figures of merit (FOM) calculated for different energy cuts are presented.
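A common DPSD approach (charge comparison) and the standard FOM definition can be sketched as follows; this is a generic illustration, not the FPGA algorithm of the paper:

```python
def tail_to_total(pulse, tail_start):
    """Charge-comparison discriminator: neutrons deposit relatively more
    charge in the slow tail of a stilbene pulse than gammas do."""
    return sum(pulse[tail_start:]) / sum(pulse)

def figure_of_merit(mu_n, fwhm_n, mu_g, fwhm_g):
    """FOM = separation of the n and gamma peaks in the discrimination
    parameter, divided by the sum of their FWHMs."""
    return abs(mu_n - mu_g) / (fwhm_n + fwhm_g)
```

A larger FOM means the two pulse-shape distributions overlap less, i.e. cleaner n/γ separation at a given energy cut.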
Digital video steganalysis exploiting collusion sensitivity
NASA Astrophysics Data System (ADS)
Budhia, Udit; Kundur, Deepa
2004-09-01
In this paper we present an effective steganalysis technique for digital video sequences based on the collusion attack. Steganalysis is the process of detecting with a high probability and low complexity the presence of covert data in multimedia. Existing algorithms for steganalysis target detecting covert information in still images. When applied directly to video sequences these approaches are suboptimal. In this paper, we present a method that overcomes this limitation by using redundant information present in the temporal domain to detect covert messages in the form of Gaussian watermarks. Our gains are achieved by exploiting the collusion attack that has recently been studied in the field of digital video watermarking, and more sophisticated pattern recognition tools. Applications of our scheme include cybersecurity and cyberforensics.
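The collusion idea can be sketched in a few lines: temporally averaging redundant frames estimates the clean host, and the residual energy of a frame against that estimate exposes an additive (e.g., Gaussian) watermark. A toy illustration with frames as flat pixel lists, not the paper's detector:

```python
def temporal_average(frames):
    """Collusion estimate of the host: per-pixel mean over frames with
    (nearly) static content."""
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

def residual_energy(frame, host_estimate):
    """Mean squared residual of a frame against the collusion estimate;
    elevated values suggest embedded covert data."""
    return sum((a - b) ** 2 for a, b in zip(frame, host_estimate)) / len(frame)
```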
Computational protein design with backbone plasticity
MacDonald, James T.; Freemont, Paul S.
2016-01-01
The computational algorithms used in the design of artificial proteins have become increasingly sophisticated in recent years, producing a series of remarkable successes. The most dramatic of these is the de novo design of artificial enzymes. The majority of these designs have reused naturally occurring protein structures as ‘scaffolds’ onto which novel functionality can be grafted without having to redesign the backbone structure. The incorporation of backbone flexibility into protein design is a much more computationally challenging problem due to the greatly increased search space, but promises to remove the limitations of reusing natural protein scaffolds. In this review, we outline the principles of computational protein design methods and discuss recent efforts to consider backbone plasticity in the design process. PMID:27911735
Beyond mind-reading: multi-voxel pattern analysis of fMRI data.
Norman, Kenneth A; Polyn, Sean M; Detre, Greg J; Haxby, James V
2006-09-01
A key challenge for cognitive neuroscience is determining how mental representations map onto patterns of neural activity. Recently, researchers have started to address this question by applying sophisticated pattern-classification algorithms to distributed (multi-voxel) patterns of functional MRI data, with the goal of decoding the information that is represented in the subject's brain at a particular point in time. This multi-voxel pattern analysis (MVPA) approach has led to several impressive feats of mind reading. More importantly, MVPA methods constitute a useful new tool for advancing our understanding of neural information processing. We review how researchers are using MVPA methods to characterize neural coding and information processing in domains ranging from visual perception to memory search.
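A minimal decoder in the MVPA spirit described above is a nearest-centroid classifier over voxel patterns; a toy sketch on hypothetical two-condition data (real studies use more powerful classifiers and cross-validation):

```python
def centroids(patterns, labels):
    """Mean multi-voxel pattern per experimental condition."""
    sums, counts = {}, {}
    for p, l in zip(patterns, labels):
        counts[l] = counts.get(l, 0) + 1
        sums[l] = [s + v for s, v in zip(sums.get(l, [0.0] * len(p)), p)]
    return {l: [s / counts[l] for s in sums[l]] for l in sums}

def decode(pattern, cents):
    """Assign the condition whose centroid is closest to the test pattern."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(cents, key=lambda l: sqdist(pattern, cents[l]))
```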
NASA Astrophysics Data System (ADS)
Palacios, S. L.; Thompson, D. R.; Kudela, R. M.; Negrey, K.; Guild, L. S.; Gao, B. C.; Green, R. O.; Torres-Perez, J. L.
2015-12-01
There is a need in the ocean color community to discriminate among phytoplankton groups within the bulk chlorophyll pool to understand ocean biodiversity, to track energy flow through ecosystems, and to identify and monitor for harmful algal blooms. Imaging spectrometer measurements enable use of sophisticated spectroscopic algorithms for applications such as differentiating among coral species, evaluating iron stress of phytoplankton, and discriminating phytoplankton taxa. These advanced algorithms rely on the fine scale, subtle spectral shape of the atmospherically corrected remote sensing reflectance (Rrs) spectrum of the ocean surface. As a consequence, these algorithms are sensitive to inaccuracies in the retrieved Rrs spectrum that may be related to the presence of nearby clouds, inadequate sensor calibration, low sensor signal-to-noise ratio, glint correction, and atmospheric correction. For the HyspIRI Airborne Campaign, flight planning considered optimal weather conditions to avoid flights with significant cloud/fog cover. Although best suited for terrestrial targets, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has enough signal for some coastal chlorophyll algorithms and meets sufficient calibration requirements for most channels. However, the coastal marine environment has special atmospheric correction needs due to error that may be introduced by aerosols and terrestrially sourced atmospheric dust and riverine sediment plumes. For this HyspIRI campaign, careful attention has been given to the correction of AVIRIS imagery of the Monterey Bay to optimize ocean Rrs retrievals for use in estimating chlorophyll (OC3 algorithm) and phytoplankton functional type (PHYDOTax algorithm) data products. This new correction method has been applied to several image collection dates during two oceanographic seasons - upwelling and the warm, stratified oceanic period for 2013 and 2014. 
These two periods are dominated by either diatom blooms (occasionally toxic) or red tides. Results presented include chlorophyll and phytoplankton community structure and in-water validation data for these dates during these two seasons.
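The OC3 product named above belongs to the maximum-band-ratio family of chlorophyll algorithms; a generic sketch of that functional form with hypothetical coefficients (the operational OC3 uses a published fourth-order polynomial and specific sensor bands):

```python
import math

def band_ratio_chlorophyll(rrs_blue1, rrs_blue2, rrs_green, coeffs):
    """Maximum band-ratio chlorophyll estimate:
    chl = 10 ** sum(a_k * R**k), with R = log10(max(blue Rrs) / green Rrs).
    coeffs are illustrative placeholders, not the operational OC3 values.
    """
    r = math.log10(max(rrs_blue1, rrs_blue2) / rrs_green)
    return 10.0 ** sum(a * r ** k for k, a in enumerate(coeffs))
```

Because the estimate is exponential in a ratio of retrieved Rrs values, small atmospheric-correction errors propagate strongly, which is why the careful correction described above matters.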
Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen
2002-12-10
Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. 
These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the superresolution iterations. A quantitative evaluation of the performance of these algorithms for restoring and superresolving various imagery data captured by diffraction-limited sensing operations is also presented.
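The first of these preprocessing ideas, confining the expensive superresolution iterations to a region of interest, can be sketched as a threshold-and-bounding-box pass (a toy stand-in for the paper's scheme):

```python
def extract_roi(img, thresh):
    """Return the bounding-box subimage of all pixels above thresh,
    or None if nothing exceeds the background threshold."""
    rows = [y for y, row in enumerate(img) if any(v > thresh for v in row)]
    cols = [x for x in range(len(img[0]))
            if any(row[x] > thresh for row in img)]
    if not rows:
        return None
    y0, y1, x0, x1 = min(rows), max(rows), min(cols), max(cols)
    return [row[x0:x1 + 1] for row in img[y0:y1 + 1]]
```

Each superresolution iteration then touches only the ROI pixels rather than the full large-format frame, which is where the computational savings come from.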
Zhang, Wei; Bao, Zhangmin; Jiang, Shan; He, Jingjing
2016-01-01
In the aerospace and aviation sectors, the damage tolerance concept has been applied widely so that the modeling analysis of fatigue crack growth has become more and more significant. Since the process of crack propagation is highly nonlinear and determined by many factors, such as applied stress, plastic zone in the crack tip, length of the crack, etc., it is difficult to build up a general and flexible explicit function to accurately quantify this complicated relationship. Fortunately, the artificial neural network (ANN) is considered a powerful tool for establishing the nonlinear multivariate projection which shows potential in handling the fatigue crack problem. In this paper, a novel fatigue crack calculation algorithm based on a radial basis function (RBF)-ANN is proposed to study this relationship from the experimental data. In addition, a parameter called the equivalent stress intensity factor is also employed as training data to account for loading interaction effects. The testing data is then placed under constant amplitude loading with different stress ratios or overloads used for model validation. Moreover, the Forman and Wheeler equations are also adopted to compare with our proposed algorithm. The current investigation shows that the ANN-based approach can deliver a better agreement with the experimental data than the other two models, which supports that the RBF-ANN has nontrivial advantages in handling the fatigue crack growth problem. Furthermore, it implies that the proposed algorithm is possibly a sophisticated and promising method to compute fatigue crack growth in terms of loading interaction effects. PMID:28773606
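The RBF machinery at the heart of the model can be illustrated with a minimal 1-D Gaussian-RBF interpolant fitted by a direct linear solve (a sketch only; the paper's network is trained on equivalent stress intensity factor data and crack-growth measurements):

```python
import math

def gaussian_rbf_predict(xs, ys, gamma, xq):
    """Fit an exact Gaussian-RBF interpolant with centers at the training
    points, then evaluate it at xq (tiny dense solve, toy scale only)."""
    n = len(xs)
    # Kernel matrix augmented with the right-hand side.
    M = [[math.exp(-gamma * (xi - xj) ** 2) for xj in xs] + [yi]
         for xi, yi in zip(xs, ys)]
    # Gaussian elimination with partial pivoting.
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (M[i][n] - sum(M[i][j] * w[j] for j in range(i + 1, n))) / M[i][i]
    return sum(wi * math.exp(-gamma * (xq - c) ** 2) for wi, c in zip(w, xs))
```

With centers at the training points the interpolant reproduces the training data exactly, which is the property that lets an RBF network capture the highly nonlinear crack-growth relationship.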
High content analysis of phagocytic activity and cell morphology with PuntoMorph.
Al-Ali, Hassan; Gao, Han; Dalby-Hansen, Camilla; Peters, Vanessa Ann; Shi, Yan; Brambilla, Roberta
2017-11-01
Phagocytosis is essential for maintenance of normal homeostasis and healthy tissue. As such, it is a therapeutic target for a wide range of clinical applications. The development of phenotypic screens targeting phagocytosis has lagged behind, however, due to the difficulties associated with image-based quantification of phagocytic activity. We present a robust algorithm and cell-based assay system for high content analysis of phagocytic activity. The method utilizes fluorescently labeled beads as a phagocytic substrate with defined physical properties. The algorithm employs statistical modeling to determine the mean fluorescence of individual beads within each image, and uses the information to conduct an accurate count of phagocytosed beads. In addition, the algorithm conducts detailed and sophisticated analysis of cellular morphology, making it a standalone tool for high content screening. We tested our assay system using microglial cultures. Our results recapitulated previous findings on the effects of microglial stimulation on cell morphology and phagocytic activity. Moreover, our cell-level analysis revealed that the two phenotypes associated with microglial activation, specifically cell body hypertrophy and increased phagocytic activity, are not highly correlated. This novel finding suggests the two phenotypes may be under the control of distinct signaling pathways. We demonstrate that our assay system outperforms preexisting methods for quantifying phagocytic activity in multiple dimensions including speed, accuracy, and resolution. We provide a framework to facilitate the development of high content assays suitable for drug screening. For convenience, we implemented our algorithm in a standalone software package, PuntoMorph.
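The bead-counting step reduces to estimating the single-bead fluorescence and dividing it into each cell's integrated signal; a minimal sketch with hypothetical helper names, not PuntoMorph's code:

```python
def single_bead_intensity(isolated_bead_intensities):
    """Robust estimate of mean per-bead fluorescence: the median resists
    clumped beads and segmentation outliers."""
    vals = sorted(isolated_bead_intensities)
    return vals[len(vals) // 2]

def count_beads(cell_integrated_intensity, bead_mean):
    """Estimated number of phagocytosed beads in one cell."""
    return round(cell_integrated_intensity / bead_mean)
```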
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almansouri, Hani; Venkatakrishnan, Singanallur V.; Clayton, Dwight A.
One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions on the propagation model and did not discuss ways to handle data that is obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.
NASA Astrophysics Data System (ADS)
Fleischer, Christian; Waag, Wladislaw; Heyn, Hans-Martin; Sauer, Dirk Uwe
2014-09-01
Lithium-ion battery systems employed in high power demanding systems such as electric vehicles require a sophisticated monitoring system to ensure safe and reliable operation. Three major states of the battery are of special interest and need to be constantly monitored. These include: battery state of charge (SoC), battery state of health (capacity fade determination, SoH), and state of function (power fade determination, SoF). The second paper concludes the series by presenting a multi-stage online parameter identification technique based on a weighted recursive least quadratic squares parameter estimator to determine the parameters of the proposed battery model from the first paper during operation. A novel mutation based algorithm is developed to determine the nonlinear current dependency of the charge-transfer resistance. The influence of diffusion is determined by an on-line identification technique and verified on several batteries at different operation conditions. This method guarantees a short response time and, together with its fully recursive structure, assures a long-term stable monitoring of the battery parameters. The relative dynamic voltage prediction error of the algorithm is reduced to 2%. The changes of parameters are used to determine the states of the battery. The algorithm is real-time capable and can be implemented on embedded systems.
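The recursive least squares core of such an online parameter identifier can be sketched for a single parameter with a forgetting factor (a generic RLS step, not the authors' multi-stage weighted estimator):

```python
def rls_update(theta, P, x, y, lam=0.99):
    """One recursive-least-squares step for the scalar model y = theta * x.

    theta: current parameter estimate; P: estimate covariance;
    lam: forgetting factor (< 1 discounts old samples so the estimate
    can track battery parameters that drift during operation).
    """
    k = P * x / (lam + x * P * x)        # gain
    theta = theta + k * (y - x * theta)  # correct with the prediction error
    P = (P - k * x * P) / lam            # covariance update
    return theta, P
```

The fully recursive structure is what makes this kind of estimator cheap enough for long-term monitoring on an embedded battery management system.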
Limited data tomographic image reconstruction via dual formulation of total variation minimization
NASA Astrophysics Data System (ADS)
Jang, Kwang Eun; Sung, Younghun; Lee, Kangeui; Lee, Jongha; Cho, Seungryong
2011-03-01
X-ray mammography is the primary imaging modality for breast cancer screening. For the dense breast, however, the mammogram is usually difficult to read due to the tissue overlap problem caused by the superposition of normal tissues. Digital breast tomosynthesis (DBT), which measures several low-dose projections over a limited angular range, may be an alternative modality for breast imaging, since it allows the visualization of cross-sectional information of the breast. DBT, however, may suffer from aliasing artifacts and severe noise corruption. To overcome these problems, a total variation (TV) regularized statistical reconstruction algorithm is presented. Inspired by the dual formulation of TV minimization in denoising and deblurring problems, we derived a gradient-type algorithm based on the statistical model of X-ray tomography. The objective function is comprised of a data fidelity term derived from the statistical model and a TV regularization term. The gradient of the objective function can be easily calculated using simple operations in terms of auxiliary variables. After a descending step, the data fidelity term is renewed in each iteration. Since the proposed algorithm can be implemented without sophisticated operations such as matrix inversion, it provides an efficient way to include the TV regularization in the statistical reconstruction method, which results in a fast and robust estimation for low-dose projections over the limited angular range. Initial tests with an experimental DBT system confirmed our finding.
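The flavor of the objective can be conveyed with a toy 1-D version: gradient descent on a quadratic data-fidelity term plus a smoothed TV penalty (the paper works with the dual formulation and a statistical X-ray model, which this sketch does not reproduce):

```python
import math

def tv_denoise_1d(y, lam=0.2, eps=1e-2, steps=400, lr=0.05):
    """Minimize 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)
    by plain gradient descent (eps smooths the non-differentiable TV term).
    """
    x = list(y)
    n = len(x)
    for _ in range(steps):
        g = [x[i] - y[i] for i in range(n)]       # data-fidelity gradient
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            t = lam * d / math.sqrt(d * d + eps)  # smoothed TV gradient
            g[i] -= t
            g[i + 1] += t
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

The TV term penalizes total variation, so the result stays close to the data while small oscillations (noise) are flattened.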
The Next Era: Deep Learning in Pharmaceutical Research.
Ekins, Sean
2016-11-01
Over the past decade we have witnessed the increasing sophistication of machine learning algorithms applied in daily use from internet searches, voice recognition, social network software to machine vision software in cameras, phones, robots and self-driving cars. Pharmaceutical research has also seen its fair share of machine learning developments. For example, applying such methods to mine the growing datasets that are created in drug discovery not only enables us to learn from the past but to predict a molecule's properties and behavior in future. The latest machine learning algorithm garnering significant attention is deep learning, which is an artificial neural network with multiple hidden layers. Publications over the last 3 years suggest that this algorithm may have advantages over previous machine learning methods and offer a slight but discernable edge in predictive performance. The time has come for a balanced review of this technique but also to apply machine learning methods such as deep learning across a wider array of endpoints relevant to pharmaceutical research for which the datasets are growing such as physicochemical property prediction, formulation prediction, absorption, distribution, metabolism, excretion and toxicity (ADME/Tox), target prediction and skin permeation, etc. We also show that there are many potential applications of deep learning beyond cheminformatics. It will be important to perform prospective testing (which has been carried out rarely to date) in order to convince skeptics that there will be benefits from investing in this technique.
Identification of functional modules using network topology and high-throughput data.
Ulitsky, Igor; Shamir, Ron
2007-01-26
With the advent of systems biology, biological knowledge is often represented today by networks. These include regulatory and metabolic networks, protein-protein interaction networks, and many others. At the same time, high-throughput genomics and proteomics techniques generate very large data sets, which require sophisticated computational analysis. Usually, separate and different analysis methodologies are applied to each of the two data types. An integrated investigation of network and high-throughput information together can improve the quality of the analysis by accounting simultaneously for topological network properties alongside intrinsic features of the high-throughput data. We describe a novel algorithmic framework for this challenge. We first transform the high-throughput data into similarity values (e.g., by computing pairwise similarity of gene expression patterns from microarray data). Then, given a network of genes or proteins and similarity values between some of them, we seek connected sub-networks (or modules) that manifest high similarity. We develop algorithms for this problem and evaluate their performance on the osmotic shock response network in S. cerevisiae and on the human cell cycle network. We demonstrate that focused, biologically meaningful and relevant functional modules are obtained. In comparison with extant algorithms, our approach has higher sensitivity and higher specificity. We have demonstrated that our method can accurately identify functional modules. Hence, it carries the promise to be highly useful in analysis of high-throughput data.
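A greatly simplified version of the idea, greedily growing a connected module while each added node raises the internal similarity, can be sketched as follows (a toy illustration, not the authors' algorithm):

```python
def module_score(nodes, sim):
    """Sum of pairwise similarities inside the module.
    sim maps sorted node pairs to similarity values."""
    return sum(s for (a, b), s in sim.items() if a in nodes and b in nodes)

def grow_module(seed, adj, sim):
    """Greedily add neighboring nodes while each addition raises the score;
    expanding only along edges keeps the module connected by construction."""
    module = {seed}
    improved = True
    while improved:
        improved = False
        frontier = {n for m in module for n in adj[m]} - module
        for cand in sorted(frontier):
            gain = module_score(module | {cand}, sim) - module_score(module, sim)
            if gain > 0:
                module.add(cand)
                improved = True
    return module
```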
Exploring the performance of large-N radio astronomical arrays
NASA Astrophysics Data System (ADS)
Lonsdale, Colin J.; Doeleman, Sheperd S.; Cappallo, Roger J.; Hewitt, Jacqueline N.; Whitney, Alan R.
2000-07-01
New radio telescope arrays are currently being contemplated which may be built using hundreds, or even thousands, of relatively small antennas. These include the One Hectare Telescope of the SETI Institute and UC Berkeley, the LOFAR telescope planned for the New Mexico desert surrounding the VLA, and possibly the ambitious international Square Kilometer Array (SKA) project. Recent and continuing advances in signal transmission and processing technology make it realistic to consider full cross-correlation of signals from such a large number of antennas, permitting the synthesis of an aperture with much greater fidelity than in the past. In principle, many advantages in instrumental performance are gained by this 'large-N' approach to the design, most of which require the development of new algorithms. Because new instruments of this type are expected to outstrip the performance of current instruments by wide margins, much of their scientific productivity is likely to come from the study of objects which are currently unknown. For this reason, instrumental flexibility is of special importance in design studies. A research effort has begun at Haystack Observatory to explore large-N performance benefits, and to determine what array design properties and data reduction algorithms are required to achieve them. The approach to these problems, involving a sophisticated data simulator, algorithm development, and exploration of array configuration parameter space, will be described, and progress to date will be summarized.
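The quadratic growth that makes full cross-correlation of large-N arrays demanding is easy to quantify: each antenna pair contributes one baseline.

```python
def n_baselines(n_antennas):
    """Number of antenna pairs (baselines) a full cross-correlator must form."""
    return n_antennas * (n_antennas - 1) // 2
```

Going from a 27-element array like the VLA to a 1000-element large-N array multiplies the pairwise correlation workload by over a thousand, which is why advances in signal transmission and processing were needed before such designs became realistic.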
NASA Technical Reports Server (NTRS)
Leptoukh, Gregory G.
2005-01-01
The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is one of the major Distributed Active Archive Centers (DAACs) archiving and distributing remote sensing data from NASA's Earth Observing System. In addition to providing just data, the GES DISC/DAAC has developed various value-adding processing services. A particularly useful service is data processing at the DISC (i.e., close to the input data) with the users' algorithms. This can take a number of different forms: as a configuration-managed algorithm within the main processing stream; as a stand-alone program next to the on-line data storage; as build-it-yourself code within the Near-Archive Data Mining (NADM) system; or as an on-the-fly analysis with simple algorithms embedded into the web-based tools (to avoid unnecessarily downloading all the data). The existing data management infrastructure at the GES DISC supports a wide spectrum of options: from subsetting data spatially and/or by parameter to sophisticated on-line analysis tools, producing economies of scale and rapid time-to-deploy. Shifting the processing and data management burden from users to the GES DISC allows scientists to concentrate on science, while the GES DISC handles the data management and data processing at a lower cost. Several examples of successful partnerships with scientists in the area of data processing and mining are presented.
Keivanian, Farshid; Mehrshad, Nasser; Bijari, Abolfazl
2016-01-01
The D flip-flop is a digital circuit that can be used as a timing element in many sophisticated circuits. Therefore, optimum performance with the lowest power consumption and an acceptable delay time is a critical issue in electronic circuits. The layout of the newly proposed Dual-Edge Triggered Static D Flip-Flop circuit is defined as a multi-objective optimization problem. For this, an optimum fuzzy inference system with fuzzy rules is proposed to enhance the performance and convergence of the non-dominated sorting Genetic Algorithm-II by adaptive control of the exploration and exploitation parameters. By using the proposed Fuzzy NSGA-II algorithm, more optimal values for the MOSFET channel widths and power supply are discovered in the search space than with ordinary NSGA variants. What is more, the design parameters, involving NMOS and PMOS channel widths and power supply voltage, and the performance parameters, including average power consumption and propagation delay time, are linked. To do this, the required mathematical background is presented in this study. The optimum values for the design parameters of MOSFET channel widths and power supply are discovered. Based on them, the power-delay product (PDP) is 6.32 pJ at a 125 MHz clock frequency, L = 0.18 µm, and T = 27 °C.
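The non-dominated sorting at the heart of NSGA-II rests on Pareto dominance over the competing objectives, here (power, delay); a minimal sketch of the dominance test and first front (not the fuzzy-adaptive variant proposed in the paper):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """First non-dominated front of (power, delay) design points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

Designs on the first front represent the best available power/delay trade-offs; the PDP simply collapses the two objectives into a single energy-per-operation figure.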
NASA Astrophysics Data System (ADS)
Almansouri, Hani; Venkatakrishnan, Singanallur; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector
2018-04-01
One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts, and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations, and the materials being imaged to obtain high-quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions on the propagation model and did not discuss ways to handle data that is obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.
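The regularized-inversion view of MBIR can be sketched in its simplest form as Tikhonov-regularized least squares solved by gradient descent. This is a toy stand-in for the full ultrasonic forward model: the matrix A and all parameter values below are illustrative, not the anisotropic model of the paper.

```python
import numpy as np

def mbir_reconstruct(A, b, lam=0.1, step=0.05, iters=2000):
    """Minimize ||A x - b||^2 + lam ||x||^2 by gradient descent --
    the simplest instance of the regularized-inversion view of MBIR.
    A plays the role of the (here linearized) forward model."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - b) + 2 * lam * x
        x -= step * grad
    return x

# Tiny illustrative system: 3 measurements, 2 unknown image values.
A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
b = np.array([1.0, 2.0, 1.0])
x_hat = mbir_reconstruct(A, b)
```

Real MBIR replaces the quadratic prior with an edge-preserving one and the linear model with the wave-propagation physics, but the cost-minimization structure is the same.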
When drug discovery meets web search: Learning to Rank for ligand-based virtual screening.
Zhang, Wei; Ji, Lijuan; Chen, Yanan; Tang, Kailin; Wang, Haiping; Zhu, Ruixin; Jia, Wei; Cao, Zhiwei; Liu, Qi
2015-01-01
The rapid increase in the emergence of novel chemical substances presents a substantial demand for more sophisticated computational methodologies for drug discovery. In this study, the idea of Learning to Rank from web search was applied to drug virtual screening, offering two unique capabilities: (1) identifying compounds for novel targets when there is not enough training data available for those targets, and (2) integrating heterogeneous data when compound affinities are measured on different platforms. A standard pipeline was designed to carry out Learning to Rank in virtual screening. Six Learning to Rank algorithms were investigated on two public datasets collected from the Binding Database and the newly published Community Structure-Activity Resource benchmark dataset. The results demonstrate that Learning to Rank is an efficient computational strategy for drug virtual screening, particularly due to its novel use in cross-target virtual screening and heterogeneous data integration. To the best of our knowledge, this is the first application of Learning to Rank in virtual screening. The experimental workflow and algorithm assessment designed in this study provide a standard protocol for other similar studies. All the datasets, as well as the implementations of the Learning to Rank algorithms, are available at http://www.tongji.edu.cn/~qiliu/lor_vs.html. Graphical Abstract: The analogy between web search and ligand-based drug discovery.
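The pairwise flavor of Learning to Rank can be sketched with a linear scoring function trained on a RankNet-style logistic pairwise loss. This is a generic illustration, not one of the six algorithms benchmarked in the study; the compound descriptors and affinities below are hypothetical.

```python
import numpy as np

def train_pairwise_ranker(X, y, lr=0.1, epochs=200):
    """Fit a linear scoring function s(x) = w.x with a pairwise logistic
    loss: for every pair (i, j) with y[i] > y[j], push s(x_i) above s(x_j).
    This is the core idea behind RankNet-style Learning to Rank."""
    n, d = X.shape
    w = np.zeros(d)
    pairs = [(i, j) for i in range(n) for j in range(n) if y[i] > y[j]]
    for _ in range(epochs):
        for i, j in pairs:
            diff = X[i] - X[j]
            p = 1.0 / (1.0 + np.exp(-w @ diff))   # P(i ranked above j)
            w += lr * (1.0 - p) * diff            # gradient step on -log p
    return w

# Hypothetical compounds: 2 descriptors each, with known affinities y.
X = np.array([[2.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
y = np.array([3.0, 2.0, 1.0])
w = train_pairwise_ranker(X, y)
```

Ranking new compounds then reduces to sorting them by the learned score, which is exactly how a search engine orders documents for a query.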
Transient heat transfer in viscous rarefied gas between concentric cylinders. Effect of curvature
NASA Astrophysics Data System (ADS)
Gospodinov, P.; Roussinov, V.; Dankov, D.
2015-10-01
The thermoacoustic waves arising in cylindrical or planar Couette rarefied gas flow between rotating cylinders are studied for the case in which the velocity direction of the active cylinder wall is suddenly reversed. In the limit of an unbounded inner-cylinder radius, the flow can be interpreted as Couette flow between two flat plates. Transient processes in the gas phase are considered based on the Navier-Stokes-Fourier (NSF) model and the Direct Simulation Monte Carlo (DSMC) method developed in previous publications, together with their numerical solutions. Macroscopic flow characteristics (velocity, density, temperature) are obtained. Cylindrical flow cases with fixed velocity and temperature of both walls are considered, and the effects of curvature on wave propagation and attenuation are studied numerically.
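In the planar (flat-plate) limit, the startup of Couette flow reduces to a diffusion equation for the velocity profile. A minimal continuum (NSF-side) sketch, with purely illustrative parameter values and an impulsively started lower wall rather than the paper's velocity reversal, is:

```python
import numpy as np

def couette_startup(nu, h, u_wall, ny=21, dt=1e-4, steps=100000):
    """Explicit finite-difference solution of the planar startup Couette
    problem du/dt = nu * d2u/dy2: lower wall impulsively set to u_wall,
    upper wall at rest. A continuum sketch, not the DSMC model."""
    dy = h / (ny - 1)
    r = nu * dt / dy**2        # must be <= 0.5 for stability
    u = np.zeros(ny)
    u[0] = u_wall              # impulsively started wall
    for _ in range(steps):
        u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

# Illustrative values: unit viscosity, unit gap, unit wall speed.
profile = couette_startup(nu=1.0, h=1.0, u_wall=1.0)
```

At long times the profile relaxes to the linear steady Couette solution; the transient thermoacoustic waves studied in the paper require the full compressible NSF or DSMC treatment.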
Rarefaction effects on Galileo probe aerodynamics
NASA Technical Reports Server (NTRS)
Moss, James N.; LeBeau, Gerald J.; Blanchard, Robert C.; Price, Joseph M.
1996-01-01
Solutions of aerodynamic characteristics are presented for the Galileo Probe entering Jupiter's hydrogen-helium atmosphere at a nominal relative velocity of 47.4 km/s. Focus is on predicting the aerodynamic drag coefficient during the transitional flow regime using the direct simulation Monte Carlo (DSMC) method. Accuracy of the probe's drag coefficient directly impacts the inferred atmospheric properties that are being extracted from the deceleration measurements made by onboard accelerometers as part of the Atmospheric Structure Experiment. The range of rarefaction considered in the present study extends from the free molecular limit to continuum conditions. Comparisons made with previous calculations and experimental measurements show the present results for drag to merge well with Navier-Stokes and experimental results for the least rarefied conditions considered.
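A common engineering shortcut for the transitional regime is to bridge the continuum and free-molecular drag limits with an empirical function of Knudsen number. The sine-squared bridging form below is one such approximation, offered as an illustration rather than the DSMC calculations of the paper; the regime cutoffs and drag values are assumptions.

```python
import math

def bridged_drag(kn, cd_cont, cd_fm):
    """Blend continuum and free-molecular drag coefficients across the
    transitional regime with a sine-squared bridging function, assuming
    continuum below Kn = 1e-3 and free-molecular above Kn = 10."""
    if kn <= 1e-3:
        return cd_cont
    if kn >= 10.0:
        return cd_fm
    phi = math.sin(math.pi * (3.0 + math.log10(kn)) / 8.0) ** 2
    return cd_cont + (cd_fm - cd_cont) * phi

# Illustrative limits: hypothetical continuum and free-molecular drag values.
cd_mid = bridged_drag(0.1, 1.0, 2.0)
```

DSMC replaces such curve fits with a direct kinetic simulation, which is why it is the method of choice for extracting atmospheric properties from the accelerometer data.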
NASA Astrophysics Data System (ADS)
Huang, Z.; Jia, X.; Rubin, M.; Fougere, N.; Gombosi, T. I.; Tenishev, V.; Combi, M. R.; Bieler, A. M.; Toth, G.; Hansen, K. C.; Shou, Y.
2014-12-01
We study the plasma environment of the comet Churyumov-Gerasimenko, which is the target of the Rosetta mission, by performing large scale numerical simulations. Our model is based on BATS-R-US within the Space Weather Modeling Framework that solves the governing multifluid MHD equations, which describe the behavior of the cometary heavy ions, the solar wind protons, and electrons. The model includes various mass loading processes, including ionization, charge exchange, dissociative ion-electron recombination, as well as collisional interactions between different fluids. The neutral background used in our MHD simulations is provided by a kinetic Direct Simulation Monte Carlo (DSMC) model. We will simulate how the cometary plasma environment changes at different heliocentric distances.
A particle-particle hybrid method for kinetic and continuum equations
NASA Astrophysics Data System (ADS)
Tiwari, Sudarshan; Klar, Axel; Hardt, Steffen
2009-10-01
We present a coupling procedure for two different types of particle methods for the Boltzmann and the Navier-Stokes equations. A variant of the DSMC method is applied to simulate the Boltzmann equation, whereas a meshfree Lagrangian particle method, similar to the SPH method, is used for simulations of the Navier-Stokes equations. An automatic domain decomposition approach is used with the help of a continuum breakdown criterion. We apply adaptive spatial and time meshes. The classical Sod 1D shock tube problem is solved for a large range of Knudsen numbers. Results from the Boltzmann, Navier-Stokes, and hybrid solvers are compared. In terms of CPU time, the hybrid solver is 3-4 times faster than the Boltzmann solver.
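A widely used continuum breakdown criterion for this kind of domain decomposition is the gradient-local Knudsen number, Kn_GLL = (λ/Q)|∇Q|, with cells handed to the kinetic solver when it exceeds roughly 0.05. The 1D sketch below illustrates that idea; the paper's actual criterion and threshold may differ, and the density profile is invented.

```python
import numpy as np

def breakdown_cells(q, mfp, dx, threshold=0.05):
    """Flag cells where the gradient-local Knudsen number
    Kn_GLL = (mfp / q) * |dq/dx| exceeds a threshold, marking them
    for the kinetic (DSMC) solver; the rest stay continuum (SPH)."""
    grad = np.gradient(q, dx)
    kn_gll = mfp * np.abs(grad) / np.abs(q)
    return kn_gll > threshold

# Hypothetical density profile with a steep gradient (e.g. a shock).
rho = np.array([1.0, 1.0, 1.0, 0.5, 0.25, 0.25])
flags = breakdown_cells(rho, mfp=0.05, dx=0.1)
```

In a full hybrid code the criterion is evaluated over several macroscopic quantities (density, velocity, temperature) and the decomposition is updated every few time steps.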
DSMC analysis of species separation in rarefied nozzle flows
NASA Technical Reports Server (NTRS)
Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren; Penko, Paul F.
1992-01-01
The direct-simulation Monte Carlo method has been used to investigate the behavior of a small amount of a harmful species in the plume and the backflow region of nuclear thermal propulsion rockets. Species separation due to pressure diffusion and nonequilibrium effects due to rapid expansion into a surrounding low-density environment are the most important factors in this type of flow. It is shown that a relatively large amount of the lighter species is scattered into the backflow region and the heavier species becomes negligible in this region due to the extreme separation between species. It is also shown that the type of molecular interaction between the species can have a substantial effect on separation of the species.
Theory of Mind: Did Evolution Fool Us?
Devaine, Marie; Hollard, Guillaume; Daunizeau, Jean
2014-01-01
Theory of Mind (ToM) is the ability to attribute mental states (e.g., beliefs and desires) to other people in order to understand and predict their behaviour. If others are rewarded for competing or cooperating with you, then what they will do depends upon what they believe about you. This is the reason why social interaction induces recursive ToM, of the sort "I think that you think that I think, etc.". Critically, recursion is the common notion behind the definition of sophistication of human language, strategic thinking in games, and, arguably, ToM. Although sophisticated ToM is believed to have high adaptive fitness, broad experimental evidence from behavioural economics, experimental psychology, and linguistics points towards limited recursivity in representing others' beliefs. In this work, we test whether this apparent limitation may in fact be adaptive, i.e. optimal in an evolutionary sense. First, we propose a meta-Bayesian approach that can predict the behaviour of ToM sophistication phenotypes that engage in social interactions. Second, we measure their adaptive fitness using evolutionary game theory. Our main contribution is to show that one does not have to appeal to biological costs to explain our limited ToM sophistication. In fact, the evolutionary cost/benefit ratio of ToM sophistication is non-trivial. This is partly because an informational cost prevents highly sophisticated ToM phenotypes from fully exploiting less sophisticated ones (in a competitive context). In addition, cooperation surprisingly favours lower levels of ToM sophistication. Taken together, these quantitative corollaries of the "social Bayesian brain" hypothesis provide an evolutionary account for both the limitation of ToM sophistication in humans and the persistence of low ToM sophistication levels. PMID:24505296
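The evolutionary-game-theoretic measurement of adaptive fitness described above is commonly formalized with replicator dynamics, in which each phenotype's frequency grows in proportion to its fitness advantage over the population average. A minimal sketch follows, with a purely hypothetical 2-phenotype payoff matrix in which the lower-sophistication phenotype happens to be favored; it is not the authors' meta-Bayesian model.

```python
def replicator_step(freqs, payoff, dt=0.1):
    """One Euler step of the replicator dynamics
    df_i/dt = f_i * (fitness_i - mean_fitness), where fitness_i is the
    expected payoff of phenotype i against the current population mix."""
    n = len(freqs)
    fitness = [sum(payoff[i][j] * freqs[j] for j in range(n)) for i in range(n)]
    mean = sum(f * w for f, w in zip(freqs, fitness))
    new = [f + dt * f * (w - mean) for f, w in zip(freqs, fitness)]
    total = sum(new)               # renormalize to guard against drift
    return [x / total for x in new]

# Hypothetical payoffs: phenotype 0 (low ToM) out-earns phenotype 1 (high ToM).
payoff = [[4.0, 2.0],
          [3.0, 2.0]]
freqs = [0.5, 0.5]
for _ in range(500):
    freqs = replicator_step(freqs, payoff)
```

Under these assumed payoffs the low-sophistication phenotype takes over the population, illustrating how a fitness advantage, rather than a biological cost, can maintain low ToM sophistication.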